Jan 28 18:33:52 crc systemd[1]: Starting Kubernetes Kubelet...
Jan 28 18:33:52 crc restorecon[4686]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:52 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c377,c642 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 28 18:33:53 crc restorecon[4686]: 
/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 28 18:33:53 crc restorecon[4686]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c0,c25 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 28 18:33:53 crc restorecon[4686]: 
/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b30d5fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ANF_Secure_Server_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b433981b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93851c9e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9282e51c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7dd1bc4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Actalis_Authentication_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/930ac5d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f47b495.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e113c810.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5931b5bc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Commercial.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2b349938.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e48193cf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/302904dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a716d4ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Networking.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93bc0acc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/86212b19.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b727005e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbc54cab.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f51bb24c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c28a8a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9c8dfbd4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ccc52f49.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cb1c3204.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ce5e74ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd08c599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6d41d539.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb5fa911.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e35234b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8cb5ee0f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a7c655d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f8fc53da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/de6d66f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d41b5e2a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/41a3f684.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1df5a75f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_2011.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e36a6752.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b872f2b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9576d26b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/228f89db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb717492.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d21b73c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b1b94ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/595e996b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b46e03d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/128f4b91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_3_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81f2d2b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3bde41ac.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d16a5865.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_EC-384_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0179095f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ffa7f1eb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9482e63a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4dae3dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e359ba6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7e067d03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/95aff9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7746a63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Baltimore_CyberTrust_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/653b494a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3ad48a91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_2_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/54657681.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/82223c44.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8de2f56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d9dafe4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d96b65e2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee64a828.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40547a79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5a3f0ff8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a780d93.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/34d996fb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/eed8c118.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/89c02a45.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b1159c4c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d6325660.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4c339cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8312c4c1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_E1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8508e720.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5fdd185d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48bec511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/69105f4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b9bc432.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/32888f65.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b03dec0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/219d9499.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5acf816d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbf06781.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc99f41e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AAA_Certificate_Services.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/985c1f52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8794b4e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_BR_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7c037b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ef954a4e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_EV_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2add47b6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/90c5a3c8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0f3e76e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/53a1b57a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_EV_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5ad8a5d6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/68dd7389.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d04f354.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d6437c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/062cdee6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bd43e1dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7f3d5d1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c491639e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3513523f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/399e7759.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/feffd413.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d18e9066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/607986c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c90bc37d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:33:53 crc restorecon[4686]:
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1b0f7e5c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e08bfd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dd8e9d41.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed39abd0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a3418fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bc3f2570.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_High_Assurance_EV_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/244b5494.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81b9768f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4be590e0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_ECC_P384_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:33:53 crc restorecon[4686]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9846683b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/252252d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e8e7201.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_RSA4096_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d52c538d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c44cc0c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Trusted_Root_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75d1b2ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a2c66da8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ecccd8db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:33:53 crc restorecon[4686]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust.net_Certification_Authority__2048_.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/aee5f10d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e7271e8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0e59380.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4c3982f2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b99d060.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf64f35b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0a775a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/002c0b4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cc450945.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_EC1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/106f3e4d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:33:53 crc restorecon[4686]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b3fb433b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4042bcee.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/02265526.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/455f1b52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0d69c7e1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9f727ac7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5e98733a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0cd152c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc4d6a89.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6187b673.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:33:53 crc restorecon[4686]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ba8887ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/068570d1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f081611a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48a195d8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GDCA_TrustAUTH_R5_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f6fa695.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab59055e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b92fd57f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GLOBALTRUST_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fa5da96b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ec40989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7719f463.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:33:53 crc restorecon[4686]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1001acf7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f013ecaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/626dceaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c559d742.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1d3472b9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9479c8c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a81e292b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4bfab552.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e071171e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:33:53 crc restorecon[4686]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/57bcb2da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_ECC_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab5346f4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5046c355.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_RSA_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/865fbdf9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da0cfd1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/85cde254.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbb3f32b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureSign_RootCA11.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5860aaa6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:33:53 crc restorecon[4686]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/31188b5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HiPKI_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c7f1359b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f15c80c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hongkong_Post_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/09789157.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/18856ac4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e09d511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Commercial_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cf701eeb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d06393bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Public_Sector_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:33:53 crc restorecon[4686]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/10531352.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Izenpe.com.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureTrust_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0ed035a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsec_e-Szigno_Root_CA_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8160b96c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8651083.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2c63f966.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:33:53 crc restorecon[4686]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:33:53 crc restorecon[4686]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:33:53 crc restorecon[4686]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:33:53 crc restorecon[4686]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:33:53 crc restorecon[4686]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:53 crc restorecon[4686]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
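[Editorial note, not part of the captured log] The repeated "not reset as customized by admin" lines are restorecon's verbose output: selinux_restorecon() skips any file whose current type is listed in the loaded policy's customizable_types set (container_file_t, used for container volumes like the ones above, is such a type) unless it is run with -F. A minimal Python sketch of the same comparison, assuming an SELinux-enabled Linux host with the matchpathcon utility from libselinux installed; the path used is one of the files named in this log:

    import os
    import subprocess

    def current_label(path: str) -> str:
        # The kernel exposes a file's SELinux context as the security.selinux xattr.
        return os.getxattr(path, "security.selinux").decode().rstrip("\x00")

    def default_label(path: str) -> str:
        # matchpathcon -n prints only the context the policy maps the path to.
        out = subprocess.run(["matchpathcon", "-n", path],
                             capture_output=True, text=True, check=True)
        return out.stdout.strip()

    path = "/var/lib/kubelet/device-plugins/kubelet.sock"  # a file named in this log
    cur, want = current_label(path), default_label(path)
    if cur != want:
        # restorecon would normally reset such a file, but types listed in the
        # policy's customizable_types file are left alone unless -F is given,
        # which is exactly what "not reset as customized by admin" reports.
        print(f"{path}: current={cur} default={want}")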
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
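[Editorial note, not part of the captured log] The directory tree being relabeled above follows the operator-framework file-based-catalog layout that the pod's extract-utilities, extract-content, and registry-server containers produce in the catalog-content emptyDir: one <package>/catalog.json per package, plus a cache/pogreb.v1 key-value store built by the registry server. Each catalog.json is a stream of concatenated JSON objects (schemas such as olm.package, olm.channel, olm.bundle), not a single document, so json.load() alone will not parse it. A short sketch for listing the package names under that assumed layout (the pod UID in ROOT is the one from this log):

    import json
    import pathlib

    ROOT = pathlib.Path(
        "/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803"
        "/volumes/kubernetes.io~empty-dir/catalog-content/catalog")

    def objects(text: str):
        # Decode a stream of concatenated JSON documents one object at a time.
        dec, i = json.JSONDecoder(), 0
        while i < len(text):
            while i < len(text) and text[i].isspace():
                i += 1  # raw_decode does not tolerate leading whitespace
            if i >= len(text):
                break
            obj, i = dec.raw_decode(text, i)
            yield obj

    for f in sorted(ROOT.glob("*/catalog.json")):
        for obj in objects(f.read_text()):
            if obj.get("schema") == "olm.package":
                print(f"{f.parent.name}: {obj.get('name')}")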
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 
18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc 
restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:53 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:54 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:54 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:54 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:33:54 crc restorecon[4686]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 28 18:33:54 crc restorecon[4686]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 28 18:33:54 crc restorecon[4686]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 28 18:33:54 crc restorecon[4686]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 28 18:33:54 crc restorecon[4686]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 28 18:33:54 crc restorecon[4686]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c133,c223 Jan 28 18:33:54 crc restorecon[4686]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 28 18:33:54 crc restorecon[4686]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 28 18:33:54 crc restorecon[4686]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 28 18:33:54 crc restorecon[4686]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 28 18:33:54 crc restorecon[4686]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 28 18:33:54 crc restorecon[4686]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 28 18:33:54 crc restorecon[4686]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 28 18:33:54 crc restorecon[4686]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 28 18:33:54 crc restorecon[4686]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 28 18:33:54 crc restorecon[4686]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 28 18:33:54 crc restorecon[4686]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 28 18:33:54 crc restorecon[4686]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 28 18:33:54 crc restorecon[4686]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 28 18:33:54 crc restorecon[4686]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 28 18:33:54 crc restorecon[4686]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 28 18:33:54 crc restorecon[4686]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 28 18:33:54 crc restorecon[4686]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 28 18:33:54 crc restorecon[4686]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 28 18:33:54 crc restorecon[4686]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 28 18:33:54 crc restorecon[4686]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 28 18:33:54 crc restorecon[4686]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 28 18:33:54 crc restorecon[4686]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 28 18:33:54 crc restorecon[4686]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 28 18:33:54 crc restorecon[4686]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 28 18:33:54 crc restorecon[4686]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 28 18:33:54 crc restorecon[4686]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 28 18:33:54 crc restorecon[4686]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 28 18:33:54 crc restorecon[4686]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c682,c947 Jan 28 18:33:54 crc restorecon[4686]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588 Jan 28 18:33:54 crc restorecon[4686]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 28 18:33:54 crc restorecon[4686]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 28 18:33:54 crc restorecon[4686]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 28 18:33:54 crc restorecon[4686]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 28 18:33:54 crc restorecon[4686]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 28 18:33:54 crc restorecon[4686]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 28 18:33:54 crc restorecon[4686]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 28 18:33:54 crc restorecon[4686]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 28 18:33:54 crc restorecon[4686]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 28 18:33:54 crc restorecon[4686]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 28 18:33:54 crc restorecon[4686]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 28 18:33:54 crc restorecon[4686]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0 Jan 28 18:33:55 crc kubenswrapper[4721]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 28 18:33:55 crc kubenswrapper[4721]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Jan 28 18:33:55 crc kubenswrapper[4721]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 28 18:33:55 crc kubenswrapper[4721]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jan 28 18:33:55 crc kubenswrapper[4721]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 28 18:33:55 crc kubenswrapper[4721]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.127118 4721 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 28 18:33:55 crc kubenswrapper[4721]: W0128 18:33:55.130542 4721 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 28 18:33:55 crc kubenswrapper[4721]: W0128 18:33:55.130559 4721 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 28 18:33:55 crc kubenswrapper[4721]: W0128 18:33:55.130563 4721 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 28 18:33:55 crc kubenswrapper[4721]: W0128 18:33:55.130567 4721 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 28 18:33:55 crc kubenswrapper[4721]: W0128 18:33:55.130571 4721 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 28 18:33:55 crc kubenswrapper[4721]: W0128 18:33:55.130575 4721 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 28 18:33:55 crc kubenswrapper[4721]: W0128 18:33:55.130579 4721 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 28 18:33:55 crc kubenswrapper[4721]: W0128 18:33:55.130583 4721 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 28 18:33:55 crc kubenswrapper[4721]: W0128 18:33:55.130587 4721 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 28 18:33:55 crc kubenswrapper[4721]: W0128 18:33:55.130591 4721 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 28 18:33:55 crc kubenswrapper[4721]: W0128 18:33:55.130595 4721 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 28 18:33:55 crc kubenswrapper[4721]: W0128 18:33:55.130600 4721 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 28 18:33:55 crc kubenswrapper[4721]: W0128 18:33:55.130604 4721 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 28 18:33:55 crc kubenswrapper[4721]: W0128 18:33:55.130607 4721 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 28 18:33:55 crc kubenswrapper[4721]: W0128 18:33:55.130611 4721 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 28 18:33:55 crc kubenswrapper[4721]: W0128 18:33:55.130615 4721 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 28 18:33:55 crc kubenswrapper[4721]: W0128 18:33:55.130619 4721 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 28 18:33:55 crc kubenswrapper[4721]: W0128 18:33:55.130624 4721 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Jan 28 18:33:55 crc kubenswrapper[4721]: W0128 18:33:55.130629 4721 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 28 18:33:55 crc kubenswrapper[4721]: W0128 18:33:55.130633 4721 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 28 18:33:55 crc kubenswrapper[4721]: W0128 18:33:55.130636 4721 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 28 18:33:55 crc kubenswrapper[4721]: W0128 18:33:55.130648 4721 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Jan 28 18:33:55 crc kubenswrapper[4721]: W0128 18:33:55.130653 4721 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 28 18:33:55 crc kubenswrapper[4721]: W0128 18:33:55.130657 4721 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 28 18:33:55 crc kubenswrapper[4721]: W0128 18:33:55.130661 4721 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 28 18:33:55 crc kubenswrapper[4721]: W0128 18:33:55.130664 4721 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 28 18:33:55 crc kubenswrapper[4721]: W0128 18:33:55.130667 4721 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 28 18:33:55 crc kubenswrapper[4721]: W0128 18:33:55.130671 4721 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 28 18:33:55 crc kubenswrapper[4721]: W0128 18:33:55.130675 4721 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 28 18:33:55 crc kubenswrapper[4721]: W0128 18:33:55.130679 4721 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 28 18:33:55 crc kubenswrapper[4721]: W0128 18:33:55.130682 4721 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 28 18:33:55 crc kubenswrapper[4721]: W0128 18:33:55.130687 4721 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Jan 28 18:33:55 crc kubenswrapper[4721]: W0128 18:33:55.130691 4721 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 28 18:33:55 crc kubenswrapper[4721]: W0128 18:33:55.130695 4721 feature_gate.go:330] unrecognized feature gate: Example Jan 28 18:33:55 crc kubenswrapper[4721]: W0128 18:33:55.130698 4721 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 28 18:33:55 crc kubenswrapper[4721]: W0128 18:33:55.130702 4721 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 28 18:33:55 crc kubenswrapper[4721]: W0128 18:33:55.130705 4721 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 28 18:33:55 crc kubenswrapper[4721]: W0128 18:33:55.130709 4721 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 28 18:33:55 crc kubenswrapper[4721]: W0128 18:33:55.130712 4721 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 28 18:33:55 crc kubenswrapper[4721]: W0128 18:33:55.130716 4721 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 28 18:33:55 crc kubenswrapper[4721]: W0128 18:33:55.130720 4721 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 28 18:33:55 crc kubenswrapper[4721]: W0128 18:33:55.130723 4721 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 28 18:33:55 crc kubenswrapper[4721]: W0128 18:33:55.130727 4721 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 28 18:33:55 crc kubenswrapper[4721]: W0128 18:33:55.130731 4721 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 28 18:33:55 crc kubenswrapper[4721]: W0128 18:33:55.130734 4721 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 28 18:33:55 crc kubenswrapper[4721]: W0128 18:33:55.130738 4721 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 28 18:33:55 crc kubenswrapper[4721]: W0128 18:33:55.130741 4721 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 28 18:33:55 crc kubenswrapper[4721]: W0128 18:33:55.130745 4721 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 28 18:33:55 crc kubenswrapper[4721]: W0128 18:33:55.130749 4721 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 28 18:33:55 crc kubenswrapper[4721]: W0128 18:33:55.130752 4721 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 28 18:33:55 crc kubenswrapper[4721]: W0128 18:33:55.130756 4721 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 28 18:33:55 crc kubenswrapper[4721]: W0128 18:33:55.130759 4721 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 28 18:33:55 crc kubenswrapper[4721]: W0128 18:33:55.130763 4721 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 28 18:33:55 crc kubenswrapper[4721]: W0128 18:33:55.130802 4721 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 28 18:33:55 crc kubenswrapper[4721]: W0128 18:33:55.130807 4721 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 28 18:33:55 crc kubenswrapper[4721]: W0128 18:33:55.130811 4721 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 28 18:33:55 crc kubenswrapper[4721]: W0128 18:33:55.130815 4721 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 28 18:33:55 crc kubenswrapper[4721]: W0128 18:33:55.130819 4721 
feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 28 18:33:55 crc kubenswrapper[4721]: W0128 18:33:55.130823 4721 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 28 18:33:55 crc kubenswrapper[4721]: W0128 18:33:55.130827 4721 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 28 18:33:55 crc kubenswrapper[4721]: W0128 18:33:55.130830 4721 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 28 18:33:55 crc kubenswrapper[4721]: W0128 18:33:55.130834 4721 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 28 18:33:55 crc kubenswrapper[4721]: W0128 18:33:55.130838 4721 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 28 18:33:55 crc kubenswrapper[4721]: W0128 18:33:55.130841 4721 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 28 18:33:55 crc kubenswrapper[4721]: W0128 18:33:55.130845 4721 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 28 18:33:55 crc kubenswrapper[4721]: W0128 18:33:55.130848 4721 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 28 18:33:55 crc kubenswrapper[4721]: W0128 18:33:55.130853 4721 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 28 18:33:55 crc kubenswrapper[4721]: W0128 18:33:55.130857 4721 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 28 18:33:55 crc kubenswrapper[4721]: W0128 18:33:55.130860 4721 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 28 18:33:55 crc kubenswrapper[4721]: W0128 18:33:55.130864 4721 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 28 18:33:55 crc kubenswrapper[4721]: W0128 18:33:55.130869 4721 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
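Every "unrecognized feature gate" line above comes from the same warning path (feature_gate.go:330): the gate names arrive from OpenShift's cluster-level feature-gate list, the gate registry embedded in this kubelet does not define them, and the kubelet logs a warning and continues rather than failing, which is why startup proceeds below. Since the same set is replayed several times during startup, deduplicating is the easiest way to see what is actually being ignored; a minimal sketch over journal text on stdin (an illustrative helper, not part of the kubelet):

#!/usr/bin/env python3
# Deduplicate and count 'unrecognized feature gate' warnings from journal
# text on stdin (illustrative helper, not part of the kubelet).
import re
import sys
from collections import Counter

counts = Counter(re.findall(r"unrecognized feature gate: (\w+)", sys.stdin.read()))
for gate, n in counts.most_common():
    print(f"{n:3d}  {gate}")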
Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.133826 4721 flags.go:64] FLAG: --address="0.0.0.0" Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.133848 4721 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]" Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.133856 4721 flags.go:64] FLAG: --anonymous-auth="true" Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.133862 4721 flags.go:64] FLAG: --application-metrics-count-limit="100" Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.133869 4721 flags.go:64] FLAG: --authentication-token-webhook="false" Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.133874 4721 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s" Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.133881 4721 flags.go:64] FLAG: --authorization-mode="AlwaysAllow" Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.133886 4721 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s" Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.133891 4721 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s" Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.133895 4721 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id" Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.133900 4721 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig" Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.133904 4721 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki" Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.133909 4721 flags.go:64] FLAG: --cgroup-driver="cgroupfs" Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.133913 4721 flags.go:64] FLAG: --cgroup-root="" Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.133917 4721 flags.go:64] FLAG: --cgroups-per-qos="true" Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.133921 4721 flags.go:64] FLAG: --client-ca-file="" Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.133926 4721 flags.go:64] FLAG: --cloud-config="" Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.133930 4721 flags.go:64] FLAG: --cloud-provider="" Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.133934 4721 flags.go:64] FLAG: --cluster-dns="[]" Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.133941 4721 flags.go:64] FLAG: --cluster-domain="" Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.133945 4721 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf" Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.133950 4721 flags.go:64] FLAG: --config-dir="" Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.133955 4721 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json" Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.133962 4721 flags.go:64] FLAG: --container-log-max-files="5" Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.133970 4721 flags.go:64] FLAG: --container-log-max-size="10Mi" Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.133976 4721 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock" Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.133982 4721 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock" Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.133987 4721 flags.go:64] FLAG: --containerd-namespace="k8s.io" Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.133992 4721 flags.go:64] FLAG: --contention-profiling="false" Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 
18:33:55.133997 4721 flags.go:64] FLAG: --cpu-cfs-quota="true" Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.134001 4721 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms" Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.134005 4721 flags.go:64] FLAG: --cpu-manager-policy="none" Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.134010 4721 flags.go:64] FLAG: --cpu-manager-policy-options="" Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.134015 4721 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s" Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.134019 4721 flags.go:64] FLAG: --enable-controller-attach-detach="true" Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.134025 4721 flags.go:64] FLAG: --enable-debugging-handlers="true" Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.134029 4721 flags.go:64] FLAG: --enable-load-reader="false" Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.134033 4721 flags.go:64] FLAG: --enable-server="true" Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.134038 4721 flags.go:64] FLAG: --enforce-node-allocatable="[pods]" Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.134043 4721 flags.go:64] FLAG: --event-burst="100" Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.134048 4721 flags.go:64] FLAG: --event-qps="50" Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.134052 4721 flags.go:64] FLAG: --event-storage-age-limit="default=0" Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.134056 4721 flags.go:64] FLAG: --event-storage-event-limit="default=0" Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.134060 4721 flags.go:64] FLAG: --eviction-hard="" Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.134067 4721 flags.go:64] FLAG: --eviction-max-pod-grace-period="0" Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.134071 4721 flags.go:64] FLAG: --eviction-minimum-reclaim="" Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.134075 4721 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s" Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.134080 4721 flags.go:64] FLAG: --eviction-soft="" Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.134084 4721 flags.go:64] FLAG: --eviction-soft-grace-period="" Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.134088 4721 flags.go:64] FLAG: --exit-on-lock-contention="false" Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.134092 4721 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false" Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.134096 4721 flags.go:64] FLAG: --experimental-mounter-path="" Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.134100 4721 flags.go:64] FLAG: --fail-cgroupv1="false" Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.134104 4721 flags.go:64] FLAG: --fail-swap-on="true" Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.134108 4721 flags.go:64] FLAG: --feature-gates="" Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.134114 4721 flags.go:64] FLAG: --file-check-frequency="20s" Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.134118 4721 flags.go:64] FLAG: --global-housekeeping-interval="1m0s" Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.134122 4721 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge" Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.134131 4721 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1" Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 
18:33:55.134135 4721 flags.go:64] FLAG: --healthz-port="10248" Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.134140 4721 flags.go:64] FLAG: --help="false" Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.134145 4721 flags.go:64] FLAG: --hostname-override="" Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.134149 4721 flags.go:64] FLAG: --housekeeping-interval="10s" Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.134153 4721 flags.go:64] FLAG: --http-check-frequency="20s" Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.134157 4721 flags.go:64] FLAG: --image-credential-provider-bin-dir="" Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.134161 4721 flags.go:64] FLAG: --image-credential-provider-config="" Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.134180 4721 flags.go:64] FLAG: --image-gc-high-threshold="85" Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.134185 4721 flags.go:64] FLAG: --image-gc-low-threshold="80" Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.134189 4721 flags.go:64] FLAG: --image-service-endpoint="" Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.134192 4721 flags.go:64] FLAG: --kernel-memcg-notification="false" Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.134196 4721 flags.go:64] FLAG: --kube-api-burst="100" Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.134201 4721 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf" Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.134205 4721 flags.go:64] FLAG: --kube-api-qps="50" Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.134209 4721 flags.go:64] FLAG: --kube-reserved="" Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.134213 4721 flags.go:64] FLAG: --kube-reserved-cgroup="" Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.134217 4721 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig" Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.134222 4721 flags.go:64] FLAG: --kubelet-cgroups="" Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.134226 4721 flags.go:64] FLAG: --local-storage-capacity-isolation="true" Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.134230 4721 flags.go:64] FLAG: --lock-file="" Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.134234 4721 flags.go:64] FLAG: --log-cadvisor-usage="false" Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.134238 4721 flags.go:64] FLAG: --log-flush-frequency="5s" Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.134242 4721 flags.go:64] FLAG: --log-json-info-buffer-size="0" Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.134249 4721 flags.go:64] FLAG: --log-json-split-stream="false" Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.134253 4721 flags.go:64] FLAG: --log-text-info-buffer-size="0" Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.134257 4721 flags.go:64] FLAG: --log-text-split-stream="false" Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.134261 4721 flags.go:64] FLAG: --logging-format="text" Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.134265 4721 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id" Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.134269 4721 flags.go:64] FLAG: --make-iptables-util-chains="true" Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.134273 4721 flags.go:64] FLAG: --manifest-url="" Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.134278 4721 
flags.go:64] FLAG: --manifest-url-header="" Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.134283 4721 flags.go:64] FLAG: --max-housekeeping-interval="15s" Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.134287 4721 flags.go:64] FLAG: --max-open-files="1000000" Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.134293 4721 flags.go:64] FLAG: --max-pods="110" Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.134297 4721 flags.go:64] FLAG: --maximum-dead-containers="-1" Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.134302 4721 flags.go:64] FLAG: --maximum-dead-containers-per-container="1" Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.134307 4721 flags.go:64] FLAG: --memory-manager-policy="None" Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.134312 4721 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s" Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.134317 4721 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s" Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.134322 4721 flags.go:64] FLAG: --node-ip="192.168.126.11" Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.134326 4721 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos" Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.134339 4721 flags.go:64] FLAG: --node-status-max-images="50" Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.134344 4721 flags.go:64] FLAG: --node-status-update-frequency="10s" Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.134349 4721 flags.go:64] FLAG: --oom-score-adj="-999" Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.134354 4721 flags.go:64] FLAG: --pod-cidr="" Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.134359 4721 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d" Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.134368 4721 flags.go:64] FLAG: --pod-manifest-path="" Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.134373 4721 flags.go:64] FLAG: --pod-max-pids="-1" Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.134378 4721 flags.go:64] FLAG: --pods-per-core="0" Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.134384 4721 flags.go:64] FLAG: --port="10250" Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.134390 4721 flags.go:64] FLAG: --protect-kernel-defaults="false" Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.134394 4721 flags.go:64] FLAG: --provider-id="" Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.134399 4721 flags.go:64] FLAG: --qos-reserved="" Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.134404 4721 flags.go:64] FLAG: --read-only-port="10255" Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.134409 4721 flags.go:64] FLAG: --register-node="true" Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.134414 4721 flags.go:64] FLAG: --register-schedulable="true" Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.134418 4721 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule" Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.134427 4721 flags.go:64] FLAG: --registry-burst="10" Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.134432 4721 flags.go:64] FLAG: --registry-qps="5" Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.134436 4721 flags.go:64] 
FLAG: --reserved-cpus="" Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.134440 4721 flags.go:64] FLAG: --reserved-memory="" Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.134446 4721 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf" Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.134450 4721 flags.go:64] FLAG: --root-dir="/var/lib/kubelet" Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.134454 4721 flags.go:64] FLAG: --rotate-certificates="false" Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.134458 4721 flags.go:64] FLAG: --rotate-server-certificates="false" Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.134462 4721 flags.go:64] FLAG: --runonce="false" Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.134468 4721 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service" Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.134472 4721 flags.go:64] FLAG: --runtime-request-timeout="2m0s" Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.134476 4721 flags.go:64] FLAG: --seccomp-default="false" Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.134481 4721 flags.go:64] FLAG: --serialize-image-pulls="true" Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.134485 4721 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s" Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.134490 4721 flags.go:64] FLAG: --storage-driver-db="cadvisor" Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.134495 4721 flags.go:64] FLAG: --storage-driver-host="localhost:8086" Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.134499 4721 flags.go:64] FLAG: --storage-driver-password="root" Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.134504 4721 flags.go:64] FLAG: --storage-driver-secure="false" Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.134508 4721 flags.go:64] FLAG: --storage-driver-table="stats" Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.134513 4721 flags.go:64] FLAG: --storage-driver-user="root" Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.134517 4721 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s" Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.134522 4721 flags.go:64] FLAG: --sync-frequency="1m0s" Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.134526 4721 flags.go:64] FLAG: --system-cgroups="" Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.134531 4721 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi" Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.134537 4721 flags.go:64] FLAG: --system-reserved-cgroup="" Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.134542 4721 flags.go:64] FLAG: --tls-cert-file="" Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.134546 4721 flags.go:64] FLAG: --tls-cipher-suites="[]" Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.134551 4721 flags.go:64] FLAG: --tls-min-version="" Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.134556 4721 flags.go:64] FLAG: --tls-private-key-file="" Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.134560 4721 flags.go:64] FLAG: --topology-manager-policy="none" Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.134564 4721 flags.go:64] FLAG: --topology-manager-policy-options="" Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.134568 4721 flags.go:64] FLAG: --topology-manager-scope="container" Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.134572 4721 flags.go:64] 
FLAG: --v="2" Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.134578 4721 flags.go:64] FLAG: --version="false" Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.134584 4721 flags.go:64] FLAG: --vmodule="" Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.134589 4721 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec" Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.134593 4721 flags.go:64] FLAG: --volume-stats-agg-period="1m0s" Jan 28 18:33:55 crc kubenswrapper[4721]: W0128 18:33:55.134686 4721 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 28 18:33:55 crc kubenswrapper[4721]: W0128 18:33:55.134691 4721 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Jan 28 18:33:55 crc kubenswrapper[4721]: W0128 18:33:55.134697 4721 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 28 18:33:55 crc kubenswrapper[4721]: W0128 18:33:55.134701 4721 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 28 18:33:55 crc kubenswrapper[4721]: W0128 18:33:55.134705 4721 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 28 18:33:55 crc kubenswrapper[4721]: W0128 18:33:55.134709 4721 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 28 18:33:55 crc kubenswrapper[4721]: W0128 18:33:55.134713 4721 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 28 18:33:55 crc kubenswrapper[4721]: W0128 18:33:55.134717 4721 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 28 18:33:55 crc kubenswrapper[4721]: W0128 18:33:55.134721 4721 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 28 18:33:55 crc kubenswrapper[4721]: W0128 18:33:55.134725 4721 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 28 18:33:55 crc kubenswrapper[4721]: W0128 18:33:55.134729 4721 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 28 18:33:55 crc kubenswrapper[4721]: W0128 18:33:55.134733 4721 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 28 18:33:55 crc kubenswrapper[4721]: W0128 18:33:55.134736 4721 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 28 18:33:55 crc kubenswrapper[4721]: W0128 18:33:55.134741 4721 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 28 18:33:55 crc kubenswrapper[4721]: W0128 18:33:55.134744 4721 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 28 18:33:55 crc kubenswrapper[4721]: W0128 18:33:55.134748 4721 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 28 18:33:55 crc kubenswrapper[4721]: W0128 18:33:55.134751 4721 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 28 18:33:55 crc kubenswrapper[4721]: W0128 18:33:55.134755 4721 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 28 18:33:55 crc kubenswrapper[4721]: W0128 18:33:55.134758 4721 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 28 18:33:55 crc kubenswrapper[4721]: W0128 18:33:55.134763 4721 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 28 18:33:55 crc kubenswrapper[4721]: W0128 18:33:55.134792 4721 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 28 18:33:55 crc kubenswrapper[4721]: W0128 18:33:55.134796 4721 feature_gate.go:330] unrecognized 
feature gate: NewOLM Jan 28 18:33:55 crc kubenswrapper[4721]: W0128 18:33:55.134800 4721 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 28 18:33:55 crc kubenswrapper[4721]: W0128 18:33:55.134803 4721 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 28 18:33:55 crc kubenswrapper[4721]: W0128 18:33:55.134807 4721 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 28 18:33:55 crc kubenswrapper[4721]: W0128 18:33:55.134811 4721 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 28 18:33:55 crc kubenswrapper[4721]: W0128 18:33:55.134814 4721 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 28 18:33:55 crc kubenswrapper[4721]: W0128 18:33:55.134818 4721 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 28 18:33:55 crc kubenswrapper[4721]: W0128 18:33:55.134822 4721 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 28 18:33:55 crc kubenswrapper[4721]: W0128 18:33:55.134825 4721 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 28 18:33:55 crc kubenswrapper[4721]: W0128 18:33:55.134829 4721 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 28 18:33:55 crc kubenswrapper[4721]: W0128 18:33:55.134833 4721 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 28 18:33:55 crc kubenswrapper[4721]: W0128 18:33:55.134836 4721 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 28 18:33:55 crc kubenswrapper[4721]: W0128 18:33:55.134840 4721 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 28 18:33:55 crc kubenswrapper[4721]: W0128 18:33:55.134843 4721 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 28 18:33:55 crc kubenswrapper[4721]: W0128 18:33:55.134847 4721 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 28 18:33:55 crc kubenswrapper[4721]: W0128 18:33:55.134851 4721 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 28 18:33:55 crc kubenswrapper[4721]: W0128 18:33:55.134855 4721 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 28 18:33:55 crc kubenswrapper[4721]: W0128 18:33:55.134858 4721 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 28 18:33:55 crc kubenswrapper[4721]: W0128 18:33:55.134862 4721 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 28 18:33:55 crc kubenswrapper[4721]: W0128 18:33:55.134867 4721 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Jan 28 18:33:55 crc kubenswrapper[4721]: W0128 18:33:55.134871 4721 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 28 18:33:55 crc kubenswrapper[4721]: W0128 18:33:55.134875 4721 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 28 18:33:55 crc kubenswrapper[4721]: W0128 18:33:55.134879 4721 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 28 18:33:55 crc kubenswrapper[4721]: W0128 18:33:55.134883 4721 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 28 18:33:55 crc kubenswrapper[4721]: W0128 18:33:55.134887 4721 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
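The flags.go:64 block earlier is a one-entry-per-flag dump of the effective command line, each value double-quoted; it is the authoritative record of what this kubelet was started with (for example --node-ip="192.168.126.11" and --config="/etc/kubernetes/kubelet.conf"). Parsing it back into a mapping makes it easy to diff startup flags between nodes; a minimal sketch, again reading journal text from stdin, with illustrative names:

#!/usr/bin/env python3
# Collect the kubelet's FLAG: --name="value" journal entries into a dict
# (illustrative helper; reads journal text from stdin).
import re
import sys

flags = dict(re.findall(r'FLAG: (--[\w-]+)="([^"]*)"', sys.stdin.read()))
print(f"{len(flags)} flags parsed")
print("--node-ip       =", flags.get("--node-ip"))        # "192.168.126.11" in this log
print("--cgroup-driver =", flags.get("--cgroup-driver"))  # "cgroupfs" on the command line

Note one wrinkle visible in this same boot: the dump records --cgroup-driver="cgroupfs", but the runtime handshake further down logs cgroupDriver="systemd" received from the CRI runtime, and that CRI-supplied value is the one the kubelet uses.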
Jan 28 18:33:55 crc kubenswrapper[4721]: W0128 18:33:55.134892 4721 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 28 18:33:55 crc kubenswrapper[4721]: W0128 18:33:55.134896 4721 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 28 18:33:55 crc kubenswrapper[4721]: W0128 18:33:55.134900 4721 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 28 18:33:55 crc kubenswrapper[4721]: W0128 18:33:55.134905 4721 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 28 18:33:55 crc kubenswrapper[4721]: W0128 18:33:55.134909 4721 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 28 18:33:55 crc kubenswrapper[4721]: W0128 18:33:55.134914 4721 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 28 18:33:55 crc kubenswrapper[4721]: W0128 18:33:55.134917 4721 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 28 18:33:55 crc kubenswrapper[4721]: W0128 18:33:55.134921 4721 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 28 18:33:55 crc kubenswrapper[4721]: W0128 18:33:55.134925 4721 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 28 18:33:55 crc kubenswrapper[4721]: W0128 18:33:55.134929 4721 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 28 18:33:55 crc kubenswrapper[4721]: W0128 18:33:55.134933 4721 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 28 18:33:55 crc kubenswrapper[4721]: W0128 18:33:55.134936 4721 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 28 18:33:55 crc kubenswrapper[4721]: W0128 18:33:55.134940 4721 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 28 18:33:55 crc kubenswrapper[4721]: W0128 18:33:55.134944 4721 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 28 18:33:55 crc kubenswrapper[4721]: W0128 18:33:55.134948 4721 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 28 18:33:55 crc kubenswrapper[4721]: W0128 18:33:55.134953 4721 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 28 18:33:55 crc kubenswrapper[4721]: W0128 18:33:55.134958 4721 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Jan 28 18:33:55 crc kubenswrapper[4721]: W0128 18:33:55.134964 4721 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 28 18:33:55 crc kubenswrapper[4721]: W0128 18:33:55.134969 4721 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 28 18:33:55 crc kubenswrapper[4721]: W0128 18:33:55.134973 4721 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 28 18:33:55 crc kubenswrapper[4721]: W0128 18:33:55.134978 4721 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 28 18:33:55 crc kubenswrapper[4721]: W0128 18:33:55.134982 4721 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 28 18:33:55 crc kubenswrapper[4721]: W0128 18:33:55.134986 4721 feature_gate.go:330] unrecognized feature gate: Example Jan 28 18:33:55 crc kubenswrapper[4721]: W0128 18:33:55.134990 4721 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 28 18:33:55 crc kubenswrapper[4721]: W0128 18:33:55.134993 4721 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.134999 4721 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.146486 4721 server.go:491] "Kubelet version" kubeletVersion="v1.31.5" Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.146516 4721 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 28 18:33:55 crc kubenswrapper[4721]: W0128 18:33:55.146578 4721 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 28 18:33:55 crc kubenswrapper[4721]: W0128 18:33:55.146586 4721 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 28 18:33:55 crc kubenswrapper[4721]: W0128 18:33:55.146590 4721 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 28 18:33:55 crc kubenswrapper[4721]: W0128 18:33:55.146594 4721 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 28 18:33:55 crc kubenswrapper[4721]: W0128 18:33:55.146598 4721 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 28 18:33:55 crc kubenswrapper[4721]: W0128 18:33:55.146601 4721 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 28 18:33:55 crc kubenswrapper[4721]: W0128 18:33:55.146605 4721 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 28 18:33:55 crc kubenswrapper[4721]: W0128 18:33:55.146608 4721 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 28 18:33:55 crc kubenswrapper[4721]: W0128 18:33:55.146612 4721 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 28 18:33:55 crc kubenswrapper[4721]: W0128 18:33:55.146615 4721 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 28 18:33:55 crc kubenswrapper[4721]: W0128 18:33:55.146619 4721 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 28 18:33:55 crc kubenswrapper[4721]: W0128 18:33:55.146623 4721 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig 
Jan 28 18:33:55 crc kubenswrapper[4721]: W0128 18:33:55.146626 4721 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 28 18:33:55 crc kubenswrapper[4721]: W0128 18:33:55.146629 4721 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 28 18:33:55 crc kubenswrapper[4721]: W0128 18:33:55.146633 4721 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 28 18:33:55 crc kubenswrapper[4721]: W0128 18:33:55.146637 4721 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 28 18:33:55 crc kubenswrapper[4721]: W0128 18:33:55.146640 4721 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 28 18:33:55 crc kubenswrapper[4721]: W0128 18:33:55.146644 4721 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 28 18:33:55 crc kubenswrapper[4721]: W0128 18:33:55.146648 4721 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 28 18:33:55 crc kubenswrapper[4721]: W0128 18:33:55.146652 4721 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 28 18:33:55 crc kubenswrapper[4721]: W0128 18:33:55.146657 4721 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Jan 28 18:33:55 crc kubenswrapper[4721]: W0128 18:33:55.146662 4721 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 28 18:33:55 crc kubenswrapper[4721]: W0128 18:33:55.146665 4721 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 28 18:33:55 crc kubenswrapper[4721]: W0128 18:33:55.146669 4721 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 28 18:33:55 crc kubenswrapper[4721]: W0128 18:33:55.146672 4721 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 28 18:33:55 crc kubenswrapper[4721]: W0128 18:33:55.146677 4721 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
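Each pass over the gate list finishes with a feature_gate.go:386 summary that prints the resolved gates as a Go map literal, {map[Name:bool ...]}; the same fifteen-entry map appears three times during this startup, once per parse. To lift one of those summaries into JSON for tooling, a minimal sketch (takes journal text on stdin and converts the first summary it finds; illustrative only):

#!/usr/bin/env python3
# Convert the first 'feature gates: {map[...]}' summary found on stdin into
# JSON (illustrative helper; the Go map literal is space-separated Name:bool pairs).
import json
import re
import sys

m = re.search(r"feature gates: \{map\[([^\]]*)\]\}", sys.stdin.read())
if m:
    gates = {name: value == "true"
             for name, _, value in (pair.partition(":") for pair in m.group(1).split())}
    print(json.dumps(gates, indent=2, sort_keys=True))
else:
    sys.exit("no feature-gate summary found")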
Jan 28 18:33:55 crc kubenswrapper[4721]: W0128 18:33:55.146683 4721 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 28 18:33:55 crc kubenswrapper[4721]: W0128 18:33:55.146687 4721 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 28 18:33:55 crc kubenswrapper[4721]: W0128 18:33:55.146691 4721 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 28 18:33:55 crc kubenswrapper[4721]: W0128 18:33:55.146695 4721 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 28 18:33:55 crc kubenswrapper[4721]: W0128 18:33:55.146699 4721 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 28 18:33:55 crc kubenswrapper[4721]: W0128 18:33:55.146703 4721 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 28 18:33:55 crc kubenswrapper[4721]: W0128 18:33:55.146707 4721 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 28 18:33:55 crc kubenswrapper[4721]: W0128 18:33:55.146711 4721 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 28 18:33:55 crc kubenswrapper[4721]: W0128 18:33:55.146714 4721 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 28 18:33:55 crc kubenswrapper[4721]: W0128 18:33:55.146718 4721 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 28 18:33:55 crc kubenswrapper[4721]: W0128 18:33:55.146722 4721 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 28 18:33:55 crc kubenswrapper[4721]: W0128 18:33:55.146725 4721 feature_gate.go:330] unrecognized feature gate: Example Jan 28 18:33:55 crc kubenswrapper[4721]: W0128 18:33:55.146729 4721 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 28 18:33:55 crc kubenswrapper[4721]: W0128 18:33:55.146732 4721 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 28 18:33:55 crc kubenswrapper[4721]: W0128 18:33:55.146736 4721 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 28 18:33:55 crc kubenswrapper[4721]: W0128 18:33:55.146739 4721 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 28 18:33:55 crc kubenswrapper[4721]: W0128 18:33:55.146743 4721 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 28 18:33:55 crc kubenswrapper[4721]: W0128 18:33:55.146747 4721 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 28 18:33:55 crc kubenswrapper[4721]: W0128 18:33:55.146750 4721 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 28 18:33:55 crc kubenswrapper[4721]: W0128 18:33:55.146753 4721 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 28 18:33:55 crc kubenswrapper[4721]: W0128 18:33:55.146788 4721 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 28 18:33:55 crc kubenswrapper[4721]: W0128 18:33:55.146793 4721 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 28 18:33:55 crc kubenswrapper[4721]: W0128 18:33:55.146797 4721 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 28 18:33:55 crc kubenswrapper[4721]: W0128 18:33:55.146802 4721 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 28 18:33:55 crc kubenswrapper[4721]: W0128 18:33:55.146806 4721 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 28 18:33:55 crc kubenswrapper[4721]: W0128 18:33:55.146810 4721 feature_gate.go:330] unrecognized feature gate: 
VSphereMultiNetworks Jan 28 18:33:55 crc kubenswrapper[4721]: W0128 18:33:55.146815 4721 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 28 18:33:55 crc kubenswrapper[4721]: W0128 18:33:55.146819 4721 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 28 18:33:55 crc kubenswrapper[4721]: W0128 18:33:55.146823 4721 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 28 18:33:55 crc kubenswrapper[4721]: W0128 18:33:55.146828 4721 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 28 18:33:55 crc kubenswrapper[4721]: W0128 18:33:55.146832 4721 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 28 18:33:55 crc kubenswrapper[4721]: W0128 18:33:55.146837 4721 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 28 18:33:55 crc kubenswrapper[4721]: W0128 18:33:55.146841 4721 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 28 18:33:55 crc kubenswrapper[4721]: W0128 18:33:55.146847 4721 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Jan 28 18:33:55 crc kubenswrapper[4721]: W0128 18:33:55.146852 4721 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 28 18:33:55 crc kubenswrapper[4721]: W0128 18:33:55.146858 4721 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 28 18:33:55 crc kubenswrapper[4721]: W0128 18:33:55.146863 4721 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 28 18:33:55 crc kubenswrapper[4721]: W0128 18:33:55.146869 4721 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Jan 28 18:33:55 crc kubenswrapper[4721]: W0128 18:33:55.146874 4721 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 28 18:33:55 crc kubenswrapper[4721]: W0128 18:33:55.146879 4721 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 28 18:33:55 crc kubenswrapper[4721]: W0128 18:33:55.146883 4721 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 28 18:33:55 crc kubenswrapper[4721]: W0128 18:33:55.146887 4721 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 28 18:33:55 crc kubenswrapper[4721]: W0128 18:33:55.146890 4721 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 28 18:33:55 crc kubenswrapper[4721]: W0128 18:33:55.146894 4721 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 28 18:33:55 crc kubenswrapper[4721]: W0128 18:33:55.146898 4721 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.146904 4721 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Jan 28 18:33:55 crc kubenswrapper[4721]: W0128 18:33:55.147015 4721 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 28 18:33:55 crc kubenswrapper[4721]: W0128 18:33:55.147022 4721 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 28 18:33:55 crc kubenswrapper[4721]: W0128 
18:33:55.147026 4721 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 28 18:33:55 crc kubenswrapper[4721]: W0128 18:33:55.147031 4721 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 28 18:33:55 crc kubenswrapper[4721]: W0128 18:33:55.147034 4721 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 28 18:33:55 crc kubenswrapper[4721]: W0128 18:33:55.147038 4721 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 28 18:33:55 crc kubenswrapper[4721]: W0128 18:33:55.147042 4721 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 28 18:33:55 crc kubenswrapper[4721]: W0128 18:33:55.147045 4721 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 28 18:33:55 crc kubenswrapper[4721]: W0128 18:33:55.147049 4721 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 28 18:33:55 crc kubenswrapper[4721]: W0128 18:33:55.147053 4721 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 28 18:33:55 crc kubenswrapper[4721]: W0128 18:33:55.147057 4721 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 28 18:33:55 crc kubenswrapper[4721]: W0128 18:33:55.147060 4721 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 28 18:33:55 crc kubenswrapper[4721]: W0128 18:33:55.147065 4721 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Jan 28 18:33:55 crc kubenswrapper[4721]: W0128 18:33:55.147069 4721 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 28 18:33:55 crc kubenswrapper[4721]: W0128 18:33:55.147073 4721 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 28 18:33:55 crc kubenswrapper[4721]: W0128 18:33:55.147078 4721 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 28 18:33:55 crc kubenswrapper[4721]: W0128 18:33:55.147082 4721 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 28 18:33:55 crc kubenswrapper[4721]: W0128 18:33:55.147085 4721 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 28 18:33:55 crc kubenswrapper[4721]: W0128 18:33:55.147089 4721 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 28 18:33:55 crc kubenswrapper[4721]: W0128 18:33:55.147308 4721 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 28 18:33:55 crc kubenswrapper[4721]: W0128 18:33:55.147319 4721 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 28 18:33:55 crc kubenswrapper[4721]: W0128 18:33:55.147324 4721 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 28 18:33:55 crc kubenswrapper[4721]: W0128 18:33:55.147330 4721 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Jan 28 18:33:55 crc kubenswrapper[4721]: W0128 18:33:55.147335 4721 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 28 18:33:55 crc kubenswrapper[4721]: W0128 18:33:55.147340 4721 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 28 18:33:55 crc kubenswrapper[4721]: W0128 18:33:55.147345 4721 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 28 18:33:55 crc kubenswrapper[4721]: W0128 18:33:55.147350 4721 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 28 18:33:55 crc kubenswrapper[4721]: W0128 18:33:55.147354 4721 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 28 18:33:55 crc kubenswrapper[4721]: W0128 18:33:55.147359 4721 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Jan 28 18:33:55 crc kubenswrapper[4721]: W0128 18:33:55.147364 4721 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 28 18:33:55 crc kubenswrapper[4721]: W0128 18:33:55.147369 4721 feature_gate.go:330] unrecognized feature gate: Example Jan 28 18:33:55 crc kubenswrapper[4721]: W0128 18:33:55.147374 4721 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 28 18:33:55 crc kubenswrapper[4721]: W0128 18:33:55.147378 4721 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 28 18:33:55 crc kubenswrapper[4721]: W0128 18:33:55.147384 4721 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Jan 28 18:33:55 crc kubenswrapper[4721]: W0128 18:33:55.147388 4721 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 28 18:33:55 crc kubenswrapper[4721]: W0128 18:33:55.147393 4721 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 28 18:33:55 crc kubenswrapper[4721]: W0128 18:33:55.147397 4721 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 28 18:33:55 crc kubenswrapper[4721]: W0128 18:33:55.147401 4721 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 28 18:33:55 crc kubenswrapper[4721]: W0128 18:33:55.147405 4721 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 28 18:33:55 crc kubenswrapper[4721]: W0128 18:33:55.147409 4721 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 28 18:33:55 crc kubenswrapper[4721]: W0128 18:33:55.147413 4721 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 28 18:33:55 crc kubenswrapper[4721]: W0128 18:33:55.147417 4721 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 28 18:33:55 crc kubenswrapper[4721]: W0128 18:33:55.147422 4721 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 28 18:33:55 crc kubenswrapper[4721]: W0128 18:33:55.147426 4721 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 28 18:33:55 crc kubenswrapper[4721]: W0128 18:33:55.147430 4721 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 28 18:33:55 crc kubenswrapper[4721]: W0128 18:33:55.147435 4721 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 28 18:33:55 crc kubenswrapper[4721]: W0128 18:33:55.147439 4721 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 28 18:33:55 crc kubenswrapper[4721]: W0128 18:33:55.147444 4721 feature_gate.go:330] unrecognized feature gate: 
ClusterMonitoringConfig Jan 28 18:33:55 crc kubenswrapper[4721]: W0128 18:33:55.147448 4721 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 28 18:33:55 crc kubenswrapper[4721]: W0128 18:33:55.147453 4721 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 28 18:33:55 crc kubenswrapper[4721]: W0128 18:33:55.147457 4721 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 28 18:33:55 crc kubenswrapper[4721]: W0128 18:33:55.147461 4721 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 28 18:33:55 crc kubenswrapper[4721]: W0128 18:33:55.147466 4721 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 28 18:33:55 crc kubenswrapper[4721]: W0128 18:33:55.147471 4721 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 28 18:33:55 crc kubenswrapper[4721]: W0128 18:33:55.147475 4721 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 28 18:33:55 crc kubenswrapper[4721]: W0128 18:33:55.147482 4721 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 28 18:33:55 crc kubenswrapper[4721]: W0128 18:33:55.147488 4721 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 28 18:33:55 crc kubenswrapper[4721]: W0128 18:33:55.147493 4721 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 28 18:33:55 crc kubenswrapper[4721]: W0128 18:33:55.147497 4721 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 28 18:33:55 crc kubenswrapper[4721]: W0128 18:33:55.147501 4721 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 28 18:33:55 crc kubenswrapper[4721]: W0128 18:33:55.147506 4721 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 28 18:33:55 crc kubenswrapper[4721]: W0128 18:33:55.147510 4721 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 28 18:33:55 crc kubenswrapper[4721]: W0128 18:33:55.147514 4721 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 28 18:33:55 crc kubenswrapper[4721]: W0128 18:33:55.147519 4721 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 28 18:33:55 crc kubenswrapper[4721]: W0128 18:33:55.147523 4721 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 28 18:33:55 crc kubenswrapper[4721]: W0128 18:33:55.147527 4721 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 28 18:33:55 crc kubenswrapper[4721]: W0128 18:33:55.147532 4721 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 28 18:33:55 crc kubenswrapper[4721]: W0128 18:33:55.147537 4721 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 28 18:33:55 crc kubenswrapper[4721]: W0128 18:33:55.147541 4721 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 28 18:33:55 crc kubenswrapper[4721]: W0128 18:33:55.147546 4721 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 28 18:33:55 crc kubenswrapper[4721]: W0128 18:33:55.147550 4721 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.147558 4721 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false 
Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.147558 4721 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.149748 4721 server.go:940] "Client rotation is on, will bootstrap in background"
Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.173581 4721 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary"
Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.173700 4721 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.181295 4721 server.go:997] "Starting client certificate rotation"
Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.181351 4721 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled
Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.181617 4721 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-02-24 05:52:08 +0000 UTC, rotation deadline is 2025-12-07 20:06:40.950026734 +0000 UTC
Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.181721 4721 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates
Jan 28 18:33:55 crc kubenswrapper[4721]: E0128 18:33:55.283357 4721 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.66:6443: connect: connection refused" logger="UnhandledError"
Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.283456 4721 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.285139 4721 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.302380 4721 log.go:25] "Validated CRI v1 runtime API"
Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.396721 4721 log.go:25] "Validated CRI v1 image API"
Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.398322 4721 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.407065 4721 fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2026-01-28-18-28-55-00:/dev/sr0 7B77-95E7:/dev/vda2 de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3]
Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.407106 4721 fs.go:134] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:42 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:41 fsType:tmpfs blockSize:0}]
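(Annotation, not part of the captured journal.) The rotation entries above say the client certificate is valid until 2026-02-24 but the manager has already scheduled an earlier, jittered rotation deadline and tries to rotate immediately; the CSR POST then fails at the TCP layer because api-int.crc.testing:6443 is refusing connections this early in boot. A Go sketch that inspects the same PEM bundle and repeats just the failing dial; the file path and endpoint are copied from the entries above, everything else is an illustrative assumption:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"net"
	"os"
	"time"
)

func main() {
	// Same bundle the kubelet's certificate_store loads above.
	data, err := os.ReadFile("/var/lib/kubelet/pki/kubelet-client-current.pem")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	for block, rest := pem.Decode(data); block != nil; block, rest = pem.Decode(rest) {
		if block.Type != "CERTIFICATE" {
			continue
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			continue
		}
		fmt.Printf("subject=%s notAfter=%s\n", cert.Subject, cert.NotAfter.Format(time.RFC3339))
	}
	// The CSR request above died before TLS even started; reproduce only the dial.
	conn, err := net.DialTimeout("tcp", "api-int.crc.testing:6443", 3*time.Second)
	if err != nil {
		fmt.Println("apiserver endpoint unreachable:", err) // e.g. connect: connection refused
		return
	}
	conn.Close()
	fmt.Println("apiserver endpoint reachable")
}

During a cold start of a single-node cluster like this, the refusal is expected; the kubelet keeps retrying until the static-pod apiserver comes up.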
Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.430044 4721 manager.go:217] Machine: {Timestamp:2026-01-28 18:33:55.426002253 +0000 UTC m=+1.151307853 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2800000 MemoryCapacity:33654124544 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec SystemUUID:09e691cb-0cac-419d-a3e2-104cada8c62f BootID:7e2b6ea9-0dbd-4f62-9b1e-c7fac2eb6b3f Filesystems:[{Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16827060224 Type:vfs Inodes:4108169 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6730825728 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:16827064320 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 DeviceMinor:42 Capacity:3365408768 Type:vfs Inodes:821633 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:41 Capacity:1073741824 Type:vfs Inodes:4108169 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:69:cd:41 Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:69:cd:41 Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:e8:1b:a5 Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:97:d3:d5 Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:40:3f:bd Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:de:88:81 Speed:-1 Mtu:1496} {Name:eth10 MacAddress:96:e2:e4:b9:be:c6 Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:c2:c9:29:b4:b2:50 Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654124544 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None}
Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.430478 4721 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available.
Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.430926 4721 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:}
Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.431343 4721 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.431558 4721 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
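(Annotation, not part of the captured journal.) The swap_util.go:113 entry dumps /proc/swaps and captured only the header row, which agrees with SwapCapacity:0 in the machine inventory above: no swap device is actually active. A small Go sketch of the same probe, assuming the usual /proc/swaps layout of one header line plus one line per device:

package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	data, err := os.ReadFile("/proc/swaps")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	lines := strings.Split(strings.TrimSpace(string(data)), "\n")
	if len(lines) <= 1 {
		// Only "Filename Type Size Used Priority" survived: no active swap,
		// which is what this node shows.
		fmt.Println("no swap devices active")
		return
	}
	for _, l := range lines[1:] {
		fmt.Println("swap device:", strings.Fields(l)[0])
	}
}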
Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.431596 4721 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.431831 4721 topology_manager.go:138] "Creating topology manager with none policy"
Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.431844 4721 container_manager_linux.go:303] "Creating device plugin manager"
Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.432579 4721 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock"
Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.432617 4721 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock"
Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.432842 4721 state_mem.go:36] "Initialized new in-memory state store"
Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.432939 4721 server.go:1245] "Using root directory" path="/var/lib/kubelet"
Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.440321 4721 kubelet.go:418] "Attempting to sync node with API server"
Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.440360 4721 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests"
Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.440389 4721 file.go:69] "Watching path" path="/etc/kubernetes/manifests"
Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.440402 4721 kubelet.go:324] "Adding apiserver pod source"
Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.440415 4721 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jan 28 18:33:55 crc kubenswrapper[4721]: W0128 18:33:55.444769 4721 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.66:6443: connect: connection refused
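(Annotation, not part of the captured journal.) A quick sanity check on the nodeConfig above: KubeReserved is null, SystemReserved sets memory to 350Mi, and the memory.available hard-eviction threshold is 100Mi. Assuming the standard node-allocatable formula (capacity minus kube-reserved, system-reserved, and hard-eviction thresholds), the 33654124544-byte capacity from the machine inventory leaves roughly 30.9 GiB allocatable; the kubelet computes the authoritative value internally, this is only back-of-the-envelope arithmetic:

package main

import "fmt"

func main() {
	const (
		mi             int64 = 1 << 20
		capacity       int64 = 33654124544 // MemoryCapacity from the machine inventory
		systemReserved       = 350 * mi    // SystemReserved memory in nodeConfig
		hardEviction         = 100 * mi    // memory.available hard-eviction threshold
	)
	allocatable := capacity - systemReserved - hardEviction // KubeReserved is null here
	fmt.Printf("allocatable memory ~= %d bytes (%.2f GiB)\n",
		allocatable, float64(allocatable)/float64(1024*mi))
}

This prints 33182265344 bytes (~30.90 GiB), which is what pod scheduling on this node would have to fit under.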
Jan 28 18:33:55 crc kubenswrapper[4721]: E0128 18:33:55.444850 4721 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.66:6443: connect: connection refused" logger="UnhandledError"
Jan 28 18:33:55 crc kubenswrapper[4721]: W0128 18:33:55.444981 4721 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.66:6443: connect: connection refused
Jan 28 18:33:55 crc kubenswrapper[4721]: E0128 18:33:55.445052 4721 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.66:6443: connect: connection refused" logger="UnhandledError"
Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.446781 4721 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1"
Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.448540 4721 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem".
Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.450162 4721 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.453081 4721 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume"
Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.453104 4721 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir"
Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.453111 4721 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo"
Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.453118 4721 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path"
Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.453130 4721 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs"
Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.453137 4721 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret"
Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.453144 4721 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi"
Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.453155 4721 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api"
Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.453162 4721 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc"
Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.453184 4721 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap"
Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.453203 4721 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected"
Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.453210 4721 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume"
Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.455533 4721 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi"
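(Annotation, not part of the captured journal.) The volume plugin names just loaded reappear as prefixes in the reconstruct.go:130 entries that dominate the rest of this log: every reconstructed volume is tagged volumeName="kubernetes.io/secret/...", kubernetes.io/projected/..., kubernetes.io/configmap/..., or kubernetes.io/empty-dir/.... A small Go sketch, illustrative only, that tallies reconstructed volumes per plugin from a journal dump on stdin:

package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
)

func main() {
	// Captures the plugin namespace out of volumeName="kubernetes.io/<plugin>/<uid>-<name>".
	re := regexp.MustCompile(`volumeName="(kubernetes\.io/[^/"]+)/`)
	counts := map[string]int{}
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // journal lines can be very long
	for sc.Scan() {
		for _, m := range re.FindAllStringSubmatch(sc.Text(), -1) {
			counts[m[1]]++
		}
	}
	for plugin, n := range counts {
		fmt.Printf("%-28s %d\n", plugin, n)
	}
}

On a dump like this one it would show secret and projected volumes far outnumbering the rest, which is typical for an OpenShift control-plane node full of serving certs and service-account tokens.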
Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.456044 4721 server.go:1280] "Started kubelet"
Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.457422 4721 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.66:6443: connect: connection refused
Jan 28 18:33:55 crc systemd[1]: Started Kubernetes Kubelet.
Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.458646 4721 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.458857 4721 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled
Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.458897 4721 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.458655 4721 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.459017 4721 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-10 10:14:24.60551214 +0000 UTC
Jan 28 18:33:55 crc kubenswrapper[4721]: E0128 18:33:55.459407 4721 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.459950 4721 volume_manager.go:287] "The desired_state_of_world populator starts"
Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.459968 4721 volume_manager.go:289] "Starting Kubelet Volume Manager"
Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.460027 4721 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.460119 4721 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory
Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.460141 4721 factory.go:55] Registering systemd factory
Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.460148 4721 factory.go:221] Registration of the systemd container factory successfully
Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.460563 4721 factory.go:153] Registering CRI-O factory
Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.460583 4721 factory.go:221] Registration of the crio container factory successfully
Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.460608 4721 factory.go:103] Registering Raw factory
Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.460610 4721 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.460623 4721 manager.go:1196] Started watching for new ooms in manager
Jan 28 18:33:55 crc kubenswrapper[4721]: W0128 18:33:55.461014 4721 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.66:6443: connect: connection refused
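(Annotation, not part of the captured journal.) The factory registrations above explain themselves once the sockets are checked: the containerd factory fails only because /run/containerd/containerd.sock does not exist on this CRI-O node, and the kubelet carries on with the systemd, CRI-O, and raw factories, so the message is expected noise here. A sketch that stats the relevant socket paths; the paths are common defaults taken from or consistent with the log, not something the log guarantees for other hosts:

package main

import (
	"fmt"
	"os"
)

func main() {
	for _, p := range []string{
		"/run/containerd/containerd.sock",             // absent here -> containerd factory error above
		"/var/run/crio/crio.sock",                     // CRI-O, the runtime actually in use (assumed default path)
		"/var/lib/kubelet/device-plugins/kubelet.sock",
		"/var/lib/kubelet/pod-resources/kubelet.sock",
	} {
		if fi, err := os.Stat(p); err == nil {
			fmt.Printf("%s: present (mode %s)\n", p, fi.Mode())
		} else {
			fmt.Printf("%s: %v\n", p, err)
		}
	}
}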
Jan 28 18:33:55 crc kubenswrapper[4721]: E0128 18:33:55.461064 4721 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.66:6443: connect: connection refused" logger="UnhandledError"
Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.461280 4721 manager.go:319] Starting recovery of all containers
Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.461882 4721 server.go:460] "Adding debug handlers to kubelet server"
Jan 28 18:33:55 crc kubenswrapper[4721]: E0128 18:33:55.462655 4721 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.66:6443: connect: connection refused" interval="200ms"
Jan 28 18:33:55 crc kubenswrapper[4721]: E0128 18:33:55.474352 4721 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.66:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.188ef8c698ec207e default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-28 18:33:55.456008318 +0000 UTC m=+1.181313878,LastTimestamp:2026-01-28 18:33:55.456008318 +0000 UTC m=+1.181313878,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.478227 4721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext=""
Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.478278 4721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext=""
Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.478294 4721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext=""
Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.478309 4721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext=""
Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.478324 4721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext=""
volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext="" Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.478353 4721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext="" Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.478366 4721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext="" Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.478382 4721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext="" Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.478395 4721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext="" Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.478407 4721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext="" Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.478420 4721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext="" Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.478433 4721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" seLinuxMountContext="" Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.478451 4721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext="" Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.478464 4721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext="" Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.478479 4721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext="" Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.478491 4721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" 
volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext="" Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.478503 4721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext="" Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.478513 4721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext="" Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.478527 4721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext="" Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.478540 4721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext="" Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.478574 4721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext="" Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.478587 4721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" seLinuxMountContext="" Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.478600 4721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext="" Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.478614 4721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext="" Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.478628 4721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext="" Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.478645 4721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext="" Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.478661 4721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" 
volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext="" Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.478676 4721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext="" Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.478691 4721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext="" Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.478705 4721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext="" Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.478758 4721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext="" Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.478777 4721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext="" Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.478793 4721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext="" Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.478832 4721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext="" Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.478850 4721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext="" Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.478866 4721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext="" Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.478882 4721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext="" Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.478898 4721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" 
volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext="" Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.478915 4721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext="" Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.478932 4721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext="" Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.478947 4721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext="" Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.478963 4721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext="" Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.478979 4721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext="" Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.478994 4721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext="" Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.479010 4721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext="" Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.479025 4721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext="" Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.479041 4721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext="" Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.479056 4721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" seLinuxMountContext="" Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.479069 4721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext="" Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.479084 4721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext="" Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.479098 4721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext="" Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.479115 4721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext="" Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.479128 4721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext="" Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.479141 4721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext="" Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.479152 4721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext="" Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.479162 4721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext="" Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.479194 4721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext="" Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.479207 4721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext="" Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.479224 4721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext="" Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.479246 4721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" 
volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext="" Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.479258 4721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext="" Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.479269 4721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext="" Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.479281 4721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext="" Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.479294 4721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext="" Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.479306 4721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext="" Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.479317 4721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext="" Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.479328 4721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext="" Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.479344 4721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext="" Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.479355 4721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext="" Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.479365 4721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext="" Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.479377 4721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" 
volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext="" Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.479389 4721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext="" Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.479401 4721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext="" Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.479411 4721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext="" Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.479421 4721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext="" Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.479433 4721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" seLinuxMountContext="" Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.479443 4721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext="" Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.479478 4721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext="" Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.479493 4721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" seLinuxMountContext="" Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.479504 4721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext="" Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.479514 4721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext="" Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.480098 4721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" 
volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext="" Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.480118 4721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext="" Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.480130 4721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" seLinuxMountContext="" Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.480141 4721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext="" Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.480152 4721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext="" Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.480162 4721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext="" Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.480187 4721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext="" Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.480203 4721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext="" Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.480213 4721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext="" Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.480223 4721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext="" Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.480233 4721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext="" Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.480242 4721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" 
volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext="" Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.480251 4721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" seLinuxMountContext="" Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.480262 4721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext="" Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.480271 4721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext="" Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.480279 4721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext="" Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.480289 4721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext="" Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.480299 4721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext="" Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.480312 4721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext="" Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.480322 4721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext="" Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.480332 4721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext="" Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.480342 4721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext="" Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.480356 4721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" 
volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext="" Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.480367 4721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext="" Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.480377 4721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext="" Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.480388 4721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext="" Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.480398 4721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext="" Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.480409 4721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext="" Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.480419 4721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext="" Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.480430 4721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext="" Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.480440 4721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext="" Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.480449 4721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext="" Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.480460 4721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext="" Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.480470 4721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" 
volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext="" Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.480480 4721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext="" Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.480489 4721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext="" Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.480519 4721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext="" Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.480530 4721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext="" Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.480539 4721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext="" Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.480550 4721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext="" Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.480560 4721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext="" Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.480569 4721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext="" Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.480578 4721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext="" Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.480588 4721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext="" Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.480597 4721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" 
volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext="" Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.480607 4721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext="" Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.480617 4721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext="" Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.480626 4721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext="" Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.480636 4721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext="" Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.480645 4721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext="" Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.480653 4721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext="" Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.480663 4721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext="" Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.480672 4721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext="" Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.480684 4721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext="" Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.480694 4721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext="" Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.480703 4721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" 
volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext="" Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.480714 4721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext="" Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.480724 4721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext="" Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.480733 4721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext="" Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.480743 4721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext="" Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.480752 4721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" seLinuxMountContext="" Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.480763 4721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" seLinuxMountContext="" Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.480772 4721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext="" Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.480780 4721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext="" Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.480790 4721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext="" Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.480799 4721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext="" Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.480808 4721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" 
volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext="" Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.480818 4721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext="" Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.480828 4721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" seLinuxMountContext="" Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.480847 4721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext="" Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.480857 4721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext="" Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.480867 4721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext="" Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.480877 4721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext="" Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.490010 4721 manager.go:324] Recovery completed Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.490761 4721 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount" Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.490824 4721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" seLinuxMountContext="" Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.490841 4721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext="" Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.490853 4721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext="" 
Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.490868 4721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext=""
Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.490889 4721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext=""
Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.490909 4721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext=""
Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.490940 4721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext=""
Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.490956 4721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext=""
Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.490973 4721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext=""
Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.490986 4721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext=""
Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.491002 4721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext=""
Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.491018 4721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext=""
Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.491032 4721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext=""
Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.491047 4721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext=""
Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.491060 4721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext=""
Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.491075 4721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext=""
Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.491088 4721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext=""
Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.491101 4721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext=""
Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.491114 4721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext=""
Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.491134 4721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext=""
Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.491149 4721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext=""
Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.491274 4721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext=""
Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.491303 4721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext=""
Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.491331 4721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext=""
Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.491353 4721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext=""
Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.491368 4721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext=""
Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.491381 4721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext=""
Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.491394 4721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext=""
Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.491407 4721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext=""
Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.491419 4721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext=""
Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.491433 4721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext=""
Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.491446 4721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" seLinuxMountContext=""
Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.491459 4721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext=""
Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.491471 4721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext=""
Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.491483 4721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext=""
Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.491495 4721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext=""
Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.491510 4721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext=""
Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.491523 4721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext=""
Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.491537 4721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext=""
Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.491572 4721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext=""
Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.491609 4721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext=""
Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.491622 4721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext=""
Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.491634 4721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext=""
Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.491651 4721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext=""
Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.491667 4721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext=""
Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.491681 4721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext=""
Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.491697 4721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext=""
Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.491712 4721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext=""
Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.491725 4721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext=""
Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.491740 4721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext=""
Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.491752 4721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext=""
Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.491765 4721 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext=""
Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.491777 4721 reconstruct.go:97] "Volume reconstruction finished"
Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.491786 4721 reconciler.go:26] "Reconciler: start to sync state"
Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.501087 4721 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.502874 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.502926 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.502939 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.503751 4721 cpu_manager.go:225] "Starting CPU manager" policy="none"
Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.503767 4721 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s"
Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.503799 4721 state_mem.go:36] "Initialized new in-memory state store"
Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.525555 4721 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.527419 4721 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.527472 4721 status_manager.go:217] "Starting to sync pod status with apiserver"
Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.527498 4721 kubelet.go:2335] "Starting kubelet main sync loop"
Jan 28 18:33:55 crc kubenswrapper[4721]: E0128 18:33:55.527542 4721 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Jan 28 18:33:55 crc kubenswrapper[4721]: W0128 18:33:55.528142 4721 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.66:6443: connect: connection refused
Jan 28 18:33:55 crc kubenswrapper[4721]: E0128 18:33:55.528235 4721 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.66:6443: connect: connection refused" logger="UnhandledError"
Jan 28 18:33:55 crc kubenswrapper[4721]: E0128 18:33:55.560696 4721 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 28 18:33:55 crc kubenswrapper[4721]: E0128 18:33:55.628447 4721 kubelet.go:2359] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Jan 28 18:33:55 crc kubenswrapper[4721]: E0128 18:33:55.660897 4721 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 28 18:33:55 crc kubenswrapper[4721]: E0128 18:33:55.663657 4721 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.66:6443: connect: connection refused" interval="400ms"
Jan 28 18:33:55 crc kubenswrapper[4721]: E0128 18:33:55.761969 4721 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.775183 4721 policy_none.go:49] "None policy: Start"
Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.776448 4721 memory_manager.go:170] "Starting memorymanager" policy="None"
Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.776477 4721 state_mem.go:35] "Initializing new in-memory state store"
Jan 28 18:33:55 crc kubenswrapper[4721]: E0128 18:33:55.828988 4721 kubelet.go:2359] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Jan 28 18:33:55 crc kubenswrapper[4721]: E0128 18:33:55.862239 4721 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.957656 4721 manager.go:334] "Starting Device Plugin manager"
Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.958064 4721 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.958090 4721 server.go:79] "Starting device plugin registration server"
Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.958687 4721 eviction_manager.go:189] "Eviction manager: starting control loop"
Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.958715 4721 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.959135 4721 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry"
Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.959251 4721 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts"
Jan 28 18:33:55 crc kubenswrapper[4721]: I0128 18:33:55.959261 4721 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Jan 28 18:33:55 crc kubenswrapper[4721]: E0128 18:33:55.967729 4721 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Jan 28 18:33:56 crc kubenswrapper[4721]: I0128 18:33:56.059724 4721 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 28 18:33:56 crc kubenswrapper[4721]: I0128 18:33:56.061298 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 18:33:56 crc kubenswrapper[4721]: I0128 18:33:56.061350 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 18:33:56 crc kubenswrapper[4721]: I0128 18:33:56.061359 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 18:33:56 crc kubenswrapper[4721]: I0128 18:33:56.061386 4721 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Jan 28 18:33:56 crc kubenswrapper[4721]: E0128 18:33:56.061911 4721 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.66:6443: connect: connection refused" node="crc"
Jan 28 18:33:56 crc kubenswrapper[4721]: E0128 18:33:56.064342 4721 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.66:6443: connect: connection refused" interval="800ms"
Jan 28 18:33:56 crc kubenswrapper[4721]: I0128 18:33:56.229248 4721 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc"]
Jan 28 18:33:56 crc kubenswrapper[4721]: I0128 18:33:56.229397 4721 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 28 18:33:56 crc kubenswrapper[4721]: I0128 18:33:56.230634 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 18:33:56 crc kubenswrapper[4721]: I0128 18:33:56.230665 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 18:33:56 crc kubenswrapper[4721]: I0128 18:33:56.230673 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 18:33:56 crc kubenswrapper[4721]: I0128 18:33:56.230800 4721 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 28 18:33:56 crc kubenswrapper[4721]: I0128 18:33:56.231085 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Jan 28 18:33:56 crc kubenswrapper[4721]: I0128 18:33:56.231196 4721 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 28 18:33:56 crc kubenswrapper[4721]: I0128 18:33:56.231464 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 18:33:56 crc kubenswrapper[4721]: I0128 18:33:56.231493 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 18:33:56 crc kubenswrapper[4721]: I0128 18:33:56.231505 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 18:33:56 crc kubenswrapper[4721]: I0128 18:33:56.231649 4721 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 28 18:33:56 crc kubenswrapper[4721]: I0128 18:33:56.231841 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc"
Jan 28 18:33:56 crc kubenswrapper[4721]: I0128 18:33:56.231888 4721 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 28 18:33:56 crc kubenswrapper[4721]: I0128 18:33:56.232182 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 18:33:56 crc kubenswrapper[4721]: I0128 18:33:56.232202 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 18:33:56 crc kubenswrapper[4721]: I0128 18:33:56.232221 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 18:33:56 crc kubenswrapper[4721]: I0128 18:33:56.232231 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 18:33:56 crc kubenswrapper[4721]: I0128 18:33:56.232206 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 18:33:56 crc kubenswrapper[4721]: I0128 18:33:56.232272 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 18:33:56 crc kubenswrapper[4721]: I0128 18:33:56.232307 4721 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 28 18:33:56 crc kubenswrapper[4721]: I0128 18:33:56.232418 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 28 18:33:56 crc kubenswrapper[4721]: I0128 18:33:56.232464 4721 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 28 18:33:56 crc kubenswrapper[4721]: I0128 18:33:56.232682 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 18:33:56 crc kubenswrapper[4721]: I0128 18:33:56.232717 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 18:33:56 crc kubenswrapper[4721]: I0128 18:33:56.232728 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 18:33:56 crc kubenswrapper[4721]: I0128 18:33:56.232883 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 18:33:56 crc kubenswrapper[4721]: I0128 18:33:56.232908 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 18:33:56 crc kubenswrapper[4721]: I0128 18:33:56.232917 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 18:33:56 crc kubenswrapper[4721]: I0128 18:33:56.232998 4721 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 28 18:33:56 crc kubenswrapper[4721]: I0128 18:33:56.233094 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 28 18:33:56 crc kubenswrapper[4721]: I0128 18:33:56.233126 4721 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 28 18:33:56 crc kubenswrapper[4721]: I0128 18:33:56.233303 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 18:33:56 crc kubenswrapper[4721]: I0128 18:33:56.233337 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 18:33:56 crc kubenswrapper[4721]: I0128 18:33:56.233349 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 18:33:56 crc kubenswrapper[4721]: I0128 18:33:56.233648 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 18:33:56 crc kubenswrapper[4721]: I0128 18:33:56.233724 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 18:33:56 crc kubenswrapper[4721]: I0128 18:33:56.233787 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 18:33:56 crc kubenswrapper[4721]: I0128 18:33:56.233656 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 18:33:56 crc kubenswrapper[4721]: I0128 18:33:56.233895 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 18:33:56 crc kubenswrapper[4721]: I0128 18:33:56.233911 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 18:33:56 crc kubenswrapper[4721]: I0128 18:33:56.234102 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Jan 28 18:33:56 crc kubenswrapper[4721]: I0128 18:33:56.234134 4721 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 28 18:33:56 crc kubenswrapper[4721]: I0128 18:33:56.235009 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 18:33:56 crc kubenswrapper[4721]: I0128 18:33:56.235084 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 18:33:56 crc kubenswrapper[4721]: I0128 18:33:56.235147 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 18:33:56 crc kubenswrapper[4721]: I0128 18:33:56.262252 4721 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 28 18:33:56 crc kubenswrapper[4721]: I0128 18:33:56.263379 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 18:33:56 crc kubenswrapper[4721]: I0128 18:33:56.263432 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 18:33:56 crc kubenswrapper[4721]: I0128 18:33:56.263447 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 18:33:56 crc kubenswrapper[4721]: I0128 18:33:56.263489 4721 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Jan 28 18:33:56 crc kubenswrapper[4721]: E0128 18:33:56.264139 4721 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.66:6443: connect: connection refused" node="crc"
Jan 28 18:33:56 crc kubenswrapper[4721]: I0128 18:33:56.301546 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 28 18:33:56 crc kubenswrapper[4721]: I0128 18:33:56.301601 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 28 18:33:56 crc kubenswrapper[4721]: I0128 18:33:56.301638 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Jan 28 18:33:56 crc kubenswrapper[4721]: I0128 18:33:56.301672 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 28 18:33:56 crc kubenswrapper[4721]: I0128 18:33:56.301739 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 28 18:33:56 crc kubenswrapper[4721]: I0128 18:33:56.301792 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 28 18:33:56 crc kubenswrapper[4721]: I0128 18:33:56.301812 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 28 18:33:56 crc kubenswrapper[4721]: I0128 18:33:56.301830 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Jan 28 18:33:56 crc kubenswrapper[4721]: I0128 18:33:56.301848 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 28 18:33:56 crc kubenswrapper[4721]: I0128 18:33:56.301870 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Jan 28 18:33:56 crc kubenswrapper[4721]: I0128 18:33:56.301886 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 28 18:33:56 crc kubenswrapper[4721]: I0128 18:33:56.301901 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 28 18:33:56 crc kubenswrapper[4721]: I0128 18:33:56.301925 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 28 18:33:56 crc kubenswrapper[4721]: I0128 18:33:56.302324 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 28 18:33:56 crc kubenswrapper[4721]: I0128 18:33:56.302370 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Jan 28 18:33:56 crc kubenswrapper[4721]: I0128 18:33:56.403570 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 28 18:33:56 crc kubenswrapper[4721]: I0128 18:33:56.403628 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 28 18:33:56 crc kubenswrapper[4721]: I0128 18:33:56.403646 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Jan 28 18:33:56 crc kubenswrapper[4721]: I0128 18:33:56.403660 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 28 18:33:56 crc kubenswrapper[4721]: I0128 18:33:56.403674 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 28 18:33:56 crc kubenswrapper[4721]: I0128 18:33:56.403691 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Jan 28 18:33:56 crc kubenswrapper[4721]: I0128 18:33:56.403710 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 28 18:33:56 crc kubenswrapper[4721]: I0128 18:33:56.403723 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 28 18:33:56 crc kubenswrapper[4721]: I0128 18:33:56.403740 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 28 18:33:56 crc kubenswrapper[4721]: I0128 18:33:56.403756 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 28 18:33:56 crc kubenswrapper[4721]: I0128 18:33:56.403771 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Jan 28 18:33:56 crc kubenswrapper[4721]: I0128 18:33:56.403785 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 28 18:33:56 crc kubenswrapper[4721]: I0128 18:33:56.403800 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Jan 28 18:33:56 crc kubenswrapper[4721]: I0128 18:33:56.403904 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Jan 28 18:33:56 crc kubenswrapper[4721]: I0128 18:33:56.403915 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Jan 28 18:33:56 crc kubenswrapper[4721]: I0128 18:33:56.403983 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 28 18:33:56 crc kubenswrapper[4721]: I0128 18:33:56.403976 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 28 18:33:56 crc kubenswrapper[4721]: I0128 18:33:56.403965 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 28 18:33:56 crc kubenswrapper[4721]: I0128 18:33:56.404030 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 28 18:33:56 crc kubenswrapper[4721]: I0128 18:33:56.403915 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 28 18:33:56 crc kubenswrapper[4721]: I0128 18:33:56.404053 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 28 18:33:56 crc kubenswrapper[4721]: I0128 18:33:56.404068 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 28 18:33:56 crc kubenswrapper[4721]: I0128 18:33:56.404105 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 28 18:33:56 crc kubenswrapper[4721]: I0128 18:33:56.404110 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Jan 28 18:33:56 crc kubenswrapper[4721]: I0128 18:33:56.404208 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 28 18:33:56 crc kubenswrapper[4721]: I0128 18:33:56.404262 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Jan 28 18:33:56 crc kubenswrapper[4721]: I0128 18:33:56.404254 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 28 18:33:56 crc kubenswrapper[4721]: I0128 18:33:56.404316 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 28 18:33:56 crc kubenswrapper[4721]: I0128 18:33:56.404240 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 28 18:33:56 crc kubenswrapper[4721]: I0128 18:33:56.404232 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 28 18:33:56 crc kubenswrapper[4721]: W0128 18:33:56.420377 4721 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.66:6443: connect: connection refused
Jan 28 18:33:56 crc kubenswrapper[4721]: E0128 18:33:56.420590 4721 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.66:6443: connect: connection refused" logger="UnhandledError"
Jan 28 18:33:56 crc kubenswrapper[4721]: I0128 18:33:56.458852 4721 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.66:6443: connect: connection refused
Jan 28 18:33:56 crc kubenswrapper[4721]: I0128 18:33:56.459851 4721 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-23 13:29:42.704675628 +0000 UTC
Jan 28 18:33:56 crc kubenswrapper[4721]: I0128 18:33:56.559444 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Jan 28 18:33:56 crc kubenswrapper[4721]: I0128 18:33:56.577419 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc"
Jan 28 18:33:56 crc kubenswrapper[4721]: I0128 18:33:56.594455 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 28 18:33:56 crc kubenswrapper[4721]: W0128 18:33:56.597761 4721 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.66:6443: connect: connection refused
Jan 28 18:33:56 crc kubenswrapper[4721]: E0128 18:33:56.597895 4721 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.66:6443: connect: connection refused" logger="UnhandledError"
Jan 28 18:33:56 crc kubenswrapper[4721]: I0128 18:33:56.605616 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 28 18:33:56 crc kubenswrapper[4721]: I0128 18:33:56.611114 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Jan 28 18:33:56 crc kubenswrapper[4721]: W0128 18:33:56.619414 4721 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.66:6443: connect: connection refused
Jan 28 18:33:56 crc kubenswrapper[4721]: E0128 18:33:56.619523 4721 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.66:6443: connect: connection refused" logger="UnhandledError"
Jan 28 18:33:56 crc kubenswrapper[4721]: W0128 18:33:56.637220 4721 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2139d3e2895fc6797b9c76a1b4c9886d.slice/crio-fd5ecc24a89ef1cda45e811658d36133df0f2b224ce121f91200a281a62d774e WatchSource:0}: Error finding container fd5ecc24a89ef1cda45e811658d36133df0f2b224ce121f91200a281a62d774e: Status 404 returned error can't find the container with id fd5ecc24a89ef1cda45e811658d36133df0f2b224ce121f91200a281a62d774e
Jan 28 18:33:56 crc kubenswrapper[4721]: W0128 18:33:56.638029 4721 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1b160f5dda77d281dd8e69ec8d817f9.slice/crio-c75ca90b34eff4f431517ce8362b0a48bfce8b0b72cba7277d1a406bc382e104 WatchSource:0}: Error finding container c75ca90b34eff4f431517ce8362b0a48bfce8b0b72cba7277d1a406bc382e104: Status 404 returned error can't find the container with id c75ca90b34eff4f431517ce8362b0a48bfce8b0b72cba7277d1a406bc382e104
Jan 28 18:33:56 crc kubenswrapper[4721]: W0128 18:33:56.643618 4721 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf614b9022728cf315e60c057852e563e.slice/crio-099f30111126e503af164918c9179769c81ea05bf9ba17c8db5e268e789dadb3 WatchSource:0}: Error finding container 099f30111126e503af164918c9179769c81ea05bf9ba17c8db5e268e789dadb3: Status 404 returned error can't find the container with id 099f30111126e503af164918c9179769c81ea05bf9ba17c8db5e268e789dadb3
Jan 28 18:33:56 crc kubenswrapper[4721]: W0128 18:33:56.645199 4721 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-e54dcf00ce0c5fd8e3a0aba59de6a9fe3f2ec38cc804ebce4d78b091457b1e9e WatchSource:0}: Error finding container e54dcf00ce0c5fd8e3a0aba59de6a9fe3f2ec38cc804ebce4d78b091457b1e9e: Status 404 returned error can't find the container with id e54dcf00ce0c5fd8e3a0aba59de6a9fe3f2ec38cc804ebce4d78b091457b1e9e
Jan 28 18:33:56 crc kubenswrapper[4721]: W0128 18:33:56.647089 4721 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dcd261975c3d6b9a6ad6367fd4facd3.slice/crio-104094e29cbee36fc62f847c58c50f045954f0da7071c7e3fa89bd631f0bd81c WatchSource:0}: Error finding container 104094e29cbee36fc62f847c58c50f045954f0da7071c7e3fa89bd631f0bd81c: Status 404 returned error can't find the container with id 104094e29cbee36fc62f847c58c50f045954f0da7071c7e3fa89bd631f0bd81c
Jan 28 18:33:56 crc kubenswrapper[4721]: I0128 18:33:56.665261 4721 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 28 18:33:56 crc kubenswrapper[4721]: I0128 18:33:56.668882 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 18:33:56 crc kubenswrapper[4721]: I0128 18:33:56.668973 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 18:33:56 crc kubenswrapper[4721]: I0128 18:33:56.669000 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 18:33:56 crc kubenswrapper[4721]: I0128 18:33:56.669055 4721 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Jan 28 18:33:56 crc kubenswrapper[4721]: E0128 18:33:56.669748 4721 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.66:6443: connect: connection refused" node="crc"
Jan 28 18:33:56 crc kubenswrapper[4721]: W0128 18:33:56.830853 4721 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.66:6443: connect: connection refused
Jan 28 18:33:56 crc kubenswrapper[4721]: E0128 18:33:56.831027 4721 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.66:6443: connect: connection refused" logger="UnhandledError"
Jan 28 18:33:56 crc kubenswrapper[4721]: E0128 18:33:56.866242 4721 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.66:6443: connect: connection refused" interval="1.6s"
Jan 28 18:33:57 crc kubenswrapper[4721]: I0128 18:33:57.388400 4721 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates
Jan 28 18:33:57 crc kubenswrapper[4721]: E0128 18:33:57.389891 4721 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.66:6443: connect: connection refused" logger="UnhandledError"
Jan 28 18:33:57 crc kubenswrapper[4721]: I0128 18:33:57.459188 4721 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.66:6443: connect: connection refused
Jan 28 18:33:57 crc kubenswrapper[4721]: I0128 18:33:57.460240 4721 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-13 04:47:53.4798152 +0000 UTC
Jan 28 18:33:57 crc kubenswrapper[4721]: I0128 18:33:57.470727 4721 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 28 18:33:57 crc kubenswrapper[4721]: I0128 18:33:57.472366 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 18:33:57 crc kubenswrapper[4721]: I0128 18:33:57.472412 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 18:33:57 crc kubenswrapper[4721]: I0128 18:33:57.472425 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 18:33:57 crc kubenswrapper[4721]: I0128 18:33:57.472451 4721 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Jan 28 18:33:57 crc kubenswrapper[4721]: E0128 18:33:57.472886 4721 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.66:6443: connect: connection refused" node="crc"
Jan 28 18:33:57 crc kubenswrapper[4721]: I0128 18:33:57.534441 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"099f30111126e503af164918c9179769c81ea05bf9ba17c8db5e268e789dadb3"}
Jan 28 18:33:57 crc kubenswrapper[4721]: I0128 18:33:57.535559 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"fd5ecc24a89ef1cda45e811658d36133df0f2b224ce121f91200a281a62d774e"}
Jan 28 18:33:57 crc kubenswrapper[4721]: I0128 18:33:57.536665 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"c75ca90b34eff4f431517ce8362b0a48bfce8b0b72cba7277d1a406bc382e104"}
Jan 28 18:33:57 crc kubenswrapper[4721]: I0128 18:33:57.537572 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"104094e29cbee36fc62f847c58c50f045954f0da7071c7e3fa89bd631f0bd81c"}
Jan 28 18:33:57 crc kubenswrapper[4721]: I0128 18:33:57.538632 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"e54dcf00ce0c5fd8e3a0aba59de6a9fe3f2ec38cc804ebce4d78b091457b1e9e"}
Jan 28 18:33:58 crc kubenswrapper[4721]: W0128 18:33:58.340024 4721 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.66:6443: connect: connection refused
Jan 28 18:33:58 crc kubenswrapper[4721]: E0128 18:33:58.340207 4721 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.66:6443: connect: connection refused" logger="UnhandledError"
Jan 28 18:33:58 crc kubenswrapper[4721]: W0128 18:33:58.393750 4721 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.66:6443: connect: connection refused
Jan 28 18:33:58 crc kubenswrapper[4721]: E0128 18:33:58.394065 4721 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.66:6443: connect: connection refused" logger="UnhandledError"
Jan 28 18:33:58 crc kubenswrapper[4721]: I0128 18:33:58.459297 4721 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.66:6443: connect: connection refused
Jan 28 18:33:58 crc kubenswrapper[4721]: I0128 18:33:58.461238 4721 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-23 07:40:38.171833663 +0000 UTC
Jan 28 18:33:58 crc kubenswrapper[4721]: E0128 18:33:58.467210 4721 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.66:6443: connect: connection refused" interval="3.2s"
Jan 28 18:33:58 crc kubenswrapper[4721]: W0128 18:33:58.896708 4721 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.66:6443: connect: connection refused
Jan 28 18:33:58 crc kubenswrapper[4721]: E0128 18:33:58.896912 4721 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.66:6443: connect: connection refused" logger="UnhandledError"
Jan 28 18:33:59 crc kubenswrapper[4721]: I0128 18:33:59.073880 4721 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 28 18:33:59 crc kubenswrapper[4721]: I0128 18:33:59.075455 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 18:33:59 crc kubenswrapper[4721]: I0128 18:33:59.075543 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 18:33:59 crc kubenswrapper[4721]: I0128 18:33:59.075571 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 18:33:59 crc kubenswrapper[4721]: I0128 18:33:59.075635 4721 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Jan 28 18:33:59 crc kubenswrapper[4721]: E0128 18:33:59.076618 4721 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.66:6443: connect: connection refused" node="crc"
Jan 28 18:33:59 crc kubenswrapper[4721]: I0128 18:33:59.458867 4721 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.66:6443: connect: connection refused
Jan 28 18:33:59 crc kubenswrapper[4721]: I0128 18:33:59.462221 4721 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-05 20:29:51.482591264 +0000 UTC
Jan 28 18:33:59 crc kubenswrapper[4721]: I0128 18:33:59.546773 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"47389981052e7ab8530932e01c95d9b178f2c55b21e03b460d3f02dddbcf4830"}
Jan 28 18:33:59 crc kubenswrapper[4721]: I0128 18:33:59.546896 4721 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 28 18:33:59 crc kubenswrapper[4721]: I0128 18:33:59.548076 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 18:33:59 crc kubenswrapper[4721]: I0128 18:33:59.548129 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 18:33:59 crc kubenswrapper[4721]: I0128 18:33:59.548149 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 18:33:59 crc kubenswrapper[4721]: I0128 18:33:59.548784 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"1499c7b625ec721799e4cf54e3de39ab75a752bbb0450a8eeffcdb6259f8a585"}
Jan 28 18:33:59 crc kubenswrapper[4721]: I0128 18:33:59.548901 4721 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 28 18:33:59 crc kubenswrapper[4721]: I0128 18:33:59.549982 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 18:33:59 crc kubenswrapper[4721]: I0128 18:33:59.550029 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 18:33:59 crc kubenswrapper[4721]: I0128 18:33:59.550045 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 18:33:59 crc kubenswrapper[4721]: I0128 18:33:59.550390 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"3f712f75bf0603b358c696f1a62f2363254ab926615ab377291c7083308e3f2d"}
Jan 28 18:33:59 crc kubenswrapper[4721]: I0128 18:33:59.551826 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"0f1a6a669a0341baa2d7fb17bcebfe702f96d1192ca35c3f2d00e463778271d6"}
Jan 28 18:33:59 crc kubenswrapper[4721]: I0128 18:33:59.553111 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"7f0406ac4c224de266eab94d3b9c12d110ee36b7ccb34381fc284ade389ba042"}
Jan 28 18:33:59 crc kubenswrapper[4721]: W0128 18:33:59.743724 4721 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.66:6443: connect: connection refused
Jan 28 18:33:59 crc kubenswrapper[4721]: E0128 18:33:59.743817 4721 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.66:6443: connect: connection refused" logger="UnhandledError"
Jan 28 18:34:00 crc kubenswrapper[4721]: I0128 18:34:00.458263 4721 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.66:6443: connect: connection refused
Jan 28 18:34:00 crc kubenswrapper[4721]: I0128 18:34:00.463153 4721 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-30 03:12:46.494841468 +0000 UTC
Jan 28 18:34:00 crc kubenswrapper[4721]: I0128 18:34:00.558581 4721 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="0f1a6a669a0341baa2d7fb17bcebfe702f96d1192ca35c3f2d00e463778271d6" exitCode=0
Jan 28 18:34:00 crc kubenswrapper[4721]: I0128 18:34:00.558808 4721 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 28 18:34:00 crc kubenswrapper[4721]: I0128 18:34:00.558906 4721 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 28 18:34:00 crc kubenswrapper[4721]: I0128 18:34:00.558958 4721 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 28 18:34:00 crc kubenswrapper[4721]: I0128 18:34:00.559559 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"0f1a6a669a0341baa2d7fb17bcebfe702f96d1192ca35c3f2d00e463778271d6"}
Jan 28 18:34:00 crc kubenswrapper[4721]: I0128 18:34:00.559613 4721 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 28 18:34:00 crc kubenswrapper[4721]: I0128 18:34:00.560276 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 18:34:00 crc kubenswrapper[4721]: I0128 18:34:00.560315 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 18:34:00 crc kubenswrapper[4721]: I0128 18:34:00.560332 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 18:34:00 crc kubenswrapper[4721]: I0128 18:34:00.563446 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 18:34:00 crc kubenswrapper[4721]: I0128 18:34:00.563473 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 18:34:00 crc kubenswrapper[4721]: I0128 18:34:00.563531 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 18:34:00 crc kubenswrapper[4721]: I0128 18:34:00.563557 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 18:34:00 crc kubenswrapper[4721]: I0128 18:34:00.563486 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 18:34:00 crc kubenswrapper[4721]: I0128 18:34:00.563619 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 18:34:00 crc kubenswrapper[4721]: I0128 18:34:00.563500 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 18:34:00 crc kubenswrapper[4721]: I0128 18:34:00.563901 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 18:34:00 crc kubenswrapper[4721]: I0128 18:34:00.563935 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 18:34:01 crc kubenswrapper[4721]: I0128 18:34:01.457833 4721 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates
Jan 28 18:34:01 crc kubenswrapper[4721]: I0128 18:34:01.458419 4721 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.66:6443: connect: connection refused
Jan 28 18:34:01 crc kubenswrapper[4721]: E0128 18:34:01.459427 4721 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.66:6443: connect: connection refused" logger="UnhandledError"
Jan 28 18:34:01 crc kubenswrapper[4721]: I0128 18:34:01.463976 4721 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-07 12:59:08.520878377 +0000 UTC
Jan 28 18:34:01 crc kubenswrapper[4721]: I0128 18:34:01.564269 4721 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="1499c7b625ec721799e4cf54e3de39ab75a752bbb0450a8eeffcdb6259f8a585" exitCode=0
Jan 28 18:34:01 crc kubenswrapper[4721]: I0128 18:34:01.564374 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"1499c7b625ec721799e4cf54e3de39ab75a752bbb0450a8eeffcdb6259f8a585"}
Jan 28 18:34:01 crc kubenswrapper[4721]: I0128 18:34:01.564598 4721 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 28 18:34:01 crc kubenswrapper[4721]: I0128 18:34:01.565551 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 18:34:01 crc kubenswrapper[4721]: I0128 18:34:01.565608 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 18:34:01 crc kubenswrapper[4721]: I0128 18:34:01.565622 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 18:34:01 crc kubenswrapper[4721]: I0128 18:34:01.566682 4721 generic.go:334] "Generic (PLEG): container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" containerID="7f0406ac4c224de266eab94d3b9c12d110ee36b7ccb34381fc284ade389ba042" exitCode=0
Jan 28 18:34:01 crc kubenswrapper[4721]: I0128 18:34:01.566741 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"7f0406ac4c224de266eab94d3b9c12d110ee36b7ccb34381fc284ade389ba042"}
Jan 28 18:34:01 crc kubenswrapper[4721]: I0128 18:34:01.566809 4721 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 28 18:34:01 crc kubenswrapper[4721]: I0128 18:34:01.567541 4721 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 28 18:34:01 crc kubenswrapper[4721]: I0128 18:34:01.567667 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 18:34:01 crc kubenswrapper[4721]: I0128 18:34:01.567750 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 18:34:01 crc kubenswrapper[4721]: I0128 18:34:01.567790 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 18:34:01 crc kubenswrapper[4721]: I0128 18:34:01.568416 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 18:34:01 crc kubenswrapper[4721]: I0128 18:34:01.568442 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 18:34:01 crc kubenswrapper[4721]: I0128 18:34:01.568444 4721 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="47389981052e7ab8530932e01c95d9b178f2c55b21e03b460d3f02dddbcf4830" exitCode=0
Jan 28 18:34:01 crc kubenswrapper[4721]: I0128 18:34:01.568452 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 18:34:01 crc kubenswrapper[4721]: I0128 18:34:01.568469 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"47389981052e7ab8530932e01c95d9b178f2c55b21e03b460d3f02dddbcf4830"}
Jan 28 18:34:01 crc kubenswrapper[4721]: I0128 18:34:01.568718 4721 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 28 18:34:01 crc kubenswrapper[4721]: I0128 18:34:01.569808 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 18:34:01 crc kubenswrapper[4721]: I0128 18:34:01.569861 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 18:34:01 crc kubenswrapper[4721]: I0128 18:34:01.569874 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 18:34:01 crc kubenswrapper[4721]: E0128 18:34:01.598502 4721 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.66:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.188ef8c698ec207e default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-28 18:33:55.456008318 +0000 UTC m=+1.181313878,LastTimestamp:2026-01-28 18:33:55.456008318 +0000 UTC m=+1.181313878,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 28 18:34:01 crc kubenswrapper[4721]: E0128 18:34:01.669036 4721 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.66:6443: connect: connection refused" interval="6.4s"
Jan 28 18:34:02 crc kubenswrapper[4721]: I0128 18:34:02.277810 4721 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 28 18:34:02 crc kubenswrapper[4721]: I0128 18:34:02.279714 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 18:34:02 crc kubenswrapper[4721]: I0128 18:34:02.279776 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 18:34:02 crc kubenswrapper[4721]: I0128 18:34:02.279799 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 18:34:02 crc kubenswrapper[4721]: I0128 18:34:02.279840 4721 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Jan 28 18:34:02 crc kubenswrapper[4721]: E0128 18:34:02.280620 4721 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.66:6443: connect: connection refused" node="crc"
Jan 28 18:34:02 crc kubenswrapper[4721]: I0128 18:34:02.458531 4721 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.66:6443: connect: connection refused
Jan 28 18:34:02 crc kubenswrapper[4721]: I0128 18:34:02.464798 4721 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-09 08:40:42.322711835 +0000 UTC
Jan 28 18:34:02 crc kubenswrapper[4721]: I0128 18:34:02.573610 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"5e8c8cd370cc146f15325497afef171437a3c7c3d4e82cda99a321bf847399bf"}
Jan 28 18:34:02 crc kubenswrapper[4721]: W0128 18:34:02.795073 4721 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.66:6443: connect: connection refused
Jan 28 18:34:02 crc kubenswrapper[4721]: E0128 18:34:02.795152 4721 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.66:6443: connect: connection refused" logger="UnhandledError"
Jan 28 18:34:03 crc kubenswrapper[4721]: I0128 18:34:03.458734 4721 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.66:6443: connect: connection refused
Jan 28 18:34:03 crc kubenswrapper[4721]: I0128 18:34:03.465040 4721 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-18 00:17:17.805791312 +0000 UTC
Jan 28 18:34:03 crc kubenswrapper[4721]: I0128 18:34:03.578398 4721 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="6d3a248e15466c89ce3b237a22ce54365004ea86dc1541e1ac44e028f91bf19f" exitCode=0
Jan 28 18:34:03 crc kubenswrapper[4721]: I0128 18:34:03.578480 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"6d3a248e15466c89ce3b237a22ce54365004ea86dc1541e1ac44e028f91bf19f"}
Jan 28 18:34:03 crc kubenswrapper[4721]: I0128 18:34:03.578512 4721 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 28 18:34:03 crc kubenswrapper[4721]: I0128 18:34:03.579583 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"8ff405f7132b5a1a0e2abc66c8e4c0abbd732bdf90cb2b4b2867dd10b8e62921"}
Jan 28 18:34:03 crc kubenswrapper[4721]: I0128 18:34:03.579647 4721 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 28 18:34:03 crc kubenswrapper[4721]: I0128 18:34:03.582110 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 18:34:03 crc kubenswrapper[4721]: I0128 18:34:03.582154 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 18:34:03 crc kubenswrapper[4721]: I0128 18:34:03.582219 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 18:34:03 crc kubenswrapper[4721]: I0128 18:34:03.583630 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 18:34:03 crc kubenswrapper[4721]: I0128 18:34:03.583682 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 18:34:03 crc kubenswrapper[4721]: I0128 18:34:03.583695 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 18:34:03 crc kubenswrapper[4721]: I0128 18:34:03.586831 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"5f97092be25e90c4a15af043397b1cbcefdb3a3511a80a046496bef807abc8f0"}
Jan 28 18:34:03 crc kubenswrapper[4721]: I0128 18:34:03.589254 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"4a345c3bb49064600db61f16588229b6877293710e6c21c15d91746c2f1d50b7"}
Jan 28 18:34:03 crc kubenswrapper[4721]: I0128 18:34:03.591918 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"a6143e461a697375ca79df96494f8cf4a575e622428db241f50a52ae1c67acbc"}
Jan 28 18:34:03 crc kubenswrapper[4721]: W0128 18:34:03.951306 4721 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.66:6443: connect: connection refused
Jan 28 18:34:03 crc kubenswrapper[4721]: E0128 18:34:03.951497 4721 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.66:6443: connect: connection refused" logger="UnhandledError"
Jan 28 18:34:04 crc kubenswrapper[4721]: W0128 18:34:04.292260 4721 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.66:6443: connect: connection refused
Jan 28 18:34:04 crc kubenswrapper[4721]: E0128 18:34:04.292369 4721 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.66:6443: connect: connection refused" logger="UnhandledError"
Jan 28 18:34:04 crc kubenswrapper[4721]: I0128 18:34:04.458769 4721 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.66:6443: connect: connection refused
Jan 28 18:34:04 crc kubenswrapper[4721]: I0128 18:34:04.466213 4721 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-20 15:47:22.525830489 +0000 UTC
Jan 28 18:34:04 crc kubenswrapper[4721]: I0128 18:34:04.597335 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"8351f80de5ab5d11c5a87270e69a8ebd20b3a804671e20b991f1fc77ba27bae8"}
Jan 28 18:34:04 crc kubenswrapper[4721]: I0128 18:34:04.599907 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"f2e1f3a63445bad45029602f52863bfdf2fab495711e3318d68fee042530acb1"}
Jan 28 18:34:04 crc kubenswrapper[4721]: W0128 18:34:04.919601 4721 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.66:6443: connect: connection refused
Jan 28 18:34:04 crc kubenswrapper[4721]: E0128 18:34:04.919780 4721 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.66:6443: connect: connection refused" logger="UnhandledError"
Jan 28 18:34:05 crc kubenswrapper[4721]: I0128 18:34:05.458846 4721 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.66:6443: connect: connection refused
Jan 28 18:34:05 crc kubenswrapper[4721]: I0128 18:34:05.467203 4721 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-17 05:56:16.110525146 +0000 UTC
Jan 28 18:34:05 crc kubenswrapper[4721]: I0128 18:34:05.606255 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"e37a3261383445df2e4050fb0b3f92c0afc6379c71eb061475783a72cd37459f"}
Jan 28 18:34:05 crc kubenswrapper[4721]: E0128 18:34:05.967960 4721 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Jan 28 18:34:06 crc kubenswrapper[4721]: I0128 18:34:06.459217 4721 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.66:6443: connect: connection refused
Jan 28 18:34:06 crc kubenswrapper[4721]: I0128 18:34:06.467659 4721 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-08 15:19:49.413632044 +0000 UTC
Jan 28 18:34:06 crc kubenswrapper[4721]: I0128 18:34:06.613888 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"486a6284a1e3930aac13368062b8796b4c271214dd08cb60c7ae2ce7b4a45a05"}
Jan 28 18:34:06 crc kubenswrapper[4721]: I0128 18:34:06.616990 4721 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="f2e1f3a63445bad45029602f52863bfdf2fab495711e3318d68fee042530acb1" exitCode=0
Jan 28 18:34:06 crc kubenswrapper[4721]: I0128 18:34:06.617076 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"f2e1f3a63445bad45029602f52863bfdf2fab495711e3318d68fee042530acb1"}
Jan 28 18:34:06 crc kubenswrapper[4721]: I0128 18:34:06.617124 4721 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 28 18:34:06 crc kubenswrapper[4721]: I0128 18:34:06.617234 4721 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 28 18:34:06 crc kubenswrapper[4721]: I0128 18:34:06.618037 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 18:34:06 crc kubenswrapper[4721]: I0128 18:34:06.618066 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 18:34:06 crc kubenswrapper[4721]: I0128 18:34:06.618078 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 18:34:06 crc kubenswrapper[4721]: I0128 18:34:06.618951 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 18:34:06 crc kubenswrapper[4721]: I0128 18:34:06.618980 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 18:34:06 crc kubenswrapper[4721]: I0128 18:34:06.618991 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 18:34:07 crc kubenswrapper[4721]: I0128 18:34:07.458282 4721 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.66:6443: connect: connection refused
Jan 28 18:34:07 crc kubenswrapper[4721]: I0128 18:34:07.468618 4721 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-18 23:47:59.404761168 +0000 UTC
Jan 28 18:34:07 crc kubenswrapper[4721]: I0128 18:34:07.623250 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"d8465df12048ab0feaba16e1935fa17feb4fe967ab3e4ef37981bed51ff77911"}
Jan 28 18:34:07 crc kubenswrapper[4721]: I0128 18:34:07.623299 4721 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 28 18:34:07 crc kubenswrapper[4721]: I0128 18:34:07.624474 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 18:34:07 crc kubenswrapper[4721]: I0128 18:34:07.624506 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 18:34:07 crc kubenswrapper[4721]: I0128 18:34:07.624515 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 18:34:07 crc kubenswrapper[4721]: I0128 18:34:07.626279 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"d94fd2ab381713ed153fd84175ad96da298a40fc1fe6d1e38f14cd78918d3212"}
Jan 28 18:34:07 crc kubenswrapper[4721]: I0128 18:34:07.626312 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"90f1279d8262e042527207dbe56a1d08ba554650510bebfc53bd24cd84532026"}
Jan 28 18:34:07 crc kubenswrapper[4721]: I0128 18:34:07.626323 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"93950542dfa7bda5686ef6d8ff927d14baafd149893c732658e9aa3916be64ae"}
Jan 28 18:34:07 crc kubenswrapper[4721]: I0128 18:34:07.626342 4721 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 28 18:34:07 crc kubenswrapper[4721]: I0128 18:34:07.627122 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 18:34:07 crc kubenswrapper[4721]: I0128 18:34:07.627191 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 18:34:07 crc kubenswrapper[4721]: I0128 18:34:07.627206 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 18:34:07 crc kubenswrapper[4721]: I0128 18:34:07.629818 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"d9efdfabfba8f11d2bdeafad77520260137fcfff3ca51ae77afd4ee9b141f602"}
Jan 28 18:34:07 crc kubenswrapper[4721]: I0128 18:34:07.629861 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"97a290731225e8b998c044c9547db36b5af73889929db5fbc37b6b4170f5dbb7"}
Jan 28 18:34:07 crc kubenswrapper[4721]: I0128 18:34:07.629880 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"316506ed1e9f91314d427e1b6c3a44ee58d0c4a3a8a93b97bdf098703a6a2f34"}
Jan 28 18:34:07 crc kubenswrapper[4721]: I0128 18:34:07.861776 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 28 18:34:07 crc kubenswrapper[4721]: I0128 18:34:07.862011 4721 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 28 18:34:07 crc kubenswrapper[4721]: I0128 18:34:07.863833 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 18:34:07 crc kubenswrapper[4721]: I0128 18:34:07.863877 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 18:34:07 crc kubenswrapper[4721]: I0128 18:34:07.863894 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 18:34:07 crc kubenswrapper[4721]: I0128 18:34:07.971674 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 28 18:34:08 crc kubenswrapper[4721]: E0128 18:34:08.070488 4721 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.66:6443: connect: connection refused" interval="7s"
Jan 28 18:34:08 crc kubenswrapper[4721]: I0128 18:34:08.458956 4721 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.66:6443: connect: connection refused
Jan 28 18:34:08 crc kubenswrapper[4721]: I0128 18:34:08.468881 4721 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-17 12:11:39.094902031 +0000 UTC
Jan 28 18:34:08 crc kubenswrapper[4721]: I0128 18:34:08.635473 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"52a1199a0611555765afe8fcf017742788488064c958d0c75c46d51ddc2ff9d7"}
Jan 28 18:34:08 crc kubenswrapper[4721]: I0128 18:34:08.635518 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"b3c1b4d37e11f8a4ee9539e5ea3cce77b7d382133b9e59ccd9c3e8aa2b308754"}
Jan 28 18:34:08 crc kubenswrapper[4721]: I0128 18:34:08.635643 4721 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 28 18:34:08 crc kubenswrapper[4721]: I0128 18:34:08.635693 4721 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 28 18:34:08 crc kubenswrapper[4721]: I0128 18:34:08.635762 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Jan 28 18:34:08 crc kubenswrapper[4721]: I0128 18:34:08.635929 4721 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 28 18:34:08 crc kubenswrapper[4721]: I0128 18:34:08.636772 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 18:34:08 crc kubenswrapper[4721]: I0128 18:34:08.636806 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 18:34:08 crc kubenswrapper[4721]: I0128 18:34:08.636820 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 18:34:08 crc kubenswrapper[4721]: I0128 18:34:08.636905 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 18:34:08 crc kubenswrapper[4721]: I0128 18:34:08.636939 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 18:34:08 crc kubenswrapper[4721]: I0128 18:34:08.636949 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 18:34:08 crc kubenswrapper[4721]: I0128 18:34:08.636817 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 18:34:08 crc kubenswrapper[4721]: I0128 18:34:08.637002 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 18:34:08 crc kubenswrapper[4721]: I0128 18:34:08.637015 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 18:34:08 crc kubenswrapper[4721]: I0128 18:34:08.681731 4721 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 28 18:34:08 crc kubenswrapper[4721]: I0128 18:34:08.683201 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 18:34:08 crc kubenswrapper[4721]: I0128 18:34:08.683262 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 18:34:08 crc kubenswrapper[4721]: I0128 18:34:08.683276 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 18:34:08 crc kubenswrapper[4721]: I0128 18:34:08.683325 4721 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Jan 28 18:34:08 crc kubenswrapper[4721]: I0128 18:34:08.686101 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc"
Jan 28 18:34:09 crc kubenswrapper[4721]: I0128 18:34:09.359341 4721 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 28 18:34:09 crc kubenswrapper[4721]: I0128 18:34:09.399823 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 28 18:34:09 crc kubenswrapper[4721]: I0128 18:34:09.469524 4721 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-12 07:11:51.276052983 +0000 UTC
Jan 28 18:34:09 crc kubenswrapper[4721]: I0128 18:34:09.639134 4721 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 28 18:34:09 crc kubenswrapper[4721]: I0128 18:34:09.639234 4721 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 28 18:34:09 crc kubenswrapper[4721]: I0128 18:34:09.639143 4721 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 28 18:34:09 crc kubenswrapper[4721]: I0128 18:34:09.640546 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 18:34:09 crc kubenswrapper[4721]: I0128 18:34:09.640567 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 18:34:09 crc kubenswrapper[4721]: I0128 18:34:09.640575 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 18:34:09 crc kubenswrapper[4721]: I0128 18:34:09.640655 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 18:34:09 crc kubenswrapper[4721]: I0128 18:34:09.640686 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 18:34:09 crc kubenswrapper[4721]: I0128 18:34:09.640698 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 18:34:09 crc kubenswrapper[4721]: I0128 18:34:09.640720 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 18:34:09 crc kubenswrapper[4721]: I0128 18:34:09.640743 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 18:34:09 crc kubenswrapper[4721]: I0128 18:34:09.640754 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 18:34:09 crc kubenswrapper[4721]: I0128 18:34:09.851053 4721 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates
Jan 28 18:34:10 crc kubenswrapper[4721]: I0128 18:34:10.469836 4721 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-15 02:12:42.411175776 +0000 UTC
Jan 28 18:34:10 crc kubenswrapper[4721]: I0128 18:34:10.642461 4721 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 28 18:34:10 crc kubenswrapper[4721]: I0128 18:34:10.642521 4721 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 28 18:34:10 crc kubenswrapper[4721]: I0128 18:34:10.643467 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 18:34:10 crc kubenswrapper[4721]: I0128 18:34:10.643570 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 18:34:10 crc kubenswrapper[4721]: I0128 18:34:10.643651 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 18:34:10 crc kubenswrapper[4721]: I0128 18:34:10.643467 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 18:34:10 crc kubenswrapper[4721]: I0128 18:34:10.643739 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 18:34:10 crc kubenswrapper[4721]: I0128 18:34:10.643751 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 18:34:11 crc kubenswrapper[4721]: I0128 18:34:11.471234 4721 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 01:16:21.866909328 +0000 UTC
Jan 28 18:34:11 crc kubenswrapper[4721]: I0128 18:34:11.891247 4721 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 28 18:34:11 crc kubenswrapper[4721]: I0128 18:34:11.891429 4721 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 28 18:34:11 crc kubenswrapper[4721]: I0128 18:34:11.892575 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 18:34:11 crc kubenswrapper[4721]: I0128 18:34:11.892646 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 18:34:11 crc kubenswrapper[4721]: I0128 18:34:11.892665 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 18:34:12 crc kubenswrapper[4721]: I0128 18:34:12.471707 4721 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-03 03:50:59.239817755 +0000 UTC
Jan 28 18:34:13 crc kubenswrapper[4721]: I0128 18:34:13.472673 4721 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-24 10:20:11.12165498 +0000 UTC
Jan 28 18:34:14 crc kubenswrapper[4721]: I0128 18:34:14.473012 4721 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-11 03:39:00.517682541 +0000 UTC
Jan 28 18:34:14 crc kubenswrapper[4721]: I0128 18:34:14.574394 4721 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc"
Jan 28 18:34:14 crc kubenswrapper[4721]: I0128 18:34:14.574588 4721 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 28 18:34:14 crc kubenswrapper[4721]: I0128 18:34:14.575766 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 18:34:14 crc kubenswrapper[4721]: I0128 18:34:14.575804 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 18:34:14 crc kubenswrapper[4721]: I0128 18:34:14.575815 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 18:34:14 crc kubenswrapper[4721]: I0128 18:34:14.579433 4721 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 28 18:34:14 crc kubenswrapper[4721]: I0128 18:34:14.579563 4721 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 28 18:34:14 crc kubenswrapper[4721]: I0128 18:34:14.581291 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 18:34:14 crc kubenswrapper[4721]: I0128 18:34:14.581339 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 18:34:14 crc kubenswrapper[4721]: I0128 18:34:14.581353 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 18:34:14 crc kubenswrapper[4721]: I0128 18:34:14.584922 4721 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 28 18:34:14 crc kubenswrapper[4721]: I0128 18:34:14.652252 4721 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 28 18:34:14 crc kubenswrapper[4721]: I0128 18:34:14.653151 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 18:34:14 crc kubenswrapper[4721]: I0128 18:34:14.653351 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 18:34:14 crc kubenswrapper[4721]: I0128 18:34:14.653368 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 18:34:14 crc kubenswrapper[4721]: I0128 18:34:14.658890 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 28 18:34:14 crc kubenswrapper[4721]: I0128 18:34:14.892203 4721 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 28 18:34:14 crc kubenswrapper[4721]: I0128 18:34:14.892284 4721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Jan 28 18:34:15 crc kubenswrapper[4721]: I0128 18:34:15.475404 4721 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-09 01:18:59.573131199 +0000 UTC
Jan 28 18:34:15 crc kubenswrapper[4721]: I0128 18:34:15.655553 4721 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 28 18:34:15 crc kubenswrapper[4721]: I0128 18:34:15.656525 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 18:34:15 crc kubenswrapper[4721]: I0128 18:34:15.656554 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 18:34:15 crc kubenswrapper[4721]: I0128 18:34:15.656564 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 18:34:15 crc kubenswrapper[4721]: E0128 18:34:15.968041 4721 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Jan 28 18:34:16 crc kubenswrapper[4721]: I0128 18:34:16.425959 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 28 18:34:16 crc kubenswrapper[4721]: I0128 18:34:16.475747 4721 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-05 22:07:22.720100543 +0000 UTC
Jan 28 18:34:16 crc kubenswrapper[4721]: I0128 18:34:16.657822 4721 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 28 18:34:16 crc kubenswrapper[4721]: I0128 18:34:16.658703 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 18:34:16 crc kubenswrapper[4721]: I0128 18:34:16.658750 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 18:34:16 crc kubenswrapper[4721]: I0128 18:34:16.658762 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 18:34:17 crc kubenswrapper[4721]: I0128 18:34:17.476395 4721 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-20 21:19:21.807193762 +0000 UTC
Jan 28 18:34:18 crc kubenswrapper[4721]: I0128 18:34:18.476628 4721 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-14 08:58:19.451742624 +0000 UTC
Jan 28 18:34:18 crc kubenswrapper[4721]: I0128 18:34:18.680836 4721 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:36676->192.168.126.11:17697: read: connection reset by peer" start-of-body=
Jan 28 18:34:18 crc kubenswrapper[4721]: I0128 18:34:18.680891 4721 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:36676->192.168.126.11:17697: read: connection reset by peer"
Jan 28 18:34:18 crc kubenswrapper[4721]: E0128 18:34:18.684195 4721 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": net/http: TLS handshake timeout" node="crc"
Jan 28 18:34:18 crc kubenswrapper[4721]: I0128 18:34:18.829649 4721 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403}
Jan 28 18:34:18 crc kubenswrapper[4721]: I0128 18:34:18.829714 4721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403"
Jan 28 18:34:18 crc kubenswrapper[4721]: I0128 18:34:18.835105 4721 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403}
Jan 28 18:34:18 crc kubenswrapper[4721]: I0128 18:34:18.835201 4721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403"
Jan 28 18:34:19 crc kubenswrapper[4721]: I0128 18:34:19.364631 4721 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok
Jan 28 18:34:19 crc kubenswrapper[4721]: [+]log ok
Jan 28 18:34:19 crc kubenswrapper[4721]: [+]etcd ok
Jan 28 18:34:19 crc kubenswrapper[4721]: [+]poststarthook/openshift.io-api-request-count-filter ok
Jan 28 18:34:19 crc kubenswrapper[4721]: [+]poststarthook/openshift.io-startkubeinformers ok
Jan 28 18:34:19 crc kubenswrapper[4721]: [+]poststarthook/openshift.io-openshift-apiserver-reachable ok
Jan 28 18:34:19 crc kubenswrapper[4721]: [+]poststarthook/openshift.io-oauth-apiserver-reachable ok
Jan 28 18:34:19 crc kubenswrapper[4721]: [+]poststarthook/start-apiserver-admission-initializer ok
Jan 28 18:34:19 crc kubenswrapper[4721]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok
Jan 28 18:34:19 crc kubenswrapper[4721]: [+]poststarthook/generic-apiserver-start-informers ok
Jan 28 18:34:19 crc kubenswrapper[4721]: [+]poststarthook/priority-and-fairness-config-consumer ok
Jan 28 18:34:19 crc kubenswrapper[4721]: [+]poststarthook/priority-and-fairness-filter ok
Jan 28 18:34:19 crc kubenswrapper[4721]: [+]poststarthook/storage-object-count-tracker-hook ok
Jan 28 18:34:19 crc kubenswrapper[4721]: [+]poststarthook/start-apiextensions-informers ok
Jan 28 18:34:19 crc kubenswrapper[4721]: [+]poststarthook/start-apiextensions-controllers ok
Jan 28 18:34:19 crc kubenswrapper[4721]: [+]poststarthook/crd-informer-synced ok
Jan 28 18:34:19 crc kubenswrapper[4721]: [+]poststarthook/start-system-namespaces-controller ok
Jan 28 18:34:19 crc kubenswrapper[4721]: [+]poststarthook/start-cluster-authentication-info-controller ok
Jan 28 18:34:19 crc kubenswrapper[4721]: [+]poststarthook/start-kube-apiserver-identity-lease-controller ok
Jan 28 18:34:19 crc kubenswrapper[4721]: [+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
Jan 28 18:34:19 crc kubenswrapper[4721]: [+]poststarthook/start-legacy-token-tracking-controller ok
Jan 28 18:34:19 crc kubenswrapper[4721]: [+]poststarthook/start-service-ip-repair-controllers ok
Jan 28 18:34:19 crc kubenswrapper[4721]: [-]poststarthook/rbac/bootstrap-roles failed: reason withheld
Jan 28 18:34:19 crc kubenswrapper[4721]: [-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
Jan 28 18:34:19 crc kubenswrapper[4721]: [+]poststarthook/priority-and-fairness-config-producer ok
Jan 28 18:34:19 crc kubenswrapper[4721]: [+]poststarthook/bootstrap-controller ok
Jan 28 18:34:19 crc kubenswrapper[4721]: [+]poststarthook/aggregator-reload-proxy-client-cert ok
Jan 28 18:34:19 crc kubenswrapper[4721]: [+]poststarthook/start-kube-aggregator-informers ok
Jan 28 18:34:19 crc kubenswrapper[4721]: [+]poststarthook/apiservice-status-local-available-controller ok
Jan 28 18:34:19 crc kubenswrapper[4721]: [+]poststarthook/apiservice-status-remote-available-controller ok
Jan 28 18:34:19 crc kubenswrapper[4721]: [+]poststarthook/apiservice-registration-controller ok
Jan 28 18:34:19 crc kubenswrapper[4721]: [+]poststarthook/apiservice-wait-for-first-sync ok
Jan 28 18:34:19 crc kubenswrapper[4721]: [+]poststarthook/apiservice-discovery-controller ok
Jan 28 18:34:19 crc kubenswrapper[4721]: [+]poststarthook/kube-apiserver-autoregistration ok
Jan 28 18:34:19 crc kubenswrapper[4721]: [+]autoregister-completion ok
Jan 28 18:34:19 crc kubenswrapper[4721]: [+]poststarthook/apiservice-openapi-controller ok
Jan 28 18:34:19 crc kubenswrapper[4721]: [+]poststarthook/apiservice-openapiv3-controller ok
Jan 28 18:34:19 crc kubenswrapper[4721]: livez check failed
Jan 28 18:34:19 crc kubenswrapper[4721]: I0128 18:34:19.364723 4721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 28 18:34:19 crc kubenswrapper[4721]: I0128 18:34:19.477638 4721 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-09 19:30:28.126786287 +0000 UTC
Jan 28 18:34:19 crc kubenswrapper[4721]: I0128 18:34:19.540681 4721 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Liveness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body=
Jan 28 18:34:19 crc kubenswrapper[4721]: I0128 18:34:19.540759 4721 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused"
Jan 28 18:34:19 crc kubenswrapper[4721]: I0128 18:34:19.689801 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log"
Jan 28 18:34:19 crc kubenswrapper[4721]: I0128 18:34:19.692040 4721 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="d94fd2ab381713ed153fd84175ad96da298a40fc1fe6d1e38f14cd78918d3212" exitCode=255
Jan 28 18:34:19 crc kubenswrapper[4721]: I0128 18:34:19.692087 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"d94fd2ab381713ed153fd84175ad96da298a40fc1fe6d1e38f14cd78918d3212"}
Jan 28 18:34:19 crc kubenswrapper[4721]: I0128 18:34:19.692280 4721 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 28 18:34:19 crc kubenswrapper[4721]: I0128 18:34:19.693055 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 18:34:19 crc kubenswrapper[4721]: I0128 18:34:19.693105 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 18:34:19 crc kubenswrapper[4721]: I0128 18:34:19.693159 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 18:34:19 crc kubenswrapper[4721]: I0128 18:34:19.693954 4721 scope.go:117] "RemoveContainer" containerID="d94fd2ab381713ed153fd84175ad96da298a40fc1fe6d1e38f14cd78918d3212"
Jan 28 18:34:20 crc kubenswrapper[4721]: I0128 18:34:20.478273 4721 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-19 22:52:09.996060202 +0000 UTC
Jan 28 18:34:20 crc kubenswrapper[4721]: I0128 18:34:20.696717 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log"
Jan 28 18:34:20 crc kubenswrapper[4721]: I0128 18:34:20.698108 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"ca3243d75b5ce4aff178d82c45c7d1f765013233b5736600151adb4801aa462b"}
Jan 28 18:34:20 crc kubenswrapper[4721]: I0128 18:34:20.698241 4721 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 28 18:34:20 crc kubenswrapper[4721]: I0128 18:34:20.698810 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 18:34:20 crc kubenswrapper[4721]: I0128 18:34:20.698835 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 18:34:20 crc kubenswrapper[4721]: I0128 18:34:20.698844 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 18:34:21 crc kubenswrapper[4721]: I0128 18:34:21.478438 4721 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-25 00:02:59.206763168 +0000 UTC
Jan 28 18:34:21 crc kubenswrapper[4721]: I0128 18:34:21.701592 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log"
Jan 28 18:34:21 crc kubenswrapper[4721]: I0128 18:34:21.701972 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log"
Jan 28 18:34:21 crc kubenswrapper[4721]: I0128 18:34:21.703594 4721 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="ca3243d75b5ce4aff178d82c45c7d1f765013233b5736600151adb4801aa462b" exitCode=255
Jan 28 18:34:21 crc kubenswrapper[4721]: I0128 18:34:21.703637 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"ca3243d75b5ce4aff178d82c45c7d1f765013233b5736600151adb4801aa462b"}
Jan 28 18:34:21 crc kubenswrapper[4721]: I0128 18:34:21.703686 4721 scope.go:117] "RemoveContainer" containerID="d94fd2ab381713ed153fd84175ad96da298a40fc1fe6d1e38f14cd78918d3212"
Jan 28 18:34:21 crc kubenswrapper[4721]: I0128 18:34:21.703807 4721 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 28 18:34:21 crc kubenswrapper[4721]: I0128 18:34:21.704634 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 18:34:21 crc kubenswrapper[4721]: I0128 18:34:21.704667 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 18:34:21 crc kubenswrapper[4721]: I0128 18:34:21.704676 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 18:34:21 crc kubenswrapper[4721]: I0128 18:34:21.707878 4721 scope.go:117] "RemoveContainer" containerID="ca3243d75b5ce4aff178d82c45c7d1f765013233b5736600151adb4801aa462b"
Jan 28 18:34:21 crc kubenswrapper[4721]: E0128 18:34:21.708767 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792"
Jan 28 18:34:22 crc kubenswrapper[4721]: I0128 18:34:22.479493 4721 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-02 22:50:23.313730928 +0000 UTC
Jan 28 18:34:22 crc kubenswrapper[4721]: I0128 18:34:22.707076 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log"
Jan 28 18:34:23 crc kubenswrapper[4721]: I0128 18:34:23.480572 4721 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-09 20:09:32.538706703 +0000 UTC
Jan 28 18:34:23 crc kubenswrapper[4721]: I0128 18:34:23.832647 4721 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160
Jan 28 18:34:23 crc kubenswrapper[4721]: I0128 18:34:23.834252 4721 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160
Jan 28 18:34:23 crc kubenswrapper[4721]: I0128 18:34:23.836734 4721 trace.go:236] Trace[1411281629]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (28-Jan-2026 18:34:12.606) (total time: 11230ms):
Jan 28 18:34:23 crc kubenswrapper[4721]: Trace[1411281629]: ---"Objects listed" error: 11230ms (18:34:23.836)
Jan 28 18:34:23 crc kubenswrapper[4721]: Trace[1411281629]: [11.230197949s] [11.230197949s] END
Jan 28 18:34:23 crc kubenswrapper[4721]: I0128 18:34:23.836757 4721 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160
Jan 28 18:34:23 crc kubenswrapper[4721]: I0128 18:34:23.837053 4721 reconstruct.go:205] "DevicePaths of reconstructed volumes updated"
Jan 28 18:34:23 crc kubenswrapper[4721]: I0128 18:34:23.837209 4721 trace.go:236] Trace[247081360]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (28-Jan-2026 18:34:11.764) (total time: 12072ms):
Jan 28 18:34:23 crc kubenswrapper[4721]: Trace[247081360]: ---"Objects listed" error: 12072ms (18:34:23.837)
Jan 28 18:34:23 crc kubenswrapper[4721]: Trace[247081360]: [12.0726942s] [12.0726942s] END
Jan 28 18:34:23 crc kubenswrapper[4721]: I0128 18:34:23.837233 4721 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160
Jan 28 18:34:23 crc kubenswrapper[4721]: I0128 18:34:23.844794 4721 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146
Jan 28 18:34:23 crc kubenswrapper[4721]: I0128 18:34:23.892155 4721 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.363440 4721 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.367287 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.382491 4721 scope.go:117] "RemoveContainer" containerID="ca3243d75b5ce4aff178d82c45c7d1f765013233b5736600151adb4801aa462b" Jan 28 18:34:24 crc kubenswrapper[4721]: E0128 18:34:24.382793 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.460299 4721 apiserver.go:52] "Watching apiserver" Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.464335 4721 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.464796 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-operator/network-operator-58b4c7f79c-55gtf","openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-network-console/networking-console-plugin-85b44fc459-gdk6g","openshift-network-diagnostics/network-check-source-55646444c4-trplf","openshift-network-diagnostics/network-check-target-xd92c","openshift-network-node-identity/network-node-identity-vrzqb","openshift-network-operator/iptables-alerter-4ln5h"] Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.465261 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.465291 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 28 18:34:24 crc kubenswrapper[4721]: E0128 18:34:24.465320 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.465409 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.465432 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 28 18:34:24 crc kubenswrapper[4721]: E0128 18:34:24.465500 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.466080 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 18:34:24 crc kubenswrapper[4721]: E0128 18:34:24.466203 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.466256 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.467928 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.467979 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.468043 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.468269 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.469006 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.469029 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.469011 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.469103 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.469789 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.480715 4721 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-23 23:21:55.428613188 +0000 UTC Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.498045 4721 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:24Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.510293 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The 
container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.523429 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"94f18835-9a2d-4427-bc71-e4cd48b94c19\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:33:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:33:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:33:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a345c3bb49064600db61f16588229b6877293710e6c21c15d91746c2f1d50b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://93950542dfa7bda5686ef6d8ff927d14baafd149893c732658e9aa3916be64ae\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://486a6284a1e3930aac13368062b8796b4c271214dd08cb60c7ae2ce7b4a45a05\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca3243d75b5ce4aff178d82c45c7d1f765013233b5736600151adb4801aa462b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ca3243d75b5ce4aff178d82c45c7d1f765013233b5736600151adb4801aa462b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T18:34:21Z\\\",\\\"message\\\":\\\"le observer\\\\nW0128 18:34:20.724978 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0128 18:34:20.725313 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 18:34:20.726636 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2540251349/tls.crt::/tmp/serving-cert-2540251349/tls.key\\\\\\\"\\\\nI0128 18:34:21.177830 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0128 18:34:21.196258 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0128 18:34:21.196291 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0128 18:34:21.196317 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0128 18:34:21.196323 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0128 18:34:21.230729 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0128 18:34:21.230760 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0128 18:34:21.230771 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 18:34:21.230781 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 18:34:21.230789 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0128 18:34:21.230795 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0128 18:34:21.230800 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0128 18:34:21.230804 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0128 18:34:21.233981 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T18:34:20Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://90f1279d8262e042527207dbe56a1d08ba554650510bebfc53bd24cd84532026\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:06Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1499c7b625ec721799e4cf54e3de39ab75a752bbb0450a8eeffcdb6259f8a585\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1499c7b625ec721799e4cf54e3de39ab75a752bbb0450a8eeffcdb6259f8a585\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:33:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:33:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.533027 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"86fc797f-6769-4801-8cb0-b9f25c9ec29b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:33:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:33:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e8c8cd370cc146f15325497afef171437a3c7c3d4e82cda99a321bf847399bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f712f75bf0603b358c696f1a62f2363254ab926615ab377291c7083308e3f2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:33:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6143e461a697375ca79df96494f8cf4a575e622428db241f50a52ae1c67acbc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e37a3261383445df2e4050fb0b3f92c0afc6379c71eb061475783a72cd37459f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:33:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.541892 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:24Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.549872 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.561196 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:24Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.561470 4721 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.571387 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:24Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.579207 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:24Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.593251 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.599070 4721 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc"
Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.611262 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc"
Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.612409 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.613036 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"]
Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.623241 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:24Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.632562 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.641958 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:24Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.644349 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") "
Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.644403 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") "
Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.644427 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") "
Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.644463 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") "
Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.644485 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.644506 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.644527 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") "
Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.644547 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") "
Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.644566 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") "
Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.644585 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") "
Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.644631 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") "
Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.644654 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") "
Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.644676 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") "
Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.644728 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.644745 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.644777 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") "
Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.644801 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") "
Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.644802 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.644828 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.644822 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") "
Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.644897 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") "
Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.644924 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") "
Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.644943 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") "
Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.644962 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") "
Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.644980 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") "
Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.644998 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") "
Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.645015 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") "
Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.645033 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") "
Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.645051 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.645075 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") "
Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.645097 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") "
Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.645117 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") "
Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.645131 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") "
Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.645150 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") "
Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.645188 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") "
Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.645205 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") "
Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.645221 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.645237 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") "
Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.645253 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") "
Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.645271 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") "
Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.645336 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") "
Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.645351 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") "
Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.645366 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") "
Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.645380 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") "
Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.645395 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.645411 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") "
Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.645426 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.645441 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") "
Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.645466 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") "
Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.645493 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") "
Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.645549 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") "
Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.645566 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") "
Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.645581 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") "
Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.645604 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") "
Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.645621 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") "
Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.645636 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") "
Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.645656 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") "
Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.645677 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") "
Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.645696 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") "
Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.645713 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") "
Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.645727 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") "
Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.645742 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.645762 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") "
Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.645778 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") "
Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.645794 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") "
Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.645810 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") "
Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.645826 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") "
Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.645841 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") "
Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.645860 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.645882 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.645902 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") "
Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.645919 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") "
Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.645934 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") "
Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.645951 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") "
Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.645974 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") "
Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.645992 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") "
Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.646009 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") "
Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.644949 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.647479 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.646053 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") "
Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.647595 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.647623 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") "
Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.647648 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.647631 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.647711 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") "
Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.647723 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.647868 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") "
Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.647908 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.647914 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.647950 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.648052 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") "
Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.648088 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.648108 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.648103 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") "
Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.648206 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") "
Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.648334 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.648288 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") "
Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.648365 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.648390 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.648451 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.648490 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") "
Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.648527 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") "
Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.648558 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") "
Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.648608 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.648611 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.648633 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.648673 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.648653 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.648734 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.648805 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.648832 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.648858 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.648889 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.648919 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.648952 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.648990 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.649028 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: 
\"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.649057 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.649085 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.649124 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.649277 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.649310 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.649345 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.649384 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.649411 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.649503 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.649644 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" 
(UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.649721 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.649936 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.648732 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.648821 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.645004 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.645030 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.645098 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.645123 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.645236 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.645251 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.645492 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.645631 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.645652 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.645750 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.645788 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.645812 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.645852 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.645836 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.645930 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.645936 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.646083 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.646096 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.646112 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). 
InnerVolumeSpecName "kube-api-access-d4lsv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.646127 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.646131 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.646142 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.646295 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.646317 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.646395 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.647214 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.647379 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.647401 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.648845 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.648955 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.649112 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.649117 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.649141 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.649484 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). 
InnerVolumeSpecName "kube-api-access-kfwg7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.644951 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.649503 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.649603 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.649762 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.649907 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.650066 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.650091 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.651328 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.651468 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.652724 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.653056 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.653355 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.653593 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.653788 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.653843 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.653952 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.653929 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.653971 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.654013 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.654027 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.654845 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.654911 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). 
InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.650505 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.655197 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.655236 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.655291 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.655319 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.655382 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.655416 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.655445 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.655495 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.655527 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.655583 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.655611 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.655620 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.655667 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.655696 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.655749 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.655779 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.655865 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.655892 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.655913 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.655922 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.656022 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.656062 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.656310 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") " Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.656341 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.656370 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.656395 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.656422 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.656450 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.656477 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod 
\"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.656538 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.656566 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.656593 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.656617 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.656660 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.656720 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.656761 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.656733 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.656878 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.656883 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.656905 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.656925 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.656907 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.656947 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.656966 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.656983 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.657000 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.657018 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.657033 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.657051 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.657071 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.657089 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.657087 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: 
"09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.657107 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.657127 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.657143 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.657161 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.657198 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.657213 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.657230 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.657247 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.657266 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.657284 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: 
\"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.657299 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.657316 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.657333 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.657349 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.657365 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.657381 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.657396 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.657414 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.657430 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.657455 4721 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.657473 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.657488 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.657502 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.657519 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.657552 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.657570 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.657587 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.657613 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.657636 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.657659 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") " Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.657684 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") " Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.657707 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.657729 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.657753 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.657777 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.657800 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.657823 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.657848 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.657871 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.657894 4721 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.657942 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.657969 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.657994 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.658017 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.658042 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.658068 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.658099 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.658124 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: 
\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.658149 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.661223 4721 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.663321 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.663892 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.657093 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.657264 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.657386 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.657407 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.657562 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.657932 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:34:24 crc kubenswrapper[4721]: E0128 18:34:24.658181 4721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 18:34:25.158145441 +0000 UTC m=+30.883451001 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.666584 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.666631 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.666664 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.666695 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.666727 4721 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.666753 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.666866 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\"" Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.666865 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.658556 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.658562 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.658592 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.658664 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.658740 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.659314 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.659383 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.659391 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.659408 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.657894 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.659519 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.659544 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.659922 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.660100 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.660109 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.660136 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.660291 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.660417 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.660351 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). 
InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.660624 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.660695 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.660741 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.660739 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.660812 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.661011 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.661031 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.661128 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.661523 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.661601 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.661736 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.661762 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.661854 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.661873 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.661905 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.662345 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.662591 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.662979 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.663062 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.663078 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.663332 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.663337 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). 
InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.663750 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.664356 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.664544 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:34:24 crc kubenswrapper[4721]: E0128 18:34:24.664631 4721 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.665023 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.665161 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.665535 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.665620 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.665830 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.665841 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.665860 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.665847 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.666092 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.666132 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.666136 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.666285 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). 
InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.666435 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.652791 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:34:24 crc kubenswrapper[4721]: E0128 18:34:24.666994 4721 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 28 18:34:24 crc kubenswrapper[4721]: E0128 18:34:24.667788 4721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-28 18:34:25.167764332 +0000 UTC m=+30.893069892 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.666887 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\"" Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.667982 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\"" Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.667999 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\"" Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.668014 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.668029 4721 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\"" Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.668042 4721 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: 
\"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.668054 4721 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.668067 4721 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\"" Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.668080 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\"" Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.668091 4721 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\"" Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.668102 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.668115 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\"" Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.668137 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\"" Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.668149 4721 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.666478 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.668160 4721 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\"" Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.668233 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.668247 4721 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\"" Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.668258 4721 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.668270 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\"" Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.668282 4721 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.668276 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.668294 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\"" Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.668307 4721 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.668310 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.668318 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\"" Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.668344 4721 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.668357 4721 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.668368 4721 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.668376 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.668458 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:34:24 crc kubenswrapper[4721]: E0128 18:34:24.668763 4721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-28 18:34:25.168749982 +0000 UTC m=+30.894055542 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.667222 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.668998 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.669161 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.662857 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.669346 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\"" Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.669410 4721 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\"" Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.669421 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\"" Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.669432 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\"" Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.669442 4721 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\"" Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.669444 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.669453 4721 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.669492 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\"" Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.669509 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\"" Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.669546 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\"" Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.669640 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.669684 4721 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\"" Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.669701 4721 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.669716 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.669730 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\"" Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.669743 4721 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.669755 4721 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.669767 4721 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 28 18:34:24 crc 
kubenswrapper[4721]: I0128 18:34:24.669780 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\"" Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.669792 4721 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.669804 4721 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.669817 4721 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.669832 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\"" Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.669841 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.669848 4721 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.669920 4721 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.669933 4721 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.669945 4721 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.669954 4721 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\"" Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.669953 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.669964 4721 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.669976 4721 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.669972 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.670001 4721 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.670065 4721 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.670083 4721 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.670093 4721 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.670103 4721 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.670112 4721 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\"" Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.670120 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\"" Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.670131 4721 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.670142 4721 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 28 18:34:24 crc 
kubenswrapper[4721]: I0128 18:34:24.670155 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\"" Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.670167 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\"" Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.670195 4721 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.670207 4721 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.670218 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\"" Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.670229 4721 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.670240 4721 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.670253 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\"" Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.670265 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\"" Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.670266 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.670278 4721 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\"" Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.670288 4721 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.670297 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.670307 4721 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.670316 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.670324 4721 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\"" Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.670334 4721 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.670342 4721 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.670350 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\"" Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.670359 4721 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\"" Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.670367 4721 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.670375 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\"" Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.670383 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: 
\"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\"" Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.670392 4721 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.670403 4721 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.670415 4721 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.670425 4721 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.670437 4721 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.670701 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 28 18:34:24 crc kubenswrapper[4721]: E0128 18:34:24.673222 4721 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 28 18:34:24 crc kubenswrapper[4721]: E0128 18:34:24.673251 4721 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 28 18:34:24 crc kubenswrapper[4721]: E0128 18:34:24.673265 4721 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 28 18:34:24 crc kubenswrapper[4721]: E0128 18:34:24.673329 4721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-28 18:34:25.173308215 +0000 UTC m=+30.898613775 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 28 18:34:24 crc kubenswrapper[4721]: E0128 18:34:24.676376 4721 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 28 18:34:24 crc kubenswrapper[4721]: E0128 18:34:24.676402 4721 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 28 18:34:24 crc kubenswrapper[4721]: E0128 18:34:24.676414 4721 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 28 18:34:24 crc kubenswrapper[4721]: E0128 18:34:24.676463 4721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-28 18:34:25.176445063 +0000 UTC m=+30.901750623 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.678101 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.680217 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.680332 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.680664 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.680789 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.680734 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.680928 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.680929 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.680932 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.681156 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.681387 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:24Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.681406 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.681472 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.681557 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.681732 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.682273 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.682425 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.682293 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.682427 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.682762 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.682970 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.683090 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.684225 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.686979 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.688623 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.689000 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.689048 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.689219 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.689412 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.690298 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.690325 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.694370 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"94f18835-9a2d-4427-bc71-e4cd48b94c19\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:33:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:33:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:33:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a345c3bb49064600db61f16588229b6877293710e6c21c15d91746c2f1d50b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://93950542dfa7bda5686ef6d8ff927d14baafd149893c732658e9aa3916be64ae\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://486a6284a1e3930aac13368062b8796b4c271214dd08cb60c7ae2ce7b4a45a05\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca3243d75b5ce4aff178d82c45c7d1f765013233b5736600151adb4801aa462b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ca3243d75b5ce4aff178d82c45c7d1f765013233b5736600151adb4801aa462b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T18:34:21Z\\\",\\\"message\\\":\\\"le observer\\\\nW0128 18:34:20.724978 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0128 18:34:20.725313 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 18:34:20.726636 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2540251349/tls.crt::/tmp/serving-cert-2540251349/tls.key\\\\\\\"\\\\nI0128 18:34:21.177830 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0128 18:34:21.196258 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0128 18:34:21.196291 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0128 18:34:21.196317 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0128 18:34:21.196323 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0128 18:34:21.230729 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0128 18:34:21.230760 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0128 18:34:21.230771 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 18:34:21.230781 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 18:34:21.230789 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0128 18:34:21.230795 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0128 18:34:21.230800 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0128 18:34:21.230804 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0128 18:34:21.233981 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T18:34:20Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://90f1279d8262e042527207dbe56a1d08ba554650510bebfc53bd24cd84532026\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:06Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1499c7b625ec721799e4cf54e3de39ab75a752bbb0450a8eeffcdb6259f8a585\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1499c7b625ec721799e4cf54e3de39ab75a752bbb0450a8eeffcdb6259f8a585\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:33:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:33:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.705605 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.706616 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"86fc797f-6769-4801-8cb0-b9f25c9ec29b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:33:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:33:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e8c8cd370cc146f15325497afef171437a3c7c3d4e82cda99a321bf847399bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f712f75bf0603b358c696f1a62f2363254ab926615ab377291c7083308e3f2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:33:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6143e461a697375ca79df96494f8cf4a575e622428db241f50a52ae1c67acbc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\
\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e37a3261383445df2e4050fb0b3f92c0afc6379c71eb061475783a72cd37459f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:33:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.711612 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.715661 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.717297 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:24Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 18:34:24 crc kubenswrapper[4721]: E0128 18:34:24.719237 4721 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"kube-apiserver-crc\" already exists" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 28 18:34:24 crc kubenswrapper[4721]: E0128 18:34:24.719370 4721 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-crc\" already exists" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.719551 4721 scope.go:117] "RemoveContainer" containerID="ca3243d75b5ce4aff178d82c45c7d1f765013233b5736600151adb4801aa462b" Jan 28 18:34:24 crc kubenswrapper[4721]: E0128 18:34:24.719868 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" 
pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Jan 28 18:34:24 crc kubenswrapper[4721]: E0128 18:34:24.724410 4721 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"etcd-crc\" already exists" pod="openshift-etcd/etcd-crc" Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.728079 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.740621 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:24Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.749912 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:24Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.768425 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"50459cb7-bb46-4f2f-a119-03a6102ad146\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:33:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97a290731225e8b998c044c9547db36b5af73889929db5fbc37b6b4170f5dbb7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9efdfabfba8f11d2bdeafad77520260137fcfff3ca51ae77afd4ee9b141f602\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3c1b4d37e11f8a4ee9539e5ea3cce77b7d382133b9e59ccd9c3e8aa2b308754\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52a1199a0611555765afe8fcf01774278848806
4c958d0c75c46d51ddc2ff9d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://316506ed1e9f91314d427e1b6c3a44ee58d0c4a3a8a93b97bdf098703a6a2f34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f1a6a669a0341baa2d7fb17bcebfe702f96d1192ca35c3f2d00e463778271d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f1a6a669a0341baa2d7fb17bcebfe702f96d1192ca35c3f2d00e463778271d6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:33:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d3a248e15466c89ce3b237a22ce54365004ea86dc1541e1ac44e028f91bf19f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d3a248e15466c89ce3b237a22ce54365004ea86dc1541e1ac44e028f91bf19f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:34:02Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f2e1f3a63445bad45029602f52863bfdf2fab495711e3318d68fee042530acb1\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f2e1f3a63445bad45029602f52863bfdf2fab495711e3318d68fee042530acb1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:34:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:33:56Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.771360 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.771439 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.771506 4721 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.771521 4721 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\"" Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.771533 4721 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.771545 4721 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\"" Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.771556 4721 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 
18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.771567 4721 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.771571 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.771579 4721 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.771620 4721 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\"" Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.771631 4721 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\"" Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.771648 4721 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.771621 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.771685 4721 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.771859 4721 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\"" Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.771885 4721 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.771902 4721 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.771917 4721 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.771933 4721 
reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.771959 4721 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.771975 4721 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.771990 4721 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.772005 4721 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.772021 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.772034 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\"" Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.772048 4721 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.772062 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\"" Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.772073 4721 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.772086 4721 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\"" Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.772099 4721 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.772128 4721 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.772144 4721 reconciler_common.go:293] "Volume detached for volume 
\"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.772158 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\"" Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.772191 4721 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\"" Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.772208 4721 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.772221 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\"" Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.772237 4721 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.772248 4721 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.772261 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\"" Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.772274 4721 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.772289 4721 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.772308 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\"" Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.772332 4721 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.772344 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\"" Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.772357 4721 
reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.772371 4721 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.772384 4721 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.772396 4721 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\"" Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.772408 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.772420 4721 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.772432 4721 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.772444 4721 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.772459 4721 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.772471 4721 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.772483 4721 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.772501 4721 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.772513 4721 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\"" Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.772524 4721 reconciler_common.go:293] "Volume detached 
for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.772535 4721 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.772547 4721 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.772560 4721 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.772572 4721 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\"" Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.772584 4721 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\"" Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.772595 4721 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.772607 4721 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.772619 4721 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.772630 4721 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\"" Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.772643 4721 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\"" Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.772656 4721 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.772668 4721 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.772680 4721 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on 
node \"crc\" DevicePath \"\"" Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.772694 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\"" Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.772707 4721 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.772721 4721 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.772734 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.772750 4721 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.772764 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\"" Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.772779 4721 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.772794 4721 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.772808 4721 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.772819 4721 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.772833 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\"" Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.772845 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\"" Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.772859 4721 reconciler_common.go:293] "Volume detached for volume \"images\" 
(UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\"" Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.772872 4721 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.772885 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\"" Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.772898 4721 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\"" Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.772909 4721 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.772919 4721 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\"" Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.772940 4721 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.772949 4721 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\"" Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.772958 4721 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.772967 4721 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\"" Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.772980 4721 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\"" Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.772991 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\"" Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.773002 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\"" Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.773011 4721 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" 
DevicePath \"\"" Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.773020 4721 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.773031 4721 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.773041 4721 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\"" Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.773050 4721 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.773060 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.773069 4721 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.773079 4721 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.773090 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\"" Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.773101 4721 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.778420 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.782143 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"94f18835-9a2d-4427-bc71-e4cd48b94c19\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:33:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:33:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:33:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a345c3bb49064600db61f16588229b6877293710e6c21c15d91746c2f1d50b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://93950542dfa7bda5686ef6d8ff927d14baafd149893c732658e9aa3916be64ae\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://486a6284a1e3930aac13368062b8796b4c271214dd08cb60c7ae2ce7b4a45a05\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-ap
iserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca3243d75b5ce4aff178d82c45c7d1f765013233b5736600151adb4801aa462b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ca3243d75b5ce4aff178d82c45c7d1f765013233b5736600151adb4801aa462b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T18:34:21Z\\\",\\\"message\\\":\\\"le observer\\\\nW0128 18:34:20.724978 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0128 18:34:20.725313 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 18:34:20.726636 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2540251349/tls.crt::/tmp/serving-cert-2540251349/tls.key\\\\\\\"\\\\nI0128 18:34:21.177830 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0128 18:34:21.196258 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0128 18:34:21.196291 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0128 18:34:21.196317 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0128 18:34:21.196323 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0128 18:34:21.230729 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0128 18:34:21.230760 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0128 18:34:21.230771 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 18:34:21.230781 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 18:34:21.230789 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0128 18:34:21.230795 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0128 18:34:21.230800 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0128 18:34:21.230804 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0128 18:34:21.233981 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T18:34:20Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://90f1279d8262e042527207dbe56a1d08ba554650510bebfc53bd24cd84532026\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:06Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1499c7b625ec721799e4cf54e3de39ab75a752bbb0450a8eeffcdb6259f8a585\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1499c7b625ec721799e4cf54e3de39ab75a752bbb0450a8eeffcdb6259f8a585\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:33:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:33:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.782445 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.790162 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 28 18:34:24 crc kubenswrapper[4721]: W0128 18:34:24.795491 4721 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod37a5e44f_9a88_4405_be8a_b645485e7312.slice/crio-6045b6f0bee6f2e719837c88baf1d9bc079549e167abb23df6f8a672dc7323d5 WatchSource:0}: Error finding container 6045b6f0bee6f2e719837c88baf1d9bc079549e167abb23df6f8a672dc7323d5: Status 404 returned error can't find the container with id 6045b6f0bee6f2e719837c88baf1d9bc079549e167abb23df6f8a672dc7323d5 Jan 28 18:34:24 crc kubenswrapper[4721]: W0128 18:34:24.795836 4721 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podef543e1b_8068_4ea3_b32a_61027b32e95d.slice/crio-252d432d491d4c107ba4d25567e76c91b9966e7f9a65cd157f8aec2ab1765bcb WatchSource:0}: Error finding container 252d432d491d4c107ba4d25567e76c91b9966e7f9a65cd157f8aec2ab1765bcb: Status 404 returned error can't find the container with id 252d432d491d4c107ba4d25567e76c91b9966e7f9a65cd157f8aec2ab1765bcb Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.797269 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"86fc797f-6769-4801-8cb0-b9f25c9ec29b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:33:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:33:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e8c8cd370cc146f15325497afef171437a3c7c3d4e82cda99a321bf847399bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f712f75bf0603b358c696f1a62f2363254ab926615ab377291c7083308e3f2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"re
startCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:33:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6143e461a697375ca79df96494f8cf4a575e622428db241f50a52ae1c67acbc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e37a3261383445df2e4050fb0b3f92c0afc6379c71eb061475783a72cd37459f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:33:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.812021 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:24Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.824679 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 18:34:24 crc kubenswrapper[4721]: I0128 18:34:24.834622 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:24Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 18:34:25 crc kubenswrapper[4721]: I0128 18:34:25.176661 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 18:34:25 crc kubenswrapper[4721]: I0128 18:34:25.176769 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 18:34:25 crc kubenswrapper[4721]: I0128 18:34:25.176824 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 18:34:25 crc kubenswrapper[4721]: I0128 18:34:25.176866 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 18:34:25 crc kubenswrapper[4721]: E0128 18:34:25.176895 4721 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 28 18:34:25 crc kubenswrapper[4721]: E0128 18:34:25.176902 4721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 18:34:26.17686938 +0000 UTC m=+31.902174970 (durationBeforeRetry 1s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 18:34:25 crc kubenswrapper[4721]: I0128 18:34:25.176952 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 18:34:25 crc kubenswrapper[4721]: E0128 18:34:25.176972 4721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-28 18:34:26.176958353 +0000 UTC m=+31.902263913 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 28 18:34:25 crc kubenswrapper[4721]: E0128 18:34:25.176999 4721 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 28 18:34:25 crc kubenswrapper[4721]: E0128 18:34:25.177049 4721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-28 18:34:26.177034835 +0000 UTC m=+31.902340415 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 28 18:34:25 crc kubenswrapper[4721]: E0128 18:34:25.177097 4721 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 28 18:34:25 crc kubenswrapper[4721]: E0128 18:34:25.177122 4721 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 28 18:34:25 crc kubenswrapper[4721]: E0128 18:34:25.177139 4721 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 28 18:34:25 crc kubenswrapper[4721]: E0128 18:34:25.177140 4721 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 28 18:34:25 crc kubenswrapper[4721]: E0128 18:34:25.177160 4721 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 28 18:34:25 crc kubenswrapper[4721]: E0128 18:34:25.177199 4721 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 28 18:34:25 crc kubenswrapper[4721]: E0128 18:34:25.177243 4721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-28 18:34:26.177219301 +0000 UTC m=+31.902524881 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 28 18:34:25 crc kubenswrapper[4721]: E0128 18:34:25.177269 4721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-28 18:34:26.177258752 +0000 UTC m=+31.902564332 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 28 18:34:25 crc kubenswrapper[4721]: I0128 18:34:25.481468 4721 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-18 03:13:24.204477284 +0000 UTC Jan 28 18:34:25 crc kubenswrapper[4721]: I0128 18:34:25.531604 4721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Jan 28 18:34:25 crc kubenswrapper[4721]: I0128 18:34:25.532589 4721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Jan 28 18:34:25 crc kubenswrapper[4721]: I0128 18:34:25.533855 4721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Jan 28 18:34:25 crc kubenswrapper[4721]: I0128 18:34:25.534542 4721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Jan 28 18:34:25 crc kubenswrapper[4721]: I0128 18:34:25.535541 4721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Jan 28 18:34:25 crc kubenswrapper[4721]: I0128 18:34:25.536030 4721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Jan 28 18:34:25 crc kubenswrapper[4721]: I0128 18:34:25.536775 4721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Jan 28 18:34:25 crc kubenswrapper[4721]: I0128 18:34:25.537718 4721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Jan 28 18:34:25 crc kubenswrapper[4721]: I0128 18:34:25.538374 4721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Jan 28 18:34:25 crc kubenswrapper[4721]: I0128 18:34:25.539478 4721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Jan 28 18:34:25 crc kubenswrapper[4721]: I0128 18:34:25.540290 4721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Jan 28 18:34:25 crc kubenswrapper[4721]: I0128 18:34:25.541552 4721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Jan 28 18:34:25 crc kubenswrapper[4721]: I0128 18:34:25.542049 4721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Jan 28 18:34:25 crc kubenswrapper[4721]: I0128 18:34:25.542583 4721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Jan 28 18:34:25 crc kubenswrapper[4721]: I0128 18:34:25.543476 4721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Jan 28 18:34:25 crc kubenswrapper[4721]: I0128 18:34:25.543978 4721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Jan 28 18:34:25 crc kubenswrapper[4721]: I0128 18:34:25.544532 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 18:34:25 crc kubenswrapper[4721]: I0128 18:34:25.544929 4721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Jan 28 18:34:25 crc kubenswrapper[4721]: I0128 18:34:25.545396 4721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Jan 28 18:34:25 crc kubenswrapper[4721]: I0128 18:34:25.545939 4721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Jan 28 18:34:25 crc kubenswrapper[4721]: I0128 18:34:25.546950 4721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Jan 28 18:34:25 crc kubenswrapper[4721]: I0128 18:34:25.547418 4721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Jan 28 18:34:25 crc kubenswrapper[4721]: I0128 18:34:25.548373 4721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Jan 28 18:34:25 crc kubenswrapper[4721]: I0128 18:34:25.548825 4721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Jan 28 18:34:25 crc kubenswrapper[4721]: I0128 18:34:25.549846 4721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Jan 28 18:34:25 crc kubenswrapper[4721]: I0128 18:34:25.550359 4721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Jan 28 18:34:25 crc kubenswrapper[4721]: I0128 18:34:25.550940 4721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Jan 28 18:34:25 crc 
kubenswrapper[4721]: I0128 18:34:25.551987 4721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Jan 28 18:34:25 crc kubenswrapper[4721]: I0128 18:34:25.552638 4721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Jan 28 18:34:25 crc kubenswrapper[4721]: I0128 18:34:25.553749 4721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Jan 28 18:34:25 crc kubenswrapper[4721]: I0128 18:34:25.554270 4721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Jan 28 18:34:25 crc kubenswrapper[4721]: I0128 18:34:25.555403 4721 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Jan 28 18:34:25 crc kubenswrapper[4721]: I0128 18:34:25.555508 4721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Jan 28 18:34:25 crc kubenswrapper[4721]: I0128 18:34:25.557039 4721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Jan 28 18:34:25 crc kubenswrapper[4721]: I0128 18:34:25.558164 4721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Jan 28 18:34:25 crc kubenswrapper[4721]: I0128 18:34:25.558675 4721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Jan 28 18:34:25 crc kubenswrapper[4721]: I0128 18:34:25.558957 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 18:34:25 crc kubenswrapper[4721]: I0128 18:34:25.560430 4721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Jan 28 18:34:25 crc kubenswrapper[4721]: I0128 18:34:25.561070 4721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Jan 28 18:34:25 crc kubenswrapper[4721]: I0128 18:34:25.561960 4721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Jan 28 18:34:25 crc kubenswrapper[4721]: I0128 18:34:25.562596 4721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Jan 28 18:34:25 crc kubenswrapper[4721]: I0128 18:34:25.563606 4721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Jan 28 18:34:25 crc kubenswrapper[4721]: I0128 18:34:25.564069 4721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Jan 28 18:34:25 crc kubenswrapper[4721]: I0128 18:34:25.565002 4721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Jan 28 18:34:25 crc kubenswrapper[4721]: I0128 18:34:25.565659 4721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Jan 28 18:34:25 crc kubenswrapper[4721]: I0128 18:34:25.566744 4721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" 
path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Jan 28 18:34:25 crc kubenswrapper[4721]: I0128 18:34:25.567398 4721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Jan 28 18:34:25 crc kubenswrapper[4721]: I0128 18:34:25.568289 4721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Jan 28 18:34:25 crc kubenswrapper[4721]: I0128 18:34:25.568853 4721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Jan 28 18:34:25 crc kubenswrapper[4721]: I0128 18:34:25.569930 4721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Jan 28 18:34:25 crc kubenswrapper[4721]: I0128 18:34:25.570541 4721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Jan 28 18:34:25 crc kubenswrapper[4721]: I0128 18:34:25.571620 4721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Jan 28 18:34:25 crc kubenswrapper[4721]: I0128 18:34:25.572217 4721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Jan 28 18:34:25 crc kubenswrapper[4721]: I0128 18:34:25.572345 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:25Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:25 crc kubenswrapper[4721]: I0128 18:34:25.573373 4721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Jan 28 18:34:25 crc kubenswrapper[4721]: I0128 18:34:25.574073 4721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Jan 28 18:34:25 crc kubenswrapper[4721]: I0128 18:34:25.574763 4721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Jan 28 18:34:25 crc kubenswrapper[4721]: I0128 18:34:25.586973 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:24Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:25Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:25 crc kubenswrapper[4721]: I0128 18:34:25.600698 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:24Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be 
located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:25Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:25 crc kubenswrapper[4721]: I0128 18:34:25.618815 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"50459cb7-bb46-4f2f-a119-03a6102ad146\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:33:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97a290731225e8b998c044c9547db36b5af73889929db5fbc37b6b4170f5dbb7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9efdfabfba8f11d2bdeafad77520260137fcfff3ca51ae77afd4ee9b141f602\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-me
trics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3c1b4d37e11f8a4ee9539e5ea3cce77b7d382133b9e59ccd9c3e8aa2b308754\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52a1199a0611555765afe8fcf017742788488064c958d0c75c46d51ddc2ff9d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://316506ed1e9f91314d427e1b6c3a44ee58d0c4a3a8a93b97bdf098703a6a2f34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f1a6a669a0341baa2d7fb17bcebfe702f96d1192ca35c3f2d00e463778271d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":fa
lse,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f1a6a669a0341baa2d7fb17bcebfe702f96d1192ca35c3f2d00e463778271d6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:33:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d3a248e15466c89ce3b237a22ce54365004ea86dc1541e1ac44e028f91bf19f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d3a248e15466c89ce3b237a22ce54365004ea86dc1541e1ac44e028f91bf19f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:34:02Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f2e1f3a63445bad45029602f52863bfdf2fab495711e3318d68fee042530acb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f2e1f3a63445bad45029602f52863bfdf2fab495711e3318d68fee042530acb1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:34:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:33:56Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:25Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:25 crc kubenswrapper[4721]: I0128 18:34:25.631926 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"94f18835-9a2d-4427-bc71-e4cd48b94c19\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:33:56Z\\\",\\\"message\\\":\\\"containers with unready 
status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:33:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:33:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a345c3bb49064600db61f16588229b6877293710e6c21c15d91746c2f1d50b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://93950542dfa7bda5686ef6d8ff927d14baafd149893c732658e9aa3916be64ae\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://486a6284a1e3930aac13368062b8796b4c271214dd08cb60c7ae2ce7b4a45a05\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca3243d75b5ce4aff178d82c45c7d1f765013233b5736600151adb4801aa462b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://ca3243d75b5ce4aff178d82c45c7d1f765013233b5736600151adb4801aa462b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T18:34:21Z\\\",\\\"message\\\":\\\"le observer\\\\nW0128 18:34:20.724978 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0128 18:34:20.725313 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 18:34:20.726636 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2540251349/tls.crt::/tmp/serving-cert-2540251349/tls.key\\\\\\\"\\\\nI0128 18:34:21.177830 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0128 18:34:21.196258 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0128 18:34:21.196291 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0128 18:34:21.196317 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0128 18:34:21.196323 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0128 18:34:21.230729 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0128 18:34:21.230760 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0128 18:34:21.230771 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 18:34:21.230781 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 18:34:21.230789 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0128 18:34:21.230795 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0128 18:34:21.230800 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0128 18:34:21.230804 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0128 18:34:21.233981 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T18:34:20Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://90f1279d8262e042527207dbe56a1d08ba554650510bebfc53bd24cd84532026\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:06Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1499c7b625ec721799e4cf54e3de39ab75a752bbb0450a8eeffcdb6259f8a585\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1499c7b625ec721799e4cf54e3de39ab75a752bbb0450a8eeffcdb6259f8a585\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:33:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:33:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:25Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:25 crc kubenswrapper[4721]: I0128 18:34:25.642854 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"86fc797f-6769-4801-8cb0-b9f25c9ec29b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:33:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:33:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e8c8cd370cc146f15325497afef171437a3c7c3d4e82cda99a321bf847399bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f712f75bf0603b358c696f1a62f2363254ab926615ab377291c7083308e3f2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:33:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6143e461a697375ca79df96494f8cf4a575e622428db241f50a52ae1c67acbc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e37a3261383445df2e4050fb0b3f92c0afc6379c71eb061475783a72cd37459f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:33:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:25Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:25 crc kubenswrapper[4721]: I0128 18:34:25.653840 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:24Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:25Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:25 crc kubenswrapper[4721]: I0128 18:34:25.684530 4721 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 18:34:25 crc kubenswrapper[4721]: I0128 18:34:25.685957 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:34:25 crc kubenswrapper[4721]: I0128 18:34:25.686004 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:34:25 crc kubenswrapper[4721]: I0128 18:34:25.686020 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:34:25 crc kubenswrapper[4721]: I0128 18:34:25.686119 4721 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 28 18:34:25 crc kubenswrapper[4721]: I0128 18:34:25.693645 4721 kubelet_node_status.go:115] "Node was previously registered" node="crc" Jan 28 18:34:25 crc kubenswrapper[4721]: I0128 18:34:25.693730 4721 kubelet_node_status.go:79] "Successfully registered node" node="crc" Jan 28 18:34:25 crc kubenswrapper[4721]: I0128 18:34:25.694554 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:34:25 crc kubenswrapper[4721]: I0128 18:34:25.694600 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:34:25 crc kubenswrapper[4721]: I0128 18:34:25.694612 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:34:25 crc kubenswrapper[4721]: I0128 18:34:25.694629 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:34:25 crc kubenswrapper[4721]: I0128 18:34:25.694641 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:34:25Z","lastTransitionTime":"2026-01-28T18:34:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:34:25 crc kubenswrapper[4721]: E0128 18:34:25.709825 4721 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:34:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:34:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:25Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:34:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:34:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:25Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7e2b6ea9-0dbd-4f62-9b1e-c7fac2eb6b3f\\\",\\\"systemUUID\\\":\\\"09e691cb-0cac-419d-a3e2-104cada8c62f\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:25Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:25 crc kubenswrapper[4721]: I0128 18:34:25.713254 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:34:25 crc kubenswrapper[4721]: I0128 18:34:25.713276 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 28 18:34:25 crc kubenswrapper[4721]: I0128 18:34:25.713285 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:34:25 crc kubenswrapper[4721]: I0128 18:34:25.713298 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:34:25 crc kubenswrapper[4721]: I0128 18:34:25.713308 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:34:25Z","lastTransitionTime":"2026-01-28T18:34:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:34:25 crc kubenswrapper[4721]: I0128 18:34:25.715436 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"ef61ab09457440960cf27586f65751f928ccc999db19fd16364262ce2449cd97"} Jan 28 18:34:25 crc kubenswrapper[4721]: I0128 18:34:25.715477 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"6045b6f0bee6f2e719837c88baf1d9bc079549e167abb23df6f8a672dc7323d5"} Jan 28 18:34:25 crc kubenswrapper[4721]: I0128 18:34:25.716356 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"fe177a883c2452961339ac55b97fd612d8278e02e38023a731263e7e6b19496b"} Jan 28 18:34:25 crc kubenswrapper[4721]: I0128 18:34:25.718255 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"56147b9675ae31e5e8b4473346c69d90a1013ab084214a7fc34295f810b229a8"} Jan 28 18:34:25 crc kubenswrapper[4721]: I0128 18:34:25.718280 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"5dc3277d9793a59f2e2a2b4d015782f383df20b594be435f894d24dbe5b837cb"} Jan 28 18:34:25 crc kubenswrapper[4721]: I0128 18:34:25.718330 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"252d432d491d4c107ba4d25567e76c91b9966e7f9a65cd157f8aec2ab1765bcb"} Jan 28 18:34:25 crc kubenswrapper[4721]: I0128 18:34:25.720112 4721 scope.go:117] "RemoveContainer" containerID="ca3243d75b5ce4aff178d82c45c7d1f765013233b5736600151adb4801aa462b" Jan 28 18:34:25 crc kubenswrapper[4721]: E0128 18:34:25.720299 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Jan 28 18:34:25 crc 
kubenswrapper[4721]: E0128 18:34:25.729664 4721 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:34:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:34:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:25Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:34:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:34:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:25Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider 
started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshif
t-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d
34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7e2b6ea9-0dbd-4f62-9b1e-c7fac2eb6b3f\\\",\\\"systemUUID\\\":\\\"09e691cb-0cac-419d-a3e2-104cada8c62f\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:25Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:25 crc kubenswrapper[4721]: I0128 18:34:25.730727 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef61ab09457440960cf27586f65751f928ccc999db19fd16364262ce2449cd97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:25Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:25 crc kubenswrapper[4721]: I0128 18:34:25.733917 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:34:25 crc kubenswrapper[4721]: I0128 18:34:25.733961 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:34:25 crc kubenswrapper[4721]: I0128 18:34:25.733971 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:34:25 crc kubenswrapper[4721]: I0128 18:34:25.733985 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:34:25 crc kubenswrapper[4721]: I0128 18:34:25.733997 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:34:25Z","lastTransitionTime":"2026-01-28T18:34:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:34:25 crc kubenswrapper[4721]: I0128 18:34:25.743535 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"86fc797f-6769-4801-8cb0-b9f25c9ec29b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:33:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:33:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e8c8cd370cc146f15325497afef171437a3c7c3d4e82cda99a321bf847399bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f712f75bf0603b358c696f1a62f2363254ab926615ab377291c7083308e3f2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:33:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6143e461a697375ca79df96494f8cf4a575e622428db241f50a52ae1c67acbc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e37a3261383445df2e4050fb0b3f92c0afc6379c71eb061475783a72cd37459f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:33:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:25Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:25 crc kubenswrapper[4721]: E0128 18:34:25.745040 4721 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:34:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:34:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:25Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:34:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:34:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:25Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7e2b6ea9-0dbd-4f62-9b1e-c7fac2eb6b3f\\\",\\\"systemUUID\\\":\\\"09e691cb-0cac-419d-a3e2-104cada8c62f\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:25Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:25 crc kubenswrapper[4721]: I0128 18:34:25.751235 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:34:25 crc kubenswrapper[4721]: I0128 18:34:25.751273 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 28 18:34:25 crc kubenswrapper[4721]: I0128 18:34:25.751282 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:34:25 crc kubenswrapper[4721]: I0128 18:34:25.751297 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:34:25 crc kubenswrapper[4721]: I0128 18:34:25.751307 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:34:25Z","lastTransitionTime":"2026-01-28T18:34:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:34:25 crc kubenswrapper[4721]: I0128 18:34:25.760454 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:24Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:25Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:25 crc kubenswrapper[4721]: E0128 18:34:25.763347 4721 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:34:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:34:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:25Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:34:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:34:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:25Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7e2b6ea9-0dbd-4f62-9b1e-c7fac2eb6b3f\\\",\\\"systemUUID\\\":\\\"09e691cb-0cac-419d-a3e2-104cada8c62f\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:25Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:25 crc kubenswrapper[4721]: I0128 18:34:25.766461 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:34:25 crc kubenswrapper[4721]: I0128 18:34:25.766493 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 28 18:34:25 crc kubenswrapper[4721]: I0128 18:34:25.766619 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:34:25 crc kubenswrapper[4721]: I0128 18:34:25.766641 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:34:25 crc kubenswrapper[4721]: I0128 18:34:25.766655 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:34:25Z","lastTransitionTime":"2026-01-28T18:34:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:34:25 crc kubenswrapper[4721]: I0128 18:34:25.772697 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:25Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:25 crc kubenswrapper[4721]: E0128 18:34:25.777435 4721 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:34:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:34:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:25Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:34:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:34:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:25Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7e2b6ea9-0dbd-4f62-9b1e-c7fac2eb6b3f\\\",\\\"systemUUID\\\":\\\"09e691cb-0cac-419d-a3e2-104cada8c62f\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:25Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:25 crc kubenswrapper[4721]: E0128 18:34:25.777584 4721 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 28 18:34:25 crc kubenswrapper[4721]: I0128 18:34:25.779205 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 28 18:34:25 crc kubenswrapper[4721]: I0128 18:34:25.779239 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:34:25 crc kubenswrapper[4721]: I0128 18:34:25.779248 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:34:25 crc kubenswrapper[4721]: I0128 18:34:25.779262 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:34:25 crc kubenswrapper[4721]: I0128 18:34:25.779272 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:34:25Z","lastTransitionTime":"2026-01-28T18:34:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:34:25 crc kubenswrapper[4721]: I0128 18:34:25.785245 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:24Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:25Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:25 crc kubenswrapper[4721]: I0128 18:34:25.796420 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:24Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:25Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:25 crc kubenswrapper[4721]: I0128 18:34:25.812582 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"50459cb7-bb46-4f2f-a119-03a6102ad146\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:33:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97a290731225e8b998c044c9547db36b5af73889929db5fbc37b6b4170f5dbb7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9efdfabfba8f11d2bdeafad77520260137fcfff3ca51ae77afd4ee9b141f602\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"r
estartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3c1b4d37e11f8a4ee9539e5ea3cce77b7d382133b9e59ccd9c3e8aa2b308754\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52a1199a0611555765afe8fcf017742788488064c958d0c75c46d51ddc2ff9d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://316506ed1e9f91314d427e1b6c3a44ee58d0c4a3a8a93b97bdf098703a6a2f34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f1a6a669a0341baa2d7fb17bcebfe702f96d1192ca35c3f2d00e463778271d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\
\\":{\\\"containerID\\\":\\\"cri-o://0f1a6a669a0341baa2d7fb17bcebfe702f96d1192ca35c3f2d00e463778271d6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:33:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d3a248e15466c89ce3b237a22ce54365004ea86dc1541e1ac44e028f91bf19f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d3a248e15466c89ce3b237a22ce54365004ea86dc1541e1ac44e028f91bf19f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:34:02Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f2e1f3a63445bad45029602f52863bfdf2fab495711e3318d68fee042530acb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f2e1f3a63445bad45029602f52863bfdf2fab495711e3318d68fee042530acb1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:34:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:33:56Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:25Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:25 crc kubenswrapper[4721]: I0128 18:34:25.824584 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"94f18835-9a2d-4427-bc71-e4cd48b94c19\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:33:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:33:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:33:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a345c3bb49064600db61f16588229b6877293710e6c21c15d91746c2f1d50b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://93950542dfa7bda5686ef6d8ff927d14baafd149893c732658e9aa3916be64ae\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://486a6284a1e3930aac13368062b8796b4c271214dd08cb60c7ae2ce7b4a45a05\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca3243d75b5ce4aff178d82c45c7d1f765013233b5736600151adb4801aa462b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\
\":\\\"cri-o://ca3243d75b5ce4aff178d82c45c7d1f765013233b5736600151adb4801aa462b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T18:34:21Z\\\",\\\"message\\\":\\\"le observer\\\\nW0128 18:34:20.724978 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0128 18:34:20.725313 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 18:34:20.726636 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2540251349/tls.crt::/tmp/serving-cert-2540251349/tls.key\\\\\\\"\\\\nI0128 18:34:21.177830 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0128 18:34:21.196258 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0128 18:34:21.196291 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0128 18:34:21.196317 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0128 18:34:21.196323 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0128 18:34:21.230729 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0128 18:34:21.230760 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0128 18:34:21.230771 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 18:34:21.230781 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 18:34:21.230789 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0128 18:34:21.230795 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0128 18:34:21.230800 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0128 18:34:21.230804 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0128 18:34:21.233981 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T18:34:20Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://90f1279d8262e042527207dbe56a1d08ba554650510bebfc53bd24cd84532026\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:06Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1499c7b625ec721799e4cf54e3de39ab75a752bbb0450a8eeffcdb6259f8a585\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1499c7b625ec721799e4cf54e3de39ab75a752bbb0450a8eeffcdb6259f8a585\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:33:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:33:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:25Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:25 crc kubenswrapper[4721]: I0128 18:34:25.839613 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:25Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:25 crc kubenswrapper[4721]: I0128 18:34:25.850155 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:24Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:25Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:25 crc kubenswrapper[4721]: I0128 18:34:25.866001 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"50459cb7-bb46-4f2f-a119-03a6102ad146\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:33:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97a290731225e8b998c044c9547db36b5af73889929db5fbc37b6b4170f5dbb7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9efdfabfba8f11d2bdeafad77520260137fcfff3ca51ae77afd4ee9b141f602\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"r
estartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3c1b4d37e11f8a4ee9539e5ea3cce77b7d382133b9e59ccd9c3e8aa2b308754\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52a1199a0611555765afe8fcf017742788488064c958d0c75c46d51ddc2ff9d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://316506ed1e9f91314d427e1b6c3a44ee58d0c4a3a8a93b97bdf098703a6a2f34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f1a6a669a0341baa2d7fb17bcebfe702f96d1192ca35c3f2d00e463778271d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\
\\":{\\\"containerID\\\":\\\"cri-o://0f1a6a669a0341baa2d7fb17bcebfe702f96d1192ca35c3f2d00e463778271d6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:33:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d3a248e15466c89ce3b237a22ce54365004ea86dc1541e1ac44e028f91bf19f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d3a248e15466c89ce3b237a22ce54365004ea86dc1541e1ac44e028f91bf19f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:34:02Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f2e1f3a63445bad45029602f52863bfdf2fab495711e3318d68fee042530acb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f2e1f3a63445bad45029602f52863bfdf2fab495711e3318d68fee042530acb1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:34:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:33:56Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:25Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:25 crc kubenswrapper[4721]: I0128 18:34:25.880802 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:34:25 crc kubenswrapper[4721]: I0128 18:34:25.880842 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:34:25 crc kubenswrapper[4721]: I0128 18:34:25.880853 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:34:25 crc kubenswrapper[4721]: I0128 18:34:25.880871 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:34:25 crc kubenswrapper[4721]: I0128 18:34:25.880883 4721 setters.go:603] "Node became not ready" 
node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:34:25Z","lastTransitionTime":"2026-01-28T18:34:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:34:25 crc kubenswrapper[4721]: I0128 18:34:25.883276 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"94f18835-9a2d-4427-bc71-e4cd48b94c19\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:33:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:33:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:33:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a345c3bb49064600db61f16588229b6877293710e6c21c15d91746c2f1d50b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://93950542dfa7bda5686ef6d8ff927d14baafd149893c732658e9aa3916be64ae\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://486a6284a1e3930aac13368062b8796b4c271214dd08cb60c7ae2ce7b4a45a05\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluste
r-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca3243d75b5ce4aff178d82c45c7d1f765013233b5736600151adb4801aa462b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ca3243d75b5ce4aff178d82c45c7d1f765013233b5736600151adb4801aa462b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T18:34:21Z\\\",\\\"message\\\":\\\"le observer\\\\nW0128 18:34:20.724978 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0128 18:34:20.725313 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 18:34:20.726636 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2540251349/tls.crt::/tmp/serving-cert-2540251349/tls.key\\\\\\\"\\\\nI0128 18:34:21.177830 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0128 18:34:21.196258 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0128 18:34:21.196291 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0128 18:34:21.196317 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0128 18:34:21.196323 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0128 18:34:21.230729 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0128 18:34:21.230760 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0128 18:34:21.230771 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 18:34:21.230781 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 18:34:21.230789 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0128 18:34:21.230795 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0128 18:34:21.230800 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0128 18:34:21.230804 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0128 18:34:21.233981 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T18:34:20Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://90f1279d8262e042527207dbe56a1d08ba554650510bebfc53bd24cd84532026\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:06Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1499c7b625ec721799e4cf54e3de39ab75a752bbb0450a8eeffcdb6259f8a585\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1499c7b625ec721799e4cf54e3de39ab75a752bbb0450a8eeffcdb6259f8a585\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:33:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:33:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:25Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:25 crc kubenswrapper[4721]: I0128 18:34:25.892772 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"86fc797f-6769-4801-8cb0-b9f25c9ec29b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:33:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:33:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e8c8cd370cc146f15325497afef171437a3c7c3d4e82cda99a321bf847399bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f712f75bf0603b358c696f1a62f2363254ab926615ab377291c7083308e3f2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:33:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6143e461a697375ca79df96494f8cf4a575e622428db241f50a52ae1c67acbc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e37a3261383445df2e4050fb0b3f92c0afc6379c71eb061475783a72cd37459f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:33:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:25Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:25 crc kubenswrapper[4721]: I0128 18:34:25.902794 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:24Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:25Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:25 crc kubenswrapper[4721]: I0128 18:34:25.914162 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:25Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:25 crc kubenswrapper[4721]: I0128 18:34:25.925254 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56147b9675ae31e5e8b4473346c69d90a1013ab084214a7fc34295f810b229a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5dc3277d9793a59f2e2a2b4d015782f383df20b594be435f894d24dbe5b837cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:25Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:25 crc kubenswrapper[4721]: I0128 18:34:25.936693 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:24Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:25Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:25 crc kubenswrapper[4721]: I0128 18:34:25.947974 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef61ab09457440960cf27586f65751f928ccc999db19fd16364262ce2449cd97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:25Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:25 crc kubenswrapper[4721]: I0128 18:34:25.982663 4721 kubelet_node_status.go:724] "Recording event message 
for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:34:25 crc kubenswrapper[4721]: I0128 18:34:25.982699 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:34:25 crc kubenswrapper[4721]: I0128 18:34:25.982710 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:34:25 crc kubenswrapper[4721]: I0128 18:34:25.982727 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:34:25 crc kubenswrapper[4721]: I0128 18:34:25.982740 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:34:25Z","lastTransitionTime":"2026-01-28T18:34:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:34:26 crc kubenswrapper[4721]: I0128 18:34:26.085065 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:34:26 crc kubenswrapper[4721]: I0128 18:34:26.085098 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:34:26 crc kubenswrapper[4721]: I0128 18:34:26.085109 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:34:26 crc kubenswrapper[4721]: I0128 18:34:26.085126 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:34:26 crc kubenswrapper[4721]: I0128 18:34:26.085136 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:34:26Z","lastTransitionTime":"2026-01-28T18:34:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:34:26 crc kubenswrapper[4721]: I0128 18:34:26.185739 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 18:34:26 crc kubenswrapper[4721]: I0128 18:34:26.185837 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 18:34:26 crc kubenswrapper[4721]: I0128 18:34:26.185867 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 18:34:26 crc kubenswrapper[4721]: I0128 18:34:26.185892 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 18:34:26 crc kubenswrapper[4721]: I0128 18:34:26.185920 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 18:34:26 crc kubenswrapper[4721]: E0128 18:34:26.186014 4721 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 28 18:34:26 crc kubenswrapper[4721]: E0128 18:34:26.186019 4721 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 28 18:34:26 crc kubenswrapper[4721]: E0128 18:34:26.186043 4721 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 28 18:34:26 crc kubenswrapper[4721]: E0128 18:34:26.186053 4721 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 28 18:34:26 crc kubenswrapper[4721]: E0128 18:34:26.186061 4721 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 28 18:34:26 crc kubenswrapper[4721]: E0128 
18:34:26.186071 4721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-28 18:34:28.186053133 +0000 UTC m=+33.911358693 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 28 18:34:26 crc kubenswrapper[4721]: E0128 18:34:26.186125 4721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-28 18:34:28.186114525 +0000 UTC m=+33.911420085 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 28 18:34:26 crc kubenswrapper[4721]: E0128 18:34:26.186019 4721 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 28 18:34:26 crc kubenswrapper[4721]: E0128 18:34:26.186162 4721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-28 18:34:28.186154666 +0000 UTC m=+33.911460226 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 28 18:34:26 crc kubenswrapper[4721]: E0128 18:34:26.186191 4721 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 28 18:34:26 crc kubenswrapper[4721]: E0128 18:34:26.186221 4721 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 28 18:34:26 crc kubenswrapper[4721]: E0128 18:34:26.186283 4721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-28 18:34:28.18627114 +0000 UTC m=+33.911576710 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 28 18:34:26 crc kubenswrapper[4721]: E0128 18:34:26.186776 4721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 18:34:28.186744055 +0000 UTC m=+33.912049615 (durationBeforeRetry 2s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 18:34:26 crc kubenswrapper[4721]: I0128 18:34:26.187017 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:34:26 crc kubenswrapper[4721]: I0128 18:34:26.187055 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:34:26 crc kubenswrapper[4721]: I0128 18:34:26.187067 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:34:26 crc kubenswrapper[4721]: I0128 18:34:26.187082 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:34:26 crc kubenswrapper[4721]: I0128 18:34:26.187093 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:34:26Z","lastTransitionTime":"2026-01-28T18:34:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:34:26 crc kubenswrapper[4721]: I0128 18:34:26.289964 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:34:26 crc kubenswrapper[4721]: I0128 18:34:26.290008 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:34:26 crc kubenswrapper[4721]: I0128 18:34:26.290017 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:34:26 crc kubenswrapper[4721]: I0128 18:34:26.290064 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:34:26 crc kubenswrapper[4721]: I0128 18:34:26.290081 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:34:26Z","lastTransitionTime":"2026-01-28T18:34:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:34:26 crc kubenswrapper[4721]: I0128 18:34:26.392237 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:34:26 crc kubenswrapper[4721]: I0128 18:34:26.392263 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:34:26 crc kubenswrapper[4721]: I0128 18:34:26.392270 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:34:26 crc kubenswrapper[4721]: I0128 18:34:26.392284 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:34:26 crc kubenswrapper[4721]: I0128 18:34:26.392292 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:34:26Z","lastTransitionTime":"2026-01-28T18:34:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:34:26 crc kubenswrapper[4721]: I0128 18:34:26.482488 4721 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-09 03:25:13.932301036 +0000 UTC Jan 28 18:34:26 crc kubenswrapper[4721]: I0128 18:34:26.494356 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:34:26 crc kubenswrapper[4721]: I0128 18:34:26.494453 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:34:26 crc kubenswrapper[4721]: I0128 18:34:26.494467 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:34:26 crc kubenswrapper[4721]: I0128 18:34:26.494484 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:34:26 crc kubenswrapper[4721]: I0128 18:34:26.494493 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:34:26Z","lastTransitionTime":"2026-01-28T18:34:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:34:26 crc kubenswrapper[4721]: I0128 18:34:26.527958 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 18:34:26 crc kubenswrapper[4721]: I0128 18:34:26.527989 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 18:34:26 crc kubenswrapper[4721]: I0128 18:34:26.528023 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 18:34:26 crc kubenswrapper[4721]: E0128 18:34:26.528101 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 18:34:26 crc kubenswrapper[4721]: E0128 18:34:26.528157 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 18:34:26 crc kubenswrapper[4721]: E0128 18:34:26.528253 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 18:34:26 crc kubenswrapper[4721]: I0128 18:34:26.596554 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:34:26 crc kubenswrapper[4721]: I0128 18:34:26.596606 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:34:26 crc kubenswrapper[4721]: I0128 18:34:26.596615 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:34:26 crc kubenswrapper[4721]: I0128 18:34:26.596628 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:34:26 crc kubenswrapper[4721]: I0128 18:34:26.596637 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:34:26Z","lastTransitionTime":"2026-01-28T18:34:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:34:26 crc kubenswrapper[4721]: I0128 18:34:26.698938 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:34:26 crc kubenswrapper[4721]: I0128 18:34:26.698970 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:34:26 crc kubenswrapper[4721]: I0128 18:34:26.698979 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:34:26 crc kubenswrapper[4721]: I0128 18:34:26.698994 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:34:26 crc kubenswrapper[4721]: I0128 18:34:26.699006 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:34:26Z","lastTransitionTime":"2026-01-28T18:34:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:34:26 crc kubenswrapper[4721]: I0128 18:34:26.801314 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:34:26 crc kubenswrapper[4721]: I0128 18:34:26.801376 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:34:26 crc kubenswrapper[4721]: I0128 18:34:26.801387 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:34:26 crc kubenswrapper[4721]: I0128 18:34:26.801406 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:34:26 crc kubenswrapper[4721]: I0128 18:34:26.801740 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:34:26Z","lastTransitionTime":"2026-01-28T18:34:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:34:26 crc kubenswrapper[4721]: I0128 18:34:26.903695 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:34:26 crc kubenswrapper[4721]: I0128 18:34:26.903726 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:34:26 crc kubenswrapper[4721]: I0128 18:34:26.903734 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:34:26 crc kubenswrapper[4721]: I0128 18:34:26.903747 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:34:26 crc kubenswrapper[4721]: I0128 18:34:26.903756 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:34:26Z","lastTransitionTime":"2026-01-28T18:34:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:34:27 crc kubenswrapper[4721]: I0128 18:34:27.005646 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:34:27 crc kubenswrapper[4721]: I0128 18:34:27.005685 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:34:27 crc kubenswrapper[4721]: I0128 18:34:27.005714 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:34:27 crc kubenswrapper[4721]: I0128 18:34:27.005733 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:34:27 crc kubenswrapper[4721]: I0128 18:34:27.005749 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:34:27Z","lastTransitionTime":"2026-01-28T18:34:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:34:27 crc kubenswrapper[4721]: I0128 18:34:27.107555 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:34:27 crc kubenswrapper[4721]: I0128 18:34:27.107588 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:34:27 crc kubenswrapper[4721]: I0128 18:34:27.107599 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:34:27 crc kubenswrapper[4721]: I0128 18:34:27.107611 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:34:27 crc kubenswrapper[4721]: I0128 18:34:27.107619 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:34:27Z","lastTransitionTime":"2026-01-28T18:34:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:34:27 crc kubenswrapper[4721]: I0128 18:34:27.210480 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:34:27 crc kubenswrapper[4721]: I0128 18:34:27.210530 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:34:27 crc kubenswrapper[4721]: I0128 18:34:27.210547 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:34:27 crc kubenswrapper[4721]: I0128 18:34:27.210564 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:34:27 crc kubenswrapper[4721]: I0128 18:34:27.210583 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:34:27Z","lastTransitionTime":"2026-01-28T18:34:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:34:27 crc kubenswrapper[4721]: I0128 18:34:27.313314 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:34:27 crc kubenswrapper[4721]: I0128 18:34:27.313367 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:34:27 crc kubenswrapper[4721]: I0128 18:34:27.313382 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:34:27 crc kubenswrapper[4721]: I0128 18:34:27.313400 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:34:27 crc kubenswrapper[4721]: I0128 18:34:27.313415 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:34:27Z","lastTransitionTime":"2026-01-28T18:34:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:34:27 crc kubenswrapper[4721]: I0128 18:34:27.415509 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:34:27 crc kubenswrapper[4721]: I0128 18:34:27.415554 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:34:27 crc kubenswrapper[4721]: I0128 18:34:27.415563 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:34:27 crc kubenswrapper[4721]: I0128 18:34:27.415579 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:34:27 crc kubenswrapper[4721]: I0128 18:34:27.415588 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:34:27Z","lastTransitionTime":"2026-01-28T18:34:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:34:27 crc kubenswrapper[4721]: I0128 18:34:27.483429 4721 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-13 07:58:48.785541968 +0000 UTC Jan 28 18:34:27 crc kubenswrapper[4721]: I0128 18:34:27.517714 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:34:27 crc kubenswrapper[4721]: I0128 18:34:27.517775 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:34:27 crc kubenswrapper[4721]: I0128 18:34:27.517785 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:34:27 crc kubenswrapper[4721]: I0128 18:34:27.517798 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:34:27 crc kubenswrapper[4721]: I0128 18:34:27.517808 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:34:27Z","lastTransitionTime":"2026-01-28T18:34:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:34:27 crc kubenswrapper[4721]: I0128 18:34:27.620130 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:34:27 crc kubenswrapper[4721]: I0128 18:34:27.620198 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:34:27 crc kubenswrapper[4721]: I0128 18:34:27.620210 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:34:27 crc kubenswrapper[4721]: I0128 18:34:27.620226 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:34:27 crc kubenswrapper[4721]: I0128 18:34:27.620238 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:34:27Z","lastTransitionTime":"2026-01-28T18:34:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:34:27 crc kubenswrapper[4721]: I0128 18:34:27.722512 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:34:27 crc kubenswrapper[4721]: I0128 18:34:27.722551 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:34:27 crc kubenswrapper[4721]: I0128 18:34:27.722567 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:34:27 crc kubenswrapper[4721]: I0128 18:34:27.722585 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:34:27 crc kubenswrapper[4721]: I0128 18:34:27.722595 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:34:27Z","lastTransitionTime":"2026-01-28T18:34:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:34:27 crc kubenswrapper[4721]: I0128 18:34:27.723647 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"fa3af2d00ecbcf4d30c4c81191306dd45625f55032058b314ce2b91f6b2033e0"} Jan 28 18:34:27 crc kubenswrapper[4721]: I0128 18:34:27.736636 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef61ab09457440960cf27586f65751f928ccc999db19fd16364262ce2449cd97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:27Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:27 crc kubenswrapper[4721]: I0128 18:34:27.747861 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"86fc797f-6769-4801-8cb0-b9f25c9ec29b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:33:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:33:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e8c8cd370cc146f15325497afef171437a3c7c3d4e82cda99a321bf847399bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f712f75bf0603b358c696f1a62f2363254ab926615ab377291c7083308e3f2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:33:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6143e461a697375ca79df96494f8cf4a575e622428db241f50a52ae1c67acbc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e37a3261383445df2e4050fb0b3f92c0afc6379c71eb061475783a72cd37459f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:33:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:27Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:27 crc kubenswrapper[4721]: I0128 18:34:27.758398 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:24Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:27Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:27 crc kubenswrapper[4721]: I0128 18:34:27.770236 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:27Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:27 crc kubenswrapper[4721]: I0128 18:34:27.783869 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56147b9675ae31e5e8b4473346c69d90a1013ab084214a7fc34295f810b229a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5dc3277d9793a59f2e2a2b4d015782f383df20b594be435f894d24dbe5b837cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:27Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:27 crc kubenswrapper[4721]: I0128 18:34:27.795950 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:24Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:27Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:27 crc kubenswrapper[4721]: I0128 18:34:27.814348 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"50459cb7-bb46-4f2f-a119-03a6102ad146\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:33:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97a290731225e8b998c044c9547db36b5af73889929db5fbc37b6b4170f5dbb7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9efdfabfba8f11d2bdeafad77520260137fcfff3ca51ae77afd4ee9b141f602\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"r
estartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3c1b4d37e11f8a4ee9539e5ea3cce77b7d382133b9e59ccd9c3e8aa2b308754\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52a1199a0611555765afe8fcf017742788488064c958d0c75c46d51ddc2ff9d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://316506ed1e9f91314d427e1b6c3a44ee58d0c4a3a8a93b97bdf098703a6a2f34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f1a6a669a0341baa2d7fb17bcebfe702f96d1192ca35c3f2d00e463778271d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\
\\":{\\\"containerID\\\":\\\"cri-o://0f1a6a669a0341baa2d7fb17bcebfe702f96d1192ca35c3f2d00e463778271d6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:33:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d3a248e15466c89ce3b237a22ce54365004ea86dc1541e1ac44e028f91bf19f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d3a248e15466c89ce3b237a22ce54365004ea86dc1541e1ac44e028f91bf19f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:34:02Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f2e1f3a63445bad45029602f52863bfdf2fab495711e3318d68fee042530acb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f2e1f3a63445bad45029602f52863bfdf2fab495711e3318d68fee042530acb1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:34:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:33:56Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:27Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:27 crc kubenswrapper[4721]: I0128 18:34:27.825209 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:34:27 crc kubenswrapper[4721]: I0128 18:34:27.825262 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:34:27 crc kubenswrapper[4721]: I0128 18:34:27.825273 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:34:27 crc kubenswrapper[4721]: I0128 18:34:27.825290 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:34:27 crc kubenswrapper[4721]: I0128 18:34:27.825302 4721 setters.go:603] "Node became not ready" 
node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:34:27Z","lastTransitionTime":"2026-01-28T18:34:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:34:27 crc kubenswrapper[4721]: I0128 18:34:27.828642 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"94f18835-9a2d-4427-bc71-e4cd48b94c19\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:33:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:33:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:33:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a345c3bb49064600db61f16588229b6877293710e6c21c15d91746c2f1d50b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://93950542dfa7bda5686ef6d8ff927d14baafd149893c732658e9aa3916be64ae\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://486a6284a1e3930aac13368062b8796b4c271214dd08cb60c7ae2ce7b4a45a05\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluste
r-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca3243d75b5ce4aff178d82c45c7d1f765013233b5736600151adb4801aa462b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ca3243d75b5ce4aff178d82c45c7d1f765013233b5736600151adb4801aa462b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T18:34:21Z\\\",\\\"message\\\":\\\"le observer\\\\nW0128 18:34:20.724978 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0128 18:34:20.725313 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 18:34:20.726636 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2540251349/tls.crt::/tmp/serving-cert-2540251349/tls.key\\\\\\\"\\\\nI0128 18:34:21.177830 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0128 18:34:21.196258 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0128 18:34:21.196291 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0128 18:34:21.196317 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0128 18:34:21.196323 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0128 18:34:21.230729 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0128 18:34:21.230760 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0128 18:34:21.230771 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 18:34:21.230781 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 18:34:21.230789 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0128 18:34:21.230795 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0128 18:34:21.230800 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0128 18:34:21.230804 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0128 18:34:21.233981 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T18:34:20Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://90f1279d8262e042527207dbe56a1d08ba554650510bebfc53bd24cd84532026\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:06Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1499c7b625ec721799e4cf54e3de39ab75a752bbb0450a8eeffcdb6259f8a585\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1499c7b625ec721799e4cf54e3de39ab75a752bbb0450a8eeffcdb6259f8a585\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:33:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:33:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:27Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:27 crc kubenswrapper[4721]: I0128 18:34:27.841340 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa3af2d00ecbcf4d30c4c81191306dd45625f55032058b314ce2b91f6b2033e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:27Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:27 crc kubenswrapper[4721]: I0128 18:34:27.928344 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:34:27 crc kubenswrapper[4721]: I0128 18:34:27.928374 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:34:27 crc kubenswrapper[4721]: I0128 18:34:27.928383 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:34:27 crc kubenswrapper[4721]: I0128 18:34:27.928397 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:34:27 crc kubenswrapper[4721]: I0128 18:34:27.928405 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:34:27Z","lastTransitionTime":"2026-01-28T18:34:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:34:27 crc kubenswrapper[4721]: I0128 18:34:27.971967 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 28 18:34:27 crc kubenswrapper[4721]: I0128 18:34:27.972595 4721 scope.go:117] "RemoveContainer" containerID="ca3243d75b5ce4aff178d82c45c7d1f765013233b5736600151adb4801aa462b" Jan 28 18:34:27 crc kubenswrapper[4721]: E0128 18:34:27.972766 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Jan 28 18:34:28 crc kubenswrapper[4721]: I0128 18:34:28.030973 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:34:28 crc kubenswrapper[4721]: I0128 18:34:28.031009 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:34:28 crc kubenswrapper[4721]: I0128 18:34:28.031018 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:34:28 crc kubenswrapper[4721]: I0128 18:34:28.031037 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:34:28 crc kubenswrapper[4721]: I0128 18:34:28.031049 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:34:28Z","lastTransitionTime":"2026-01-28T18:34:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:34:28 crc kubenswrapper[4721]: I0128 18:34:28.132844 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:34:28 crc kubenswrapper[4721]: I0128 18:34:28.132881 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:34:28 crc kubenswrapper[4721]: I0128 18:34:28.132889 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:34:28 crc kubenswrapper[4721]: I0128 18:34:28.132902 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:34:28 crc kubenswrapper[4721]: I0128 18:34:28.132913 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:34:28Z","lastTransitionTime":"2026-01-28T18:34:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:34:28 crc kubenswrapper[4721]: I0128 18:34:28.205735 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 18:34:28 crc kubenswrapper[4721]: I0128 18:34:28.205798 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 18:34:28 crc kubenswrapper[4721]: I0128 18:34:28.205820 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 18:34:28 crc kubenswrapper[4721]: I0128 18:34:28.205838 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 18:34:28 crc kubenswrapper[4721]: I0128 18:34:28.205855 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 18:34:28 crc kubenswrapper[4721]: E0128 18:34:28.205939 4721 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 28 18:34:28 crc kubenswrapper[4721]: E0128 18:34:28.205980 4721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-28 18:34:32.205967959 +0000 UTC m=+37.931273519 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 28 18:34:28 crc kubenswrapper[4721]: E0128 18:34:28.206339 4721 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 28 18:34:28 crc kubenswrapper[4721]: E0128 18:34:28.206364 4721 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 28 18:34:28 crc kubenswrapper[4721]: E0128 18:34:28.206380 4721 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 28 18:34:28 crc kubenswrapper[4721]: E0128 18:34:28.206416 4721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-28 18:34:32.206405723 +0000 UTC m=+37.931711273 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 28 18:34:28 crc kubenswrapper[4721]: E0128 18:34:28.206433 4721 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 28 18:34:28 crc kubenswrapper[4721]: E0128 18:34:28.206453 4721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 18:34:32.206424683 +0000 UTC m=+37.931730243 (durationBeforeRetry 4s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 18:34:28 crc kubenswrapper[4721]: E0128 18:34:28.206478 4721 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 28 18:34:28 crc kubenswrapper[4721]: E0128 18:34:28.206502 4721 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 28 18:34:28 crc kubenswrapper[4721]: E0128 18:34:28.206508 4721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-28 18:34:32.206494486 +0000 UTC m=+37.931800086 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 28 18:34:28 crc kubenswrapper[4721]: E0128 18:34:28.206516 4721 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 28 18:34:28 crc kubenswrapper[4721]: E0128 18:34:28.206549 4721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-28 18:34:32.206538886 +0000 UTC m=+37.931844506 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 28 18:34:28 crc kubenswrapper[4721]: I0128 18:34:28.235141 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:34:28 crc kubenswrapper[4721]: I0128 18:34:28.235208 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:34:28 crc kubenswrapper[4721]: I0128 18:34:28.235219 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:34:28 crc kubenswrapper[4721]: I0128 18:34:28.235236 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:34:28 crc kubenswrapper[4721]: I0128 18:34:28.235245 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:34:28Z","lastTransitionTime":"2026-01-28T18:34:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:34:28 crc kubenswrapper[4721]: I0128 18:34:28.337758 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:34:28 crc kubenswrapper[4721]: I0128 18:34:28.337799 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:34:28 crc kubenswrapper[4721]: I0128 18:34:28.337808 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:34:28 crc kubenswrapper[4721]: I0128 18:34:28.337821 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:34:28 crc kubenswrapper[4721]: I0128 18:34:28.337832 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:34:28Z","lastTransitionTime":"2026-01-28T18:34:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:34:28 crc kubenswrapper[4721]: I0128 18:34:28.440091 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:34:28 crc kubenswrapper[4721]: I0128 18:34:28.440124 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:34:28 crc kubenswrapper[4721]: I0128 18:34:28.440132 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:34:28 crc kubenswrapper[4721]: I0128 18:34:28.440145 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:34:28 crc kubenswrapper[4721]: I0128 18:34:28.440154 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:34:28Z","lastTransitionTime":"2026-01-28T18:34:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:34:28 crc kubenswrapper[4721]: I0128 18:34:28.484099 4721 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-05 19:39:18.255839341 +0000 UTC Jan 28 18:34:28 crc kubenswrapper[4721]: I0128 18:34:28.527773 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 18:34:28 crc kubenswrapper[4721]: I0128 18:34:28.527792 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 18:34:28 crc kubenswrapper[4721]: I0128 18:34:28.527870 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 18:34:28 crc kubenswrapper[4721]: E0128 18:34:28.527975 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 18:34:28 crc kubenswrapper[4721]: E0128 18:34:28.528063 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 18:34:28 crc kubenswrapper[4721]: E0128 18:34:28.528496 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 18:34:28 crc kubenswrapper[4721]: I0128 18:34:28.542426 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:34:28 crc kubenswrapper[4721]: I0128 18:34:28.542456 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:34:28 crc kubenswrapper[4721]: I0128 18:34:28.542466 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:34:28 crc kubenswrapper[4721]: I0128 18:34:28.542480 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:34:28 crc kubenswrapper[4721]: I0128 18:34:28.542492 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:34:28Z","lastTransitionTime":"2026-01-28T18:34:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:34:28 crc kubenswrapper[4721]: I0128 18:34:28.644227 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:34:28 crc kubenswrapper[4721]: I0128 18:34:28.644271 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:34:28 crc kubenswrapper[4721]: I0128 18:34:28.644281 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:34:28 crc kubenswrapper[4721]: I0128 18:34:28.644297 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:34:28 crc kubenswrapper[4721]: I0128 18:34:28.644307 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:34:28Z","lastTransitionTime":"2026-01-28T18:34:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:34:28 crc kubenswrapper[4721]: I0128 18:34:28.746510 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:34:28 crc kubenswrapper[4721]: I0128 18:34:28.746550 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:34:28 crc kubenswrapper[4721]: I0128 18:34:28.746559 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:34:28 crc kubenswrapper[4721]: I0128 18:34:28.746581 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:34:28 crc kubenswrapper[4721]: I0128 18:34:28.746598 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:34:28Z","lastTransitionTime":"2026-01-28T18:34:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:34:28 crc kubenswrapper[4721]: I0128 18:34:28.848752 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:34:28 crc kubenswrapper[4721]: I0128 18:34:28.848803 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:34:28 crc kubenswrapper[4721]: I0128 18:34:28.848816 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:34:28 crc kubenswrapper[4721]: I0128 18:34:28.848878 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:34:28 crc kubenswrapper[4721]: I0128 18:34:28.848895 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:34:28Z","lastTransitionTime":"2026-01-28T18:34:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:34:28 crc kubenswrapper[4721]: I0128 18:34:28.952159 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:34:28 crc kubenswrapper[4721]: I0128 18:34:28.952232 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:34:28 crc kubenswrapper[4721]: I0128 18:34:28.952240 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:34:28 crc kubenswrapper[4721]: I0128 18:34:28.952260 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:34:28 crc kubenswrapper[4721]: I0128 18:34:28.952270 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:34:28Z","lastTransitionTime":"2026-01-28T18:34:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:34:29 crc kubenswrapper[4721]: I0128 18:34:29.054661 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:34:29 crc kubenswrapper[4721]: I0128 18:34:29.054696 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:34:29 crc kubenswrapper[4721]: I0128 18:34:29.054706 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:34:29 crc kubenswrapper[4721]: I0128 18:34:29.054718 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:34:29 crc kubenswrapper[4721]: I0128 18:34:29.054727 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:34:29Z","lastTransitionTime":"2026-01-28T18:34:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:34:29 crc kubenswrapper[4721]: I0128 18:34:29.157257 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:34:29 crc kubenswrapper[4721]: I0128 18:34:29.157322 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:34:29 crc kubenswrapper[4721]: I0128 18:34:29.157332 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:34:29 crc kubenswrapper[4721]: I0128 18:34:29.157346 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:34:29 crc kubenswrapper[4721]: I0128 18:34:29.157355 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:34:29Z","lastTransitionTime":"2026-01-28T18:34:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:34:29 crc kubenswrapper[4721]: I0128 18:34:29.259433 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:34:29 crc kubenswrapper[4721]: I0128 18:34:29.259469 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:34:29 crc kubenswrapper[4721]: I0128 18:34:29.259528 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:34:29 crc kubenswrapper[4721]: I0128 18:34:29.259554 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:34:29 crc kubenswrapper[4721]: I0128 18:34:29.259584 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:34:29Z","lastTransitionTime":"2026-01-28T18:34:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:34:29 crc kubenswrapper[4721]: I0128 18:34:29.362096 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:34:29 crc kubenswrapper[4721]: I0128 18:34:29.362134 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:34:29 crc kubenswrapper[4721]: I0128 18:34:29.362147 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:34:29 crc kubenswrapper[4721]: I0128 18:34:29.362162 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:34:29 crc kubenswrapper[4721]: I0128 18:34:29.362186 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:34:29Z","lastTransitionTime":"2026-01-28T18:34:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:34:29 crc kubenswrapper[4721]: I0128 18:34:29.464681 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:34:29 crc kubenswrapper[4721]: I0128 18:34:29.464713 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:34:29 crc kubenswrapper[4721]: I0128 18:34:29.464722 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:34:29 crc kubenswrapper[4721]: I0128 18:34:29.464735 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:34:29 crc kubenswrapper[4721]: I0128 18:34:29.464743 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:34:29Z","lastTransitionTime":"2026-01-28T18:34:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:34:29 crc kubenswrapper[4721]: I0128 18:34:29.485290 4721 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-04 03:27:40.139391068 +0000 UTC Jan 28 18:34:29 crc kubenswrapper[4721]: I0128 18:34:29.540283 4721 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 28 18:34:29 crc kubenswrapper[4721]: I0128 18:34:29.540837 4721 scope.go:117] "RemoveContainer" containerID="ca3243d75b5ce4aff178d82c45c7d1f765013233b5736600151adb4801aa462b" Jan 28 18:34:29 crc kubenswrapper[4721]: E0128 18:34:29.540974 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Jan 28 18:34:29 crc kubenswrapper[4721]: I0128 18:34:29.566990 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:34:29 crc kubenswrapper[4721]: I0128 18:34:29.567036 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:34:29 crc kubenswrapper[4721]: I0128 18:34:29.567046 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:34:29 crc kubenswrapper[4721]: I0128 18:34:29.567064 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:34:29 crc kubenswrapper[4721]: I0128 18:34:29.567075 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:34:29Z","lastTransitionTime":"2026-01-28T18:34:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:34:29 crc kubenswrapper[4721]: I0128 18:34:29.669280 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:34:29 crc kubenswrapper[4721]: I0128 18:34:29.669319 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:34:29 crc kubenswrapper[4721]: I0128 18:34:29.669329 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:34:29 crc kubenswrapper[4721]: I0128 18:34:29.669346 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:34:29 crc kubenswrapper[4721]: I0128 18:34:29.669354 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:34:29Z","lastTransitionTime":"2026-01-28T18:34:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:34:29 crc kubenswrapper[4721]: I0128 18:34:29.771909 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:34:29 crc kubenswrapper[4721]: I0128 18:34:29.771976 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:34:29 crc kubenswrapper[4721]: I0128 18:34:29.771991 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:34:29 crc kubenswrapper[4721]: I0128 18:34:29.772006 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:34:29 crc kubenswrapper[4721]: I0128 18:34:29.772017 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:34:29Z","lastTransitionTime":"2026-01-28T18:34:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:34:29 crc kubenswrapper[4721]: I0128 18:34:29.874571 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:34:29 crc kubenswrapper[4721]: I0128 18:34:29.874603 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:34:29 crc kubenswrapper[4721]: I0128 18:34:29.874611 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:34:29 crc kubenswrapper[4721]: I0128 18:34:29.874624 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:34:29 crc kubenswrapper[4721]: I0128 18:34:29.874636 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:34:29Z","lastTransitionTime":"2026-01-28T18:34:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:34:29 crc kubenswrapper[4721]: I0128 18:34:29.977273 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:34:29 crc kubenswrapper[4721]: I0128 18:34:29.977327 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:34:29 crc kubenswrapper[4721]: I0128 18:34:29.977339 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:34:29 crc kubenswrapper[4721]: I0128 18:34:29.977352 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:34:29 crc kubenswrapper[4721]: I0128 18:34:29.977379 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:34:29Z","lastTransitionTime":"2026-01-28T18:34:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:34:30 crc kubenswrapper[4721]: I0128 18:34:30.049244 4721 csr.go:261] certificate signing request csr-6m8w2 is approved, waiting to be issued Jan 28 18:34:30 crc kubenswrapper[4721]: I0128 18:34:30.069843 4721 csr.go:257] certificate signing request csr-6m8w2 is issued Jan 28 18:34:30 crc kubenswrapper[4721]: I0128 18:34:30.079607 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:34:30 crc kubenswrapper[4721]: I0128 18:34:30.079646 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:34:30 crc kubenswrapper[4721]: I0128 18:34:30.079655 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:34:30 crc kubenswrapper[4721]: I0128 18:34:30.079671 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:34:30 crc kubenswrapper[4721]: I0128 18:34:30.079680 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:34:30Z","lastTransitionTime":"2026-01-28T18:34:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:34:30 crc kubenswrapper[4721]: I0128 18:34:30.181890 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:34:30 crc kubenswrapper[4721]: I0128 18:34:30.181992 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:34:30 crc kubenswrapper[4721]: I0128 18:34:30.182006 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:34:30 crc kubenswrapper[4721]: I0128 18:34:30.182033 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:34:30 crc kubenswrapper[4721]: I0128 18:34:30.182049 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:34:30Z","lastTransitionTime":"2026-01-28T18:34:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:34:30 crc kubenswrapper[4721]: I0128 18:34:30.284574 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:34:30 crc kubenswrapper[4721]: I0128 18:34:30.284625 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:34:30 crc kubenswrapper[4721]: I0128 18:34:30.284640 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:34:30 crc kubenswrapper[4721]: I0128 18:34:30.284660 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:34:30 crc kubenswrapper[4721]: I0128 18:34:30.284673 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:34:30Z","lastTransitionTime":"2026-01-28T18:34:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:34:30 crc kubenswrapper[4721]: I0128 18:34:30.386606 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:34:30 crc kubenswrapper[4721]: I0128 18:34:30.386649 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:34:30 crc kubenswrapper[4721]: I0128 18:34:30.386659 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:34:30 crc kubenswrapper[4721]: I0128 18:34:30.386701 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:34:30 crc kubenswrapper[4721]: I0128 18:34:30.386713 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:34:30Z","lastTransitionTime":"2026-01-28T18:34:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:34:30 crc kubenswrapper[4721]: I0128 18:34:30.485678 4721 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-06 20:59:21.589513933 +0000 UTC Jan 28 18:34:30 crc kubenswrapper[4721]: I0128 18:34:30.490076 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:34:30 crc kubenswrapper[4721]: I0128 18:34:30.490128 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:34:30 crc kubenswrapper[4721]: I0128 18:34:30.490139 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:34:30 crc kubenswrapper[4721]: I0128 18:34:30.490157 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:34:30 crc kubenswrapper[4721]: I0128 18:34:30.490183 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:34:30Z","lastTransitionTime":"2026-01-28T18:34:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:34:30 crc kubenswrapper[4721]: I0128 18:34:30.491075 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-lf92l"] Jan 28 18:34:30 crc kubenswrapper[4721]: I0128 18:34:30.491613 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-lf92l" Jan 28 18:34:30 crc kubenswrapper[4721]: I0128 18:34:30.493667 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Jan 28 18:34:30 crc kubenswrapper[4721]: I0128 18:34:30.496627 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Jan 28 18:34:30 crc kubenswrapper[4721]: I0128 18:34:30.497286 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Jan 28 18:34:30 crc kubenswrapper[4721]: I0128 18:34:30.528591 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 18:34:30 crc kubenswrapper[4721]: I0128 18:34:30.528621 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 18:34:30 crc kubenswrapper[4721]: I0128 18:34:30.528681 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 18:34:30 crc kubenswrapper[4721]: E0128 18:34:30.528738 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 18:34:30 crc kubenswrapper[4721]: E0128 18:34:30.528834 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 18:34:30 crc kubenswrapper[4721]: E0128 18:34:30.528924 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 18:34:30 crc kubenswrapper[4721]: I0128 18:34:30.530934 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mvqr8\" (UniqueName: \"kubernetes.io/projected/20d04cbd-fcf1-4d48-9cca-1dd29b13c938-kube-api-access-mvqr8\") pod \"node-resolver-lf92l\" (UID: \"20d04cbd-fcf1-4d48-9cca-1dd29b13c938\") " pod="openshift-dns/node-resolver-lf92l" Jan 28 18:34:30 crc kubenswrapper[4721]: I0128 18:34:30.530994 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/20d04cbd-fcf1-4d48-9cca-1dd29b13c938-hosts-file\") pod \"node-resolver-lf92l\" (UID: \"20d04cbd-fcf1-4d48-9cca-1dd29b13c938\") " pod="openshift-dns/node-resolver-lf92l" Jan 28 18:34:30 crc kubenswrapper[4721]: I0128 18:34:30.538005 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"86fc797f-6769-4801-8cb0-b9f25c9ec29b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:33:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:33:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e8c8cd370cc146f15325497afef171437a3c7c3d4e82cda99a321bf847399bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f712f75bf0603b358c696f1a62f2363254ab926615ab377291c7083308e3f2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:33:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6143e461a697375ca79df96494f8cf4a575e622428db241f50a52ae1c67acbc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e37a3261383445df2e4050fb0b3f92c0afc6379c71eb061475783a72cd37459f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:33:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:30Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:30 crc kubenswrapper[4721]: I0128 18:34:30.573767 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:24Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:30Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:30 crc kubenswrapper[4721]: I0128 18:34:30.592443 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:34:30 crc kubenswrapper[4721]: I0128 18:34:30.592487 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:34:30 crc kubenswrapper[4721]: I0128 18:34:30.592497 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:34:30 crc kubenswrapper[4721]: I0128 18:34:30.592510 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:34:30 crc kubenswrapper[4721]: I0128 18:34:30.592519 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:34:30Z","lastTransitionTime":"2026-01-28T18:34:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:34:30 crc kubenswrapper[4721]: I0128 18:34:30.599250 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:30Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:30 crc kubenswrapper[4721]: I0128 18:34:30.614999 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56147b9675ae31e5e8b4473346c69d90a1013ab084214a7fc34295f810b229a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5dc3277d9793a59f2e2a2b4d015782f383df20b594be435f894d24dbe5b837cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:30Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:30 crc kubenswrapper[4721]: I0128 18:34:30.628759 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:24Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:30Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:30 crc kubenswrapper[4721]: I0128 18:34:30.631501 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mvqr8\" (UniqueName: \"kubernetes.io/projected/20d04cbd-fcf1-4d48-9cca-1dd29b13c938-kube-api-access-mvqr8\") pod \"node-resolver-lf92l\" (UID: \"20d04cbd-fcf1-4d48-9cca-1dd29b13c938\") " pod="openshift-dns/node-resolver-lf92l" Jan 28 18:34:30 crc kubenswrapper[4721]: I0128 18:34:30.631553 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/20d04cbd-fcf1-4d48-9cca-1dd29b13c938-hosts-file\") pod \"node-resolver-lf92l\" (UID: \"20d04cbd-fcf1-4d48-9cca-1dd29b13c938\") " pod="openshift-dns/node-resolver-lf92l" Jan 28 18:34:30 crc kubenswrapper[4721]: I0128 18:34:30.631653 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/20d04cbd-fcf1-4d48-9cca-1dd29b13c938-hosts-file\") pod \"node-resolver-lf92l\" (UID: \"20d04cbd-fcf1-4d48-9cca-1dd29b13c938\") " pod="openshift-dns/node-resolver-lf92l" Jan 28 18:34:30 crc kubenswrapper[4721]: I0128 18:34:30.647367 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"kube-api-access-mvqr8\" (UniqueName: \"kubernetes.io/projected/20d04cbd-fcf1-4d48-9cca-1dd29b13c938-kube-api-access-mvqr8\") pod \"node-resolver-lf92l\" (UID: \"20d04cbd-fcf1-4d48-9cca-1dd29b13c938\") " pod="openshift-dns/node-resolver-lf92l" Jan 28 18:34:30 crc kubenswrapper[4721]: I0128 18:34:30.648594 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"50459cb7-bb46-4f2f-a119-03a6102ad146\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:33:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97a290731225e8b998c044c9547db36b5af73889929db5fbc37b6b4170f5dbb7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9efdfabfba8f11d2bdeafad77520260137fcfff3ca51ae77afd4ee9b141f602\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3c1b4d37e11f8a4ee9539e5ea3cce77b7d382133b9e59ccd9c3e8aa2b308754\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\
\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52a1199a0611555765afe8fcf017742788488064c958d0c75c46d51ddc2ff9d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://316506ed1e9f91314d427e1b6c3a44ee58d0c4a3a8a93b97bdf098703a6a2f34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f1a6a669a0341baa2d7fb17bcebfe702f96d1192ca35c3f2d00e463778271d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f1a6a669a0341baa2d7fb17bcebfe702f96d1192ca35c3f2d00e463778271d6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:33:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d3a248e15466c89ce3b237a22ce54365004ea86dc1541e1ac44e028f91bf19f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":tr
ue,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d3a248e15466c89ce3b237a22ce54365004ea86dc1541e1ac44e028f91bf19f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:34:02Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f2e1f3a63445bad45029602f52863bfdf2fab495711e3318d68fee042530acb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f2e1f3a63445bad45029602f52863bfdf2fab495711e3318d68fee042530acb1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:34:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:33:56Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:30Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:30 crc kubenswrapper[4721]: I0128 18:34:30.661651 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"94f18835-9a2d-4427-bc71-e4cd48b94c19\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:33:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:33:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:33:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a345c3bb49064600db61f16588229b6877293710e6c21c15d91746c2f1d50b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://93950542dfa7bda5686ef6d8ff927d14baafd149893c732658e9aa3916be64ae\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://486a6284a1e3930aac13368062b8796b4c271214dd08cb60c7ae2ce7b4a45a05\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca3243d75b5ce4aff178d82c45c7d1f765013233b5736600151adb4801aa462b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ca3243d75b5ce4aff178d82c45c7d1f765013233b5736600151adb4801aa462b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T18:34:21Z\\\",\\\"message\\\":\\\"le observer\\\\nW0128 18:34:20.724978 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0128 18:34:20.725313 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 18:34:20.726636 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2540251349/tls.crt::/tmp/serving-cert-2540251349/tls.key\\\\\\\"\\\\nI0128 18:34:21.177830 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0128 18:34:21.196258 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0128 18:34:21.196291 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0128 18:34:21.196317 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0128 18:34:21.196323 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0128 18:34:21.230729 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0128 18:34:21.230760 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0128 18:34:21.230771 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 18:34:21.230781 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 18:34:21.230789 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0128 18:34:21.230795 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0128 18:34:21.230800 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0128 18:34:21.230804 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0128 18:34:21.233981 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T18:34:20Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://90f1279d8262e042527207dbe56a1d08ba554650510bebfc53bd24cd84532026\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:06Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1499c7b625ec721799e4cf54e3de39ab75a752bbb0450a8eeffcdb6259f8a585\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1499c7b625ec721799e4cf54e3de39ab75a752bbb0450a8eeffcdb6259f8a585\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:33:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:33:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:30Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:30 crc kubenswrapper[4721]: I0128 18:34:30.677217 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa3af2d00ecbcf4d30c4c81191306dd45625f55032058b314ce2b91f6b2033e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:30Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:30 crc kubenswrapper[4721]: I0128 18:34:30.693451 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-lf92l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"20d04cbd-fcf1-4d48-9cca-1dd29b13c938\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:30Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mvqr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:34:30Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-lf92l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:30Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:30 crc kubenswrapper[4721]: I0128 18:34:30.694729 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:34:30 crc kubenswrapper[4721]: I0128 18:34:30.694761 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:34:30 crc kubenswrapper[4721]: I0128 18:34:30.694771 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:34:30 crc kubenswrapper[4721]: I0128 18:34:30.694785 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:34:30 crc kubenswrapper[4721]: I0128 18:34:30.694796 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:34:30Z","lastTransitionTime":"2026-01-28T18:34:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:34:30 crc kubenswrapper[4721]: I0128 18:34:30.708549 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef61ab09457440960cf27586f65751f928ccc999db19fd16364262ce2449cd97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:30Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:30 crc kubenswrapper[4721]: I0128 18:34:30.796613 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:34:30 crc kubenswrapper[4721]: I0128 18:34:30.796660 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:34:30 crc kubenswrapper[4721]: I0128 18:34:30.796671 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:34:30 crc kubenswrapper[4721]: I0128 18:34:30.796689 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:34:30 crc kubenswrapper[4721]: I0128 18:34:30.796701 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:34:30Z","lastTransitionTime":"2026-01-28T18:34:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:34:30 crc kubenswrapper[4721]: I0128 18:34:30.805846 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-lf92l" Jan 28 18:34:30 crc kubenswrapper[4721]: I0128 18:34:30.900401 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-76rx2"] Jan 28 18:34:30 crc kubenswrapper[4721]: I0128 18:34:30.900701 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:34:30 crc kubenswrapper[4721]: I0128 18:34:30.900740 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:34:30 crc kubenswrapper[4721]: I0128 18:34:30.900750 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:34:30 crc kubenswrapper[4721]: I0128 18:34:30.900764 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:34:30 crc kubenswrapper[4721]: I0128 18:34:30.900776 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-76rx2" Jan 28 18:34:30 crc kubenswrapper[4721]: I0128 18:34:30.900773 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:34:30Z","lastTransitionTime":"2026-01-28T18:34:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:34:30 crc kubenswrapper[4721]: I0128 18:34:30.902980 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Jan 28 18:34:30 crc kubenswrapper[4721]: I0128 18:34:30.903085 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Jan 28 18:34:30 crc kubenswrapper[4721]: I0128 18:34:30.903929 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-wr282"] Jan 28 18:34:30 crc kubenswrapper[4721]: I0128 18:34:30.904806 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-7vsph"] Jan 28 18:34:30 crc kubenswrapper[4721]: I0128 18:34:30.905096 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-wr282" Jan 28 18:34:30 crc kubenswrapper[4721]: I0128 18:34:30.905817 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-rgqdt"] Jan 28 18:34:30 crc kubenswrapper[4721]: I0128 18:34:30.906052 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-7vsph" Jan 28 18:34:30 crc kubenswrapper[4721]: I0128 18:34:30.906123 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-rgqdt" Jan 28 18:34:30 crc kubenswrapper[4721]: I0128 18:34:30.908404 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Jan 28 18:34:30 crc kubenswrapper[4721]: I0128 18:34:30.908439 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Jan 28 18:34:30 crc kubenswrapper[4721]: I0128 18:34:30.908483 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Jan 28 18:34:30 crc kubenswrapper[4721]: I0128 18:34:30.908696 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Jan 28 18:34:30 crc kubenswrapper[4721]: I0128 18:34:30.908751 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Jan 28 18:34:30 crc kubenswrapper[4721]: I0128 18:34:30.909114 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Jan 28 18:34:30 crc kubenswrapper[4721]: I0128 18:34:30.912527 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Jan 28 18:34:30 crc kubenswrapper[4721]: I0128 18:34:30.912561 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Jan 28 18:34:30 crc kubenswrapper[4721]: I0128 18:34:30.912578 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Jan 28 18:34:30 crc kubenswrapper[4721]: I0128 18:34:30.912653 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Jan 28 18:34:30 crc kubenswrapper[4721]: I0128 18:34:30.912822 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Jan 28 18:34:30 crc kubenswrapper[4721]: I0128 18:34:30.912929 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Jan 28 18:34:30 crc kubenswrapper[4721]: I0128 18:34:30.913144 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Jan 28 18:34:30 crc kubenswrapper[4721]: I0128 18:34:30.913226 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Jan 28 18:34:30 crc kubenswrapper[4721]: I0128 18:34:30.917423 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Jan 28 18:34:30 crc kubenswrapper[4721]: I0128 18:34:30.917678 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Jan 28 18:34:30 crc kubenswrapper[4721]: I0128 18:34:30.917955 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Jan 28 18:34:30 crc kubenswrapper[4721]: I0128 18:34:30.929939 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef61ab09457440960cf27586f65751f928ccc999db19fd16364262ce2449cd97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:30Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:30 crc kubenswrapper[4721]: I0128 18:34:30.933898 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/6e3427a4-9a03-4a08-bf7f-7a5e96290ad6-mcd-auth-proxy-config\") pod \"machine-config-daemon-76rx2\" (UID: \"6e3427a4-9a03-4a08-bf7f-7a5e96290ad6\") " pod="openshift-machine-config-operator/machine-config-daemon-76rx2" Jan 28 18:34:30 crc kubenswrapper[4721]: I0128 18:34:30.933949 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/c0a22020-3f34-4895-beec-2ed5d829ea79-os-release\") pod \"multus-rgqdt\" (UID: \"c0a22020-3f34-4895-beec-2ed5d829ea79\") " pod="openshift-multus/multus-rgqdt" Jan 28 18:34:30 crc kubenswrapper[4721]: I0128 18:34:30.933972 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/70686e42-b434-4ff9-9753-cfc870beef82-etc-openvswitch\") pod \"ovnkube-node-wr282\" (UID: \"70686e42-b434-4ff9-9753-cfc870beef82\") " pod="openshift-ovn-kubernetes/ovnkube-node-wr282" Jan 28 18:34:30 crc kubenswrapper[4721]: I0128 18:34:30.933994 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: 
\"kubernetes.io/host-path/942c0bcf-8f75-42e8-a5c0-af4c640eb13c-cnibin\") pod \"multus-additional-cni-plugins-7vsph\" (UID: \"942c0bcf-8f75-42e8-a5c0-af4c640eb13c\") " pod="openshift-multus/multus-additional-cni-plugins-7vsph" Jan 28 18:34:30 crc kubenswrapper[4721]: I0128 18:34:30.934016 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/c0a22020-3f34-4895-beec-2ed5d829ea79-multus-conf-dir\") pod \"multus-rgqdt\" (UID: \"c0a22020-3f34-4895-beec-2ed5d829ea79\") " pod="openshift-multus/multus-rgqdt" Jan 28 18:34:30 crc kubenswrapper[4721]: I0128 18:34:30.934038 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/70686e42-b434-4ff9-9753-cfc870beef82-host-kubelet\") pod \"ovnkube-node-wr282\" (UID: \"70686e42-b434-4ff9-9753-cfc870beef82\") " pod="openshift-ovn-kubernetes/ovnkube-node-wr282" Jan 28 18:34:30 crc kubenswrapper[4721]: I0128 18:34:30.934056 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/70686e42-b434-4ff9-9753-cfc870beef82-host-slash\") pod \"ovnkube-node-wr282\" (UID: \"70686e42-b434-4ff9-9753-cfc870beef82\") " pod="openshift-ovn-kubernetes/ovnkube-node-wr282" Jan 28 18:34:30 crc kubenswrapper[4721]: I0128 18:34:30.934080 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/70686e42-b434-4ff9-9753-cfc870beef82-var-lib-openvswitch\") pod \"ovnkube-node-wr282\" (UID: \"70686e42-b434-4ff9-9753-cfc870beef82\") " pod="openshift-ovn-kubernetes/ovnkube-node-wr282" Jan 28 18:34:30 crc kubenswrapper[4721]: I0128 18:34:30.934106 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/70686e42-b434-4ff9-9753-cfc870beef82-host-run-ovn-kubernetes\") pod \"ovnkube-node-wr282\" (UID: \"70686e42-b434-4ff9-9753-cfc870beef82\") " pod="openshift-ovn-kubernetes/ovnkube-node-wr282" Jan 28 18:34:30 crc kubenswrapper[4721]: I0128 18:34:30.934127 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s84mk\" (UniqueName: \"kubernetes.io/projected/942c0bcf-8f75-42e8-a5c0-af4c640eb13c-kube-api-access-s84mk\") pod \"multus-additional-cni-plugins-7vsph\" (UID: \"942c0bcf-8f75-42e8-a5c0-af4c640eb13c\") " pod="openshift-multus/multus-additional-cni-plugins-7vsph" Jan 28 18:34:30 crc kubenswrapper[4721]: I0128 18:34:30.934148 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/c0a22020-3f34-4895-beec-2ed5d829ea79-hostroot\") pod \"multus-rgqdt\" (UID: \"c0a22020-3f34-4895-beec-2ed5d829ea79\") " pod="openshift-multus/multus-rgqdt" Jan 28 18:34:30 crc kubenswrapper[4721]: I0128 18:34:30.934194 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/6e3427a4-9a03-4a08-bf7f-7a5e96290ad6-proxy-tls\") pod \"machine-config-daemon-76rx2\" (UID: \"6e3427a4-9a03-4a08-bf7f-7a5e96290ad6\") " pod="openshift-machine-config-operator/machine-config-daemon-76rx2" Jan 28 18:34:30 crc kubenswrapper[4721]: I0128 18:34:30.934216 4721 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/942c0bcf-8f75-42e8-a5c0-af4c640eb13c-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-7vsph\" (UID: \"942c0bcf-8f75-42e8-a5c0-af4c640eb13c\") " pod="openshift-multus/multus-additional-cni-plugins-7vsph" Jan 28 18:34:30 crc kubenswrapper[4721]: I0128 18:34:30.934240 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/70686e42-b434-4ff9-9753-cfc870beef82-ovn-node-metrics-cert\") pod \"ovnkube-node-wr282\" (UID: \"70686e42-b434-4ff9-9753-cfc870beef82\") " pod="openshift-ovn-kubernetes/ovnkube-node-wr282" Jan 28 18:34:30 crc kubenswrapper[4721]: I0128 18:34:30.934264 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/70686e42-b434-4ff9-9753-cfc870beef82-ovnkube-script-lib\") pod \"ovnkube-node-wr282\" (UID: \"70686e42-b434-4ff9-9753-cfc870beef82\") " pod="openshift-ovn-kubernetes/ovnkube-node-wr282" Jan 28 18:34:30 crc kubenswrapper[4721]: I0128 18:34:30.934287 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/942c0bcf-8f75-42e8-a5c0-af4c640eb13c-os-release\") pod \"multus-additional-cni-plugins-7vsph\" (UID: \"942c0bcf-8f75-42e8-a5c0-af4c640eb13c\") " pod="openshift-multus/multus-additional-cni-plugins-7vsph" Jan 28 18:34:30 crc kubenswrapper[4721]: I0128 18:34:30.934313 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/c0a22020-3f34-4895-beec-2ed5d829ea79-multus-socket-dir-parent\") pod \"multus-rgqdt\" (UID: \"c0a22020-3f34-4895-beec-2ed5d829ea79\") " pod="openshift-multus/multus-rgqdt" Jan 28 18:34:30 crc kubenswrapper[4721]: I0128 18:34:30.934338 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c0a22020-3f34-4895-beec-2ed5d829ea79-etc-kubernetes\") pod \"multus-rgqdt\" (UID: \"c0a22020-3f34-4895-beec-2ed5d829ea79\") " pod="openshift-multus/multus-rgqdt" Jan 28 18:34:30 crc kubenswrapper[4721]: I0128 18:34:30.934363 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/70686e42-b434-4ff9-9753-cfc870beef82-systemd-units\") pod \"ovnkube-node-wr282\" (UID: \"70686e42-b434-4ff9-9753-cfc870beef82\") " pod="openshift-ovn-kubernetes/ovnkube-node-wr282" Jan 28 18:34:30 crc kubenswrapper[4721]: I0128 18:34:30.934391 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/c0a22020-3f34-4895-beec-2ed5d829ea79-system-cni-dir\") pod \"multus-rgqdt\" (UID: \"c0a22020-3f34-4895-beec-2ed5d829ea79\") " pod="openshift-multus/multus-rgqdt" Jan 28 18:34:30 crc kubenswrapper[4721]: I0128 18:34:30.934413 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/c0a22020-3f34-4895-beec-2ed5d829ea79-multus-cni-dir\") pod \"multus-rgqdt\" (UID: \"c0a22020-3f34-4895-beec-2ed5d829ea79\") " 
pod="openshift-multus/multus-rgqdt" Jan 28 18:34:30 crc kubenswrapper[4721]: I0128 18:34:30.934436 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/c0a22020-3f34-4895-beec-2ed5d829ea79-cni-binary-copy\") pod \"multus-rgqdt\" (UID: \"c0a22020-3f34-4895-beec-2ed5d829ea79\") " pod="openshift-multus/multus-rgqdt" Jan 28 18:34:30 crc kubenswrapper[4721]: I0128 18:34:30.934460 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/c0a22020-3f34-4895-beec-2ed5d829ea79-multus-daemon-config\") pod \"multus-rgqdt\" (UID: \"c0a22020-3f34-4895-beec-2ed5d829ea79\") " pod="openshift-multus/multus-rgqdt" Jan 28 18:34:30 crc kubenswrapper[4721]: I0128 18:34:30.934483 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/70686e42-b434-4ff9-9753-cfc870beef82-node-log\") pod \"ovnkube-node-wr282\" (UID: \"70686e42-b434-4ff9-9753-cfc870beef82\") " pod="openshift-ovn-kubernetes/ovnkube-node-wr282" Jan 28 18:34:30 crc kubenswrapper[4721]: I0128 18:34:30.934505 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/942c0bcf-8f75-42e8-a5c0-af4c640eb13c-system-cni-dir\") pod \"multus-additional-cni-plugins-7vsph\" (UID: \"942c0bcf-8f75-42e8-a5c0-af4c640eb13c\") " pod="openshift-multus/multus-additional-cni-plugins-7vsph" Jan 28 18:34:30 crc kubenswrapper[4721]: I0128 18:34:30.934521 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7lkbp\" (UniqueName: \"kubernetes.io/projected/70686e42-b434-4ff9-9753-cfc870beef82-kube-api-access-7lkbp\") pod \"ovnkube-node-wr282\" (UID: \"70686e42-b434-4ff9-9753-cfc870beef82\") " pod="openshift-ovn-kubernetes/ovnkube-node-wr282" Jan 28 18:34:30 crc kubenswrapper[4721]: I0128 18:34:30.934540 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/70686e42-b434-4ff9-9753-cfc870beef82-host-cni-bin\") pod \"ovnkube-node-wr282\" (UID: \"70686e42-b434-4ff9-9753-cfc870beef82\") " pod="openshift-ovn-kubernetes/ovnkube-node-wr282" Jan 28 18:34:30 crc kubenswrapper[4721]: I0128 18:34:30.934556 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/c0a22020-3f34-4895-beec-2ed5d829ea79-cnibin\") pod \"multus-rgqdt\" (UID: \"c0a22020-3f34-4895-beec-2ed5d829ea79\") " pod="openshift-multus/multus-rgqdt" Jan 28 18:34:30 crc kubenswrapper[4721]: I0128 18:34:30.934573 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/c0a22020-3f34-4895-beec-2ed5d829ea79-host-var-lib-cni-bin\") pod \"multus-rgqdt\" (UID: \"c0a22020-3f34-4895-beec-2ed5d829ea79\") " pod="openshift-multus/multus-rgqdt" Jan 28 18:34:30 crc kubenswrapper[4721]: I0128 18:34:30.934588 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/c0a22020-3f34-4895-beec-2ed5d829ea79-host-var-lib-cni-multus\") pod \"multus-rgqdt\" (UID: 
\"c0a22020-3f34-4895-beec-2ed5d829ea79\") " pod="openshift-multus/multus-rgqdt" Jan 28 18:34:30 crc kubenswrapper[4721]: I0128 18:34:30.934614 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/70686e42-b434-4ff9-9753-cfc870beef82-log-socket\") pod \"ovnkube-node-wr282\" (UID: \"70686e42-b434-4ff9-9753-cfc870beef82\") " pod="openshift-ovn-kubernetes/ovnkube-node-wr282" Jan 28 18:34:30 crc kubenswrapper[4721]: I0128 18:34:30.934631 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/70686e42-b434-4ff9-9753-cfc870beef82-host-cni-netd\") pod \"ovnkube-node-wr282\" (UID: \"70686e42-b434-4ff9-9753-cfc870beef82\") " pod="openshift-ovn-kubernetes/ovnkube-node-wr282" Jan 28 18:34:30 crc kubenswrapper[4721]: I0128 18:34:30.934651 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/70686e42-b434-4ff9-9753-cfc870beef82-ovnkube-config\") pod \"ovnkube-node-wr282\" (UID: \"70686e42-b434-4ff9-9753-cfc870beef82\") " pod="openshift-ovn-kubernetes/ovnkube-node-wr282" Jan 28 18:34:30 crc kubenswrapper[4721]: I0128 18:34:30.934667 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/c0a22020-3f34-4895-beec-2ed5d829ea79-host-run-netns\") pod \"multus-rgqdt\" (UID: \"c0a22020-3f34-4895-beec-2ed5d829ea79\") " pod="openshift-multus/multus-rgqdt" Jan 28 18:34:30 crc kubenswrapper[4721]: I0128 18:34:30.934684 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l86pm\" (UniqueName: \"kubernetes.io/projected/c0a22020-3f34-4895-beec-2ed5d829ea79-kube-api-access-l86pm\") pod \"multus-rgqdt\" (UID: \"c0a22020-3f34-4895-beec-2ed5d829ea79\") " pod="openshift-multus/multus-rgqdt" Jan 28 18:34:30 crc kubenswrapper[4721]: I0128 18:34:30.934727 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d4cj5\" (UniqueName: \"kubernetes.io/projected/6e3427a4-9a03-4a08-bf7f-7a5e96290ad6-kube-api-access-d4cj5\") pod \"machine-config-daemon-76rx2\" (UID: \"6e3427a4-9a03-4a08-bf7f-7a5e96290ad6\") " pod="openshift-machine-config-operator/machine-config-daemon-76rx2" Jan 28 18:34:30 crc kubenswrapper[4721]: I0128 18:34:30.934751 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/70686e42-b434-4ff9-9753-cfc870beef82-run-openvswitch\") pod \"ovnkube-node-wr282\" (UID: \"70686e42-b434-4ff9-9753-cfc870beef82\") " pod="openshift-ovn-kubernetes/ovnkube-node-wr282" Jan 28 18:34:30 crc kubenswrapper[4721]: I0128 18:34:30.934774 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/70686e42-b434-4ff9-9753-cfc870beef82-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-wr282\" (UID: \"70686e42-b434-4ff9-9753-cfc870beef82\") " pod="openshift-ovn-kubernetes/ovnkube-node-wr282" Jan 28 18:34:30 crc kubenswrapper[4721]: I0128 18:34:30.934800 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" 
(UniqueName: \"kubernetes.io/host-path/c0a22020-3f34-4895-beec-2ed5d829ea79-host-var-lib-kubelet\") pod \"multus-rgqdt\" (UID: \"c0a22020-3f34-4895-beec-2ed5d829ea79\") " pod="openshift-multus/multus-rgqdt" Jan 28 18:34:30 crc kubenswrapper[4721]: I0128 18:34:30.934823 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/942c0bcf-8f75-42e8-a5c0-af4c640eb13c-cni-binary-copy\") pod \"multus-additional-cni-plugins-7vsph\" (UID: \"942c0bcf-8f75-42e8-a5c0-af4c640eb13c\") " pod="openshift-multus/multus-additional-cni-plugins-7vsph" Jan 28 18:34:30 crc kubenswrapper[4721]: I0128 18:34:30.934847 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/70686e42-b434-4ff9-9753-cfc870beef82-run-ovn\") pod \"ovnkube-node-wr282\" (UID: \"70686e42-b434-4ff9-9753-cfc870beef82\") " pod="openshift-ovn-kubernetes/ovnkube-node-wr282" Jan 28 18:34:30 crc kubenswrapper[4721]: I0128 18:34:30.934863 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/70686e42-b434-4ff9-9753-cfc870beef82-env-overrides\") pod \"ovnkube-node-wr282\" (UID: \"70686e42-b434-4ff9-9753-cfc870beef82\") " pod="openshift-ovn-kubernetes/ovnkube-node-wr282" Jan 28 18:34:30 crc kubenswrapper[4721]: I0128 18:34:30.934877 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/70686e42-b434-4ff9-9753-cfc870beef82-run-systemd\") pod \"ovnkube-node-wr282\" (UID: \"70686e42-b434-4ff9-9753-cfc870beef82\") " pod="openshift-ovn-kubernetes/ovnkube-node-wr282" Jan 28 18:34:30 crc kubenswrapper[4721]: I0128 18:34:30.934897 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/942c0bcf-8f75-42e8-a5c0-af4c640eb13c-tuning-conf-dir\") pod \"multus-additional-cni-plugins-7vsph\" (UID: \"942c0bcf-8f75-42e8-a5c0-af4c640eb13c\") " pod="openshift-multus/multus-additional-cni-plugins-7vsph" Jan 28 18:34:30 crc kubenswrapper[4721]: I0128 18:34:30.934919 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/c0a22020-3f34-4895-beec-2ed5d829ea79-host-run-k8s-cni-cncf-io\") pod \"multus-rgqdt\" (UID: \"c0a22020-3f34-4895-beec-2ed5d829ea79\") " pod="openshift-multus/multus-rgqdt" Jan 28 18:34:30 crc kubenswrapper[4721]: I0128 18:34:30.934955 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/6e3427a4-9a03-4a08-bf7f-7a5e96290ad6-rootfs\") pod \"machine-config-daemon-76rx2\" (UID: \"6e3427a4-9a03-4a08-bf7f-7a5e96290ad6\") " pod="openshift-machine-config-operator/machine-config-daemon-76rx2" Jan 28 18:34:30 crc kubenswrapper[4721]: I0128 18:34:30.934973 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/70686e42-b434-4ff9-9753-cfc870beef82-host-run-netns\") pod \"ovnkube-node-wr282\" (UID: \"70686e42-b434-4ff9-9753-cfc870beef82\") " pod="openshift-ovn-kubernetes/ovnkube-node-wr282" Jan 28 18:34:30 crc kubenswrapper[4721]: I0128 18:34:30.934990 4721 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/c0a22020-3f34-4895-beec-2ed5d829ea79-host-run-multus-certs\") pod \"multus-rgqdt\" (UID: \"c0a22020-3f34-4895-beec-2ed5d829ea79\") " pod="openshift-multus/multus-rgqdt" Jan 28 18:34:30 crc kubenswrapper[4721]: I0128 18:34:30.952662 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-lf92l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"20d04cbd-fcf1-4d48-9cca-1dd29b13c938\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:30Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:30Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mvqr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:34:30Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-lf92l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:30Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:30 crc kubenswrapper[4721]: I0128 18:34:30.966878 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:24Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:30Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:30 crc kubenswrapper[4721]: I0128 18:34:30.993048 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:30Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:31 crc kubenswrapper[4721]: I0128 18:34:31.007140 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:34:31 crc kubenswrapper[4721]: I0128 18:34:31.007208 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:34:31 crc kubenswrapper[4721]: I0128 18:34:31.007225 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:34:31 crc kubenswrapper[4721]: I0128 18:34:31.007247 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:34:31 crc kubenswrapper[4721]: I0128 18:34:31.007262 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:34:31Z","lastTransitionTime":"2026-01-28T18:34:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:34:31 crc kubenswrapper[4721]: I0128 18:34:31.019031 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56147b9675ae31e5e8b4473346c69d90a1013ab084214a7fc34295f810b229a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5dc3277d9793a59f2e2a2b4d015782f383df20b594be435f894d24dbe5b837cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:31Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:31 crc kubenswrapper[4721]: I0128 18:34:31.034311 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:24Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:31Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:31 crc kubenswrapper[4721]: I0128 18:34:31.035539 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/70686e42-b434-4ff9-9753-cfc870beef82-host-cni-bin\") pod \"ovnkube-node-wr282\" (UID: \"70686e42-b434-4ff9-9753-cfc870beef82\") " pod="openshift-ovn-kubernetes/ovnkube-node-wr282" Jan 28 18:34:31 crc kubenswrapper[4721]: I0128 18:34:31.035567 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/c0a22020-3f34-4895-beec-2ed5d829ea79-cnibin\") pod \"multus-rgqdt\" (UID: \"c0a22020-3f34-4895-beec-2ed5d829ea79\") " pod="openshift-multus/multus-rgqdt" Jan 28 18:34:31 crc kubenswrapper[4721]: I0128 18:34:31.035586 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/c0a22020-3f34-4895-beec-2ed5d829ea79-host-var-lib-cni-bin\") pod \"multus-rgqdt\" (UID: \"c0a22020-3f34-4895-beec-2ed5d829ea79\") " pod="openshift-multus/multus-rgqdt" Jan 28 18:34:31 crc kubenswrapper[4721]: I0128 18:34:31.035602 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/c0a22020-3f34-4895-beec-2ed5d829ea79-host-var-lib-cni-multus\") pod \"multus-rgqdt\" (UID: \"c0a22020-3f34-4895-beec-2ed5d829ea79\") " pod="openshift-multus/multus-rgqdt" Jan 28 18:34:31 crc kubenswrapper[4721]: I0128 18:34:31.035619 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/70686e42-b434-4ff9-9753-cfc870beef82-host-cni-netd\") pod \"ovnkube-node-wr282\" (UID: \"70686e42-b434-4ff9-9753-cfc870beef82\") " pod="openshift-ovn-kubernetes/ovnkube-node-wr282" Jan 28 18:34:31 crc kubenswrapper[4721]: I0128 18:34:31.035633 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/70686e42-b434-4ff9-9753-cfc870beef82-ovnkube-config\") pod \"ovnkube-node-wr282\" (UID: \"70686e42-b434-4ff9-9753-cfc870beef82\") " pod="openshift-ovn-kubernetes/ovnkube-node-wr282" Jan 28 18:34:31 crc kubenswrapper[4721]: I0128 18:34:31.035649 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/c0a22020-3f34-4895-beec-2ed5d829ea79-host-run-netns\") pod \"multus-rgqdt\" (UID: \"c0a22020-3f34-4895-beec-2ed5d829ea79\") " pod="openshift-multus/multus-rgqdt" Jan 28 18:34:31 crc kubenswrapper[4721]: I0128 18:34:31.035665 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l86pm\" (UniqueName: \"kubernetes.io/projected/c0a22020-3f34-4895-beec-2ed5d829ea79-kube-api-access-l86pm\") pod \"multus-rgqdt\" (UID: \"c0a22020-3f34-4895-beec-2ed5d829ea79\") " pod="openshift-multus/multus-rgqdt" Jan 28 18:34:31 crc kubenswrapper[4721]: I0128 18:34:31.035691 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/70686e42-b434-4ff9-9753-cfc870beef82-log-socket\") pod \"ovnkube-node-wr282\" (UID: \"70686e42-b434-4ff9-9753-cfc870beef82\") " pod="openshift-ovn-kubernetes/ovnkube-node-wr282" Jan 28 18:34:31 crc kubenswrapper[4721]: I0128 18:34:31.035708 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d4cj5\" (UniqueName: \"kubernetes.io/projected/6e3427a4-9a03-4a08-bf7f-7a5e96290ad6-kube-api-access-d4cj5\") pod \"machine-config-daemon-76rx2\" (UID: \"6e3427a4-9a03-4a08-bf7f-7a5e96290ad6\") " pod="openshift-machine-config-operator/machine-config-daemon-76rx2" Jan 28 18:34:31 crc kubenswrapper[4721]: I0128 18:34:31.035717 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/c0a22020-3f34-4895-beec-2ed5d829ea79-host-var-lib-cni-bin\") pod \"multus-rgqdt\" (UID: \"c0a22020-3f34-4895-beec-2ed5d829ea79\") " pod="openshift-multus/multus-rgqdt" Jan 28 18:34:31 crc kubenswrapper[4721]: I0128 18:34:31.035724 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/70686e42-b434-4ff9-9753-cfc870beef82-run-openvswitch\") pod \"ovnkube-node-wr282\" (UID: \"70686e42-b434-4ff9-9753-cfc870beef82\") " pod="openshift-ovn-kubernetes/ovnkube-node-wr282" Jan 28 18:34:31 crc kubenswrapper[4721]: I0128 18:34:31.035753 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/70686e42-b434-4ff9-9753-cfc870beef82-run-openvswitch\") pod \"ovnkube-node-wr282\" (UID: \"70686e42-b434-4ff9-9753-cfc870beef82\") " pod="openshift-ovn-kubernetes/ovnkube-node-wr282" Jan 28 18:34:31 crc kubenswrapper[4721]: I0128 18:34:31.035772 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/70686e42-b434-4ff9-9753-cfc870beef82-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-wr282\" (UID: \"70686e42-b434-4ff9-9753-cfc870beef82\") " pod="openshift-ovn-kubernetes/ovnkube-node-wr282" Jan 28 18:34:31 crc kubenswrapper[4721]: I0128 18:34:31.035794 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/70686e42-b434-4ff9-9753-cfc870beef82-host-cni-netd\") pod \"ovnkube-node-wr282\" (UID: \"70686e42-b434-4ff9-9753-cfc870beef82\") " pod="openshift-ovn-kubernetes/ovnkube-node-wr282" Jan 28 18:34:31 crc kubenswrapper[4721]: I0128 18:34:31.035853 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/942c0bcf-8f75-42e8-a5c0-af4c640eb13c-cni-binary-copy\") pod \"multus-additional-cni-plugins-7vsph\" (UID: \"942c0bcf-8f75-42e8-a5c0-af4c640eb13c\") " pod="openshift-multus/multus-additional-cni-plugins-7vsph" Jan 28 18:34:31 crc kubenswrapper[4721]: I0128 18:34:31.035948 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/c0a22020-3f34-4895-beec-2ed5d829ea79-host-var-lib-kubelet\") pod \"multus-rgqdt\" (UID: \"c0a22020-3f34-4895-beec-2ed5d829ea79\") " pod="openshift-multus/multus-rgqdt" Jan 28 18:34:31 crc kubenswrapper[4721]: I0128 18:34:31.035987 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/70686e42-b434-4ff9-9753-cfc870beef82-log-socket\") pod \"ovnkube-node-wr282\" (UID: \"70686e42-b434-4ff9-9753-cfc870beef82\") " pod="openshift-ovn-kubernetes/ovnkube-node-wr282" Jan 28 18:34:31 crc kubenswrapper[4721]: I0128 18:34:31.035687 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/c0a22020-3f34-4895-beec-2ed5d829ea79-host-var-lib-cni-multus\") pod \"multus-rgqdt\" (UID: \"c0a22020-3f34-4895-beec-2ed5d829ea79\") " pod="openshift-multus/multus-rgqdt" Jan 28 18:34:31 crc kubenswrapper[4721]: I0128 18:34:31.036005 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/70686e42-b434-4ff9-9753-cfc870beef82-run-ovn\") pod \"ovnkube-node-wr282\" (UID: \"70686e42-b434-4ff9-9753-cfc870beef82\") " pod="openshift-ovn-kubernetes/ovnkube-node-wr282" Jan 28 18:34:31 crc kubenswrapper[4721]: I0128 18:34:31.036098 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/70686e42-b434-4ff9-9753-cfc870beef82-env-overrides\") pod \"ovnkube-node-wr282\" (UID: \"70686e42-b434-4ff9-9753-cfc870beef82\") " pod="openshift-ovn-kubernetes/ovnkube-node-wr282" Jan 28 18:34:31 crc kubenswrapper[4721]: I0128 18:34:31.036127 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: 
\"kubernetes.io/host-path/70686e42-b434-4ff9-9753-cfc870beef82-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-wr282\" (UID: \"70686e42-b434-4ff9-9753-cfc870beef82\") " pod="openshift-ovn-kubernetes/ovnkube-node-wr282" Jan 28 18:34:31 crc kubenswrapper[4721]: I0128 18:34:31.036155 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/70686e42-b434-4ff9-9753-cfc870beef82-run-ovn\") pod \"ovnkube-node-wr282\" (UID: \"70686e42-b434-4ff9-9753-cfc870beef82\") " pod="openshift-ovn-kubernetes/ovnkube-node-wr282" Jan 28 18:34:31 crc kubenswrapper[4721]: I0128 18:34:31.036192 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/c0a22020-3f34-4895-beec-2ed5d829ea79-host-var-lib-kubelet\") pod \"multus-rgqdt\" (UID: \"c0a22020-3f34-4895-beec-2ed5d829ea79\") " pod="openshift-multus/multus-rgqdt" Jan 28 18:34:31 crc kubenswrapper[4721]: I0128 18:34:31.036189 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/c0a22020-3f34-4895-beec-2ed5d829ea79-host-run-k8s-cni-cncf-io\") pod \"multus-rgqdt\" (UID: \"c0a22020-3f34-4895-beec-2ed5d829ea79\") " pod="openshift-multus/multus-rgqdt" Jan 28 18:34:31 crc kubenswrapper[4721]: I0128 18:34:31.036224 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/70686e42-b434-4ff9-9753-cfc870beef82-run-systemd\") pod \"ovnkube-node-wr282\" (UID: \"70686e42-b434-4ff9-9753-cfc870beef82\") " pod="openshift-ovn-kubernetes/ovnkube-node-wr282" Jan 28 18:34:31 crc kubenswrapper[4721]: I0128 18:34:31.036241 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/942c0bcf-8f75-42e8-a5c0-af4c640eb13c-tuning-conf-dir\") pod \"multus-additional-cni-plugins-7vsph\" (UID: \"942c0bcf-8f75-42e8-a5c0-af4c640eb13c\") " pod="openshift-multus/multus-additional-cni-plugins-7vsph" Jan 28 18:34:31 crc kubenswrapper[4721]: I0128 18:34:31.036256 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/c0a22020-3f34-4895-beec-2ed5d829ea79-host-run-multus-certs\") pod \"multus-rgqdt\" (UID: \"c0a22020-3f34-4895-beec-2ed5d829ea79\") " pod="openshift-multus/multus-rgqdt" Jan 28 18:34:31 crc kubenswrapper[4721]: I0128 18:34:31.036289 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/6e3427a4-9a03-4a08-bf7f-7a5e96290ad6-rootfs\") pod \"machine-config-daemon-76rx2\" (UID: \"6e3427a4-9a03-4a08-bf7f-7a5e96290ad6\") " pod="openshift-machine-config-operator/machine-config-daemon-76rx2" Jan 28 18:34:31 crc kubenswrapper[4721]: I0128 18:34:31.036319 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/70686e42-b434-4ff9-9753-cfc870beef82-host-run-netns\") pod \"ovnkube-node-wr282\" (UID: \"70686e42-b434-4ff9-9753-cfc870beef82\") " pod="openshift-ovn-kubernetes/ovnkube-node-wr282" Jan 28 18:34:31 crc kubenswrapper[4721]: I0128 18:34:31.036342 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: 
\"kubernetes.io/configmap/6e3427a4-9a03-4a08-bf7f-7a5e96290ad6-mcd-auth-proxy-config\") pod \"machine-config-daemon-76rx2\" (UID: \"6e3427a4-9a03-4a08-bf7f-7a5e96290ad6\") " pod="openshift-machine-config-operator/machine-config-daemon-76rx2" Jan 28 18:34:31 crc kubenswrapper[4721]: I0128 18:34:31.036364 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/c0a22020-3f34-4895-beec-2ed5d829ea79-os-release\") pod \"multus-rgqdt\" (UID: \"c0a22020-3f34-4895-beec-2ed5d829ea79\") " pod="openshift-multus/multus-rgqdt" Jan 28 18:34:31 crc kubenswrapper[4721]: I0128 18:34:31.036380 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/70686e42-b434-4ff9-9753-cfc870beef82-etc-openvswitch\") pod \"ovnkube-node-wr282\" (UID: \"70686e42-b434-4ff9-9753-cfc870beef82\") " pod="openshift-ovn-kubernetes/ovnkube-node-wr282" Jan 28 18:34:31 crc kubenswrapper[4721]: I0128 18:34:31.036395 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/942c0bcf-8f75-42e8-a5c0-af4c640eb13c-cnibin\") pod \"multus-additional-cni-plugins-7vsph\" (UID: \"942c0bcf-8f75-42e8-a5c0-af4c640eb13c\") " pod="openshift-multus/multus-additional-cni-plugins-7vsph" Jan 28 18:34:31 crc kubenswrapper[4721]: I0128 18:34:31.036410 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/c0a22020-3f34-4895-beec-2ed5d829ea79-multus-conf-dir\") pod \"multus-rgqdt\" (UID: \"c0a22020-3f34-4895-beec-2ed5d829ea79\") " pod="openshift-multus/multus-rgqdt" Jan 28 18:34:31 crc kubenswrapper[4721]: I0128 18:34:31.036438 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/70686e42-b434-4ff9-9753-cfc870beef82-host-kubelet\") pod \"ovnkube-node-wr282\" (UID: \"70686e42-b434-4ff9-9753-cfc870beef82\") " pod="openshift-ovn-kubernetes/ovnkube-node-wr282" Jan 28 18:34:31 crc kubenswrapper[4721]: I0128 18:34:31.036465 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/70686e42-b434-4ff9-9753-cfc870beef82-host-slash\") pod \"ovnkube-node-wr282\" (UID: \"70686e42-b434-4ff9-9753-cfc870beef82\") " pod="openshift-ovn-kubernetes/ovnkube-node-wr282" Jan 28 18:34:31 crc kubenswrapper[4721]: I0128 18:34:31.036482 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/70686e42-b434-4ff9-9753-cfc870beef82-var-lib-openvswitch\") pod \"ovnkube-node-wr282\" (UID: \"70686e42-b434-4ff9-9753-cfc870beef82\") " pod="openshift-ovn-kubernetes/ovnkube-node-wr282" Jan 28 18:34:31 crc kubenswrapper[4721]: I0128 18:34:31.036498 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/c0a22020-3f34-4895-beec-2ed5d829ea79-hostroot\") pod \"multus-rgqdt\" (UID: \"c0a22020-3f34-4895-beec-2ed5d829ea79\") " pod="openshift-multus/multus-rgqdt" Jan 28 18:34:31 crc kubenswrapper[4721]: I0128 18:34:31.036513 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/70686e42-b434-4ff9-9753-cfc870beef82-host-run-ovn-kubernetes\") pod \"ovnkube-node-wr282\" (UID: 
\"70686e42-b434-4ff9-9753-cfc870beef82\") " pod="openshift-ovn-kubernetes/ovnkube-node-wr282" Jan 28 18:34:31 crc kubenswrapper[4721]: I0128 18:34:31.036529 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s84mk\" (UniqueName: \"kubernetes.io/projected/942c0bcf-8f75-42e8-a5c0-af4c640eb13c-kube-api-access-s84mk\") pod \"multus-additional-cni-plugins-7vsph\" (UID: \"942c0bcf-8f75-42e8-a5c0-af4c640eb13c\") " pod="openshift-multus/multus-additional-cni-plugins-7vsph" Jan 28 18:34:31 crc kubenswrapper[4721]: I0128 18:34:31.036544 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/6e3427a4-9a03-4a08-bf7f-7a5e96290ad6-proxy-tls\") pod \"machine-config-daemon-76rx2\" (UID: \"6e3427a4-9a03-4a08-bf7f-7a5e96290ad6\") " pod="openshift-machine-config-operator/machine-config-daemon-76rx2" Jan 28 18:34:31 crc kubenswrapper[4721]: I0128 18:34:31.036560 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/942c0bcf-8f75-42e8-a5c0-af4c640eb13c-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-7vsph\" (UID: \"942c0bcf-8f75-42e8-a5c0-af4c640eb13c\") " pod="openshift-multus/multus-additional-cni-plugins-7vsph" Jan 28 18:34:31 crc kubenswrapper[4721]: I0128 18:34:31.036577 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/c0a22020-3f34-4895-beec-2ed5d829ea79-multus-socket-dir-parent\") pod \"multus-rgqdt\" (UID: \"c0a22020-3f34-4895-beec-2ed5d829ea79\") " pod="openshift-multus/multus-rgqdt" Jan 28 18:34:31 crc kubenswrapper[4721]: I0128 18:34:31.036592 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c0a22020-3f34-4895-beec-2ed5d829ea79-etc-kubernetes\") pod \"multus-rgqdt\" (UID: \"c0a22020-3f34-4895-beec-2ed5d829ea79\") " pod="openshift-multus/multus-rgqdt" Jan 28 18:34:31 crc kubenswrapper[4721]: I0128 18:34:31.036607 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/70686e42-b434-4ff9-9753-cfc870beef82-systemd-units\") pod \"ovnkube-node-wr282\" (UID: \"70686e42-b434-4ff9-9753-cfc870beef82\") " pod="openshift-ovn-kubernetes/ovnkube-node-wr282" Jan 28 18:34:31 crc kubenswrapper[4721]: I0128 18:34:31.036612 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/70686e42-b434-4ff9-9753-cfc870beef82-ovnkube-config\") pod \"ovnkube-node-wr282\" (UID: \"70686e42-b434-4ff9-9753-cfc870beef82\") " pod="openshift-ovn-kubernetes/ovnkube-node-wr282" Jan 28 18:34:31 crc kubenswrapper[4721]: I0128 18:34:31.036666 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/70686e42-b434-4ff9-9753-cfc870beef82-run-systemd\") pod \"ovnkube-node-wr282\" (UID: \"70686e42-b434-4ff9-9753-cfc870beef82\") " pod="openshift-ovn-kubernetes/ovnkube-node-wr282" Jan 28 18:34:31 crc kubenswrapper[4721]: I0128 18:34:31.036623 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/942c0bcf-8f75-42e8-a5c0-af4c640eb13c-cni-binary-copy\") pod \"multus-additional-cni-plugins-7vsph\" (UID: 
\"942c0bcf-8f75-42e8-a5c0-af4c640eb13c\") " pod="openshift-multus/multus-additional-cni-plugins-7vsph" Jan 28 18:34:31 crc kubenswrapper[4721]: I0128 18:34:31.035779 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/c0a22020-3f34-4895-beec-2ed5d829ea79-host-run-netns\") pod \"multus-rgqdt\" (UID: \"c0a22020-3f34-4895-beec-2ed5d829ea79\") " pod="openshift-multus/multus-rgqdt" Jan 28 18:34:31 crc kubenswrapper[4721]: I0128 18:34:31.036702 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/70686e42-b434-4ff9-9753-cfc870beef82-host-run-netns\") pod \"ovnkube-node-wr282\" (UID: \"70686e42-b434-4ff9-9753-cfc870beef82\") " pod="openshift-ovn-kubernetes/ovnkube-node-wr282" Jan 28 18:34:31 crc kubenswrapper[4721]: I0128 18:34:31.036623 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/70686e42-b434-4ff9-9753-cfc870beef82-ovn-node-metrics-cert\") pod \"ovnkube-node-wr282\" (UID: \"70686e42-b434-4ff9-9753-cfc870beef82\") " pod="openshift-ovn-kubernetes/ovnkube-node-wr282" Jan 28 18:34:31 crc kubenswrapper[4721]: I0128 18:34:31.036748 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/70686e42-b434-4ff9-9753-cfc870beef82-ovnkube-script-lib\") pod \"ovnkube-node-wr282\" (UID: \"70686e42-b434-4ff9-9753-cfc870beef82\") " pod="openshift-ovn-kubernetes/ovnkube-node-wr282" Jan 28 18:34:31 crc kubenswrapper[4721]: I0128 18:34:31.036777 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/942c0bcf-8f75-42e8-a5c0-af4c640eb13c-os-release\") pod \"multus-additional-cni-plugins-7vsph\" (UID: \"942c0bcf-8f75-42e8-a5c0-af4c640eb13c\") " pod="openshift-multus/multus-additional-cni-plugins-7vsph" Jan 28 18:34:31 crc kubenswrapper[4721]: I0128 18:34:31.036801 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/c0a22020-3f34-4895-beec-2ed5d829ea79-cni-binary-copy\") pod \"multus-rgqdt\" (UID: \"c0a22020-3f34-4895-beec-2ed5d829ea79\") " pod="openshift-multus/multus-rgqdt" Jan 28 18:34:31 crc kubenswrapper[4721]: I0128 18:34:31.036825 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/c0a22020-3f34-4895-beec-2ed5d829ea79-system-cni-dir\") pod \"multus-rgqdt\" (UID: \"c0a22020-3f34-4895-beec-2ed5d829ea79\") " pod="openshift-multus/multus-rgqdt" Jan 28 18:34:31 crc kubenswrapper[4721]: I0128 18:34:31.036846 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/c0a22020-3f34-4895-beec-2ed5d829ea79-multus-cni-dir\") pod \"multus-rgqdt\" (UID: \"c0a22020-3f34-4895-beec-2ed5d829ea79\") " pod="openshift-multus/multus-rgqdt" Jan 28 18:34:31 crc kubenswrapper[4721]: I0128 18:34:31.036868 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/70686e42-b434-4ff9-9753-cfc870beef82-node-log\") pod \"ovnkube-node-wr282\" (UID: \"70686e42-b434-4ff9-9753-cfc870beef82\") " pod="openshift-ovn-kubernetes/ovnkube-node-wr282" Jan 28 18:34:31 crc kubenswrapper[4721]: I0128 18:34:31.036891 
4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/c0a22020-3f34-4895-beec-2ed5d829ea79-multus-daemon-config\") pod \"multus-rgqdt\" (UID: \"c0a22020-3f34-4895-beec-2ed5d829ea79\") " pod="openshift-multus/multus-rgqdt" Jan 28 18:34:31 crc kubenswrapper[4721]: I0128 18:34:31.036917 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7lkbp\" (UniqueName: \"kubernetes.io/projected/70686e42-b434-4ff9-9753-cfc870beef82-kube-api-access-7lkbp\") pod \"ovnkube-node-wr282\" (UID: \"70686e42-b434-4ff9-9753-cfc870beef82\") " pod="openshift-ovn-kubernetes/ovnkube-node-wr282" Jan 28 18:34:31 crc kubenswrapper[4721]: I0128 18:34:31.036942 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/942c0bcf-8f75-42e8-a5c0-af4c640eb13c-system-cni-dir\") pod \"multus-additional-cni-plugins-7vsph\" (UID: \"942c0bcf-8f75-42e8-a5c0-af4c640eb13c\") " pod="openshift-multus/multus-additional-cni-plugins-7vsph" Jan 28 18:34:31 crc kubenswrapper[4721]: I0128 18:34:31.036999 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/70686e42-b434-4ff9-9753-cfc870beef82-etc-openvswitch\") pod \"ovnkube-node-wr282\" (UID: \"70686e42-b434-4ff9-9753-cfc870beef82\") " pod="openshift-ovn-kubernetes/ovnkube-node-wr282" Jan 28 18:34:31 crc kubenswrapper[4721]: I0128 18:34:31.037018 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/942c0bcf-8f75-42e8-a5c0-af4c640eb13c-system-cni-dir\") pod \"multus-additional-cni-plugins-7vsph\" (UID: \"942c0bcf-8f75-42e8-a5c0-af4c640eb13c\") " pod="openshift-multus/multus-additional-cni-plugins-7vsph" Jan 28 18:34:31 crc kubenswrapper[4721]: I0128 18:34:31.037023 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/c0a22020-3f34-4895-beec-2ed5d829ea79-host-run-multus-certs\") pod \"multus-rgqdt\" (UID: \"c0a22020-3f34-4895-beec-2ed5d829ea79\") " pod="openshift-multus/multus-rgqdt" Jan 28 18:34:31 crc kubenswrapper[4721]: I0128 18:34:31.036224 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/c0a22020-3f34-4895-beec-2ed5d829ea79-host-run-k8s-cni-cncf-io\") pod \"multus-rgqdt\" (UID: \"c0a22020-3f34-4895-beec-2ed5d829ea79\") " pod="openshift-multus/multus-rgqdt" Jan 28 18:34:31 crc kubenswrapper[4721]: I0128 18:34:31.037050 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/c0a22020-3f34-4895-beec-2ed5d829ea79-os-release\") pod \"multus-rgqdt\" (UID: \"c0a22020-3f34-4895-beec-2ed5d829ea79\") " pod="openshift-multus/multus-rgqdt" Jan 28 18:34:31 crc kubenswrapper[4721]: I0128 18:34:31.037076 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/70686e42-b434-4ff9-9753-cfc870beef82-node-log\") pod \"ovnkube-node-wr282\" (UID: \"70686e42-b434-4ff9-9753-cfc870beef82\") " pod="openshift-ovn-kubernetes/ovnkube-node-wr282" Jan 28 18:34:31 crc kubenswrapper[4721]: I0128 18:34:31.037080 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: 
\"kubernetes.io/host-path/942c0bcf-8f75-42e8-a5c0-af4c640eb13c-os-release\") pod \"multus-additional-cni-plugins-7vsph\" (UID: \"942c0bcf-8f75-42e8-a5c0-af4c640eb13c\") " pod="openshift-multus/multus-additional-cni-plugins-7vsph" Jan 28 18:34:31 crc kubenswrapper[4721]: I0128 18:34:31.037115 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/c0a22020-3f34-4895-beec-2ed5d829ea79-system-cni-dir\") pod \"multus-rgqdt\" (UID: \"c0a22020-3f34-4895-beec-2ed5d829ea79\") " pod="openshift-multus/multus-rgqdt" Jan 28 18:34:31 crc kubenswrapper[4721]: I0128 18:34:31.037279 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/c0a22020-3f34-4895-beec-2ed5d829ea79-multus-cni-dir\") pod \"multus-rgqdt\" (UID: \"c0a22020-3f34-4895-beec-2ed5d829ea79\") " pod="openshift-multus/multus-rgqdt" Jan 28 18:34:31 crc kubenswrapper[4721]: I0128 18:34:31.036891 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/70686e42-b434-4ff9-9753-cfc870beef82-env-overrides\") pod \"ovnkube-node-wr282\" (UID: \"70686e42-b434-4ff9-9753-cfc870beef82\") " pod="openshift-ovn-kubernetes/ovnkube-node-wr282" Jan 28 18:34:31 crc kubenswrapper[4721]: I0128 18:34:31.037455 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/70686e42-b434-4ff9-9753-cfc870beef82-host-cni-bin\") pod \"ovnkube-node-wr282\" (UID: \"70686e42-b434-4ff9-9753-cfc870beef82\") " pod="openshift-ovn-kubernetes/ovnkube-node-wr282" Jan 28 18:34:31 crc kubenswrapper[4721]: I0128 18:34:31.037591 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/6e3427a4-9a03-4a08-bf7f-7a5e96290ad6-mcd-auth-proxy-config\") pod \"machine-config-daemon-76rx2\" (UID: \"6e3427a4-9a03-4a08-bf7f-7a5e96290ad6\") " pod="openshift-machine-config-operator/machine-config-daemon-76rx2" Jan 28 18:34:31 crc kubenswrapper[4721]: I0128 18:34:31.037720 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/942c0bcf-8f75-42e8-a5c0-af4c640eb13c-tuning-conf-dir\") pod \"multus-additional-cni-plugins-7vsph\" (UID: \"942c0bcf-8f75-42e8-a5c0-af4c640eb13c\") " pod="openshift-multus/multus-additional-cni-plugins-7vsph" Jan 28 18:34:31 crc kubenswrapper[4721]: I0128 18:34:31.037770 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/70686e42-b434-4ff9-9753-cfc870beef82-host-run-ovn-kubernetes\") pod \"ovnkube-node-wr282\" (UID: \"70686e42-b434-4ff9-9753-cfc870beef82\") " pod="openshift-ovn-kubernetes/ovnkube-node-wr282" Jan 28 18:34:31 crc kubenswrapper[4721]: I0128 18:34:31.036669 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/c0a22020-3f34-4895-beec-2ed5d829ea79-multus-conf-dir\") pod \"multus-rgqdt\" (UID: \"c0a22020-3f34-4895-beec-2ed5d829ea79\") " pod="openshift-multus/multus-rgqdt" Jan 28 18:34:31 crc kubenswrapper[4721]: I0128 18:34:31.037798 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/942c0bcf-8f75-42e8-a5c0-af4c640eb13c-cnibin\") pod \"multus-additional-cni-plugins-7vsph\" (UID: 
\"942c0bcf-8f75-42e8-a5c0-af4c640eb13c\") " pod="openshift-multus/multus-additional-cni-plugins-7vsph" Jan 28 18:34:31 crc kubenswrapper[4721]: I0128 18:34:31.037801 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/70686e42-b434-4ff9-9753-cfc870beef82-var-lib-openvswitch\") pod \"ovnkube-node-wr282\" (UID: \"70686e42-b434-4ff9-9753-cfc870beef82\") " pod="openshift-ovn-kubernetes/ovnkube-node-wr282" Jan 28 18:34:31 crc kubenswrapper[4721]: I0128 18:34:31.037820 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/70686e42-b434-4ff9-9753-cfc870beef82-host-kubelet\") pod \"ovnkube-node-wr282\" (UID: \"70686e42-b434-4ff9-9753-cfc870beef82\") " pod="openshift-ovn-kubernetes/ovnkube-node-wr282" Jan 28 18:34:31 crc kubenswrapper[4721]: I0128 18:34:31.037901 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/6e3427a4-9a03-4a08-bf7f-7a5e96290ad6-rootfs\") pod \"machine-config-daemon-76rx2\" (UID: \"6e3427a4-9a03-4a08-bf7f-7a5e96290ad6\") " pod="openshift-machine-config-operator/machine-config-daemon-76rx2" Jan 28 18:34:31 crc kubenswrapper[4721]: I0128 18:34:31.037949 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c0a22020-3f34-4895-beec-2ed5d829ea79-etc-kubernetes\") pod \"multus-rgqdt\" (UID: \"c0a22020-3f34-4895-beec-2ed5d829ea79\") " pod="openshift-multus/multus-rgqdt" Jan 28 18:34:31 crc kubenswrapper[4721]: I0128 18:34:31.037961 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/70686e42-b434-4ff9-9753-cfc870beef82-host-slash\") pod \"ovnkube-node-wr282\" (UID: \"70686e42-b434-4ff9-9753-cfc870beef82\") " pod="openshift-ovn-kubernetes/ovnkube-node-wr282" Jan 28 18:34:31 crc kubenswrapper[4721]: I0128 18:34:31.037974 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/c0a22020-3f34-4895-beec-2ed5d829ea79-multus-socket-dir-parent\") pod \"multus-rgqdt\" (UID: \"c0a22020-3f34-4895-beec-2ed5d829ea79\") " pod="openshift-multus/multus-rgqdt" Jan 28 18:34:31 crc kubenswrapper[4721]: I0128 18:34:31.037996 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/70686e42-b434-4ff9-9753-cfc870beef82-systemd-units\") pod \"ovnkube-node-wr282\" (UID: \"70686e42-b434-4ff9-9753-cfc870beef82\") " pod="openshift-ovn-kubernetes/ovnkube-node-wr282" Jan 28 18:34:31 crc kubenswrapper[4721]: I0128 18:34:31.038079 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/c0a22020-3f34-4895-beec-2ed5d829ea79-cnibin\") pod \"multus-rgqdt\" (UID: \"c0a22020-3f34-4895-beec-2ed5d829ea79\") " pod="openshift-multus/multus-rgqdt" Jan 28 18:34:31 crc kubenswrapper[4721]: I0128 18:34:31.038198 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/c0a22020-3f34-4895-beec-2ed5d829ea79-hostroot\") pod \"multus-rgqdt\" (UID: \"c0a22020-3f34-4895-beec-2ed5d829ea79\") " pod="openshift-multus/multus-rgqdt" Jan 28 18:34:31 crc kubenswrapper[4721]: I0128 18:34:31.038436 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/942c0bcf-8f75-42e8-a5c0-af4c640eb13c-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-7vsph\" (UID: \"942c0bcf-8f75-42e8-a5c0-af4c640eb13c\") " pod="openshift-multus/multus-additional-cni-plugins-7vsph" Jan 28 18:34:31 crc kubenswrapper[4721]: I0128 18:34:31.038555 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/70686e42-b434-4ff9-9753-cfc870beef82-ovnkube-script-lib\") pod \"ovnkube-node-wr282\" (UID: \"70686e42-b434-4ff9-9753-cfc870beef82\") " pod="openshift-ovn-kubernetes/ovnkube-node-wr282" Jan 28 18:34:31 crc kubenswrapper[4721]: I0128 18:34:31.038801 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/c0a22020-3f34-4895-beec-2ed5d829ea79-cni-binary-copy\") pod \"multus-rgqdt\" (UID: \"c0a22020-3f34-4895-beec-2ed5d829ea79\") " pod="openshift-multus/multus-rgqdt" Jan 28 18:34:31 crc kubenswrapper[4721]: I0128 18:34:31.038894 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/c0a22020-3f34-4895-beec-2ed5d829ea79-multus-daemon-config\") pod \"multus-rgqdt\" (UID: \"c0a22020-3f34-4895-beec-2ed5d829ea79\") " pod="openshift-multus/multus-rgqdt" Jan 28 18:34:31 crc kubenswrapper[4721]: I0128 18:34:31.040124 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/70686e42-b434-4ff9-9753-cfc870beef82-ovn-node-metrics-cert\") pod \"ovnkube-node-wr282\" (UID: \"70686e42-b434-4ff9-9753-cfc870beef82\") " pod="openshift-ovn-kubernetes/ovnkube-node-wr282" Jan 28 18:34:31 crc kubenswrapper[4721]: I0128 18:34:31.051344 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/6e3427a4-9a03-4a08-bf7f-7a5e96290ad6-proxy-tls\") pod \"machine-config-daemon-76rx2\" (UID: \"6e3427a4-9a03-4a08-bf7f-7a5e96290ad6\") " pod="openshift-machine-config-operator/machine-config-daemon-76rx2" Jan 28 18:34:31 crc kubenswrapper[4721]: I0128 18:34:31.052720 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l86pm\" (UniqueName: \"kubernetes.io/projected/c0a22020-3f34-4895-beec-2ed5d829ea79-kube-api-access-l86pm\") pod \"multus-rgqdt\" (UID: \"c0a22020-3f34-4895-beec-2ed5d829ea79\") " pod="openshift-multus/multus-rgqdt" Jan 28 18:34:31 crc kubenswrapper[4721]: I0128 18:34:31.056914 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7lkbp\" (UniqueName: \"kubernetes.io/projected/70686e42-b434-4ff9-9753-cfc870beef82-kube-api-access-7lkbp\") pod \"ovnkube-node-wr282\" (UID: \"70686e42-b434-4ff9-9753-cfc870beef82\") " pod="openshift-ovn-kubernetes/ovnkube-node-wr282" Jan 28 18:34:31 crc kubenswrapper[4721]: I0128 18:34:31.062310 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d4cj5\" (UniqueName: \"kubernetes.io/projected/6e3427a4-9a03-4a08-bf7f-7a5e96290ad6-kube-api-access-d4cj5\") pod \"machine-config-daemon-76rx2\" (UID: \"6e3427a4-9a03-4a08-bf7f-7a5e96290ad6\") " pod="openshift-machine-config-operator/machine-config-daemon-76rx2" Jan 28 18:34:31 crc kubenswrapper[4721]: I0128 18:34:31.064951 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s84mk\" (UniqueName: 
\"kubernetes.io/projected/942c0bcf-8f75-42e8-a5c0-af4c640eb13c-kube-api-access-s84mk\") pod \"multus-additional-cni-plugins-7vsph\" (UID: \"942c0bcf-8f75-42e8-a5c0-af4c640eb13c\") " pod="openshift-multus/multus-additional-cni-plugins-7vsph" Jan 28 18:34:31 crc kubenswrapper[4721]: I0128 18:34:31.070802 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"50459cb7-bb46-4f2f-a119-03a6102ad146\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:33:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97a290731225e8b998c044c9547db36b5af73889929db5fbc37b6b4170f5dbb7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9efdfabfba8f11d2bdeafad77520260137fcfff3ca51ae77afd4ee9b141f602\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3c1b4d37e11f8a4ee9539e5ea3cce77b7d382133b9e59ccd9c3e8aa2b308754\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\"
:\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52a1199a0611555765afe8fcf017742788488064c958d0c75c46d51ddc2ff9d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://316506ed1e9f91314d427e1b6c3a44ee58d0c4a3a8a93b97bdf098703a6a2f34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f1a6a669a0341baa2d7fb17bcebfe702f96d1192ca35c3f2d00e463778271d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f1a6a669a0341baa2d7fb17bcebfe702f96d1192ca35c3f2d00e463778271d6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:33:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d3a248e15466c89ce3b237a22ce54365004ea86dc1541e1ac44e028f91bf19f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"rest
artCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d3a248e15466c89ce3b237a22ce54365004ea86dc1541e1ac44e028f91bf19f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:34:02Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f2e1f3a63445bad45029602f52863bfdf2fab495711e3318d68fee042530acb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f2e1f3a63445bad45029602f52863bfdf2fab495711e3318d68fee042530acb1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:34:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:33:56Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:31Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:31 crc kubenswrapper[4721]: I0128 18:34:31.071082 4721 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2027-01-28 18:29:30 +0000 UTC, rotation deadline is 2026-12-20 17:41:51.265935915 +0000 UTC Jan 28 18:34:31 crc kubenswrapper[4721]: I0128 18:34:31.071120 4721 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 7823h7m20.194818353s for next certificate rotation Jan 28 18:34:31 crc kubenswrapper[4721]: I0128 18:34:31.102498 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"94f18835-9a2d-4427-bc71-e4cd48b94c19\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:33:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:33:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:33:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a345c3bb49064600db61f16588229b6877293710e6c21c15d91746c2f1d50b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://93950542dfa7bda5686ef6d8ff927d14baafd149893c732658e9aa3916be64ae\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://486a6284a1e3930aac13368062b8796b4c271214dd08cb60c7ae2ce7b4a45a05\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca3243d75b5ce4aff178d82c45c7d1f765013233b5736600151adb4801aa462b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ca3243d75b5ce4aff178d82c45c7d1f765013233b5736600151adb4801aa462b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T18:34:21Z\\\",\\\"message\\\":\\\"le observer\\\\nW0128 18:34:20.724978 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0128 18:34:20.725313 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 18:34:20.726636 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2540251349/tls.crt::/tmp/serving-cert-2540251349/tls.key\\\\\\\"\\\\nI0128 18:34:21.177830 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0128 18:34:21.196258 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0128 18:34:21.196291 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0128 18:34:21.196317 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0128 18:34:21.196323 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0128 18:34:21.230729 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0128 18:34:21.230760 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0128 18:34:21.230771 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 18:34:21.230781 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 18:34:21.230789 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0128 18:34:21.230795 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0128 18:34:21.230800 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0128 18:34:21.230804 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0128 18:34:21.233981 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T18:34:20Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://90f1279d8262e042527207dbe56a1d08ba554650510bebfc53bd24cd84532026\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:06Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1499c7b625ec721799e4cf54e3de39ab75a752bbb0450a8eeffcdb6259f8a585\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1499c7b625ec721799e4cf54e3de39ab75a752bbb0450a8eeffcdb6259f8a585\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:33:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:33:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:31Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:31 crc kubenswrapper[4721]: I0128 18:34:31.111519 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:34:31 crc kubenswrapper[4721]: I0128 18:34:31.111556 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:34:31 crc kubenswrapper[4721]: I0128 18:34:31.111567 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:34:31 crc kubenswrapper[4721]: I0128 18:34:31.111582 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:34:31 crc kubenswrapper[4721]: I0128 18:34:31.111594 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:34:31Z","lastTransitionTime":"2026-01-28T18:34:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:34:31 crc kubenswrapper[4721]: I0128 18:34:31.141662 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"86fc797f-6769-4801-8cb0-b9f25c9ec29b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:33:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:33:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e8c8cd370cc146f15325497afef171437a3c7c3d4e82cda99a321bf847399bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f712f75bf0603b358c696f1a62f2363254ab926615ab377291c7083308e3f2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:33:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6143e461a697375ca79df96494f8cf4a575e622428db241f50a52ae1c67acbc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:03
Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e37a3261383445df2e4050fb0b3f92c0afc6379c71eb061475783a72cd37459f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:33:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:31Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:31 crc kubenswrapper[4721]: I0128 18:34:31.162257 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa3af2d00ecbcf4d30c4c81191306dd45625f55032058b314ce2b91f6b2033e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:31Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:31 crc kubenswrapper[4721]: I0128 18:34:31.179325 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-76rx2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e3427a4-9a03-4a08-bf7f-7a5e96290ad6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:30Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:30Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4cj5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4cj5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:34:30Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-76rx2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:31Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:31 crc kubenswrapper[4721]: I0128 18:34:31.204705 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"50459cb7-bb46-4f2f-a119-03a6102ad146\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:33:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97a290731225e8b998c044c9547db36b5af73889929db5fbc37b6b4170f5dbb7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9efdfabfba8f11d2bdeafad77520260137fcfff3ca51ae77afd4ee9b141f602\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3c1b4d37e11f8a4ee9539e5ea3cce77b7d382133b9e59ccd9c3e8aa2b308754\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52a1199a0611555765afe8fcf01774278848806
4c958d0c75c46d51ddc2ff9d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://316506ed1e9f91314d427e1b6c3a44ee58d0c4a3a8a93b97bdf098703a6a2f34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f1a6a669a0341baa2d7fb17bcebfe702f96d1192ca35c3f2d00e463778271d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f1a6a669a0341baa2d7fb17bcebfe702f96d1192ca35c3f2d00e463778271d6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:33:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d3a248e15466c89ce3b237a22ce54365004ea86dc1541e1ac44e028f91bf19f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d3a248e15466c89ce3b237a22ce54365004ea86dc1541e1ac44e028f91bf19f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:34:02Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f2e1f3a63445bad45029602f52863bfdf2fab495711e3318d68fee042530acb1\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f2e1f3a63445bad45029602f52863bfdf2fab495711e3318d68fee042530acb1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:34:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:33:56Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:31Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:31 crc kubenswrapper[4721]: I0128 18:34:31.213657 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:34:31 crc kubenswrapper[4721]: I0128 18:34:31.213692 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:34:31 crc kubenswrapper[4721]: I0128 18:34:31.213703 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:34:31 crc kubenswrapper[4721]: I0128 18:34:31.213720 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:34:31 crc kubenswrapper[4721]: I0128 18:34:31.213731 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:34:31Z","lastTransitionTime":"2026-01-28T18:34:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:34:31 crc kubenswrapper[4721]: I0128 18:34:31.219501 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"94f18835-9a2d-4427-bc71-e4cd48b94c19\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:33:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:33:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:33:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a345c3bb49064600db61f16588229b6877293710e6c21c15d91746c2f1d50b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://93950542dfa7bda5686ef6d8ff927d14baafd149893c732658e9aa3916be64ae\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://486a6284a1e3930aac13368062b8796b4c271214dd08cb60c7ae2ce7b4a45a05\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca3243d75b5ce4aff178d82c45c7d1f765013233b5736600151adb4801aa462b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ca3243d75b5ce4aff178d82c45c7d1f765013233b5736600151adb4801aa462b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T18:34:21Z\\\",\\\"message\\\":\\\"le observer\\\\nW0128 18:34:20.724978 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0128 18:34:20.725313 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 18:34:20.726636 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2540251349/tls.crt::/tmp/serving-cert-2540251349/tls.key\\\\\\\"\\\\nI0128 18:34:21.177830 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0128 18:34:21.196258 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0128 18:34:21.196291 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0128 18:34:21.196317 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0128 18:34:21.196323 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0128 18:34:21.230729 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0128 18:34:21.230760 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0128 18:34:21.230771 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 18:34:21.230781 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 18:34:21.230789 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0128 18:34:21.230795 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0128 18:34:21.230800 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0128 18:34:21.230804 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0128 18:34:21.233981 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T18:34:20Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://90f1279d8262e042527207dbe56a1d08ba554650510bebfc53bd24cd84532026\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:06Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1499c7b625ec721799e4cf54e3de39ab75a752bbb0450a8eeffcdb6259f8a585\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1499c7b625ec721799e4cf54e3de39ab75a752bbb0450a8eeffcdb6259f8a585\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:33:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:33:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:31Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:31 crc kubenswrapper[4721]: I0128 18:34:31.223610 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-76rx2" Jan 28 18:34:31 crc kubenswrapper[4721]: I0128 18:34:31.231934 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-wr282" Jan 28 18:34:31 crc kubenswrapper[4721]: I0128 18:34:31.233542 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"86fc797f-6769-4801-8cb0-b9f25c9ec29b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:33:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:33:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e8c8cd370cc146f15325497afef171437a3c7c3d4e82cda99a321bf847399bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f712f75bf0603b358c696f1a62f2363254ab926615ab377291c7083308e3f2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:33:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6143e461a697375ca79df96494f8cf4a575e622428db241f50a52ae1c67acbc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\
":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e37a3261383445df2e4050fb0b3f92c0afc6379c71eb061475783a72cd37459f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:33:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:31Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:31 crc kubenswrapper[4721]: W0128 18:34:31.234098 4721 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6e3427a4_9a03_4a08_bf7f_7a5e96290ad6.slice/crio-e1b76b0df40cb85ea27c73d7de7cb331552043014053ad184706d14a6e42b888 WatchSource:0}: Error finding container e1b76b0df40cb85ea27c73d7de7cb331552043014053ad184706d14a6e42b888: Status 404 returned error can't find the container with id e1b76b0df40cb85ea27c73d7de7cb331552043014053ad184706d14a6e42b888 Jan 28 18:34:31 crc kubenswrapper[4721]: I0128 18:34:31.242897 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-7vsph" Jan 28 18:34:31 crc kubenswrapper[4721]: W0128 18:34:31.245789 4721 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod70686e42_b434_4ff9_9753_cfc870beef82.slice/crio-f09b4c32b88c09bbbda6325a1c46dc1a2127a8c6ad924249908667da133345b2 WatchSource:0}: Error finding container f09b4c32b88c09bbbda6325a1c46dc1a2127a8c6ad924249908667da133345b2: Status 404 returned error can't find the container with id f09b4c32b88c09bbbda6325a1c46dc1a2127a8c6ad924249908667da133345b2 Jan 28 18:34:31 crc kubenswrapper[4721]: I0128 18:34:31.248943 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:31Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:31 crc kubenswrapper[4721]: I0128 18:34:31.249255 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-rgqdt" Jan 28 18:34:31 crc kubenswrapper[4721]: I0128 18:34:31.259852 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56147b9675ae31e5e8b4473346c69d90a1013ab084214a7fc34295f810b229a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5dc3277d9793a59f2e2a2b4d015782f383df20b594be435f894d24dbe5b837cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:31Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:31 crc kubenswrapper[4721]: W0128 18:34:31.261717 4721 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod942c0bcf_8f75_42e8_a5c0_af4c640eb13c.slice/crio-4f486d38188cf3afecf58d9b4868e503774903a98b82a66c0423a41b28f6a6a1 WatchSource:0}: Error finding container 4f486d38188cf3afecf58d9b4868e503774903a98b82a66c0423a41b28f6a6a1: Status 404 returned error can't find the container with id 4f486d38188cf3afecf58d9b4868e503774903a98b82a66c0423a41b28f6a6a1 Jan 28 18:34:31 crc kubenswrapper[4721]: W0128 18:34:31.263457 4721 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc0a22020_3f34_4895_beec_2ed5d829ea79.slice/crio-53bfba68d93f2a2ddc6494887fc2459b1987d7bb83678bbd536aae6ef13d2b54 WatchSource:0}: Error finding container 53bfba68d93f2a2ddc6494887fc2459b1987d7bb83678bbd536aae6ef13d2b54: Status 404 returned error can't find the container with id 53bfba68d93f2a2ddc6494887fc2459b1987d7bb83678bbd536aae6ef13d2b54 Jan 28 18:34:31 crc kubenswrapper[4721]: I0128 18:34:31.279784 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wr282" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"70686e42-b434-4ff9-9753-cfc870beef82\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:30Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7lkbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7lkbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7lkbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-7lkbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7lkbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7lkbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7lkbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7lkbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7lkbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:34:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-wr282\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:31Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:31 crc kubenswrapper[4721]: I0128 18:34:31.293659 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rgqdt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c0a22020-3f34-4895-beec-2ed5d829ea79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l86pm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:34:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rgqdt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": 
failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:31Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:31 crc kubenswrapper[4721]: I0128 18:34:31.306917 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef61ab09457440960cf27586f65751f928ccc999db19fd16364262ce2449cd97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:31Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:31 crc kubenswrapper[4721]: I0128 18:34:31.317550 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:34:31 crc kubenswrapper[4721]: I0128 18:34:31.317580 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:34:31 crc kubenswrapper[4721]: I0128 18:34:31.317589 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:34:31 crc kubenswrapper[4721]: I0128 18:34:31.317604 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:34:31 crc kubenswrapper[4721]: I0128 18:34:31.317614 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:34:31Z","lastTransitionTime":"2026-01-28T18:34:31Z","reason":"KubeletNotReady","message":"container runtime network not 
ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:34:31 crc kubenswrapper[4721]: I0128 18:34:31.322516 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-lf92l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"20d04cbd-fcf1-4d48-9cca-1dd29b13c938\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:30Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:30Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mvqr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:34:30Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-lf92l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:31Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:31 crc kubenswrapper[4721]: I0128 18:34:31.339870 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7vsph" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"942c0bcf-8f75-42e8-a5c0-af4c640eb13c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:30Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s84mk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s84mk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"
name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s84mk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s84mk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s84mk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s84mk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\
\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s84mk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:34:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7vsph\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:31Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:31 crc kubenswrapper[4721]: I0128 18:34:31.360197 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:24Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:31Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:31 crc kubenswrapper[4721]: I0128 18:34:31.373163 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:24Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:31Z is after 2025-08-24T17:21:41Z"
Jan 28 18:34:31 crc kubenswrapper[4721]: I0128 18:34:31.384626 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa3af2d00ecbcf4d30c4c81191306dd45625f55032058b314ce2b91f6b2033e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:31Z is after 2025-08-24T17:21:41Z"
Jan 28 18:34:31 crc kubenswrapper[4721]: I0128 18:34:31.400979 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-76rx2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e3427a4-9a03-4a08-bf7f-7a5e96290ad6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:30Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:30Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4cj5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4cj5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:34:30Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-76rx2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:31Z is after 2025-08-24T17:21:41Z"
Jan 28 18:34:31 crc kubenswrapper[4721]: I0128 18:34:31.419688 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 18:34:31 crc kubenswrapper[4721]: I0128 18:34:31.419723 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 18:34:31 crc kubenswrapper[4721]: I0128 18:34:31.419732 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 18:34:31 crc kubenswrapper[4721]: I0128 18:34:31.419746 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 18:34:31 crc kubenswrapper[4721]: I0128 18:34:31.419757 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:34:31Z","lastTransitionTime":"2026-01-28T18:34:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 18:34:31 crc kubenswrapper[4721]: I0128 18:34:31.486650 4721 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-16 18:37:10.452832058 +0000 UTC
Jan 28 18:34:31 crc kubenswrapper[4721]: I0128 18:34:31.521892 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 18:34:31 crc kubenswrapper[4721]: I0128 18:34:31.521972 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 18:34:31 crc kubenswrapper[4721]: I0128 18:34:31.521986 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 18:34:31 crc kubenswrapper[4721]: I0128 18:34:31.522016 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 18:34:31 crc kubenswrapper[4721]: I0128 18:34:31.522033 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:34:31Z","lastTransitionTime":"2026-01-28T18:34:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 18:34:31 crc kubenswrapper[4721]: I0128 18:34:31.624089 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 18:34:31 crc kubenswrapper[4721]: I0128 18:34:31.624123 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 18:34:31 crc kubenswrapper[4721]: I0128 18:34:31.624131 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 18:34:31 crc kubenswrapper[4721]: I0128 18:34:31.624144 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 18:34:31 crc kubenswrapper[4721]: I0128 18:34:31.624155 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:34:31Z","lastTransitionTime":"2026-01-28T18:34:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 18:34:31 crc kubenswrapper[4721]: I0128 18:34:31.725927 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 18:34:31 crc kubenswrapper[4721]: I0128 18:34:31.725972 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 18:34:31 crc kubenswrapper[4721]: I0128 18:34:31.725983 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 18:34:31 crc kubenswrapper[4721]: I0128 18:34:31.725998 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 18:34:31 crc kubenswrapper[4721]: I0128 18:34:31.726010 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:34:31Z","lastTransitionTime":"2026-01-28T18:34:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 18:34:31 crc kubenswrapper[4721]: I0128 18:34:31.736337 4721 generic.go:334] "Generic (PLEG): container finished" podID="942c0bcf-8f75-42e8-a5c0-af4c640eb13c" containerID="d5222e9a5ac55f20acbe1105e84f22c96889f410246b7ef423236f5d3e216f43" exitCode=0
Jan 28 18:34:31 crc kubenswrapper[4721]: I0128 18:34:31.736404 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-7vsph" event={"ID":"942c0bcf-8f75-42e8-a5c0-af4c640eb13c","Type":"ContainerDied","Data":"d5222e9a5ac55f20acbe1105e84f22c96889f410246b7ef423236f5d3e216f43"}
Jan 28 18:34:31 crc kubenswrapper[4721]: I0128 18:34:31.736439 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-7vsph" event={"ID":"942c0bcf-8f75-42e8-a5c0-af4c640eb13c","Type":"ContainerStarted","Data":"4f486d38188cf3afecf58d9b4868e503774903a98b82a66c0423a41b28f6a6a1"}
Jan 28 18:34:31 crc kubenswrapper[4721]: I0128 18:34:31.738298 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-76rx2" event={"ID":"6e3427a4-9a03-4a08-bf7f-7a5e96290ad6","Type":"ContainerStarted","Data":"9843dea4333a77dd5b0005984b9f8e7c7c993b28f89f5d6432477bfde3383339"}
Jan 28 18:34:31 crc kubenswrapper[4721]: I0128 18:34:31.738387 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-76rx2" event={"ID":"6e3427a4-9a03-4a08-bf7f-7a5e96290ad6","Type":"ContainerStarted","Data":"bbd26672afb8ed608228f3f2101f430b1eaa5ccea0e8074fad597dec347c4522"}
Jan 28 18:34:31 crc kubenswrapper[4721]: I0128 18:34:31.738401 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-76rx2" event={"ID":"6e3427a4-9a03-4a08-bf7f-7a5e96290ad6","Type":"ContainerStarted","Data":"e1b76b0df40cb85ea27c73d7de7cb331552043014053ad184706d14a6e42b888"}
Jan 28 18:34:31 crc kubenswrapper[4721]: I0128 18:34:31.740227 4721 generic.go:334] "Generic (PLEG): container finished" podID="70686e42-b434-4ff9-9753-cfc870beef82" containerID="733c81890010402d252a1945284a85a3278b603ede433530865f680a1be02477" exitCode=0
Jan 28 18:34:31 crc kubenswrapper[4721]: I0128 18:34:31.740309 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wr282" event={"ID":"70686e42-b434-4ff9-9753-cfc870beef82","Type":"ContainerDied","Data":"733c81890010402d252a1945284a85a3278b603ede433530865f680a1be02477"}
Jan 28 18:34:31 crc kubenswrapper[4721]: I0128 18:34:31.740345 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wr282" event={"ID":"70686e42-b434-4ff9-9753-cfc870beef82","Type":"ContainerStarted","Data":"f09b4c32b88c09bbbda6325a1c46dc1a2127a8c6ad924249908667da133345b2"}
Jan 28 18:34:31 crc kubenswrapper[4721]: I0128 18:34:31.741468 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-lf92l" event={"ID":"20d04cbd-fcf1-4d48-9cca-1dd29b13c938","Type":"ContainerStarted","Data":"709d01fbeba51f6f21b60d9fd60aa9b44ba9e4f2504c19b7fd8d279f8c13c1e3"}
Jan 28 18:34:31 crc kubenswrapper[4721]: I0128 18:34:31.741498 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-lf92l" event={"ID":"20d04cbd-fcf1-4d48-9cca-1dd29b13c938","Type":"ContainerStarted","Data":"51f6ab36530e339634d0370d595fc14a3ea18b604689dd388c2a841aa1adb346"}
Jan 28 18:34:31 crc kubenswrapper[4721]: I0128 18:34:31.742990 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-rgqdt" event={"ID":"c0a22020-3f34-4895-beec-2ed5d829ea79","Type":"ContainerStarted","Data":"9d26691be7d95ffd613cc84f59222c67d29ab459d1c42ca07d20fe928d7fbc4a"}
Jan 28 18:34:31 crc kubenswrapper[4721]: I0128 18:34:31.743029 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-rgqdt" event={"ID":"c0a22020-3f34-4895-beec-2ed5d829ea79","Type":"ContainerStarted","Data":"53bfba68d93f2a2ddc6494887fc2459b1987d7bb83678bbd536aae6ef13d2b54"}
Jan 28 18:34:31 crc kubenswrapper[4721]: I0128 18:34:31.759548 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"50459cb7-bb46-4f2f-a119-03a6102ad146\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:33:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97a290731225e8b998c044c9547db36b5af73889929db5fbc37b6b4170f5dbb7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9efdfabfba8f11d2bdeafad77520260137fcfff3ca51ae77afd4ee9b141f602\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3c1b4d37e11f8a4ee9539e5ea3cce77b7d382133b9e59ccd9c3e8aa2b308754\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52a1199a0611555765afe8fcf017742788488064c958d0c75c46d51ddc2ff9d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://316506ed1e9f91314d427e1b6c3a44ee58d0c4a3a8a93b97bdf098703a6a2f34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f1a6a669a0341baa2d7fb17bcebfe702f96d1192ca35c3f2d00e463778271d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f1a6a669a0341baa2d7fb17bcebfe702f96d1192ca35c3f2d00e463778271d6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:33:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d3a248e15466c89ce3b237a22ce54365004ea86dc1541e1ac44e028f91bf19f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d3a248e15466c89ce3b237a22ce54365004ea86dc1541e1ac44e028f91bf19f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:34:02Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f2e1f3a63445bad45029602f52863bfdf2fab495711e3318d68fee042530acb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f2e1f3a63445bad45029602f52863bfdf2fab495711e3318d68fee042530acb1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:34:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:33:56Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:31Z is after 2025-08-24T17:21:41Z"
Jan 28 18:34:31 crc kubenswrapper[4721]: I0128 18:34:31.773596 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"94f18835-9a2d-4427-bc71-e4cd48b94c19\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:33:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:33:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:33:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a345c3bb49064600db61f16588229b6877293710e6c21c15d91746c2f1d50b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://93950542dfa7bda5686ef6d8ff927d14baafd149893c732658e9aa3916be64ae\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://486a6284a1e3930aac13368062b8796b4c271214dd08cb60c7ae2ce7b4a45a05\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca3243d75b5ce4aff178d82c45c7d1f765013233b5736600151adb4801aa462b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ca3243d75b5ce4aff178d82c45c7d1f765013233b5736600151adb4801aa462b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T18:34:21Z\\\",\\\"message\\\":\\\"le observer\\\\nW0128 18:34:20.724978 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0128 18:34:20.725313 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 18:34:20.726636 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2540251349/tls.crt::/tmp/serving-cert-2540251349/tls.key\\\\\\\"\\\\nI0128 18:34:21.177830 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0128 18:34:21.196258 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0128 18:34:21.196291 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0128 18:34:21.196317 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0128 18:34:21.196323 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0128 18:34:21.230729 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0128 18:34:21.230760 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0128 18:34:21.230771 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 18:34:21.230781 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 18:34:21.230789 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0128 18:34:21.230795 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0128 18:34:21.230800 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0128 18:34:21.230804 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0128 18:34:21.233981 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T18:34:20Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://90f1279d8262e042527207dbe56a1d08ba554650510bebfc53bd24cd84532026\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:06Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1499c7b625ec721799e4cf54e3de39ab75a752bbb0450a8eeffcdb6259f8a585\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1499c7b625ec721799e4cf54e3de39ab75a752bbb0450a8eeffcdb6259f8a585\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:33:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:33:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:31Z is after 2025-08-24T17:21:41Z"
Jan 28 18:34:31 crc kubenswrapper[4721]: I0128 18:34:31.786120 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"86fc797f-6769-4801-8cb0-b9f25c9ec29b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:33:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:33:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e8c8cd370cc146f15325497afef171437a3c7c3d4e82cda99a321bf847399bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f712f75bf0603b358c696f1a62f2363254ab926615ab377291c7083308e3f2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:33:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6143e461a697375ca79df96494f8cf4a575e622428db241f50a52ae1c67acbc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e37a3261383445df2e4050fb0b3f92c0afc6379c71eb061475783a72cd37459f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:33:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:31Z is after 2025-08-24T17:21:41Z"
Jan 28 18:34:31 crc kubenswrapper[4721]: I0128 18:34:31.798802 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:31Z is after 2025-08-24T17:21:41Z"
Jan 28 18:34:31 crc kubenswrapper[4721]: I0128 18:34:31.814349 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56147b9675ae31e5e8b4473346c69d90a1013ab084214a7fc34295f810b229a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5dc3277d9793a59f2e2a2b4d015782f383df20b594be435f894d24dbe5b837cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:31Z is after 2025-08-24T17:21:41Z"
Jan 28 18:34:31 crc kubenswrapper[4721]: I0128 18:34:31.828461 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 18:34:31 crc kubenswrapper[4721]: I0128 18:34:31.828507 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 18:34:31 crc kubenswrapper[4721]: I0128 18:34:31.828518 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 18:34:31 crc kubenswrapper[4721]: I0128 18:34:31.828534 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 18:34:31 crc kubenswrapper[4721]: I0128 18:34:31.828545 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:34:31Z","lastTransitionTime":"2026-01-28T18:34:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 18:34:31 crc kubenswrapper[4721]: I0128 18:34:31.834076 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wr282" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"70686e42-b434-4ff9-9753-cfc870beef82\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:30Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7lkbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7lkbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7lkbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7lkbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7lkbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7lkbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7lkbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7lkbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7lkbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:34:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-wr282\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:31Z is after 2025-08-24T17:21:41Z"
Jan 28 18:34:31 crc kubenswrapper[4721]: I0128 18:34:31.849003 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rgqdt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c0a22020-3f34-4895-beec-2ed5d829ea79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l86pm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:34:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rgqdt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:31Z is after 2025-08-24T17:21:41Z"
Jan 28 18:34:31 crc kubenswrapper[4721]: I0128 18:34:31.862328 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef61ab09457440960cf27586f65751f928ccc999db19fd16364262ce2449cd97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:31Z is after 2025-08-24T17:21:41Z"
Jan 28 18:34:31 crc kubenswrapper[4721]: I0128 18:34:31.872609 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-lf92l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"20d04cbd-fcf1-4d48-9cca-1dd29b13c938\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:30Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:30Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mvqr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:34:30Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-lf92l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:31Z is after 2025-08-24T17:21:41Z"
Jan 28 18:34:31 crc kubenswrapper[4721]: I0128 18:34:31.887758 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7vsph" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"942c0bcf-8f75-42e8-a5c0-af4c640eb13c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:30Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s84mk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5222e9a5ac55f20acbe1105e84f22c96889f410246b7ef423236f5d3e216f43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d5222e9a5ac55f20acbe1105e84f22c96889f410246b7ef423236f5d3e216f43\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s84mk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s84mk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reaso
n\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s84mk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s84mk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s84mk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s84mk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:34:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7vsph\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:31Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:31 crc 
kubenswrapper[4721]: I0128 18:34:31.899441 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:24Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:31Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:31 crc kubenswrapper[4721]: I0128 18:34:31.913306 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:31Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:31 crc kubenswrapper[4721]: I0128 18:34:31.926940 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa3af2d00ecbcf4d30c4c81191306dd45625f55032058b314ce2b91f6b2033e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:31Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:31 crc kubenswrapper[4721]: I0128 18:34:31.930857 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:34:31 crc kubenswrapper[4721]: I0128 18:34:31.930895 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:34:31 crc kubenswrapper[4721]: I0128 18:34:31.930904 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:34:31 crc kubenswrapper[4721]: I0128 18:34:31.930919 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:34:31 crc kubenswrapper[4721]: I0128 18:34:31.930927 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:34:31Z","lastTransitionTime":"2026-01-28T18:34:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:34:31 crc kubenswrapper[4721]: I0128 18:34:31.937737 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-76rx2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e3427a4-9a03-4a08-bf7f-7a5e96290ad6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:30Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:30Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4cj5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4cj5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:34:30Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-76rx2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:31Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:31 crc kubenswrapper[4721]: I0128 18:34:31.950640 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:31Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:31 crc kubenswrapper[4721]: I0128 18:34:31.966377 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:24Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:31Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:31 crc kubenswrapper[4721]: I0128 18:34:31.978126 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa3af2d00ecbcf4d30c4c81191306dd45625f55032058b314ce2b91f6b2033e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:31Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:31 crc kubenswrapper[4721]: I0128 18:34:31.988214 4721 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-76rx2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e3427a4-9a03-4a08-bf7f-7a5e96290ad6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9843dea4333a77dd5b0005984b9f8e7c7c993b28f89f5d6432477bfde3383339\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4cj5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bbd26672afb8ed608228f3f2101f430b1eaa5ccea0e8074fad597dec347c4522\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4cj5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:34:30Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-76rx2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:31Z is after 2025-08-24T17:21:41Z" Jan 28 
18:34:32 crc kubenswrapper[4721]: I0128 18:34:32.001987 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rgqdt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c0a22020-3f34-4895-beec-2ed5d829ea79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d26691be7d95ffd613cc84f59222c67d29ab459d1c42ca07d20fe928d7fbc4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l86pm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.
168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:34:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rgqdt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:32Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:32 crc kubenswrapper[4721]: I0128 18:34:32.024852 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"50459cb7-bb46-4f2f-a119-03a6102ad146\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:33:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97a290731225e8b998c044c9547db36b5af73889929db5fbc37b6b4170f5dbb7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9efdfabfba8f11d2bdeafad77520260137fcfff3ca51ae77afd4ee9b141f602\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3c1b4d37e11f8a4ee9539e5ea3cce77b7d382133b9e59ccd9c3e8aa2b308754\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee78
66be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52a1199a0611555765afe8fcf017742788488064c958d0c75c46d51ddc2ff9d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://316506ed1e9f91314d427e1b6c3a44ee58d0c4a3a8a93b97bdf098703a6a2f34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f1a6a669a0341baa2d7fb17bcebfe702f96d1192ca35c3f2d00e463778271d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f1a6a669a0341baa2d7fb17bcebfe702f96d1192ca35c3f2d00e463778271d6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:33:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d3a248e15466c89ce3b237a22ce54365004ea86dc1541e1ac44e028f91bf19f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d3a248e15466c89ce3b237a22ce54365004ea86dc1541e1ac44e028f91bf19f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:34:02Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f2e1f3a63445bad45029602f52863bfdf2fab495711e3318d68fee042530acb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f2e1f3a63445bad45029602f52863bfdf2fab495711e3318d68fee042530acb1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:34:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:33:56Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:32Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:32 crc kubenswrapper[4721]: I0128 18:34:32.033229 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:34:32 crc kubenswrapper[4721]: I0128 18:34:32.033393 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:34:32 crc kubenswrapper[4721]: I0128 18:34:32.033484 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:34:32 crc kubenswrapper[4721]: I0128 18:34:32.033573 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:34:32 crc kubenswrapper[4721]: I0128 18:34:32.033641 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:34:32Z","lastTransitionTime":"2026-01-28T18:34:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:34:32 crc kubenswrapper[4721]: I0128 18:34:32.037367 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"94f18835-9a2d-4427-bc71-e4cd48b94c19\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:33:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:33:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:33:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a345c3bb49064600db61f16588229b6877293710e6c21c15d91746c2f1d50b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://93950542dfa7bda5686ef6d8ff927d14baafd149893c732658e9aa3916be64ae\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://486a6284a1e3930aac13368062b8796b4c271214dd08cb60c7ae2ce7b4a45a05\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca3243d75b5ce4aff178d82c45c7d1f765013233b5736600151adb4801aa462b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ca3243d75b5ce4aff178d82c45c7d1f765013233b5736600151adb4801aa462b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T18:34:21Z\\\",\\\"message\\\":\\\"le observer\\\\nW0128 18:34:20.724978 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0128 18:34:20.725313 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 18:34:20.726636 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2540251349/tls.crt::/tmp/serving-cert-2540251349/tls.key\\\\\\\"\\\\nI0128 18:34:21.177830 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0128 18:34:21.196258 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0128 18:34:21.196291 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0128 18:34:21.196317 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0128 18:34:21.196323 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0128 18:34:21.230729 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0128 18:34:21.230760 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0128 18:34:21.230771 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 18:34:21.230781 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 18:34:21.230789 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0128 18:34:21.230795 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0128 18:34:21.230800 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0128 18:34:21.230804 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0128 18:34:21.233981 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T18:34:20Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://90f1279d8262e042527207dbe56a1d08ba554650510bebfc53bd24cd84532026\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:06Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1499c7b625ec721799e4cf54e3de39ab75a752bbb0450a8eeffcdb6259f8a585\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1499c7b625ec721799e4cf54e3de39ab75a752bbb0450a8eeffcdb6259f8a585\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:33:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:33:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:32Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:32 crc kubenswrapper[4721]: I0128 18:34:32.050653 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"86fc797f-6769-4801-8cb0-b9f25c9ec29b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:33:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:33:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e8c8cd370cc146f15325497afef171437a3c7c3d4e82cda99a321bf847399bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f712f75bf0603b358c696f1a62f2363254ab926615ab377291c7083308e3f2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:33:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6143e461a697375ca79df96494f8cf4a575e622428db241f50a52ae1c67acbc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e37a3261383445df2e4050fb0b3f92c0afc6379c71eb061475783a72cd37459f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:33:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:32Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:32 crc kubenswrapper[4721]: I0128 18:34:32.064359 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:32Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:32 crc kubenswrapper[4721]: I0128 18:34:32.079235 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56147b9675ae31e5e8b4473346c69d90a1013ab084214a7fc34295f810b229a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5dc3277d9793a59f2e2a2b4d015782f383df20b594be435f894d24dbe5b837cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:32Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:32 crc kubenswrapper[4721]: I0128 18:34:32.098845 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wr282" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"70686e42-b434-4ff9-9753-cfc870beef82\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7lkbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7lkbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7lkbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-7lkbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7lkbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7lkbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7lkbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7lkbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://733c81890010402d252a1945284a85a3278b603ede433530865f680a1be02477\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://733c81890010402d252a1945284a85a3278b603ede433530865f680a1be02477\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7lkbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:34:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-wr282\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:32Z 
is after 2025-08-24T17:21:41Z" Jan 28 18:34:32 crc kubenswrapper[4721]: I0128 18:34:32.112161 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef61ab09457440960cf27586f65751f928ccc999db19fd16364262ce2449cd97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:32Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:32 crc kubenswrapper[4721]: I0128 18:34:32.123049 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-lf92l" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"20d04cbd-fcf1-4d48-9cca-1dd29b13c938\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://709d01fbeba51f6f21b60d9fd60aa9b44ba9e4f2504c19b7fd8d279f8c13c1e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mvqr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:34:30Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-lf92l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:32Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:32 crc kubenswrapper[4721]: I0128 18:34:32.135856 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:34:32 crc kubenswrapper[4721]: I0128 18:34:32.135900 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:34:32 crc kubenswrapper[4721]: I0128 18:34:32.135915 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:34:32 crc kubenswrapper[4721]: I0128 18:34:32.135939 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:34:32 crc kubenswrapper[4721]: I0128 18:34:32.135952 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:34:32Z","lastTransitionTime":"2026-01-28T18:34:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:34:32 crc kubenswrapper[4721]: I0128 18:34:32.138528 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7vsph" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"942c0bcf-8f75-42e8-a5c0-af4c640eb13c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:30Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s84mk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5222e9a5ac55f20acbe1105e84f22c96889f410246b7ef423236f5d3e216f43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d5222e9a5ac55f20acbe1105e84f22c96889f410246b7ef423236f5d3e216f43\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":
\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s84mk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s84mk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s84mk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s84mk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\"
,\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s84mk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s84mk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:34:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7vsph\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:32Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:32 crc kubenswrapper[4721]: I0128 18:34:32.238225 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:34:32 crc kubenswrapper[4721]: I0128 18:34:32.238261 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:34:32 crc kubenswrapper[4721]: I0128 18:34:32.238274 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:34:32 crc kubenswrapper[4721]: I0128 18:34:32.238293 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:34:32 crc kubenswrapper[4721]: I0128 18:34:32.238305 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:34:32Z","lastTransitionTime":"2026-01-28T18:34:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:34:32 crc kubenswrapper[4721]: I0128 18:34:32.249298 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 18:34:32 crc kubenswrapper[4721]: I0128 18:34:32.249430 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 18:34:32 crc kubenswrapper[4721]: E0128 18:34:32.249639 4721 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 28 18:34:32 crc kubenswrapper[4721]: E0128 18:34:32.249902 4721 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 28 18:34:32 crc kubenswrapper[4721]: E0128 18:34:32.249925 4721 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 28 18:34:32 crc kubenswrapper[4721]: E0128 18:34:32.249985 4721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-28 18:34:40.24996734 +0000 UTC m=+45.975272900 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 28 18:34:32 crc kubenswrapper[4721]: I0128 18:34:32.250049 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 18:34:32 crc kubenswrapper[4721]: E0128 18:34:32.250145 4721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 18:34:40.250062083 +0000 UTC m=+45.975367643 (durationBeforeRetry 8s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 18:34:32 crc kubenswrapper[4721]: E0128 18:34:32.250262 4721 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 28 18:34:32 crc kubenswrapper[4721]: E0128 18:34:32.250390 4721 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 28 18:34:32 crc kubenswrapper[4721]: E0128 18:34:32.250407 4721 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 28 18:34:32 crc kubenswrapper[4721]: E0128 18:34:32.250451 4721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-28 18:34:40.250437755 +0000 UTC m=+45.975743315 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 28 18:34:32 crc kubenswrapper[4721]: I0128 18:34:32.250272 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 18:34:32 crc kubenswrapper[4721]: I0128 18:34:32.250493 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 18:34:32 crc kubenswrapper[4721]: E0128 18:34:32.250558 4721 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 28 18:34:32 crc kubenswrapper[4721]: E0128 18:34:32.250590 4721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. 
No retries permitted until 2026-01-28 18:34:40.25058097 +0000 UTC m=+45.975886620 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 28 18:34:32 crc kubenswrapper[4721]: E0128 18:34:32.250780 4721 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 28 18:34:32 crc kubenswrapper[4721]: E0128 18:34:32.250896 4721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-28 18:34:40.250885309 +0000 UTC m=+45.976190869 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 28 18:34:32 crc kubenswrapper[4721]: I0128 18:34:32.341443 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:34:32 crc kubenswrapper[4721]: I0128 18:34:32.341480 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:34:32 crc kubenswrapper[4721]: I0128 18:34:32.341489 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:34:32 crc kubenswrapper[4721]: I0128 18:34:32.341508 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:34:32 crc kubenswrapper[4721]: I0128 18:34:32.341521 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:34:32Z","lastTransitionTime":"2026-01-28T18:34:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:34:32 crc kubenswrapper[4721]: I0128 18:34:32.430816 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-rk2l2"] Jan 28 18:34:32 crc kubenswrapper[4721]: I0128 18:34:32.431253 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-rk2l2" Jan 28 18:34:32 crc kubenswrapper[4721]: I0128 18:34:32.433934 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Jan 28 18:34:32 crc kubenswrapper[4721]: I0128 18:34:32.433938 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Jan 28 18:34:32 crc kubenswrapper[4721]: I0128 18:34:32.434088 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Jan 28 18:34:32 crc kubenswrapper[4721]: I0128 18:34:32.434513 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Jan 28 18:34:32 crc kubenswrapper[4721]: I0128 18:34:32.443858 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:34:32 crc kubenswrapper[4721]: I0128 18:34:32.443900 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:34:32 crc kubenswrapper[4721]: I0128 18:34:32.443911 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:34:32 crc kubenswrapper[4721]: I0128 18:34:32.443927 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:34:32 crc kubenswrapper[4721]: I0128 18:34:32.443938 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:34:32Z","lastTransitionTime":"2026-01-28T18:34:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:34:32 crc kubenswrapper[4721]: I0128 18:34:32.449113 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:24Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:32Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:32 crc kubenswrapper[4721]: I0128 18:34:32.465352 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa3af2d00ecbcf4d30c4c81191306dd45625f55032058b314ce2b91f6b2033e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:32Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:32 crc kubenswrapper[4721]: I0128 18:34:32.477266 4721 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-76rx2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e3427a4-9a03-4a08-bf7f-7a5e96290ad6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9843dea4333a77dd5b0005984b9f8e7c7c993b28f89f5d6432477bfde3383339\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4cj5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bbd26672afb8ed608228f3f2101f430b1eaa5ccea0e8074fad597dec347c4522\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4cj5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:34:30Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-76rx2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:32Z is after 2025-08-24T17:21:41Z" Jan 28 
18:34:32 crc kubenswrapper[4721]: I0128 18:34:32.487045 4721 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-26 17:04:50.360328056 +0000 UTC Jan 28 18:34:32 crc kubenswrapper[4721]: I0128 18:34:32.491849 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:24Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:32Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:32 crc kubenswrapper[4721]: I0128 18:34:32.503544 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-rk2l2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bdd30376-1599-4efc-bb55-7585e8702b60\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:32Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:32Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wknj5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:34:32Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-rk2l2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:32Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:32 crc kubenswrapper[4721]: I0128 18:34:32.516278 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"86fc797f-6769-4801-8cb0-b9f25c9ec29b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:33:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:33:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e8c8cd370cc146f15325497afef171437a3c7c3d4e82cda99a321bf847399bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f712f75bf0603b358c696f1a62f2363254ab926615ab377291c7083308e3f2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:33:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6143e461a697375ca79df96494f8cf4a575e622428db241f50a52ae1c67acbc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e37a3261383445df2e4050fb0b3f92c0afc6379c71eb061475783a72cd37459f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:33:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:32Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:32 crc kubenswrapper[4721]: I0128 18:34:32.528060 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 18:34:32 crc kubenswrapper[4721]: I0128 18:34:32.528093 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 18:34:32 crc kubenswrapper[4721]: I0128 18:34:32.528077 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 18:34:32 crc kubenswrapper[4721]: I0128 18:34:32.528123 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:32Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:32 crc kubenswrapper[4721]: E0128 18:34:32.528280 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 18:34:32 crc kubenswrapper[4721]: E0128 18:34:32.528208 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 18:34:32 crc kubenswrapper[4721]: E0128 18:34:32.528407 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 18:34:32 crc kubenswrapper[4721]: I0128 18:34:32.540004 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56147b9675ae31e5e8b4473346c69d90a1013ab084214a7fc34295f810b229a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5dc3277d9793a59f2e2a2b4d015782f383df20b594be435f894d24dbe5b837cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:32Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:32 crc kubenswrapper[4721]: I0128 18:34:32.546402 4721 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientMemory" Jan 28 18:34:32 crc kubenswrapper[4721]: I0128 18:34:32.546437 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:34:32 crc kubenswrapper[4721]: I0128 18:34:32.546447 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:34:32 crc kubenswrapper[4721]: I0128 18:34:32.546463 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:34:32 crc kubenswrapper[4721]: I0128 18:34:32.546475 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:34:32Z","lastTransitionTime":"2026-01-28T18:34:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:34:32 crc kubenswrapper[4721]: I0128 18:34:32.554124 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/bdd30376-1599-4efc-bb55-7585e8702b60-host\") pod \"node-ca-rk2l2\" (UID: \"bdd30376-1599-4efc-bb55-7585e8702b60\") " pod="openshift-image-registry/node-ca-rk2l2" Jan 28 18:34:32 crc kubenswrapper[4721]: I0128 18:34:32.554198 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wknj5\" (UniqueName: \"kubernetes.io/projected/bdd30376-1599-4efc-bb55-7585e8702b60-kube-api-access-wknj5\") pod \"node-ca-rk2l2\" (UID: \"bdd30376-1599-4efc-bb55-7585e8702b60\") " pod="openshift-image-registry/node-ca-rk2l2" Jan 28 18:34:32 crc kubenswrapper[4721]: I0128 18:34:32.554222 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/bdd30376-1599-4efc-bb55-7585e8702b60-serviceca\") pod \"node-ca-rk2l2\" (UID: \"bdd30376-1599-4efc-bb55-7585e8702b60\") " pod="openshift-image-registry/node-ca-rk2l2" Jan 28 18:34:32 crc kubenswrapper[4721]: I0128 18:34:32.558325 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wr282" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"70686e42-b434-4ff9-9753-cfc870beef82\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:30Z\\\",\\\"message\\\":\\\"containers with unready 
status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7lkbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7lkbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7lkbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overr
ides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7lkbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7lkbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7lkbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"n
ame\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7lkbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7lkbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://733c81890010402d252a1945284a85a3278b603ede433530865f680a1be02477\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://733c81890010402d252a1945284a85a3278b603ede433530865f680a1be02477\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7lkbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:34:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-wr282\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:32Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:32 crc kubenswrapper[4721]: I0128 18:34:32.572573 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rgqdt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c0a22020-3f34-4895-beec-2ed5d829ea79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d26691be7d95ffd613cc84f59222c67d29ab459d1c42ca07d20fe928d7fbc4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l86pm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\
":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:34:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rgqdt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:32Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:32 crc kubenswrapper[4721]: I0128 18:34:32.596113 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"50459cb7-bb46-4f2f-a119-03a6102ad146\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:33:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97a290731225e8b998c044c9547db36b5af73889929db5fbc37b6b4170f5dbb7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9efdfabfba8f11d2bdeafad77520260137fcfff3ca51ae77afd4ee9b141f602\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3c1b4d37e11f8a4ee9539e5ea3cce77b7d382133b9e59ccd9c3e
8aa2b308754\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52a1199a0611555765afe8fcf017742788488064c958d0c75c46d51ddc2ff9d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://316506ed1e9f91314d427e1b6c3a44ee58d0c4a3a8a93b97bdf098703a6a2f34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f1a6a669a0341baa2d7fb17bcebfe702f96d1192ca35c3f2d00e463778271d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f1a6a669a0341baa2d7fb17bcebfe702f96d1192ca35c3f2d00e463778271d6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:33:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d3a248e15466c89ce3b237a22ce54365004ea86dc1541e1ac44e028f91bf19f\\\",\\\"image\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d3a248e15466c89ce3b237a22ce54365004ea86dc1541e1ac44e028f91bf19f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:34:02Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f2e1f3a63445bad45029602f52863bfdf2fab495711e3318d68fee042530acb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f2e1f3a63445bad45029602f52863bfdf2fab495711e3318d68fee042530acb1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:34:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:33:56Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:32Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:32 crc kubenswrapper[4721]: I0128 18:34:32.609864 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"94f18835-9a2d-4427-bc71-e4cd48b94c19\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:33:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:33:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:33:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a345c3bb49064600db61f16588229b6877293710e6c21c15d91746c2f1d50b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://93950542dfa7bda5686ef6d8ff927d14baafd149893c732658e9aa3916be64ae\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://486a6284a1e3930aac13368062b8796b4c271214dd08cb60c7ae2ce7b4a45a05\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca3243d75b5ce4aff178d82c45c7d1f765013233b5736600151adb4801aa462b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ca3243d75b5ce4aff178d82c45c7d1f765013233b5736600151adb4801aa462b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T18:34:21Z\\\",\\\"message\\\":\\\"le observer\\\\nW0128 18:34:20.724978 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0128 18:34:20.725313 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 18:34:20.726636 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2540251349/tls.crt::/tmp/serving-cert-2540251349/tls.key\\\\\\\"\\\\nI0128 18:34:21.177830 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0128 18:34:21.196258 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0128 18:34:21.196291 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0128 18:34:21.196317 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0128 18:34:21.196323 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0128 18:34:21.230729 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0128 18:34:21.230760 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0128 18:34:21.230771 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 18:34:21.230781 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 18:34:21.230789 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0128 18:34:21.230795 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0128 18:34:21.230800 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0128 18:34:21.230804 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0128 18:34:21.233981 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T18:34:20Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://90f1279d8262e042527207dbe56a1d08ba554650510bebfc53bd24cd84532026\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:06Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1499c7b625ec721799e4cf54e3de39ab75a752bbb0450a8eeffcdb6259f8a585\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1499c7b625ec721799e4cf54e3de39ab75a752bbb0450a8eeffcdb6259f8a585\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:33:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:33:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:32Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:32 crc kubenswrapper[4721]: I0128 18:34:32.627905 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7vsph" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"942c0bcf-8f75-42e8-a5c0-af4c640eb13c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:30Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s84mk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5222e9a5ac55f20acbe1105e84f22c96889f410246b7ef423236f5d3e216f43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d5222e9a5ac55f20acbe1105e84f22c96889f410246b7ef423236f5d3e216f43\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s84mk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\
\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s84mk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s84mk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s84mk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s84mk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s84mk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"po
dIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:34:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7vsph\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:32Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:32 crc kubenswrapper[4721]: I0128 18:34:32.644696 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef61ab09457440960cf27586f65751f928ccc999db19fd16364262ce2449cd97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:32Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:32 crc kubenswrapper[4721]: I0128 18:34:32.649847 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:34:32 crc kubenswrapper[4721]: I0128 18:34:32.649889 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:34:32 crc kubenswrapper[4721]: I0128 18:34:32.649898 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:34:32 crc kubenswrapper[4721]: I0128 18:34:32.649915 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:34:32 crc 
kubenswrapper[4721]: I0128 18:34:32.649927 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:34:32Z","lastTransitionTime":"2026-01-28T18:34:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:34:32 crc kubenswrapper[4721]: I0128 18:34:32.655128 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-lf92l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"20d04cbd-fcf1-4d48-9cca-1dd29b13c938\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://709d01fbeba51f6f21b60d9fd60aa9b44ba9e4f2504c19b7fd8d279f8c13c1e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mvqr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:34:30Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-lf92l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:32Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:32 crc kubenswrapper[4721]: I0128 18:34:32.655430 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wknj5\" (UniqueName: \"kubernetes.io/projected/bdd30376-1599-4efc-bb55-7585e8702b60-kube-api-access-wknj5\") pod \"node-ca-rk2l2\" (UID: \"bdd30376-1599-4efc-bb55-7585e8702b60\") " pod="openshift-image-registry/node-ca-rk2l2" Jan 28 18:34:32 crc 
kubenswrapper[4721]: I0128 18:34:32.655499 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/bdd30376-1599-4efc-bb55-7585e8702b60-serviceca\") pod \"node-ca-rk2l2\" (UID: \"bdd30376-1599-4efc-bb55-7585e8702b60\") " pod="openshift-image-registry/node-ca-rk2l2" Jan 28 18:34:32 crc kubenswrapper[4721]: I0128 18:34:32.655554 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/bdd30376-1599-4efc-bb55-7585e8702b60-host\") pod \"node-ca-rk2l2\" (UID: \"bdd30376-1599-4efc-bb55-7585e8702b60\") " pod="openshift-image-registry/node-ca-rk2l2" Jan 28 18:34:32 crc kubenswrapper[4721]: I0128 18:34:32.655623 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/bdd30376-1599-4efc-bb55-7585e8702b60-host\") pod \"node-ca-rk2l2\" (UID: \"bdd30376-1599-4efc-bb55-7585e8702b60\") " pod="openshift-image-registry/node-ca-rk2l2" Jan 28 18:34:32 crc kubenswrapper[4721]: I0128 18:34:32.656584 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/bdd30376-1599-4efc-bb55-7585e8702b60-serviceca\") pod \"node-ca-rk2l2\" (UID: \"bdd30376-1599-4efc-bb55-7585e8702b60\") " pod="openshift-image-registry/node-ca-rk2l2" Jan 28 18:34:32 crc kubenswrapper[4721]: I0128 18:34:32.673982 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wknj5\" (UniqueName: \"kubernetes.io/projected/bdd30376-1599-4efc-bb55-7585e8702b60-kube-api-access-wknj5\") pod \"node-ca-rk2l2\" (UID: \"bdd30376-1599-4efc-bb55-7585e8702b60\") " pod="openshift-image-registry/node-ca-rk2l2" Jan 28 18:34:32 crc kubenswrapper[4721]: I0128 18:34:32.744534 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-rk2l2" Jan 28 18:34:32 crc kubenswrapper[4721]: I0128 18:34:32.752000 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:34:32 crc kubenswrapper[4721]: I0128 18:34:32.752042 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:34:32 crc kubenswrapper[4721]: I0128 18:34:32.752052 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:34:32 crc kubenswrapper[4721]: I0128 18:34:32.752069 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:34:32 crc kubenswrapper[4721]: I0128 18:34:32.752083 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:34:32Z","lastTransitionTime":"2026-01-28T18:34:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:34:32 crc kubenswrapper[4721]: I0128 18:34:32.756722 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wr282" event={"ID":"70686e42-b434-4ff9-9753-cfc870beef82","Type":"ContainerStarted","Data":"b2cc9ef8160c40e72ae736136ef051b06552caea2539fad7969c5f923eba0c2c"} Jan 28 18:34:32 crc kubenswrapper[4721]: I0128 18:34:32.756759 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wr282" event={"ID":"70686e42-b434-4ff9-9753-cfc870beef82","Type":"ContainerStarted","Data":"7df50ebf5999c8b004f27f801fc166377523c7b42171fec957ab82c88b529794"} Jan 28 18:34:32 crc kubenswrapper[4721]: I0128 18:34:32.756769 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wr282" event={"ID":"70686e42-b434-4ff9-9753-cfc870beef82","Type":"ContainerStarted","Data":"989948e16979a2cd8568ed93334056da9feb5bd7226f4124428dcffc09f13d41"} Jan 28 18:34:32 crc kubenswrapper[4721]: I0128 18:34:32.756778 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wr282" event={"ID":"70686e42-b434-4ff9-9753-cfc870beef82","Type":"ContainerStarted","Data":"9ac8a8ea8e677c233fbcb1ac69446f0251b2e390e84a39cd77c7ccf1a2041908"} Jan 28 18:34:32 crc kubenswrapper[4721]: I0128 18:34:32.756786 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wr282" event={"ID":"70686e42-b434-4ff9-9753-cfc870beef82","Type":"ContainerStarted","Data":"11610a0739792fa0417c894ad5b1c0d46d7188b585f8f4c1a1e7e66cd6d8bd46"} Jan 28 18:34:32 crc kubenswrapper[4721]: I0128 18:34:32.756794 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wr282" event={"ID":"70686e42-b434-4ff9-9753-cfc870beef82","Type":"ContainerStarted","Data":"44279a74a34d26b6eee05ac26d51a21492dddab4288ca5e09665c191cbacd90c"} Jan 28 18:34:32 crc kubenswrapper[4721]: I0128 18:34:32.758273 4721 generic.go:334] "Generic (PLEG): container finished" podID="942c0bcf-8f75-42e8-a5c0-af4c640eb13c" containerID="e702f148f20abb9e5e9453eb1b9ae0cc094138beb1fa048753ff3b497517e942" exitCode=0 Jan 28 18:34:32 crc kubenswrapper[4721]: I0128 18:34:32.758302 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-7vsph" event={"ID":"942c0bcf-8f75-42e8-a5c0-af4c640eb13c","Type":"ContainerDied","Data":"e702f148f20abb9e5e9453eb1b9ae0cc094138beb1fa048753ff3b497517e942"} Jan 28 18:34:32 crc kubenswrapper[4721]: W0128 18:34:32.762049 4721 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbdd30376_1599_4efc_bb55_7585e8702b60.slice/crio-21e98732b7550ee73fc54049ba376c1b060f798c4e747fe08d3aa7e9416538b1 WatchSource:0}: Error finding container 21e98732b7550ee73fc54049ba376c1b060f798c4e747fe08d3aa7e9416538b1: Status 404 returned error can't find the container with id 21e98732b7550ee73fc54049ba376c1b060f798c4e747fe08d3aa7e9416538b1 Jan 28 18:34:32 crc kubenswrapper[4721]: I0128 18:34:32.769728 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-rk2l2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bdd30376-1599-4efc-bb55-7585e8702b60\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:32Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:32Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wknj5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:34:32Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-rk2l2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:32Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:32 crc kubenswrapper[4721]: I0128 18:34:32.790081 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wr282" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"70686e42-b434-4ff9-9753-cfc870beef82\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node 
kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7lkbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7lkbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7lkbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\
\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7lkbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7lkbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7lkbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\
",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7lkbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7lkbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://733c81890010402d252a1945284a85a3278b603ede433530865f680a1be02477\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://733c81890010402d252a1945284a85a3278b603ede433530865f680a1be02477\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7lkbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\
"}],\\\"startTime\\\":\\\"2026-01-28T18:34:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-wr282\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:32Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:32 crc kubenswrapper[4721]: I0128 18:34:32.809308 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rgqdt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c0a22020-3f34-4895-beec-2ed5d829ea79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d26691be7d95ffd613cc84f59222c67d29ab459d1c42ca07d20fe928d7fbc4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\
\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l86pm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:34:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rgqdt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:32Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:32 crc kubenswrapper[4721]: I0128 18:34:32.829351 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"50459cb7-bb46-4f2f-a119-03a6102ad146\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:33:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97a290731225e8b998c044c9547db36b5af73889929db5fbc37b6b4170f5dbb7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9efdfabfba8f11d2bdeafad77520260137fcfff3ca51ae77afd4ee9b141f602\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resou
rces\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3c1b4d37e11f8a4ee9539e5ea3cce77b7d382133b9e59ccd9c3e8aa2b308754\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52a1199a0611555765afe8fcf017742788488064c958d0c75c46d51ddc2ff9d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://316506ed1e9f91314d427e1b6c3a44ee58d0c4a3a8a93b97bdf098703a6a2f34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f1a6a669a0341baa2d7fb17bcebfe702f96d1192ca35c3f2d00e463778271d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f1a6a669a0341baa2d7fb17bcebfe702f96d1192ca35c3f2d00e463778271d6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:00Z\\\",\\\"reason\\\":\\\"Compl
eted\\\",\\\"startedAt\\\":\\\"2026-01-28T18:33:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d3a248e15466c89ce3b237a22ce54365004ea86dc1541e1ac44e028f91bf19f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d3a248e15466c89ce3b237a22ce54365004ea86dc1541e1ac44e028f91bf19f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:34:02Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f2e1f3a63445bad45029602f52863bfdf2fab495711e3318d68fee042530acb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f2e1f3a63445bad45029602f52863bfdf2fab495711e3318d68fee042530acb1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:34:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:33:56Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:32Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:32 crc kubenswrapper[4721]: I0128 18:34:32.844806 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"94f18835-9a2d-4427-bc71-e4cd48b94c19\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:33:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:33:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:33:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a345c3bb49064600db61f16588229b6877293710e6c21c15d91746c2f1d50b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://93950542dfa7bda5686ef6d8ff927d14baafd149893c732658e9aa3916be64ae\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://486a6284a1e3930aac13368062b8796b4c271214dd08cb60c7ae2ce7b4a45a05\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca3243d75b5ce4aff178d82c45c7d1f765013233b5736600151adb4801aa462b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\
\":\\\"cri-o://ca3243d75b5ce4aff178d82c45c7d1f765013233b5736600151adb4801aa462b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T18:34:21Z\\\",\\\"message\\\":\\\"le observer\\\\nW0128 18:34:20.724978 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0128 18:34:20.725313 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 18:34:20.726636 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2540251349/tls.crt::/tmp/serving-cert-2540251349/tls.key\\\\\\\"\\\\nI0128 18:34:21.177830 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0128 18:34:21.196258 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0128 18:34:21.196291 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0128 18:34:21.196317 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0128 18:34:21.196323 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0128 18:34:21.230729 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0128 18:34:21.230760 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0128 18:34:21.230771 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 18:34:21.230781 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 18:34:21.230789 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0128 18:34:21.230795 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0128 18:34:21.230800 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0128 18:34:21.230804 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0128 18:34:21.233981 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T18:34:20Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://90f1279d8262e042527207dbe56a1d08ba554650510bebfc53bd24cd84532026\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:06Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1499c7b625ec721799e4cf54e3de39ab75a752bbb0450a8eeffcdb6259f8a585\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1499c7b625ec721799e4cf54e3de39ab75a752bbb0450a8eeffcdb6259f8a585\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:33:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:33:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:32Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:32 crc kubenswrapper[4721]: I0128 18:34:32.858872 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:34:32 crc kubenswrapper[4721]: I0128 18:34:32.858921 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:34:32 crc kubenswrapper[4721]: I0128 18:34:32.858933 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:34:32 crc kubenswrapper[4721]: I0128 18:34:32.859075 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:34:32 crc kubenswrapper[4721]: I0128 18:34:32.859098 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:34:32Z","lastTransitionTime":"2026-01-28T18:34:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:34:32 crc kubenswrapper[4721]: I0128 18:34:32.859889 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"86fc797f-6769-4801-8cb0-b9f25c9ec29b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:33:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:33:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e8c8cd370cc146f15325497afef171437a3c7c3d4e82cda99a321bf847399bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f712f75bf0603b358c696f1a62f2363254ab926615ab377291c7083308e3f2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:33:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6143e461a697375ca79df96494f8cf4a575e622428db241f50a52ae1c67acbc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:03
Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e37a3261383445df2e4050fb0b3f92c0afc6379c71eb061475783a72cd37459f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:33:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:32Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:32 crc kubenswrapper[4721]: I0128 18:34:32.873617 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:32Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:32 crc kubenswrapper[4721]: I0128 18:34:32.894035 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56147b9675ae31e5e8b4473346c69d90a1013ab084214a7fc34295f810b229a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5dc3277d9793a59f2e2a2b4d015782f383df20b594be435f894d24dbe5b837cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:32Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:32 crc kubenswrapper[4721]: I0128 18:34:32.911036 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef61ab09457440960cf27586f65751f928ccc999db19fd16364262ce2449cd97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:32Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:32 crc kubenswrapper[4721]: I0128 18:34:32.924964 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-lf92l" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"20d04cbd-fcf1-4d48-9cca-1dd29b13c938\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://709d01fbeba51f6f21b60d9fd60aa9b44ba9e4f2504c19b7fd8d279f8c13c1e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mvqr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:34:30Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-lf92l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:32Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:32 crc kubenswrapper[4721]: I0128 18:34:32.942431 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7vsph" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"942c0bcf-8f75-42e8-a5c0-af4c640eb13c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:30Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s84mk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5222e9a5ac55f20acbe1105e84f22c96889f410246b7ef423236f5d3e216f43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d5222e9a5ac55f20acbe1105e84f22c96889f410246b7ef423236f5d3e216f43\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s84mk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e702f148f20abb9e5e9453eb1b9ae0cc094138beb1fa048753ff3b497517e942\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e702f148f20abb9e5e9453eb1b9ae0cc094138beb1fa048753ff3b497517e942\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\
"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s84mk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s84mk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s84mk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s84mk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":
{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s84mk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:34:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7vsph\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:32Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:32 crc kubenswrapper[4721]: I0128 18:34:32.960516 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:24Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:32Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:32 crc kubenswrapper[4721]: I0128 18:34:32.962089 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:34:32 crc kubenswrapper[4721]: I0128 18:34:32.962134 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:34:32 crc kubenswrapper[4721]: I0128 18:34:32.962148 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:34:32 crc kubenswrapper[4721]: I0128 18:34:32.962185 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:34:32 crc kubenswrapper[4721]: I0128 18:34:32.962199 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:34:32Z","lastTransitionTime":"2026-01-28T18:34:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
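
The status-update failures above all share one root cause: every PATCH against a pod's status is routed through the pod.network-node-identity.openshift.io validating webhook at https://127.0.0.1:9743, and that webhook's serving certificate expired on 2025-08-24, long before the node's current clock time of 2026-01-28. A minimal sketch for confirming the served certificate's validity window from the node, assuming Python 3 plus the third-party cryptography package are available (neither appears in the log, and the file name is hypothetical):

    # check_webhook_cert.py -- print the validity window of the certificate
    # served by the webhook endpoint named in the errors above.
    import ssl
    from cryptography import x509  # third-party; assumed to be installed

    HOST, PORT = "127.0.0.1", 9743  # endpoint taken from the log messages

    # get_server_certificate() performs no verification, so it can retrieve
    # a certificate even after it has expired.
    pem = ssl.get_server_certificate((HOST, PORT))
    cert = x509.load_pem_x509_certificate(pem.encode())

    print("not valid before:", cert.not_valid_before_utc)  # not_valid_before on older cryptography
    print("not valid after: ", cert.not_valid_after_utc)

Until that certificate is regenerated, none of these status patches can succeed, which is why the same error repeats for every pod on the node.
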
Has your network provider started?"} Jan 28 18:34:32 crc kubenswrapper[4721]: I0128 18:34:32.976159 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:24Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:32Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:32 crc kubenswrapper[4721]: I0128 18:34:32.987941 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa3af2d00ecbcf4d30c4c81191306dd45625f55032058b314ce2b91f6b2033e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:32Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:32 crc kubenswrapper[4721]: I0128 18:34:32.998437 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-76rx2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e3427a4-9a03-4a08-bf7f-7a5e96290ad6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9843dea4333a77dd5b0005984b9f8e7c7c993b28f89f5d6432477bfde3383339\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4cj5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bbd26672afb8ed608228f3f2101f430b1eaa5ccea0e8074fad597dec347c4522\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4cj5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:34:30Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-76rx2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:32Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:33 crc kubenswrapper[4721]: I0128 18:34:33.063886 4721 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:34:33 crc kubenswrapper[4721]: I0128 18:34:33.063921 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:34:33 crc kubenswrapper[4721]: I0128 18:34:33.063930 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:34:33 crc kubenswrapper[4721]: I0128 18:34:33.063954 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:34:33 crc kubenswrapper[4721]: I0128 18:34:33.063966 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:34:33Z","lastTransitionTime":"2026-01-28T18:34:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:34:33 crc kubenswrapper[4721]: I0128 18:34:33.165840 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:34:33 crc kubenswrapper[4721]: I0128 18:34:33.165870 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:34:33 crc kubenswrapper[4721]: I0128 18:34:33.165879 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:34:33 crc kubenswrapper[4721]: I0128 18:34:33.165891 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:34:33 crc kubenswrapper[4721]: I0128 18:34:33.165901 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:34:33Z","lastTransitionTime":"2026-01-28T18:34:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:34:33 crc kubenswrapper[4721]: I0128 18:34:33.267675 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:34:33 crc kubenswrapper[4721]: I0128 18:34:33.267716 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:34:33 crc kubenswrapper[4721]: I0128 18:34:33.267727 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:34:33 crc kubenswrapper[4721]: I0128 18:34:33.267744 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:34:33 crc kubenswrapper[4721]: I0128 18:34:33.267764 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:34:33Z","lastTransitionTime":"2026-01-28T18:34:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:34:33 crc kubenswrapper[4721]: I0128 18:34:33.370483 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:34:33 crc kubenswrapper[4721]: I0128 18:34:33.370514 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:34:33 crc kubenswrapper[4721]: I0128 18:34:33.370523 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:34:33 crc kubenswrapper[4721]: I0128 18:34:33.370538 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:34:33 crc kubenswrapper[4721]: I0128 18:34:33.370548 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:34:33Z","lastTransitionTime":"2026-01-28T18:34:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:34:33 crc kubenswrapper[4721]: I0128 18:34:33.473261 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:34:33 crc kubenswrapper[4721]: I0128 18:34:33.473303 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:34:33 crc kubenswrapper[4721]: I0128 18:34:33.473315 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:34:33 crc kubenswrapper[4721]: I0128 18:34:33.473333 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:34:33 crc kubenswrapper[4721]: I0128 18:34:33.473344 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:34:33Z","lastTransitionTime":"2026-01-28T18:34:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:34:33 crc kubenswrapper[4721]: I0128 18:34:33.487149 4721 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-09 09:30:34.91839423 +0000 UTC Jan 28 18:34:33 crc kubenswrapper[4721]: I0128 18:34:33.575765 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:34:33 crc kubenswrapper[4721]: I0128 18:34:33.575803 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:34:33 crc kubenswrapper[4721]: I0128 18:34:33.575822 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:34:33 crc kubenswrapper[4721]: I0128 18:34:33.575840 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:34:33 crc kubenswrapper[4721]: I0128 18:34:33.575851 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:34:33Z","lastTransitionTime":"2026-01-28T18:34:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:34:33 crc kubenswrapper[4721]: I0128 18:34:33.678712 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:34:33 crc kubenswrapper[4721]: I0128 18:34:33.678751 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:34:33 crc kubenswrapper[4721]: I0128 18:34:33.678763 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:34:33 crc kubenswrapper[4721]: I0128 18:34:33.678780 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:34:33 crc kubenswrapper[4721]: I0128 18:34:33.678800 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:34:33Z","lastTransitionTime":"2026-01-28T18:34:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
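
The kubelet-serving certificate line above is also worth decoding: client-go's certificate manager schedules rotation at a jittered point (roughly 70-90% of the certificate's lifetime in current client-go), and the deadline it picked (2026-01-09) is already in the past at the log's current time, so rotation is treated as due immediately. A worked sketch of that computation, assuming a one-year lifetime purely for illustration (the log gives only the expiry, not the issue date; a 365-day certificate ending 2026-02-24 does place 2026-01-09 inside the 70-90% window):

    # rotation_deadline.py -- reproduce the jittered rotation deadline:
    # deadline = notBefore + r * (notAfter - notBefore), r drawn from [0.7, 0.9).
    import random
    from datetime import datetime, timedelta, timezone

    not_after = datetime(2026, 2, 24, 5, 53, 3, tzinfo=timezone.utc)  # from the log
    not_before = not_after - timedelta(days=365)                      # assumed lifetime

    lifetime = not_after - not_before
    deadline = not_before + lifetime * random.uniform(0.7, 0.9)

    now = datetime(2026, 1, 28, 18, 34, 33, tzinfo=timezone.utc)      # log timestamp
    print("rotation deadline:", deadline)
    print("rotation due?    ", deadline <= now)

With the deadline already behind the current time, the manager will attempt to request a new serving certificate on this sync rather than waiting.
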
Has your network provider started?"} Jan 28 18:34:33 crc kubenswrapper[4721]: I0128 18:34:33.764134 4721 generic.go:334] "Generic (PLEG): container finished" podID="942c0bcf-8f75-42e8-a5c0-af4c640eb13c" containerID="a55ef87212072ac9f6d05d27d54f7a3f13add617b1a07a2fd1e0a652fe15cb3a" exitCode=0 Jan 28 18:34:33 crc kubenswrapper[4721]: I0128 18:34:33.764222 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-7vsph" event={"ID":"942c0bcf-8f75-42e8-a5c0-af4c640eb13c","Type":"ContainerDied","Data":"a55ef87212072ac9f6d05d27d54f7a3f13add617b1a07a2fd1e0a652fe15cb3a"} Jan 28 18:34:33 crc kubenswrapper[4721]: I0128 18:34:33.765919 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-rk2l2" event={"ID":"bdd30376-1599-4efc-bb55-7585e8702b60","Type":"ContainerStarted","Data":"aa0ff1b9df47dfd65597208cdd31f5cb8ce7f4c9c170d298db28d6f04da7b7a3"} Jan 28 18:34:33 crc kubenswrapper[4721]: I0128 18:34:33.765967 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-rk2l2" event={"ID":"bdd30376-1599-4efc-bb55-7585e8702b60","Type":"ContainerStarted","Data":"21e98732b7550ee73fc54049ba376c1b060f798c4e747fe08d3aa7e9416538b1"} Jan 28 18:34:33 crc kubenswrapper[4721]: I0128 18:34:33.778192 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"94f18835-9a2d-4427-bc71-e4cd48b94c19\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:33:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:33:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:33:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a345c3bb49064600db61f16588229b6877293710e6c21c15d91746c2f1d50b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://93950542dfa7bda5686ef6d8ff927d14baafd149893c732658e9aa3916be64ae\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://486a6284a1e3930aac13368062b8796b4c271214dd08cb60c7ae2ce7b4a45a05\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca3243d75b5ce4aff178d82c45c7d1f765013233b5736600151adb4801aa462b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ca3243d75b5ce4aff178d82c45c7d1f765013233b5736600151adb4801aa462b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T18:34:21Z\\\",\\\"message\\\":\\\"le observer\\\\nW0128 18:34:20.724978 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0128 18:34:20.725313 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 18:34:20.726636 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2540251349/tls.crt::/tmp/serving-cert-2540251349/tls.key\\\\\\\"\\\\nI0128 18:34:21.177830 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0128 18:34:21.196258 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0128 18:34:21.196291 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0128 18:34:21.196317 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0128 18:34:21.196323 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0128 18:34:21.230729 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0128 18:34:21.230760 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0128 18:34:21.230771 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 18:34:21.230781 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 18:34:21.230789 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0128 18:34:21.230795 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0128 18:34:21.230800 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0128 18:34:21.230804 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0128 18:34:21.233981 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T18:34:20Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://90f1279d8262e042527207dbe56a1d08ba554650510bebfc53bd24cd84532026\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:06Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1499c7b625ec721799e4cf54e3de39ab75a752bbb0450a8eeffcdb6259f8a585\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1499c7b625ec721799e4cf54e3de39ab75a752bbb0450a8eeffcdb6259f8a585\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:33:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:33:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:33Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:33 crc kubenswrapper[4721]: I0128 18:34:33.780548 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:34:33 crc kubenswrapper[4721]: I0128 18:34:33.780589 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:34:33 crc kubenswrapper[4721]: I0128 18:34:33.780600 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:34:33 crc kubenswrapper[4721]: I0128 18:34:33.780617 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:34:33 crc kubenswrapper[4721]: I0128 18:34:33.780631 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:34:33Z","lastTransitionTime":"2026-01-28T18:34:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:34:33 crc kubenswrapper[4721]: I0128 18:34:33.790104 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"86fc797f-6769-4801-8cb0-b9f25c9ec29b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:33:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:33:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e8c8cd370cc146f15325497afef171437a3c7c3d4e82cda99a321bf847399bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f712f75bf0603b358c696f1a62f2363254ab926615ab377291c7083308e3f2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:33:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6143e461a697375ca79df96494f8cf4a575e622428db241f50a52ae1c67acbc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:03
Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e37a3261383445df2e4050fb0b3f92c0afc6379c71eb061475783a72cd37459f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:33:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:33Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:33 crc kubenswrapper[4721]: I0128 18:34:33.802227 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:33Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:33 crc kubenswrapper[4721]: I0128 18:34:33.813848 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56147b9675ae31e5e8b4473346c69d90a1013ab084214a7fc34295f810b229a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5dc3277d9793a59f2e2a2b4d015782f383df20b594be435f894d24dbe5b837cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:33Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:33 crc kubenswrapper[4721]: I0128 18:34:33.836069 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wr282" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"70686e42-b434-4ff9-9753-cfc870beef82\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7lkbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7lkbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7lkbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-7lkbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7lkbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7lkbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7lkbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7lkbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://733c81890010402d252a1945284a85a3278b603ede433530865f680a1be02477\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://733c81890010402d252a1945284a85a3278b603ede433530865f680a1be02477\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7lkbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:34:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-wr282\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:33Z 
is after 2025-08-24T17:21:41Z" Jan 28 18:34:33 crc kubenswrapper[4721]: I0128 18:34:33.849401 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rgqdt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c0a22020-3f34-4895-beec-2ed5d829ea79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d26691be7d95ffd613cc84f59222c67d29ab459d1c42ca07d20fe928d7fbc4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l86pm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\
",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:34:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rgqdt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:33Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:33 crc kubenswrapper[4721]: I0128 18:34:33.868338 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"50459cb7-bb46-4f2f-a119-03a6102ad146\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:33:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97a290731225e8b998c044c9547db36b5af73889929db5fbc37b6b4170f5dbb7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9efdfabfba8f11d2bdeafad77520260137fcfff3ca51ae77afd4ee9b141f602\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3c1b4d37e11f8a4ee9539e5ea3cce77b7d382133b9e59ccd9c3e8aa2b308754\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee
1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52a1199a0611555765afe8fcf017742788488064c958d0c75c46d51ddc2ff9d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://316506ed1e9f91314d427e1b6c3a44ee58d0c4a3a8a93b97bdf098703a6a2f34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f1a6a669a0341baa2d7fb17bcebfe702f96d1192ca35c3f2d00e463778271d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f1a6a669a0341baa2d7fb17bcebfe702f96d1192ca35c3f2d00e463778271d6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:33:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d3a248e15466c89ce3b237a22ce54365004ea86dc1541e1ac44e028f91bf19f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"i
mageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d3a248e15466c89ce3b237a22ce54365004ea86dc1541e1ac44e028f91bf19f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:34:02Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f2e1f3a63445bad45029602f52863bfdf2fab495711e3318d68fee042530acb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f2e1f3a63445bad45029602f52863bfdf2fab495711e3318d68fee042530acb1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:34:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:33:56Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:33Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:33 crc kubenswrapper[4721]: I0128 18:34:33.878853 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-lf92l" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"20d04cbd-fcf1-4d48-9cca-1dd29b13c938\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://709d01fbeba51f6f21b60d9fd60aa9b44ba9e4f2504c19b7fd8d279f8c13c1e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mvqr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:34:30Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-lf92l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:33Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:33 crc kubenswrapper[4721]: I0128 18:34:33.882726 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:34:33 crc kubenswrapper[4721]: I0128 18:34:33.882766 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:34:33 crc kubenswrapper[4721]: I0128 18:34:33.882775 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:34:33 crc kubenswrapper[4721]: I0128 18:34:33.882790 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:34:33 crc kubenswrapper[4721]: I0128 18:34:33.882799 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:34:33Z","lastTransitionTime":"2026-01-28T18:34:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:34:33 crc kubenswrapper[4721]: I0128 18:34:33.893108 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7vsph" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"942c0bcf-8f75-42e8-a5c0-af4c640eb13c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:30Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s84mk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5222e9a5ac55f20acbe1105e84f22c96889f410246b7ef423236f5d3e216f43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d5222e9a5ac55f20acbe1105e84f22c96889f410246b7ef423236f5d3e216f43\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mount
Path\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s84mk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e702f148f20abb9e5e9453eb1b9ae0cc094138beb1fa048753ff3b497517e942\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e702f148f20abb9e5e9453eb1b9ae0cc094138beb1fa048753ff3b497517e942\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s84mk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a55ef87212072ac9f6d05d27d54f7a3f13add617b1a07a2fd1e0a652fe15cb3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a55ef87212072ac9f6d05d27d54f7a3f13add617b1a07a2fd1e0a652fe15cb3a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:34:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s84mk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readO
nly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s84mk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s84mk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s84mk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:34:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7vsph\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:33Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:33 crc kubenswrapper[4721]: I0128 18:34:33.909119 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef61ab09457440960cf27586f65751f928ccc999db19fd16364262ce2449cd97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:33Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:33 crc kubenswrapper[4721]: I0128 18:34:33.921179 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:24Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:33Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:33 crc kubenswrapper[4721]: I0128 18:34:33.933744 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:24Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:33Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:33 crc kubenswrapper[4721]: I0128 18:34:33.945750 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa3af2d00ecbcf4d30c4c81191306dd45625f55032058b314ce2b91f6b2033e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:33Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:33 crc kubenswrapper[4721]: I0128 18:34:33.955786 4721 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-76rx2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e3427a4-9a03-4a08-bf7f-7a5e96290ad6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9843dea4333a77dd5b0005984b9f8e7c7c993b28f89f5d6432477bfde3383339\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4cj5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bbd26672afb8ed608228f3f2101f430b1eaa5ccea0e8074fad597dec347c4522\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4cj5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:34:30Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-76rx2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:33Z is after 2025-08-24T17:21:41Z" Jan 28 
18:34:33 crc kubenswrapper[4721]: I0128 18:34:33.965768 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-rk2l2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bdd30376-1599-4efc-bb55-7585e8702b60\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:32Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:32Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wknj5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:34:32Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-rk2l2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:33Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:33 crc kubenswrapper[4721]: I0128 18:34:33.975901 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-rk2l2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bdd30376-1599-4efc-bb55-7585e8702b60\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa0ff1b9df47dfd65597208cdd31f5cb8ce7f4c9c170d298db28d6f04da7b7a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wknj5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:34:32Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-rk2l2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:33Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:33 crc kubenswrapper[4721]: I0128 18:34:33.986346 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:34:33 crc kubenswrapper[4721]: I0128 18:34:33.986393 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:34:33 crc kubenswrapper[4721]: I0128 18:34:33.986404 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:34:33 crc kubenswrapper[4721]: I0128 18:34:33.986421 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:34:33 crc kubenswrapper[4721]: I0128 18:34:33.986473 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:34:33Z","lastTransitionTime":"2026-01-28T18:34:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:34:33 crc kubenswrapper[4721]: I0128 18:34:33.993616 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wr282" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"70686e42-b434-4ff9-9753-cfc870beef82\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7lkbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7lkbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa
41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7lkbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7lkbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7lkbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log
-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7lkbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7lkbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7lkbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\
\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://733c81890010402d252a1945284a85a3278b603ede433530865f680a1be02477\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://733c81890010402d252a1945284a85a3278b603ede433530865f680a1be02477\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7lkbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:34:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-wr282\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:33Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:34 crc kubenswrapper[4721]: I0128 18:34:34.007123 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rgqdt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c0a22020-3f34-4895-beec-2ed5d829ea79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d26691be7d95ffd613cc84f59222c67d29ab459d1c42ca07d20fe928d7fbc4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l86pm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:34:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rgqdt\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:34Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:34 crc kubenswrapper[4721]: I0128 18:34:34.024503 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"50459cb7-bb46-4f2f-a119-03a6102ad146\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:33:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97a290731225e8b998c044c9547db36b5af73889929db5fbc37b6b4170f5dbb7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9efdfabfba8f11d2bdeafad77520260137fcfff3ca51ae77afd4ee9b141f602\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3c1b4d37e11f8a4ee9539e5ea3cce77b7d382133b9e59ccd9c3e8aa2b308754\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"la
stState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52a1199a0611555765afe8fcf017742788488064c958d0c75c46d51ddc2ff9d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://316506ed1e9f91314d427e1b6c3a44ee58d0c4a3a8a93b97bdf098703a6a2f34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f1a6a669a0341baa2d7fb17bcebfe702f96d1192ca35c3f2d00e463778271d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f1a6a669a0341baa2d7fb17bcebfe702f96d1192ca35c3f2d00e463778271d6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:33:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d3a248e15466c89ce3b237a22ce54365004ea86dc1541e1ac44e028f91bf19f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",
\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d3a248e15466c89ce3b237a22ce54365004ea86dc1541e1ac44e028f91bf19f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:34:02Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f2e1f3a63445bad45029602f52863bfdf2fab495711e3318d68fee042530acb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f2e1f3a63445bad45029602f52863bfdf2fab495711e3318d68fee042530acb1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:34:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:33:56Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:34Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:34 crc kubenswrapper[4721]: I0128 18:34:34.036754 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"94f18835-9a2d-4427-bc71-e4cd48b94c19\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:33:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:33:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:33:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a345c3bb49064600db61f16588229b6877293710e6c21c15d91746c2f1d50b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://93950542dfa7bda5686ef6d8ff927d14baafd149893c732658e9aa3916be64ae\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://486a6284a1e3930aac13368062b8796b4c271214dd08cb60c7ae2ce7b4a45a05\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca3243d75b5ce4aff178d82c45c7d1f765013233b5736600151adb4801aa462b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ca3243d75b5ce4aff178d82c45c7d1f765013233b5736600151adb4801aa462b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T18:34:21Z\\\",\\\"message\\\":\\\"le observer\\\\nW0128 18:34:20.724978 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0128 18:34:20.725313 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 18:34:20.726636 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2540251349/tls.crt::/tmp/serving-cert-2540251349/tls.key\\\\\\\"\\\\nI0128 18:34:21.177830 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0128 18:34:21.196258 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0128 18:34:21.196291 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0128 18:34:21.196317 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0128 18:34:21.196323 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0128 18:34:21.230729 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0128 18:34:21.230760 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0128 18:34:21.230771 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 18:34:21.230781 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 18:34:21.230789 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0128 18:34:21.230795 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0128 18:34:21.230800 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0128 18:34:21.230804 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0128 18:34:21.233981 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T18:34:20Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://90f1279d8262e042527207dbe56a1d08ba554650510bebfc53bd24cd84532026\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:06Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1499c7b625ec721799e4cf54e3de39ab75a752bbb0450a8eeffcdb6259f8a585\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1499c7b625ec721799e4cf54e3de39ab75a752bbb0450a8eeffcdb6259f8a585\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:33:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:33:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:34Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:34 crc kubenswrapper[4721]: I0128 18:34:34.048268 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"86fc797f-6769-4801-8cb0-b9f25c9ec29b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:33:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:33:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e8c8cd370cc146f15325497afef171437a3c7c3d4e82cda99a321bf847399bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f712f75bf0603b358c696f1a62f2363254ab926615ab377291c7083308e3f2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:33:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6143e461a697375ca79df96494f8cf4a575e622428db241f50a52ae1c67acbc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e37a3261383445df2e4050fb0b3f92c0afc6379c71eb061475783a72cd37459f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:33:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:34Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:34 crc kubenswrapper[4721]: I0128 18:34:34.059002 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:34Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:34 crc kubenswrapper[4721]: I0128 18:34:34.071044 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56147b9675ae31e5e8b4473346c69d90a1013ab084214a7fc34295f810b229a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5dc3277d9793a59f2e2a2b4d015782f383df20b594be435f894d24dbe5b837cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:34Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:34 crc kubenswrapper[4721]: I0128 18:34:34.083641 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef61ab09457440960cf27586f65751f928ccc999db19fd16364262ce2449cd97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:34Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:34 crc kubenswrapper[4721]: I0128 18:34:34.088433 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:34:34 crc kubenswrapper[4721]: I0128 18:34:34.088473 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:34:34 crc kubenswrapper[4721]: I0128 18:34:34.088484 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:34:34 crc 
kubenswrapper[4721]: I0128 18:34:34.088498 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:34:34 crc kubenswrapper[4721]: I0128 18:34:34.088507 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:34:34Z","lastTransitionTime":"2026-01-28T18:34:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:34:34 crc kubenswrapper[4721]: I0128 18:34:34.095285 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-lf92l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"20d04cbd-fcf1-4d48-9cca-1dd29b13c938\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://709d01fbeba51f6f21b60d9fd60aa9b44ba9e4f2504c19b7fd8d279f8c13c1e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mvqr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:34:30Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-lf92l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:34Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:34 crc kubenswrapper[4721]: I0128 18:34:34.109426 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7vsph" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"942c0bcf-8f75-42e8-a5c0-af4c640eb13c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:30Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s84mk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5222e9a5ac55f20acbe1105e84f22c96889f410246b7ef423236f5d3e216f43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d5222e9a5ac55f20acbe1105e84f22c96889f410246b7ef423236f5d3e216f43\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s84mk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e702f148f20abb9e5e9453eb1b9ae0cc094138beb1fa048753ff3b497517e942\\\",\\\"image\\\":\\\"quay.io/open
shift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e702f148f20abb9e5e9453eb1b9ae0cc094138beb1fa048753ff3b497517e942\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s84mk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a55ef87212072ac9f6d05d27d54f7a3f13add617b1a07a2fd1e0a652fe15cb3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a55ef87212072ac9f6d05d27d54f7a3f13add617b1a07a2fd1e0a652fe15cb3a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:34:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s84mk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s84mk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev
@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s84mk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s84mk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:34:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7vsph\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:34Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:34 crc kubenswrapper[4721]: I0128 18:34:34.121666 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:24Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:34Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:34 crc kubenswrapper[4721]: I0128 18:34:34.132635 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:24Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:34Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:34 crc kubenswrapper[4721]: I0128 18:34:34.144587 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa3af2d00ecbcf4d30c4c81191306dd45625f55032058b314ce2b91f6b2033e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:34Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:34 crc kubenswrapper[4721]: I0128 18:34:34.155577 4721 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-76rx2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e3427a4-9a03-4a08-bf7f-7a5e96290ad6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9843dea4333a77dd5b0005984b9f8e7c7c993b28f89f5d6432477bfde3383339\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4cj5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bbd26672afb8ed608228f3f2101f430b1eaa5ccea0e8074fad597dec347c4522\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4cj5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:34:30Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-76rx2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:34Z is after 2025-08-24T17:21:41Z" Jan 28 
18:34:34 crc kubenswrapper[4721]: I0128 18:34:34.190655 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:34:34 crc kubenswrapper[4721]: I0128 18:34:34.190698 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:34:34 crc kubenswrapper[4721]: I0128 18:34:34.190711 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:34:34 crc kubenswrapper[4721]: I0128 18:34:34.190728 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:34:34 crc kubenswrapper[4721]: I0128 18:34:34.190738 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:34:34Z","lastTransitionTime":"2026-01-28T18:34:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:34:34 crc kubenswrapper[4721]: I0128 18:34:34.294010 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:34:34 crc kubenswrapper[4721]: I0128 18:34:34.294051 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:34:34 crc kubenswrapper[4721]: I0128 18:34:34.294061 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:34:34 crc kubenswrapper[4721]: I0128 18:34:34.294075 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:34:34 crc kubenswrapper[4721]: I0128 18:34:34.294085 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:34:34Z","lastTransitionTime":"2026-01-28T18:34:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:34:34 crc kubenswrapper[4721]: I0128 18:34:34.396298 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:34:34 crc kubenswrapper[4721]: I0128 18:34:34.396338 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:34:34 crc kubenswrapper[4721]: I0128 18:34:34.396349 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:34:34 crc kubenswrapper[4721]: I0128 18:34:34.396368 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:34:34 crc kubenswrapper[4721]: I0128 18:34:34.396379 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:34:34Z","lastTransitionTime":"2026-01-28T18:34:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:34:34 crc kubenswrapper[4721]: I0128 18:34:34.488267 4721 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-20 01:06:59.345133709 +0000 UTC Jan 28 18:34:34 crc kubenswrapper[4721]: I0128 18:34:34.498824 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:34:34 crc kubenswrapper[4721]: I0128 18:34:34.498872 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:34:34 crc kubenswrapper[4721]: I0128 18:34:34.498885 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:34:34 crc kubenswrapper[4721]: I0128 18:34:34.498902 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:34:34 crc kubenswrapper[4721]: I0128 18:34:34.498913 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:34:34Z","lastTransitionTime":"2026-01-28T18:34:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:34:34 crc kubenswrapper[4721]: I0128 18:34:34.528154 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 18:34:34 crc kubenswrapper[4721]: E0128 18:34:34.528287 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 18:34:34 crc kubenswrapper[4721]: I0128 18:34:34.528321 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 18:34:34 crc kubenswrapper[4721]: I0128 18:34:34.528343 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 18:34:34 crc kubenswrapper[4721]: E0128 18:34:34.528408 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 18:34:34 crc kubenswrapper[4721]: E0128 18:34:34.528497 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 18:34:34 crc kubenswrapper[4721]: I0128 18:34:34.600758 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:34:34 crc kubenswrapper[4721]: I0128 18:34:34.600797 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:34:34 crc kubenswrapper[4721]: I0128 18:34:34.600809 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:34:34 crc kubenswrapper[4721]: I0128 18:34:34.600825 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:34:34 crc kubenswrapper[4721]: I0128 18:34:34.600837 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:34:34Z","lastTransitionTime":"2026-01-28T18:34:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:34:34 crc kubenswrapper[4721]: I0128 18:34:34.703823 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:34:34 crc kubenswrapper[4721]: I0128 18:34:34.703875 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:34:34 crc kubenswrapper[4721]: I0128 18:34:34.703888 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:34:34 crc kubenswrapper[4721]: I0128 18:34:34.703903 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:34:34 crc kubenswrapper[4721]: I0128 18:34:34.703912 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:34:34Z","lastTransitionTime":"2026-01-28T18:34:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:34:34 crc kubenswrapper[4721]: I0128 18:34:34.775374 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wr282" event={"ID":"70686e42-b434-4ff9-9753-cfc870beef82","Type":"ContainerStarted","Data":"373ab38644697d3dbfe782ed66982ac1d5510d0a3bc39072c8113f71cc9e97f2"} Jan 28 18:34:34 crc kubenswrapper[4721]: I0128 18:34:34.777816 4721 generic.go:334] "Generic (PLEG): container finished" podID="942c0bcf-8f75-42e8-a5c0-af4c640eb13c" containerID="ea7d40aa704ad5ddaaa471ab8606dcece38b229fd8affcce9bb1959e5c19f092" exitCode=0 Jan 28 18:34:34 crc kubenswrapper[4721]: I0128 18:34:34.777851 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-7vsph" event={"ID":"942c0bcf-8f75-42e8-a5c0-af4c640eb13c","Type":"ContainerDied","Data":"ea7d40aa704ad5ddaaa471ab8606dcece38b229fd8affcce9bb1959e5c19f092"} Jan 28 18:34:34 crc kubenswrapper[4721]: I0128 18:34:34.789105 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-rk2l2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bdd30376-1599-4efc-bb55-7585e8702b60\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa0ff1b9df47dfd65597208cdd31f5cb8ce7f4c9c170d298db28d6f04da7b7a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wknj5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:34:32Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-rk2l2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:34Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:34 crc kubenswrapper[4721]: I0128 18:34:34.805291 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"86fc797f-6769-4801-8cb0-b9f25c9ec29b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:33:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:33:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e8c8cd370cc146f15325497afef171437a3c7c3d4e82cda99a321bf847399bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f712f75bf0603b358c696f1a62f2363254ab926615ab377291c7083308e3f2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:33:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6143e461a697375ca79df96494f8cf4a575e622428db241f50a52ae1c67acbc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\
\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e37a3261383445df2e4050fb0b3f92c0afc6379c71eb061475783a72cd37459f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:33:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:34Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:34 crc kubenswrapper[4721]: I0128 18:34:34.806139 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:34:34 crc kubenswrapper[4721]: I0128 18:34:34.806191 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:34:34 crc kubenswrapper[4721]: I0128 18:34:34.806203 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:34:34 crc kubenswrapper[4721]: I0128 18:34:34.806221 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:34:34 crc kubenswrapper[4721]: I0128 18:34:34.806233 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:34:34Z","lastTransitionTime":"2026-01-28T18:34:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:34:34 crc kubenswrapper[4721]: I0128 18:34:34.819027 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:34Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:34 crc kubenswrapper[4721]: I0128 18:34:34.831795 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56147b9675ae31e5e8b4473346c69d90a1013ab084214a7fc34295f810b229a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5dc3277d9793a59f2e2a2b4d015782f383df20b594be435f894d24dbe5b837cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:34Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:34 crc kubenswrapper[4721]: I0128 18:34:34.851493 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wr282" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"70686e42-b434-4ff9-9753-cfc870beef82\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7lkbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7lkbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-op
envswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7lkbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7lkbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7lkbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7lkbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{
},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7lkbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7lkbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://733c81890010402d252a1945284a85a3278b603ede433530865f680a1be02477\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36
cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://733c81890010402d252a1945284a85a3278b603ede433530865f680a1be02477\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7lkbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:34:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-wr282\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:34Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:34 crc kubenswrapper[4721]: I0128 18:34:34.863523 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rgqdt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c0a22020-3f34-4895-beec-2ed5d829ea79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d26691be7d95ffd613cc84f59222c67d29ab459d1c42ca07d20fe928d7fbc4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k
8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l86pm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:34:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rgqdt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:34Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:34 crc kubenswrapper[4721]: I0128 18:34:34.883281 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"50459cb7-bb46-4f2f-a119-03a6102ad146\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:33:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97a290731225e8b998c044c9547db36b5af73889929db5fbc37b6b4170f5dbb7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9efdfabfba8f11d2bdeafad77520260137fcfff3ca51ae77afd4ee9b141f602\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3c1b4d37e11f8a4ee9539e5ea3cce77b7d382133b9e59ccd9c3e8aa2b308754\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52a1199a0611555765afe8fcf01774278848806
4c958d0c75c46d51ddc2ff9d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://316506ed1e9f91314d427e1b6c3a44ee58d0c4a3a8a93b97bdf098703a6a2f34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f1a6a669a0341baa2d7fb17bcebfe702f96d1192ca35c3f2d00e463778271d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f1a6a669a0341baa2d7fb17bcebfe702f96d1192ca35c3f2d00e463778271d6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:33:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d3a248e15466c89ce3b237a22ce54365004ea86dc1541e1ac44e028f91bf19f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d3a248e15466c89ce3b237a22ce54365004ea86dc1541e1ac44e028f91bf19f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:34:02Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f2e1f3a63445bad45029602f52863bfdf2fab495711e3318d68fee042530acb1\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f2e1f3a63445bad45029602f52863bfdf2fab495711e3318d68fee042530acb1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:34:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:33:56Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:34Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:34 crc kubenswrapper[4721]: I0128 18:34:34.896559 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"94f18835-9a2d-4427-bc71-e4cd48b94c19\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:33:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:33:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:33:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a345c3bb49064600db61f16588229b6877293710e6c21c15d91746c2f1d50b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://93950542dfa7bda5686ef6d8ff927d14baafd149893c732658e9aa3916be64ae\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://486a6284a1e3930aac13368062b8796b4c271214dd08cb60c7ae2ce7b4a45a05\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca3243d75b5ce4aff178d82c45c7d1f765013233b5736600151adb4801aa462b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ca3243d75b5ce4aff178d82c45c7d1f765013233b5736600151adb4801aa462b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T18:34:21Z\\\",\\\"message\\\":\\\"le observer\\\\nW0128 18:34:20.724978 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0128 18:34:20.725313 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 18:34:20.726636 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2540251349/tls.crt::/tmp/serving-cert-2540251349/tls.key\\\\\\\"\\\\nI0128 18:34:21.177830 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0128 18:34:21.196258 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0128 18:34:21.196291 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0128 18:34:21.196317 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0128 18:34:21.196323 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0128 18:34:21.230729 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0128 18:34:21.230760 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0128 18:34:21.230771 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 18:34:21.230781 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 18:34:21.230789 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0128 18:34:21.230795 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0128 18:34:21.230800 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0128 18:34:21.230804 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0128 18:34:21.233981 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T18:34:20Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://90f1279d8262e042527207dbe56a1d08ba554650510bebfc53bd24cd84532026\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:06Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1499c7b625ec721799e4cf54e3de39ab75a752bbb0450a8eeffcdb6259f8a585\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1499c7b625ec721799e4cf54e3de39ab75a752bbb0450a8eeffcdb6259f8a585\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:33:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:33:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:34Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:34 crc kubenswrapper[4721]: I0128 18:34:34.908283 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:34:34 crc kubenswrapper[4721]: I0128 18:34:34.908325 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:34:34 crc kubenswrapper[4721]: I0128 18:34:34.908337 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:34:34 crc kubenswrapper[4721]: I0128 18:34:34.908353 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:34:34 crc kubenswrapper[4721]: I0128 18:34:34.908868 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:34:34Z","lastTransitionTime":"2026-01-28T18:34:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:34:34 crc kubenswrapper[4721]: I0128 18:34:34.909716 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7vsph" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"942c0bcf-8f75-42e8-a5c0-af4c640eb13c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:30Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s84mk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5222e9a5ac55f20acbe1105e84f22c96889f410246b7ef423236f5d3e216f43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d5222e9a5ac55f20acbe1105e84f22c96889f410246b7ef423236f5d3e216f43\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s84mk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e702f148f20abb9e5e9453eb1b9ae0cc094138beb1fa048753ff3b497517e942\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e702f148f20abb9e5e9453eb1b9ae0cc094138beb1fa048753ff3b497517e942\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s84mk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a55ef87212072ac9f6d05d27d54f7a3f13add617b1a07a2fd1e0a652fe15cb3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a55ef87212072ac9f6d05d27d54f7a3f13add617b1a07a2fd1e0a652fe15cb3a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:34:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s84mk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea7d40aa704ad5ddaaa471ab8606dcece38b229fd8affcce9bb1959e5c19f092\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerI
D\\\":\\\"cri-o://ea7d40aa704ad5ddaaa471ab8606dcece38b229fd8affcce9bb1959e5c19f092\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:34:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s84mk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s84mk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s84mk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:34:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7vsph\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:34Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:34 crc kubenswrapper[4721]: I0128 18:34:34.920841 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef61ab09457440960cf27586f65751f928ccc999db19fd16364262ce2449cd97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:34Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:34 crc kubenswrapper[4721]: I0128 18:34:34.931109 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-lf92l" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"20d04cbd-fcf1-4d48-9cca-1dd29b13c938\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://709d01fbeba51f6f21b60d9fd60aa9b44ba9e4f2504c19b7fd8d279f8c13c1e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mvqr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:34:30Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-lf92l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:34Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:34 crc kubenswrapper[4721]: I0128 18:34:34.945533 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:34Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:34 crc kubenswrapper[4721]: I0128 18:34:34.957136 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa3af2d00ecbcf4d30c4c81191306dd45625f55032058b314ce2b91f6b2033e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:34Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:34 crc kubenswrapper[4721]: I0128 18:34:34.969838 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-76rx2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e3427a4-9a03-4a08-bf7f-7a5e96290ad6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9843dea4333a77dd5b0005984b9f8e7c7c993b28f89f5d6432477bfde3383339\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4cj5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bbd26672afb8ed608228f3f2101f430b1eaa5ccea0e8074fad597dec347c4522\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4cj5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\
\\":\\\"2026-01-28T18:34:30Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-76rx2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:34Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:34 crc kubenswrapper[4721]: I0128 18:34:34.985525 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:24Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:34Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:35 crc kubenswrapper[4721]: I0128 18:34:35.011234 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:34:35 crc kubenswrapper[4721]: I0128 18:34:35.011562 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:34:35 crc kubenswrapper[4721]: I0128 18:34:35.011575 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:34:35 crc kubenswrapper[4721]: I0128 18:34:35.011592 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:34:35 crc kubenswrapper[4721]: I0128 18:34:35.011605 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:34:35Z","lastTransitionTime":"2026-01-28T18:34:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:34:35 crc kubenswrapper[4721]: I0128 18:34:35.114005 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:34:35 crc kubenswrapper[4721]: I0128 18:34:35.114040 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:34:35 crc kubenswrapper[4721]: I0128 18:34:35.114050 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:34:35 crc kubenswrapper[4721]: I0128 18:34:35.114066 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:34:35 crc kubenswrapper[4721]: I0128 18:34:35.114076 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:34:35Z","lastTransitionTime":"2026-01-28T18:34:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:34:35 crc kubenswrapper[4721]: I0128 18:34:35.182228 4721 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Jan 28 18:34:35 crc kubenswrapper[4721]: I0128 18:34:35.217559 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:34:35 crc kubenswrapper[4721]: I0128 18:34:35.217594 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:34:35 crc kubenswrapper[4721]: I0128 18:34:35.217603 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:34:35 crc kubenswrapper[4721]: I0128 18:34:35.217618 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:34:35 crc kubenswrapper[4721]: I0128 18:34:35.217626 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:34:35Z","lastTransitionTime":"2026-01-28T18:34:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:34:35 crc kubenswrapper[4721]: I0128 18:34:35.319838 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:34:35 crc kubenswrapper[4721]: I0128 18:34:35.319882 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:34:35 crc kubenswrapper[4721]: I0128 18:34:35.319891 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:34:35 crc kubenswrapper[4721]: I0128 18:34:35.319906 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:34:35 crc kubenswrapper[4721]: I0128 18:34:35.319916 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:34:35Z","lastTransitionTime":"2026-01-28T18:34:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:34:35 crc kubenswrapper[4721]: I0128 18:34:35.422108 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:34:35 crc kubenswrapper[4721]: I0128 18:34:35.422158 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:34:35 crc kubenswrapper[4721]: I0128 18:34:35.422189 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:34:35 crc kubenswrapper[4721]: I0128 18:34:35.422208 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:34:35 crc kubenswrapper[4721]: I0128 18:34:35.422220 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:34:35Z","lastTransitionTime":"2026-01-28T18:34:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:34:35 crc kubenswrapper[4721]: I0128 18:34:35.488456 4721 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-11 10:08:21.483282719 +0000 UTC Jan 28 18:34:35 crc kubenswrapper[4721]: I0128 18:34:35.523876 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:34:35 crc kubenswrapper[4721]: I0128 18:34:35.523917 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:34:35 crc kubenswrapper[4721]: I0128 18:34:35.523928 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:34:35 crc kubenswrapper[4721]: I0128 18:34:35.523946 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:34:35 crc kubenswrapper[4721]: I0128 18:34:35.523958 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:34:35Z","lastTransitionTime":"2026-01-28T18:34:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:34:35 crc kubenswrapper[4721]: I0128 18:34:35.540596 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:24Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:35Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:35 crc kubenswrapper[4721]: I0128 18:34:35.554413 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:35Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:35 crc kubenswrapper[4721]: I0128 18:34:35.566962 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa3af2d00ecbcf4d30c4c81191306dd45625f55032058b314ce2b91f6b2033e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:35Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:35 crc kubenswrapper[4721]: I0128 18:34:35.578058 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-76rx2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e3427a4-9a03-4a08-bf7f-7a5e96290ad6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9843dea4333a77dd5b0005984b9f8e7c7c993b28f89f5d6432477bfde3383339\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4cj5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bbd26672afb8ed608228f3f2101f430b1eaa5ccea0e8074fad597dec347c4522\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4cj5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\
\\":\\\"2026-01-28T18:34:30Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-76rx2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:35Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:35 crc kubenswrapper[4721]: I0128 18:34:35.588495 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-rk2l2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bdd30376-1599-4efc-bb55-7585e8702b60\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa0ff1b9df47dfd65597208cdd31f5cb8ce7f4c9c170d298db28d6f04da7b7a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wknj5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:34:32Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-rk2l2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:35Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:35 crc kubenswrapper[4721]: I0128 18:34:35.627493 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:34:35 crc kubenswrapper[4721]: I0128 18:34:35.627703 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:34:35 crc kubenswrapper[4721]: 
I0128 18:34:35.627721 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:34:35 crc kubenswrapper[4721]: I0128 18:34:35.627739 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:34:35 crc kubenswrapper[4721]: I0128 18:34:35.627753 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:34:35Z","lastTransitionTime":"2026-01-28T18:34:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:34:35 crc kubenswrapper[4721]: I0128 18:34:35.629221 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wr282" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"70686e42-b434-4ff9-9753-cfc870beef82\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7lkbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7lkbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7lkbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-7lkbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7lkbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7lkbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7lkbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7lkbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://733c81890010402d252a1945284a85a3278b603ede433530865f680a1be02477\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://733c81890010402d252a1945284a85a3278b603ede433530865f680a1be02477\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7lkbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:34:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-wr282\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:35Z 
is after 2025-08-24T17:21:41Z" Jan 28 18:34:35 crc kubenswrapper[4721]: I0128 18:34:35.649683 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rgqdt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c0a22020-3f34-4895-beec-2ed5d829ea79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d26691be7d95ffd613cc84f59222c67d29ab459d1c42ca07d20fe928d7fbc4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l86pm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\
",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:34:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rgqdt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:35Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:35 crc kubenswrapper[4721]: I0128 18:34:35.674715 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"50459cb7-bb46-4f2f-a119-03a6102ad146\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:33:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97a290731225e8b998c044c9547db36b5af73889929db5fbc37b6b4170f5dbb7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9efdfabfba8f11d2bdeafad77520260137fcfff3ca51ae77afd4ee9b141f602\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3c1b4d37e11f8a4ee9539e5ea3cce77b7d382133b9e59ccd9c3e8aa2b308754\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee
1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52a1199a0611555765afe8fcf017742788488064c958d0c75c46d51ddc2ff9d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://316506ed1e9f91314d427e1b6c3a44ee58d0c4a3a8a93b97bdf098703a6a2f34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f1a6a669a0341baa2d7fb17bcebfe702f96d1192ca35c3f2d00e463778271d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f1a6a669a0341baa2d7fb17bcebfe702f96d1192ca35c3f2d00e463778271d6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:33:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d3a248e15466c89ce3b237a22ce54365004ea86dc1541e1ac44e028f91bf19f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"i
mageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d3a248e15466c89ce3b237a22ce54365004ea86dc1541e1ac44e028f91bf19f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:34:02Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f2e1f3a63445bad45029602f52863bfdf2fab495711e3318d68fee042530acb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f2e1f3a63445bad45029602f52863bfdf2fab495711e3318d68fee042530acb1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:34:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:33:56Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:35Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:35 crc kubenswrapper[4721]: I0128 18:34:35.687673 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"94f18835-9a2d-4427-bc71-e4cd48b94c19\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:33:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:33:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:33:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a345c3bb49064600db61f16588229b6877293710e6c21c15d91746c2f1d50b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://93950542dfa7bda5686ef6d8ff927d14baafd149893c732658e9aa3916be64ae\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://486a6284a1e3930aac13368062b8796b4c271214dd08cb60c7ae2ce7b4a45a05\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca3243d75b5ce4aff178d82c45c7d1f765013233b5736600151adb4801aa462b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ca3243d75b5ce4aff178d82c45c7d1f765013233b5736600151adb4801aa462b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T18:34:21Z\\\",\\\"message\\\":\\\"le observer\\\\nW0128 18:34:20.724978 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0128 18:34:20.725313 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 18:34:20.726636 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2540251349/tls.crt::/tmp/serving-cert-2540251349/tls.key\\\\\\\"\\\\nI0128 18:34:21.177830 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0128 18:34:21.196258 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0128 18:34:21.196291 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0128 18:34:21.196317 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0128 18:34:21.196323 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0128 18:34:21.230729 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0128 18:34:21.230760 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0128 18:34:21.230771 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 18:34:21.230781 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 18:34:21.230789 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0128 18:34:21.230795 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0128 18:34:21.230800 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0128 18:34:21.230804 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0128 18:34:21.233981 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T18:34:20Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://90f1279d8262e042527207dbe56a1d08ba554650510bebfc53bd24cd84532026\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:06Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1499c7b625ec721799e4cf54e3de39ab75a752bbb0450a8eeffcdb6259f8a585\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1499c7b625ec721799e4cf54e3de39ab75a752bbb0450a8eeffcdb6259f8a585\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:33:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:33:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:35Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:35 crc kubenswrapper[4721]: I0128 18:34:35.698428 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"86fc797f-6769-4801-8cb0-b9f25c9ec29b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:33:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:33:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e8c8cd370cc146f15325497afef171437a3c7c3d4e82cda99a321bf847399bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f712f75bf0603b358c696f1a62f2363254ab926615ab377291c7083308e3f2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:33:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6143e461a697375ca79df96494f8cf4a575e622428db241f50a52ae1c67acbc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e37a3261383445df2e4050fb0b3f92c0afc6379c71eb061475783a72cd37459f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:33:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:35Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:35 crc kubenswrapper[4721]: I0128 18:34:35.709487 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:35Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:35 crc kubenswrapper[4721]: I0128 18:34:35.722207 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56147b9675ae31e5e8b4473346c69d90a1013ab084214a7fc34295f810b229a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5dc3277d9793a59f2e2a2b4d015782f383df20b594be435f894d24dbe5b837cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:35Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:35 crc kubenswrapper[4721]: I0128 18:34:35.730388 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:34:35 crc kubenswrapper[4721]: I0128 18:34:35.730432 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:34:35 crc kubenswrapper[4721]: I0128 18:34:35.730445 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:34:35 crc kubenswrapper[4721]: I0128 18:34:35.730462 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:34:35 crc kubenswrapper[4721]: I0128 18:34:35.730475 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:34:35Z","lastTransitionTime":"2026-01-28T18:34:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:34:35 crc kubenswrapper[4721]: I0128 18:34:35.734893 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef61ab09457440960cf27586f65751f928ccc999db19fd16364262ce2449cd97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:35Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:35 crc kubenswrapper[4721]: I0128 18:34:35.745234 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-lf92l" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"20d04cbd-fcf1-4d48-9cca-1dd29b13c938\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://709d01fbeba51f6f21b60d9fd60aa9b44ba9e4f2504c19b7fd8d279f8c13c1e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mvqr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:34:30Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-lf92l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:35Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:35 crc kubenswrapper[4721]: I0128 18:34:35.758506 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7vsph" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"942c0bcf-8f75-42e8-a5c0-af4c640eb13c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:30Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s84mk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5222e9a5ac55f20acbe1105e84f22c96889f410246b7ef423236f5d3e216f43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d5222e9a5ac55f20acbe1105e84f22c96889f410246b7ef423236f5d3e216f43\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s84mk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e702f148f20abb9e5e9453eb1b9ae0cc094138beb1fa048753ff3b497517e942\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e702f148f20abb9e5e9453eb1b9ae0cc094138beb1fa048753ff3b497517e942\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\
"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s84mk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a55ef87212072ac9f6d05d27d54f7a3f13add617b1a07a2fd1e0a652fe15cb3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a55ef87212072ac9f6d05d27d54f7a3f13add617b1a07a2fd1e0a652fe15cb3a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:34:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s84mk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea7d40aa704ad5ddaaa471ab8606dcece38b229fd8affcce9bb1959e5c19f092\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ea7d40aa704ad5ddaaa471ab8606dcece38b229fd8affcce9bb1959e5c19f092\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:34:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s84mk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"w
aiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s84mk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s84mk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:34:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7vsph\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:35Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:35 crc kubenswrapper[4721]: I0128 18:34:35.784376 4721 generic.go:334] "Generic (PLEG): container finished" podID="942c0bcf-8f75-42e8-a5c0-af4c640eb13c" containerID="1829c2eed51af896c5ccfa25c1f23ea971ea9d54e9db53415095c5ea43ac4062" exitCode=0 Jan 28 18:34:35 crc kubenswrapper[4721]: I0128 18:34:35.784457 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-7vsph" event={"ID":"942c0bcf-8f75-42e8-a5c0-af4c640eb13c","Type":"ContainerDied","Data":"1829c2eed51af896c5ccfa25c1f23ea971ea9d54e9db53415095c5ea43ac4062"} Jan 28 18:34:35 crc kubenswrapper[4721]: I0128 18:34:35.798246 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"86fc797f-6769-4801-8cb0-b9f25c9ec29b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:33:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:33:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e8c8cd370cc146f15325497afef171437a3c7c3d4e82cda99a321bf847399bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f712f75bf0603b358c696f1a62f2363254ab926615ab377291c7083308e3f2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:33:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6143e461a697375ca79df96494f8cf4a575e622428db241f50a52ae1c67acbc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e37a3261383445df2e4050fb0b3f92c0afc6379c71eb061475783a72cd37459f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:33:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:35Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:35 crc kubenswrapper[4721]: I0128 18:34:35.810992 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:35Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:35 crc kubenswrapper[4721]: I0128 18:34:35.822849 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56147b9675ae31e5e8b4473346c69d90a1013ab084214a7fc34295f810b229a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5dc3277d9793a59f2e2a2b4d015782f383df20b594be435f894d24dbe5b837cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:35Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:35 crc kubenswrapper[4721]: I0128 18:34:35.832827 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:34:35 crc kubenswrapper[4721]: I0128 18:34:35.832915 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:34:35 crc kubenswrapper[4721]: I0128 18:34:35.832929 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:34:35 crc kubenswrapper[4721]: I0128 18:34:35.833328 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:34:35 crc kubenswrapper[4721]: I0128 18:34:35.833352 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:34:35Z","lastTransitionTime":"2026-01-28T18:34:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:34:35 crc kubenswrapper[4721]: I0128 18:34:35.844493 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wr282" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"70686e42-b434-4ff9-9753-cfc870beef82\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7lkbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7lkbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":fa
lse,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7lkbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7lkbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7lkbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7lkbp\\\",\\\"readOnly\\
\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7lkbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7lkbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://733c81890010402d252a1945284a85a3278b603ede43353086
5f680a1be02477\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://733c81890010402d252a1945284a85a3278b603ede433530865f680a1be02477\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7lkbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:34:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-wr282\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:35Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:35 crc kubenswrapper[4721]: I0128 18:34:35.856758 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rgqdt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c0a22020-3f34-4895-beec-2ed5d829ea79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d26691be7d95ffd613cc84f59222c67d29ab459d1c42ca07d20fe928d7fbc4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/n
et.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l86pm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:34:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rgqdt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:35Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:35 crc kubenswrapper[4721]: I0128 18:34:35.879006 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"50459cb7-bb46-4f2f-a119-03a6102ad146\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:33:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97a290731225e8b998c044c9547db36b5af73889929db5fbc37b6b4170f5dbb7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9efdfabfba8f11d2bdeafad77520260137fcfff3ca51ae77afd4ee9b141f602\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3c1b4d37e11f8a4ee9539e5ea3cce77b7d382133b9e59ccd9c3e8aa2b308754\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52a1199a0611555765afe8fcf01774278848806
4c958d0c75c46d51ddc2ff9d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://316506ed1e9f91314d427e1b6c3a44ee58d0c4a3a8a93b97bdf098703a6a2f34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f1a6a669a0341baa2d7fb17bcebfe702f96d1192ca35c3f2d00e463778271d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f1a6a669a0341baa2d7fb17bcebfe702f96d1192ca35c3f2d00e463778271d6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:33:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d3a248e15466c89ce3b237a22ce54365004ea86dc1541e1ac44e028f91bf19f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d3a248e15466c89ce3b237a22ce54365004ea86dc1541e1ac44e028f91bf19f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:34:02Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f2e1f3a63445bad45029602f52863bfdf2fab495711e3318d68fee042530acb1\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f2e1f3a63445bad45029602f52863bfdf2fab495711e3318d68fee042530acb1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:34:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:33:56Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:35Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:35 crc kubenswrapper[4721]: I0128 18:34:35.891831 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"94f18835-9a2d-4427-bc71-e4cd48b94c19\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:33:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:33:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:33:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a345c3bb49064600db61f16588229b6877293710e6c21c15d91746c2f1d50b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://93950542dfa7bda5686ef6d8ff927d14baafd149893c732658e9aa3916be64ae\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://486a6284a1e3930aac13368062b8796b4c271214dd08cb60c7ae2ce7b4a45a05\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca3243d75b5ce4aff178d82c45c7d1f765013233b5736600151adb4801aa462b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ca3243d75b5ce4aff178d82c45c7d1f765013233b5736600151adb4801aa462b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T18:34:21Z\\\",\\\"message\\\":\\\"le observer\\\\nW0128 18:34:20.724978 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0128 18:34:20.725313 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 18:34:20.726636 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2540251349/tls.crt::/tmp/serving-cert-2540251349/tls.key\\\\\\\"\\\\nI0128 18:34:21.177830 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0128 18:34:21.196258 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0128 18:34:21.196291 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0128 18:34:21.196317 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0128 18:34:21.196323 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0128 18:34:21.230729 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0128 18:34:21.230760 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0128 18:34:21.230771 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 18:34:21.230781 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 18:34:21.230789 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0128 18:34:21.230795 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0128 18:34:21.230800 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0128 18:34:21.230804 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0128 18:34:21.233981 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T18:34:20Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://90f1279d8262e042527207dbe56a1d08ba554650510bebfc53bd24cd84532026\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:06Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1499c7b625ec721799e4cf54e3de39ab75a752bbb0450a8eeffcdb6259f8a585\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1499c7b625ec721799e4cf54e3de39ab75a752bbb0450a8eeffcdb6259f8a585\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:33:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:33:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:35Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:35 crc kubenswrapper[4721]: I0128 18:34:35.905428 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7vsph" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"942c0bcf-8f75-42e8-a5c0-af4c640eb13c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:30Z\\\",\\\"message\\\":\\\"containers with incomplete status: 
[whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s84mk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5222e9a5ac55f20acbe1105e84f22c96889f410246b7ef423236f5d3e216f43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d5222e9a5ac55f20acbe1105e84f22c96889f410246b7ef423236f5d3e216f43\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s84mk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e702f148f20abb9e5e9453eb1b9ae0cc094138beb1fa048753ff3b497517e942\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e702f148f20abb9e5e9453eb1b9ae0cc094138beb1fa048753ff3b497517e942\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\
\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s84mk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a55ef87212072ac9f6d05d27d54f7a3f13add617b1a07a2fd1e0a652fe15cb3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a55ef87212072ac9f6d05d27d54f7a3f13add617b1a07a2fd1e0a652fe15cb3a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:34:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s84mk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea7d40aa704ad5ddaaa471ab8606dcece38b229fd8affcce9bb1959e5c19f092\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ea7d40aa704ad5ddaaa471ab8606dcece38b229fd8affcce9bb1959e5c19f092\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:34:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s84mk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1829c2eed51af896c5ccfa25c1f23ea971ea9d54e9db53415095c5ea43ac4062\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:
98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1829c2eed51af896c5ccfa25c1f23ea971ea9d54e9db53415095c5ea43ac4062\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:34:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s84mk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s84mk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:34:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7vsph\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:35Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:35 crc kubenswrapper[4721]: I0128 18:34:35.918158 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef61ab09457440960cf27586f65751f928ccc999db19fd16364262ce2449cd97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:35Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:35 crc kubenswrapper[4721]: I0128 18:34:35.928680 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-lf92l" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"20d04cbd-fcf1-4d48-9cca-1dd29b13c938\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://709d01fbeba51f6f21b60d9fd60aa9b44ba9e4f2504c19b7fd8d279f8c13c1e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mvqr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:34:30Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-lf92l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:35Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:35 crc kubenswrapper[4721]: I0128 18:34:35.936101 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:34:35 crc kubenswrapper[4721]: I0128 18:34:35.936128 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:34:35 crc kubenswrapper[4721]: I0128 18:34:35.936136 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:34:35 crc kubenswrapper[4721]: I0128 18:34:35.936148 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:34:35 crc kubenswrapper[4721]: I0128 18:34:35.936157 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:34:35Z","lastTransitionTime":"2026-01-28T18:34:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:34:35 crc kubenswrapper[4721]: I0128 18:34:35.941817 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:24Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:35Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:35 crc kubenswrapper[4721]: I0128 18:34:35.952690 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa3af2d00ecbcf4d30c4c81191306dd45625f55032058b314ce2b91f6b2033e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:35Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:35 crc kubenswrapper[4721]: I0128 18:34:35.962762 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-76rx2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e3427a4-9a03-4a08-bf7f-7a5e96290ad6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9843dea4333a77dd5b0005984b9f8e7c7c993b28f89f5d6432477bfde3383339\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4cj5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bbd26672afb8ed608228f3f2101f430b1eaa5ccea0e8074fad597dec347c4522\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4cj5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:34:30Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-76rx2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:35Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:35 crc kubenswrapper[4721]: I0128 18:34:35.970800 4721 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:34:35 crc kubenswrapper[4721]: I0128 18:34:35.970840 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:34:35 crc kubenswrapper[4721]: I0128 18:34:35.970852 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:34:35 crc kubenswrapper[4721]: I0128 18:34:35.970868 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:34:35 crc kubenswrapper[4721]: I0128 18:34:35.970880 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:34:35Z","lastTransitionTime":"2026-01-28T18:34:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:34:35 crc kubenswrapper[4721]: I0128 18:34:35.975254 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:24Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:35Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:35 crc kubenswrapper[4721]: E0128 18:34:35.983230 4721 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:34:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:34:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:35Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:34:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:34:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:35Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7e2b6ea9-0dbd-4f62-9b1e-c7fac2eb6b3f\\\",\\\"systemUUID\\\":\\\"09e691cb-0cac-419d-a3e2-104cada8c62f\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:35Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:35 crc kubenswrapper[4721]: I0128 18:34:35.986714 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-rk2l2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bdd30376-1599-4efc-bb55-7585e8702b60\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa0ff1b9df47dfd65597208cdd31f5cb8ce7f4c9c170d298db28d6f04da7b7a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wknj5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:34:32Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-rk2l2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:35Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:35 crc kubenswrapper[4721]: I0128 18:34:35.987479 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:34:35 crc kubenswrapper[4721]: I0128 18:34:35.987584 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:34:35 crc kubenswrapper[4721]: I0128 18:34:35.987677 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:34:35 crc kubenswrapper[4721]: I0128 18:34:35.987767 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:34:35 crc kubenswrapper[4721]: I0128 18:34:35.987825 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:34:35Z","lastTransitionTime":"2026-01-28T18:34:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:34:36 crc kubenswrapper[4721]: E0128 18:34:36.000199 4721 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:34:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:34:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:35Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:34:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:34:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:35Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7e2b6ea9-0dbd-4f62-9b1e-c7fac2eb6b3f\\\",\\\"systemUUID\\\":\\\"09e691cb-0cac-419d-a3e2-104cada8c62f\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:35Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:36 crc kubenswrapper[4721]: I0128 18:34:36.003808 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:34:36 crc kubenswrapper[4721]: I0128 18:34:36.003844 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 28 18:34:36 crc kubenswrapper[4721]: I0128 18:34:36.003854 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:34:36 crc kubenswrapper[4721]: I0128 18:34:36.003868 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:34:36 crc kubenswrapper[4721]: I0128 18:34:36.003879 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:34:36Z","lastTransitionTime":"2026-01-28T18:34:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:34:36 crc kubenswrapper[4721]: E0128 18:34:36.016889 4721 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:34:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:34:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:36Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:34:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:34:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:36Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7e2b6ea9-0dbd-4f62-9b1e-c7fac2eb6b3f\\\",\\\"systemUUID\\\":\\\"09e691cb-0cac-419d-a3e2-104cada8c62f\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:36Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:36 crc kubenswrapper[4721]: I0128 18:34:36.019889 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:34:36 crc kubenswrapper[4721]: I0128 18:34:36.019924 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 28 18:34:36 crc kubenswrapper[4721]: I0128 18:34:36.019934 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:34:36 crc kubenswrapper[4721]: I0128 18:34:36.019946 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:34:36 crc kubenswrapper[4721]: I0128 18:34:36.019956 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:34:36Z","lastTransitionTime":"2026-01-28T18:34:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:34:36 crc kubenswrapper[4721]: E0128 18:34:36.031327 4721 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:34:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:34:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:36Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:34:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:34:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:36Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7e2b6ea9-0dbd-4f62-9b1e-c7fac2eb6b3f\\\",\\\"systemUUID\\\":\\\"09e691cb-0cac-419d-a3e2-104cada8c62f\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:36Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:36 crc kubenswrapper[4721]: I0128 18:34:36.033951 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:34:36 crc kubenswrapper[4721]: I0128 18:34:36.033989 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 28 18:34:36 crc kubenswrapper[4721]: I0128 18:34:36.034009 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:34:36 crc kubenswrapper[4721]: I0128 18:34:36.034026 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:34:36 crc kubenswrapper[4721]: I0128 18:34:36.034038 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:34:36Z","lastTransitionTime":"2026-01-28T18:34:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:34:36 crc kubenswrapper[4721]: E0128 18:34:36.046852 4721 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:34:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:34:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:36Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:34:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:34:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:36Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7e2b6ea9-0dbd-4f62-9b1e-c7fac2eb6b3f\\\",\\\"systemUUID\\\":\\\"09e691cb-0cac-419d-a3e2-104cada8c62f\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:36Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:36 crc kubenswrapper[4721]: E0128 18:34:36.046995 4721 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 28 18:34:36 crc kubenswrapper[4721]: I0128 18:34:36.050986 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 28 18:34:36 crc kubenswrapper[4721]: I0128 18:34:36.051119 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:34:36 crc kubenswrapper[4721]: I0128 18:34:36.051151 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:34:36 crc kubenswrapper[4721]: I0128 18:34:36.051183 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:34:36 crc kubenswrapper[4721]: I0128 18:34:36.051201 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:34:36Z","lastTransitionTime":"2026-01-28T18:34:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:34:36 crc kubenswrapper[4721]: I0128 18:34:36.153120 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:34:36 crc kubenswrapper[4721]: I0128 18:34:36.153160 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:34:36 crc kubenswrapper[4721]: I0128 18:34:36.153183 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:34:36 crc kubenswrapper[4721]: I0128 18:34:36.153199 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:34:36 crc kubenswrapper[4721]: I0128 18:34:36.153210 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:34:36Z","lastTransitionTime":"2026-01-28T18:34:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:34:36 crc kubenswrapper[4721]: I0128 18:34:36.255256 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:34:36 crc kubenswrapper[4721]: I0128 18:34:36.255293 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:34:36 crc kubenswrapper[4721]: I0128 18:34:36.255302 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:34:36 crc kubenswrapper[4721]: I0128 18:34:36.255317 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:34:36 crc kubenswrapper[4721]: I0128 18:34:36.255328 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:34:36Z","lastTransitionTime":"2026-01-28T18:34:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:34:36 crc kubenswrapper[4721]: I0128 18:34:36.357394 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:34:36 crc kubenswrapper[4721]: I0128 18:34:36.357429 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:34:36 crc kubenswrapper[4721]: I0128 18:34:36.357438 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:34:36 crc kubenswrapper[4721]: I0128 18:34:36.357451 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:34:36 crc kubenswrapper[4721]: I0128 18:34:36.357459 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:34:36Z","lastTransitionTime":"2026-01-28T18:34:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:34:36 crc kubenswrapper[4721]: I0128 18:34:36.459104 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:34:36 crc kubenswrapper[4721]: I0128 18:34:36.459144 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:34:36 crc kubenswrapper[4721]: I0128 18:34:36.459153 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:34:36 crc kubenswrapper[4721]: I0128 18:34:36.459192 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:34:36 crc kubenswrapper[4721]: I0128 18:34:36.459203 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:34:36Z","lastTransitionTime":"2026-01-28T18:34:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:34:36 crc kubenswrapper[4721]: I0128 18:34:36.488806 4721 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-02 02:42:52.878778478 +0000 UTC Jan 28 18:34:36 crc kubenswrapper[4721]: I0128 18:34:36.528455 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 18:34:36 crc kubenswrapper[4721]: I0128 18:34:36.528561 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 18:34:36 crc kubenswrapper[4721]: E0128 18:34:36.528832 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 18:34:36 crc kubenswrapper[4721]: I0128 18:34:36.528576 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 18:34:36 crc kubenswrapper[4721]: E0128 18:34:36.528945 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 18:34:36 crc kubenswrapper[4721]: E0128 18:34:36.529056 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 18:34:36 crc kubenswrapper[4721]: I0128 18:34:36.563830 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:34:36 crc kubenswrapper[4721]: I0128 18:34:36.563861 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:34:36 crc kubenswrapper[4721]: I0128 18:34:36.563872 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:34:36 crc kubenswrapper[4721]: I0128 18:34:36.563889 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:34:36 crc kubenswrapper[4721]: I0128 18:34:36.563899 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:34:36Z","lastTransitionTime":"2026-01-28T18:34:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:34:36 crc kubenswrapper[4721]: I0128 18:34:36.666318 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:34:36 crc kubenswrapper[4721]: I0128 18:34:36.666367 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:34:36 crc kubenswrapper[4721]: I0128 18:34:36.666378 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:34:36 crc kubenswrapper[4721]: I0128 18:34:36.666392 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:34:36 crc kubenswrapper[4721]: I0128 18:34:36.666403 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:34:36Z","lastTransitionTime":"2026-01-28T18:34:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:34:36 crc kubenswrapper[4721]: I0128 18:34:36.768269 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:34:36 crc kubenswrapper[4721]: I0128 18:34:36.768325 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:34:36 crc kubenswrapper[4721]: I0128 18:34:36.768338 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:34:36 crc kubenswrapper[4721]: I0128 18:34:36.768355 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:34:36 crc kubenswrapper[4721]: I0128 18:34:36.768366 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:34:36Z","lastTransitionTime":"2026-01-28T18:34:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:34:36 crc kubenswrapper[4721]: I0128 18:34:36.790944 4721 generic.go:334] "Generic (PLEG): container finished" podID="942c0bcf-8f75-42e8-a5c0-af4c640eb13c" containerID="7b576c2dcc0ad83742dbc75f744d08eb8b881f9be4f9d16f4d4d3004ce174f24" exitCode=0 Jan 28 18:34:36 crc kubenswrapper[4721]: I0128 18:34:36.790981 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-7vsph" event={"ID":"942c0bcf-8f75-42e8-a5c0-af4c640eb13c","Type":"ContainerDied","Data":"7b576c2dcc0ad83742dbc75f744d08eb8b881f9be4f9d16f4d4d3004ce174f24"} Jan 28 18:34:36 crc kubenswrapper[4721]: I0128 18:34:36.803119 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-rk2l2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bdd30376-1599-4efc-bb55-7585e8702b60\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa0ff1b9df47dfd65597208cdd31f5cb8ce7f4c9c170d298db28d6f04da7b7a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wknj5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:34:32Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-rk2l2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:36Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:36 crc kubenswrapper[4721]: I0128 18:34:36.815389 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"86fc797f-6769-4801-8cb0-b9f25c9ec29b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:33:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:33:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e8c8cd370cc146f15325497afef171437a3c7c3d4e82cda99a321bf847399bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f712f75bf0603b358c696f1a62f2363254ab926615ab377291c7083308e3f2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:33:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6143e461a697375ca79df96494f8cf4a575e622428db241f50a52ae1c67acbc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e37a3261383445df2e4050fb0b3f92c0afc6379c71eb061475783a72cd37459f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cl
uster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:33:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:36Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:36 crc kubenswrapper[4721]: I0128 18:34:36.827829 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:36Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:36 crc kubenswrapper[4721]: I0128 18:34:36.842060 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56147b9675ae31e5e8b4473346c69d90a1013ab084214a7fc34295f810b229a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5dc3277d9793a59f2e2a2b4d015782f383df20b594be435f894d24dbe5b837cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:36Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:36 crc kubenswrapper[4721]: I0128 18:34:36.864857 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wr282" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"70686e42-b434-4ff9-9753-cfc870beef82\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7lkbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7lkbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7lkbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-7lkbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7lkbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7lkbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7lkbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7lkbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://733c81890010402d252a1945284a85a3278b603ede433530865f680a1be02477\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://733c81890010402d252a1945284a85a3278b603ede433530865f680a1be02477\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7lkbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:34:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-wr282\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:36Z 
is after 2025-08-24T17:21:41Z" Jan 28 18:34:36 crc kubenswrapper[4721]: I0128 18:34:36.871243 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:34:36 crc kubenswrapper[4721]: I0128 18:34:36.871274 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:34:36 crc kubenswrapper[4721]: I0128 18:34:36.871287 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:34:36 crc kubenswrapper[4721]: I0128 18:34:36.871316 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:34:36 crc kubenswrapper[4721]: I0128 18:34:36.871330 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:34:36Z","lastTransitionTime":"2026-01-28T18:34:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:34:36 crc kubenswrapper[4721]: I0128 18:34:36.878844 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rgqdt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c0a22020-3f34-4895-beec-2ed5d829ea79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d26691be7d95ffd613cc84f59222c67d29ab459d1c42ca07d20fe928d7fbc4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-
cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l86pm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:34:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rgqdt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:36Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:36 crc kubenswrapper[4721]: I0128 18:34:36.897770 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"50459cb7-bb46-4f2f-a119-03a6102ad146\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:33:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97a290731225e8b998c044c9547db36b5af73889929db5fbc37b6b4170f5dbb7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9efdfabfba8f11d2bdeafad77520260137fcfff3ca51ae77afd4ee9b141f602\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3c1b4d37e11f8a4ee9539e5ea3cce77b7d382133b9e59ccd9c3e8aa2b308754\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52a1199a0611555765afe8fcf01774278848806
4c958d0c75c46d51ddc2ff9d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://316506ed1e9f91314d427e1b6c3a44ee58d0c4a3a8a93b97bdf098703a6a2f34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f1a6a669a0341baa2d7fb17bcebfe702f96d1192ca35c3f2d00e463778271d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f1a6a669a0341baa2d7fb17bcebfe702f96d1192ca35c3f2d00e463778271d6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:33:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d3a248e15466c89ce3b237a22ce54365004ea86dc1541e1ac44e028f91bf19f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d3a248e15466c89ce3b237a22ce54365004ea86dc1541e1ac44e028f91bf19f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:34:02Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f2e1f3a63445bad45029602f52863bfdf2fab495711e3318d68fee042530acb1\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f2e1f3a63445bad45029602f52863bfdf2fab495711e3318d68fee042530acb1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:34:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:33:56Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:36Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:36 crc kubenswrapper[4721]: I0128 18:34:36.910872 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"94f18835-9a2d-4427-bc71-e4cd48b94c19\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:33:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:33:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:33:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a345c3bb49064600db61f16588229b6877293710e6c21c15d91746c2f1d50b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://93950542dfa7bda5686ef6d8ff927d14baafd149893c732658e9aa3916be64ae\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://486a6284a1e3930aac13368062b8796b4c271214dd08cb60c7ae2ce7b4a45a05\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca3243d75b5ce4aff178d82c45c7d1f765013233b5736600151adb4801aa462b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ca3243d75b5ce4aff178d82c45c7d1f765013233b5736600151adb4801aa462b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T18:34:21Z\\\",\\\"message\\\":\\\"le observer\\\\nW0128 18:34:20.724978 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0128 18:34:20.725313 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 18:34:20.726636 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2540251349/tls.crt::/tmp/serving-cert-2540251349/tls.key\\\\\\\"\\\\nI0128 18:34:21.177830 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0128 18:34:21.196258 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0128 18:34:21.196291 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0128 18:34:21.196317 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0128 18:34:21.196323 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0128 18:34:21.230729 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0128 18:34:21.230760 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0128 18:34:21.230771 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 18:34:21.230781 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 18:34:21.230789 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0128 18:34:21.230795 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0128 18:34:21.230800 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0128 18:34:21.230804 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0128 18:34:21.233981 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T18:34:20Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://90f1279d8262e042527207dbe56a1d08ba554650510bebfc53bd24cd84532026\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:06Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1499c7b625ec721799e4cf54e3de39ab75a752bbb0450a8eeffcdb6259f8a585\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1499c7b625ec721799e4cf54e3de39ab75a752bbb0450a8eeffcdb6259f8a585\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:33:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:33:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:36Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:36 crc kubenswrapper[4721]: I0128 18:34:36.924604 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7vsph" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"942c0bcf-8f75-42e8-a5c0-af4c640eb13c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s84mk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5222e9a5ac55f20acbe1105e84f22c96889f410246b7ef423236f5d3e216f43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d5222e9a5ac55f20acbe1105e84f22c96889f410246b7ef423236f5d3e216f43\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s84mk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e702f148f20abb9e5e9453eb1b9ae0cc094138beb1fa048753ff3b497517e942\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e702f148f20abb9e5e9453eb1b9ae0cc094138beb1fa048753ff3b497517e942\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s84mk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a55ef87212072ac9f6d05d27d54f7a3f13add617b1a07a2fd1e0a652fe15cb3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a55ef87212072ac9f6d05d27d54f7a3f13add617b1a07a2fd1e0a652fe15cb3a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:34:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s84mk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea7d40aa704ad5ddaaa471ab8606dcece38b229fd8affcce9bb1959e5c19f092\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ea7d40aa704ad5ddaaa471ab8606dcece38b229fd8affcce9bb1959e5c19f092\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:34:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s84mk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1829c2eed51af896c5ccfa25c1f23ea971ea9d54e9db53415095c5ea43ac4062\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://182
9c2eed51af896c5ccfa25c1f23ea971ea9d54e9db53415095c5ea43ac4062\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:34:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s84mk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b576c2dcc0ad83742dbc75f744d08eb8b881f9be4f9d16f4d4d3004ce174f24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b576c2dcc0ad83742dbc75f744d08eb8b881f9be4f9d16f4d4d3004ce174f24\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:34:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s84mk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:34:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7vsph\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:36Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:36 crc kubenswrapper[4721]: I0128 18:34:36.937371 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef61ab09457440960cf27586f65751f928ccc999db19fd16364262ce2449cd97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:36Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:36 crc kubenswrapper[4721]: I0128 18:34:36.946423 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-lf92l" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"20d04cbd-fcf1-4d48-9cca-1dd29b13c938\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://709d01fbeba51f6f21b60d9fd60aa9b44ba9e4f2504c19b7fd8d279f8c13c1e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mvqr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:34:30Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-lf92l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:36Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:36 crc kubenswrapper[4721]: I0128 18:34:36.959389 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:36Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:36 crc kubenswrapper[4721]: I0128 18:34:36.975007 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:34:36 crc kubenswrapper[4721]: I0128 18:34:36.975074 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:34:36 crc kubenswrapper[4721]: I0128 18:34:36.975086 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:34:36 crc kubenswrapper[4721]: I0128 18:34:36.975102 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:34:36 crc kubenswrapper[4721]: I0128 18:34:36.975117 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:34:36Z","lastTransitionTime":"2026-01-28T18:34:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:34:36 crc kubenswrapper[4721]: I0128 18:34:36.975452 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa3af2d00ecbcf4d30c4c81191306dd45625f55032058b314ce2b91f6b2033e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:36Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:36 crc kubenswrapper[4721]: I0128 18:34:36.987341 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-76rx2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e3427a4-9a03-4a08-bf7f-7a5e96290ad6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9843dea4333a77dd5b0005984b9f8e7c7c993b28f89f5d6432477bfde3383339\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4cj5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bbd26672afb8ed608228f3f2101f430b1eaa5ccea0e8074fad597dec347c4522\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4cj5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:34:30Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-76rx2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:36Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:36 crc kubenswrapper[4721]: I0128 18:34:36.999630 4721 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:24Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:36Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:37 crc kubenswrapper[4721]: I0128 18:34:37.077464 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:34:37 crc kubenswrapper[4721]: I0128 18:34:37.077496 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:34:37 crc kubenswrapper[4721]: I0128 18:34:37.077504 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:34:37 crc kubenswrapper[4721]: I0128 18:34:37.077519 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:34:37 crc kubenswrapper[4721]: I0128 18:34:37.077529 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:34:37Z","lastTransitionTime":"2026-01-28T18:34:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:34:37 crc kubenswrapper[4721]: I0128 18:34:37.179982 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:34:37 crc kubenswrapper[4721]: I0128 18:34:37.180027 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:34:37 crc kubenswrapper[4721]: I0128 18:34:37.180036 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:34:37 crc kubenswrapper[4721]: I0128 18:34:37.180052 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:34:37 crc kubenswrapper[4721]: I0128 18:34:37.180061 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:34:37Z","lastTransitionTime":"2026-01-28T18:34:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:34:37 crc kubenswrapper[4721]: I0128 18:34:37.282569 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:34:37 crc kubenswrapper[4721]: I0128 18:34:37.282600 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:34:37 crc kubenswrapper[4721]: I0128 18:34:37.282612 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:34:37 crc kubenswrapper[4721]: I0128 18:34:37.282628 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:34:37 crc kubenswrapper[4721]: I0128 18:34:37.282640 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:34:37Z","lastTransitionTime":"2026-01-28T18:34:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:34:37 crc kubenswrapper[4721]: I0128 18:34:37.384562 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:34:37 crc kubenswrapper[4721]: I0128 18:34:37.384602 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:34:37 crc kubenswrapper[4721]: I0128 18:34:37.384611 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:34:37 crc kubenswrapper[4721]: I0128 18:34:37.384625 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:34:37 crc kubenswrapper[4721]: I0128 18:34:37.384634 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:34:37Z","lastTransitionTime":"2026-01-28T18:34:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:34:37 crc kubenswrapper[4721]: I0128 18:34:37.489108 4721 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-13 07:53:45.775197129 +0000 UTC Jan 28 18:34:37 crc kubenswrapper[4721]: I0128 18:34:37.489688 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:34:37 crc kubenswrapper[4721]: I0128 18:34:37.489737 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:34:37 crc kubenswrapper[4721]: I0128 18:34:37.489746 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:34:37 crc kubenswrapper[4721]: I0128 18:34:37.489763 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:34:37 crc kubenswrapper[4721]: I0128 18:34:37.489772 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:34:37Z","lastTransitionTime":"2026-01-28T18:34:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:34:37 crc kubenswrapper[4721]: I0128 18:34:37.591994 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:34:37 crc kubenswrapper[4721]: I0128 18:34:37.592031 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:34:37 crc kubenswrapper[4721]: I0128 18:34:37.592042 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:34:37 crc kubenswrapper[4721]: I0128 18:34:37.592058 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:34:37 crc kubenswrapper[4721]: I0128 18:34:37.592071 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:34:37Z","lastTransitionTime":"2026-01-28T18:34:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:34:37 crc kubenswrapper[4721]: I0128 18:34:37.694221 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:34:37 crc kubenswrapper[4721]: I0128 18:34:37.694256 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:34:37 crc kubenswrapper[4721]: I0128 18:34:37.694267 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:34:37 crc kubenswrapper[4721]: I0128 18:34:37.694281 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:34:37 crc kubenswrapper[4721]: I0128 18:34:37.694290 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:34:37Z","lastTransitionTime":"2026-01-28T18:34:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:34:37 crc kubenswrapper[4721]: I0128 18:34:37.796115 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:34:37 crc kubenswrapper[4721]: I0128 18:34:37.796143 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:34:37 crc kubenswrapper[4721]: I0128 18:34:37.796151 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:34:37 crc kubenswrapper[4721]: I0128 18:34:37.796180 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:34:37 crc kubenswrapper[4721]: I0128 18:34:37.796190 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:34:37Z","lastTransitionTime":"2026-01-28T18:34:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:34:37 crc kubenswrapper[4721]: I0128 18:34:37.798739 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-7vsph" event={"ID":"942c0bcf-8f75-42e8-a5c0-af4c640eb13c","Type":"ContainerStarted","Data":"8ce38a3c3aa72d0816ff32ec77af100dc6f2a8761362992f01b88f836c44f65f"} Jan 28 18:34:37 crc kubenswrapper[4721]: I0128 18:34:37.804386 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wr282" event={"ID":"70686e42-b434-4ff9-9753-cfc870beef82","Type":"ContainerStarted","Data":"ae0f678963d4efdfa09099257ad96a3ba4457e2819e237234b4137fab9b67f69"} Jan 28 18:34:37 crc kubenswrapper[4721]: I0128 18:34:37.804565 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-wr282" Jan 28 18:34:37 crc kubenswrapper[4721]: I0128 18:34:37.804580 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-wr282" Jan 28 18:34:37 crc kubenswrapper[4721]: I0128 18:34:37.813364 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:24Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:37Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:37 crc kubenswrapper[4721]: I0128 18:34:37.825671 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:24Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:37Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:37 crc kubenswrapper[4721]: I0128 18:34:37.828338 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-wr282" Jan 28 18:34:37 crc kubenswrapper[4721]: I0128 18:34:37.837741 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa3af2d00ecbcf4d30c4c81191306dd45625f55032058b314ce2b91f6b2033e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:37Z is after 2025-08-24T17:21:41Z" Jan 
28 18:34:37 crc kubenswrapper[4721]: I0128 18:34:37.848322 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-76rx2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e3427a4-9a03-4a08-bf7f-7a5e96290ad6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9843dea4333a77dd5b0005984b9f8e7c7c993b28f89f5d6432477bfde3383339\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4cj5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bbd26672afb8ed608228f3f2101f430b1eaa5ccea0e8074fad597dec347c4522\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4cj5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:34:30Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-76rx2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:37Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:37 crc kubenswrapper[4721]: I0128 18:34:37.856448 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-rk2l2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bdd30376-1599-4efc-bb55-7585e8702b60\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa0ff1b9df47dfd65597208cdd31f5cb8ce7f4c9c170d298db28d6f04da7b7a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wknj5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:34:32Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-rk2l2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:37Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:37 crc kubenswrapper[4721]: I0128 18:34:37.868313 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rgqdt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c0a22020-3f34-4895-beec-2ed5d829ea79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d26691be7d95ffd613cc84f59222c67d29ab459d1c42ca07d20fe928d7fbc4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l86pm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:34:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rgqdt\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:37Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:37 crc kubenswrapper[4721]: I0128 18:34:37.886877 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"50459cb7-bb46-4f2f-a119-03a6102ad146\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:33:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97a290731225e8b998c044c9547db36b5af73889929db5fbc37b6b4170f5dbb7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9efdfabfba8f11d2bdeafad77520260137fcfff3ca51ae77afd4ee9b141f602\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3c1b4d37e11f8a4ee9539e5ea3cce77b7d382133b9e59ccd9c3e8aa2b308754\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"la
stState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52a1199a0611555765afe8fcf017742788488064c958d0c75c46d51ddc2ff9d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://316506ed1e9f91314d427e1b6c3a44ee58d0c4a3a8a93b97bdf098703a6a2f34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f1a6a669a0341baa2d7fb17bcebfe702f96d1192ca35c3f2d00e463778271d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f1a6a669a0341baa2d7fb17bcebfe702f96d1192ca35c3f2d00e463778271d6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:33:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d3a248e15466c89ce3b237a22ce54365004ea86dc1541e1ac44e028f91bf19f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",
\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d3a248e15466c89ce3b237a22ce54365004ea86dc1541e1ac44e028f91bf19f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:34:02Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f2e1f3a63445bad45029602f52863bfdf2fab495711e3318d68fee042530acb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f2e1f3a63445bad45029602f52863bfdf2fab495711e3318d68fee042530acb1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:34:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:33:56Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:37Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:37 crc kubenswrapper[4721]: I0128 18:34:37.898511 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:34:37 crc kubenswrapper[4721]: I0128 18:34:37.898544 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:34:37 crc kubenswrapper[4721]: I0128 18:34:37.898553 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:34:37 crc kubenswrapper[4721]: I0128 18:34:37.898568 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:34:37 crc kubenswrapper[4721]: I0128 18:34:37.898576 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:34:37Z","lastTransitionTime":"2026-01-28T18:34:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:34:37 crc kubenswrapper[4721]: I0128 18:34:37.899923 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"94f18835-9a2d-4427-bc71-e4cd48b94c19\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:33:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:33:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:33:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a345c3bb49064600db61f16588229b6877293710e6c21c15d91746c2f1d50b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://93950542dfa7bda5686ef6d8ff927d14baafd149893c732658e9aa3916be64ae\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://486a6284a1e3930aac13368062b8796b4c271214dd08cb60c7ae2ce7b4a45a05\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca3243d75b5ce4aff178d82c45c7d1f765013233b5736600151adb4801aa462b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ca3243d75b5ce4aff178d82c45c7d1f765013233b5736600151adb4801aa462b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T18:34:21Z\\\",\\\"message\\\":\\\"le observer\\\\nW0128 18:34:20.724978 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0128 18:34:20.725313 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 18:34:20.726636 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2540251349/tls.crt::/tmp/serving-cert-2540251349/tls.key\\\\\\\"\\\\nI0128 18:34:21.177830 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0128 18:34:21.196258 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0128 18:34:21.196291 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0128 18:34:21.196317 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0128 18:34:21.196323 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0128 18:34:21.230729 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0128 18:34:21.230760 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0128 18:34:21.230771 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 18:34:21.230781 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 18:34:21.230789 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0128 18:34:21.230795 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0128 18:34:21.230800 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0128 18:34:21.230804 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0128 18:34:21.233981 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T18:34:20Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://90f1279d8262e042527207dbe56a1d08ba554650510bebfc53bd24cd84532026\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:06Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1499c7b625ec721799e4cf54e3de39ab75a752bbb0450a8eeffcdb6259f8a585\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1499c7b625ec721799e4cf54e3de39ab75a752bbb0450a8eeffcdb6259f8a585\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:33:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:33:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:37Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:37 crc kubenswrapper[4721]: I0128 18:34:37.911405 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"86fc797f-6769-4801-8cb0-b9f25c9ec29b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:33:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:33:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e8c8cd370cc146f15325497afef171437a3c7c3d4e82cda99a321bf847399bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f712f75bf0603b358c696f1a62f2363254ab926615ab377291c7083308e3f2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:33:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6143e461a697375ca79df96494f8cf4a575e622428db241f50a52ae1c67acbc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e37a3261383445df2e4050fb0b3f92c0afc6379c71eb061475783a72cd37459f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:33:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:37Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:37 crc kubenswrapper[4721]: I0128 18:34:37.922401 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:37Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:37 crc kubenswrapper[4721]: I0128 18:34:37.933939 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56147b9675ae31e5e8b4473346c69d90a1013ab084214a7fc34295f810b229a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5dc3277d9793a59f2e2a2b4d015782f383df20b594be435f894d24dbe5b837cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:37Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:37 crc kubenswrapper[4721]: I0128 18:34:37.951902 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wr282" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"70686e42-b434-4ff9-9753-cfc870beef82\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7lkbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7lkbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7lkbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-7lkbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7lkbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7lkbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7lkbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7lkbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://733c81890010402d252a1945284a85a3278b603ede433530865f680a1be02477\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://733c81890010402d252a1945284a85a3278b603ede433530865f680a1be02477\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7lkbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:34:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-wr282\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:37Z 
is after 2025-08-24T17:21:41Z" Jan 28 18:34:37 crc kubenswrapper[4721]: I0128 18:34:37.968945 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef61ab09457440960cf27586f65751f928ccc999db19fd16364262ce2449cd97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:37Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:37 crc kubenswrapper[4721]: I0128 18:34:37.978101 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-lf92l" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"20d04cbd-fcf1-4d48-9cca-1dd29b13c938\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://709d01fbeba51f6f21b60d9fd60aa9b44ba9e4f2504c19b7fd8d279f8c13c1e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mvqr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:34:30Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-lf92l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:37Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:37 crc kubenswrapper[4721]: I0128 18:34:37.990190 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7vsph" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"942c0bcf-8f75-42e8-a5c0-af4c640eb13c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ce38a3c3aa72d0816ff32ec77af100dc6f2a8761362992f01b88f836c44f65f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s84mk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5222e9a5ac55f20acbe1105e84f22c96889f410246b7ef423236f5d3e216f43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d5222e9a5ac55f20acbe1105e84f22c96889f410246b7ef423236f5d3e216f43\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s84mk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e702f148f20abb9e5e9453eb1b9ae0cc094138beb1fa048753ff3b497517e942\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e702f148f20abb9e5e9453eb1b9ae0cc094138beb1fa048753ff3b497517e942\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s84mk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a55ef87212072ac9f6d05d27d54f7a3f13add617b1a07a2fd1e0a652fe15cb3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a55ef87212072ac9f6d05d27d54f7a3f13add617b1a07a2fd1e0a652fe15cb3a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:34:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s84mk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea7d40aa704ad5ddaaa471ab8606dcece38b229fd8affcce9bb1959e5c19f092\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ea7d40aa704ad5ddaaa471ab8606dcece38b229fd8affcce9bb1959e5c19f092\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:34:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s84mk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1829c2eed51af896c5ccfa25c1f23ea971ea9d54e9db53415095c5ea43ac4062\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1829c2eed51af896c5ccfa25c1f23ea971ea9d54e9db53415095c5ea43ac4062\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:34:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s84mk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b576c2dcc0ad83742dbc75f744d08eb8b881f9be4f9d16f4d4d3004ce174f24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b576c2dcc0ad83742dbc75f744d08eb8b881f9be4f9d16f4d4d3004ce174f24\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:34:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s84mk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:34:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7vsph\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:37Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:38 crc kubenswrapper[4721]: I0128 18:34:38.001215 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:34:38 crc kubenswrapper[4721]: I0128 18:34:38.001261 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:34:38 crc 
kubenswrapper[4721]: I0128 18:34:38.001275 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:34:38 crc kubenswrapper[4721]: I0128 18:34:38.001295 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:34:38 crc kubenswrapper[4721]: I0128 18:34:38.001309 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:34:38Z","lastTransitionTime":"2026-01-28T18:34:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:34:38 crc kubenswrapper[4721]: I0128 18:34:38.005020 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef61ab09457440960cf27586f65751f928ccc999db19fd16364262ce2449cd97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:38Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:38 crc kubenswrapper[4721]: I0128 18:34:38.015875 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-lf92l" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"20d04cbd-fcf1-4d48-9cca-1dd29b13c938\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://709d01fbeba51f6f21b60d9fd60aa9b44ba9e4f2504c19b7fd8d279f8c13c1e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mvqr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:34:30Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-lf92l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:38Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:38 crc kubenswrapper[4721]: I0128 18:34:38.030417 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7vsph" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"942c0bcf-8f75-42e8-a5c0-af4c640eb13c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ce38a3c3aa72d0816ff32ec77af100dc6f2a8761362992f01b88f836c44f65f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s84mk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5222e9a5ac55f20acbe1105e84f22c96889f410246b7ef423236f5d3e216f43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d5222e9a5ac55f20acbe1105e84f22c96889f410246b7ef423236f5d3e216f43\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s84mk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e702f148f20abb9e5e9453eb1b9ae0cc094138beb1fa048753ff3b497517e942\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e702f148f20abb9e5e9453eb1b9ae0cc094138beb1fa048753ff3b497517e942\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s84mk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a55ef87212072ac9f6d05d27d54f7a3f13add617b1a07a2fd1e0a652fe15cb3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a55ef87212072ac9f6d05d27d54f7a3f13add617b1a07a2fd1e0a652fe15cb3a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:34:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s84mk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea7d40aa704ad5ddaaa471ab8606dcece38b229fd8affcce9bb1959e5c19f092\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ea7d40aa704ad5ddaaa471ab8606dcece38b229fd8affcce9bb1959e5c19f092\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:34:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s84mk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1829c2eed51af896c5ccfa25c1f23ea971ea9d54e9db53415095c5ea43ac4062\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1829c2eed51af896c5ccfa25c1f23ea971ea9d54e9db53415095c5ea43ac4062\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:34:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s84mk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b576c2dcc0ad83742dbc75f744d08eb8b881f9be4f9d16f4d4d3004ce174f24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b576c2dcc0ad83742dbc75f744d08eb8b881f9be4f9d16f4d4d3004ce174f24\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:34:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s84mk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:34:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7vsph\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:38Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:38 crc kubenswrapper[4721]: I0128 18:34:38.043064 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:24Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:38Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:38 crc kubenswrapper[4721]: I0128 18:34:38.053776 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:38Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:38 crc kubenswrapper[4721]: I0128 18:34:38.063926 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa3af2d00ecbcf4d30c4c81191306dd45625f55032058b314ce2b91f6b2033e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:38Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:38 crc kubenswrapper[4721]: I0128 18:34:38.073601 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-76rx2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e3427a4-9a03-4a08-bf7f-7a5e96290ad6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9843dea4333a77dd5b0005984b9f8e7c7c993b28f89f5d6432477bfde3383339\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4cj5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bbd26672afb8ed608228f3f2101f430b1eaa5ccea0e8074fad597dec347c4522\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4cj5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\
\\":\\\"2026-01-28T18:34:30Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-76rx2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:38Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:38 crc kubenswrapper[4721]: I0128 18:34:38.082759 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-rk2l2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bdd30376-1599-4efc-bb55-7585e8702b60\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa0ff1b9df47dfd65597208cdd31f5cb8ce7f4c9c170d298db28d6f04da7b7a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wknj5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:34:32Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-rk2l2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:38Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:38 crc kubenswrapper[4721]: I0128 18:34:38.100842 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"50459cb7-bb46-4f2f-a119-03a6102ad146\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:33:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97a290731225e8b998c044c9547db36b5af73889929db5fbc37b6b4170f5dbb7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9efdfabfba8f11d2bdeafad77520260137fcfff3ca51ae77afd4ee9b141f602\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3c1b4d37e11f8a4ee9539e5ea3cce77b7d382133b9e59ccd9c3e8aa2b308754\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52a1199a0611555765afe8fcf01774278848806
4c958d0c75c46d51ddc2ff9d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://316506ed1e9f91314d427e1b6c3a44ee58d0c4a3a8a93b97bdf098703a6a2f34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f1a6a669a0341baa2d7fb17bcebfe702f96d1192ca35c3f2d00e463778271d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f1a6a669a0341baa2d7fb17bcebfe702f96d1192ca35c3f2d00e463778271d6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:33:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d3a248e15466c89ce3b237a22ce54365004ea86dc1541e1ac44e028f91bf19f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d3a248e15466c89ce3b237a22ce54365004ea86dc1541e1ac44e028f91bf19f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:34:02Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f2e1f3a63445bad45029602f52863bfdf2fab495711e3318d68fee042530acb1\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f2e1f3a63445bad45029602f52863bfdf2fab495711e3318d68fee042530acb1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:34:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:33:56Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:38Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:38 crc kubenswrapper[4721]: I0128 18:34:38.103194 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:34:38 crc kubenswrapper[4721]: I0128 18:34:38.103224 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:34:38 crc kubenswrapper[4721]: I0128 18:34:38.103235 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:34:38 crc kubenswrapper[4721]: I0128 18:34:38.103249 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:34:38 crc kubenswrapper[4721]: I0128 18:34:38.103258 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:34:38Z","lastTransitionTime":"2026-01-28T18:34:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:34:38 crc kubenswrapper[4721]: I0128 18:34:38.114060 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"94f18835-9a2d-4427-bc71-e4cd48b94c19\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:33:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:33:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:33:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a345c3bb49064600db61f16588229b6877293710e6c21c15d91746c2f1d50b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://93950542dfa7bda5686ef6d8ff927d14baafd149893c732658e9aa3916be64ae\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://486a6284a1e3930aac13368062b8796b4c271214dd08cb60c7ae2ce7b4a45a05\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca3243d75b5ce4aff178d82c45c7d1f765013233b5736600151adb4801aa462b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ca3243d75b5ce4aff178d82c45c7d1f765013233b5736600151adb4801aa462b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T18:34:21Z\\\",\\\"message\\\":\\\"le observer\\\\nW0128 18:34:20.724978 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0128 18:34:20.725313 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 18:34:20.726636 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2540251349/tls.crt::/tmp/serving-cert-2540251349/tls.key\\\\\\\"\\\\nI0128 18:34:21.177830 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0128 18:34:21.196258 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0128 18:34:21.196291 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0128 18:34:21.196317 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0128 18:34:21.196323 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0128 18:34:21.230729 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0128 18:34:21.230760 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0128 18:34:21.230771 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 18:34:21.230781 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 18:34:21.230789 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0128 18:34:21.230795 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0128 18:34:21.230800 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0128 18:34:21.230804 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0128 18:34:21.233981 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T18:34:20Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://90f1279d8262e042527207dbe56a1d08ba554650510bebfc53bd24cd84532026\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:06Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1499c7b625ec721799e4cf54e3de39ab75a752bbb0450a8eeffcdb6259f8a585\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1499c7b625ec721799e4cf54e3de39ab75a752bbb0450a8eeffcdb6259f8a585\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:33:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:33:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:38Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:38 crc kubenswrapper[4721]: I0128 18:34:38.126382 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"86fc797f-6769-4801-8cb0-b9f25c9ec29b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:33:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:33:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e8c8cd370cc146f15325497afef171437a3c7c3d4e82cda99a321bf847399bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f712f75bf0603b358c696f1a62f2363254ab926615ab377291c7083308e3f2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:33:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6143e461a697375ca79df96494f8cf4a575e622428db241f50a52ae1c67acbc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e37a3261383445df2e4050fb0b3f92c0afc6379c71eb061475783a72cd37459f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:33:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:38Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:38 crc kubenswrapper[4721]: I0128 18:34:38.141572 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:38Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:38 crc kubenswrapper[4721]: I0128 18:34:38.153547 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56147b9675ae31e5e8b4473346c69d90a1013ab084214a7fc34295f810b229a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5dc3277d9793a59f2e2a2b4d015782f383df20b594be435f894d24dbe5b837cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:38Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:38 crc kubenswrapper[4721]: I0128 18:34:38.172533 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wr282" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"70686e42-b434-4ff9-9753-cfc870beef82\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:30Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:30Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ac8a8ea8e677c233fbcb1ac69446f0251b2e390e84a39cd77c7ccf1a2041908\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7lkbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://989948e16979a2cd8568ed93334056da9feb5bd7226f4124428dcffc09f13d41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7lkbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2cc9ef8160c40e72ae736136ef051b06552caea2539fad7969c5f923eba0c2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7lkbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7df50ebf5999c8b004f27f801fc166377523c7b42171fec957ab82c88b529794\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7lkbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11610a0739792fa0417c894ad5b1c0d46d7188b585f8f4c1a1e7e66cd6d8bd46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7lkbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://44279a74a34d26b6eee05ac26d51a21492dddab4288ca5e09665c191cbacd90c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7lkbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae0f678963d4efdfa09099257ad96a3ba4457e28
19e237234b4137fab9b67f69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7lkbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://373ab38644697d3dbfe782ed66982ac1d5510d0a3bc39072c8113f71cc9e97f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccoun
t\\\",\\\"name\\\":\\\"kube-api-access-7lkbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://733c81890010402d252a1945284a85a3278b603ede433530865f680a1be02477\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://733c81890010402d252a1945284a85a3278b603ede433530865f680a1be02477\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7lkbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:34:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-wr282\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:38Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:38 crc kubenswrapper[4721]: I0128 18:34:38.185385 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rgqdt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c0a22020-3f34-4895-beec-2ed5d829ea79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d26691be7d95ffd613cc84f59222c67d29ab459d1c42ca07d20fe928d7fbc4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l86pm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:34:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rgqdt\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:38Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:38 crc kubenswrapper[4721]: I0128 18:34:38.205284 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:34:38 crc kubenswrapper[4721]: I0128 18:34:38.205316 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:34:38 crc kubenswrapper[4721]: I0128 18:34:38.205325 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:34:38 crc kubenswrapper[4721]: I0128 18:34:38.205338 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:34:38 crc kubenswrapper[4721]: I0128 18:34:38.205348 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:34:38Z","lastTransitionTime":"2026-01-28T18:34:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:34:38 crc kubenswrapper[4721]: I0128 18:34:38.307868 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:34:38 crc kubenswrapper[4721]: I0128 18:34:38.307911 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:34:38 crc kubenswrapper[4721]: I0128 18:34:38.307919 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:34:38 crc kubenswrapper[4721]: I0128 18:34:38.307933 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:34:38 crc kubenswrapper[4721]: I0128 18:34:38.307943 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:34:38Z","lastTransitionTime":"2026-01-28T18:34:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:34:38 crc kubenswrapper[4721]: I0128 18:34:38.410437 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:34:38 crc kubenswrapper[4721]: I0128 18:34:38.410786 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:34:38 crc kubenswrapper[4721]: I0128 18:34:38.410798 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:34:38 crc kubenswrapper[4721]: I0128 18:34:38.410817 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:34:38 crc kubenswrapper[4721]: I0128 18:34:38.410831 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:34:38Z","lastTransitionTime":"2026-01-28T18:34:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:34:38 crc kubenswrapper[4721]: I0128 18:34:38.490222 4721 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-18 05:48:59.040411217 +0000 UTC Jan 28 18:34:38 crc kubenswrapper[4721]: I0128 18:34:38.513519 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:34:38 crc kubenswrapper[4721]: I0128 18:34:38.513570 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:34:38 crc kubenswrapper[4721]: I0128 18:34:38.513581 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:34:38 crc kubenswrapper[4721]: I0128 18:34:38.513598 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:34:38 crc kubenswrapper[4721]: I0128 18:34:38.513609 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:34:38Z","lastTransitionTime":"2026-01-28T18:34:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:34:38 crc kubenswrapper[4721]: I0128 18:34:38.527847 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 18:34:38 crc kubenswrapper[4721]: I0128 18:34:38.527867 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 18:34:38 crc kubenswrapper[4721]: E0128 18:34:38.527967 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 18:34:38 crc kubenswrapper[4721]: E0128 18:34:38.528045 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 18:34:38 crc kubenswrapper[4721]: I0128 18:34:38.527864 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 18:34:38 crc kubenswrapper[4721]: E0128 18:34:38.528138 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 18:34:38 crc kubenswrapper[4721]: I0128 18:34:38.616026 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:34:38 crc kubenswrapper[4721]: I0128 18:34:38.616068 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:34:38 crc kubenswrapper[4721]: I0128 18:34:38.616078 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:34:38 crc kubenswrapper[4721]: I0128 18:34:38.616093 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:34:38 crc kubenswrapper[4721]: I0128 18:34:38.616103 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:34:38Z","lastTransitionTime":"2026-01-28T18:34:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:34:38 crc kubenswrapper[4721]: I0128 18:34:38.718450 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:34:38 crc kubenswrapper[4721]: I0128 18:34:38.718487 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:34:38 crc kubenswrapper[4721]: I0128 18:34:38.718498 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:34:38 crc kubenswrapper[4721]: I0128 18:34:38.718517 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:34:38 crc kubenswrapper[4721]: I0128 18:34:38.718527 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:34:38Z","lastTransitionTime":"2026-01-28T18:34:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:34:38 crc kubenswrapper[4721]: I0128 18:34:38.808952 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-wr282" Jan 28 18:34:38 crc kubenswrapper[4721]: I0128 18:34:38.821465 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:34:38 crc kubenswrapper[4721]: I0128 18:34:38.821553 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:34:38 crc kubenswrapper[4721]: I0128 18:34:38.821564 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:34:38 crc kubenswrapper[4721]: I0128 18:34:38.821578 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:34:38 crc kubenswrapper[4721]: I0128 18:34:38.821587 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:34:38Z","lastTransitionTime":"2026-01-28T18:34:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:34:38 crc kubenswrapper[4721]: I0128 18:34:38.872850 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-wr282" Jan 28 18:34:38 crc kubenswrapper[4721]: I0128 18:34:38.886433 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-rk2l2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bdd30376-1599-4efc-bb55-7585e8702b60\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa0ff1b9df47dfd65597208cdd31f5cb8ce7f4c9c170d298db28d6f04da7b7a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wknj5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:34:32Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-rk2l2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:38Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:38 crc kubenswrapper[4721]: I0128 18:34:38.898847 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"86fc797f-6769-4801-8cb0-b9f25c9ec29b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:33:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:33:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e8c8cd370cc146f15325497afef171437a3c7c3d4e82cda99a321bf847399bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f712f75bf0603b358c696f1a62f2363254ab926615ab377291c7083308e3f2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:33:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6143e461a697375ca79df96494f8cf4a575e622428db241f50a52ae1c67acbc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e37a3261383445df2e4050fb0b3f92c0afc6379c71eb061475783a72cd37459f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:33:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:38Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:38 crc kubenswrapper[4721]: I0128 18:34:38.911729 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:38Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:38 crc kubenswrapper[4721]: I0128 18:34:38.924895 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:34:38 crc kubenswrapper[4721]: I0128 18:34:38.925099 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:34:38 crc kubenswrapper[4721]: I0128 18:34:38.925159 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:34:38 crc kubenswrapper[4721]: I0128 18:34:38.925244 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:34:38 crc kubenswrapper[4721]: I0128 18:34:38.925311 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:34:38Z","lastTransitionTime":"2026-01-28T18:34:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:34:38 crc kubenswrapper[4721]: I0128 18:34:38.926881 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56147b9675ae31e5e8b4473346c69d90a1013ab084214a7fc34295f810b229a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5dc3277d9793a59f2e2a2b4d015782f383df20b594be435f894d24dbe5b837cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:38Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:38 crc kubenswrapper[4721]: I0128 18:34:38.946936 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wr282" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"70686e42-b434-4ff9-9753-cfc870beef82\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ac8a8ea8e677c233fbcb1ac69446f0251b2e390e84a39cd77c7ccf1a2041908\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7lkbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://989948e16979a2cd8568ed93334056da9feb5bd7226f4124428dcffc09f13d41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7lkbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2cc9ef8160c40e72ae736136ef051b06552caea2539fad7969c5f923eba0c2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7lkbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7df50ebf5999c8b004f27f801fc166377523c7b42171fec957ab82c88b529794\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7lkbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11610a0739792fa0417c894ad5b1c0d46d7188b585f8f4c1a1e7e66cd6d8bd46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7lkbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://44279a74a34d26b6eee05ac26d51a21492dddab4288ca5e09665c191cbacd90c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7lkbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae0f678963d4efdfa09099257ad96a3ba4457e2819e237234b4137fab9b67f69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7lkbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"D
isabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://373ab38644697d3dbfe782ed66982ac1d5510d0a3bc39072c8113f71cc9e97f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7lkbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://733c81890010402d252a1945284a85a3278b603ede433530865f680a1be02477\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://733c81890010402d252a1945284a85a3278b603ede433530865f680a1be02477\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7lkbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:34:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-wr282\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:38Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:38 crc kubenswrapper[4721]: I0128 18:34:38.961592 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rgqdt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c0a22020-3f34-4895-beec-2ed5d829ea79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d26691be7d95ffd613cc84f59222c67d29ab459d1c42ca07d20fe928d7fbc4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l86pm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:34:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rgqdt\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:38Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:38 crc kubenswrapper[4721]: I0128 18:34:38.979948 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"50459cb7-bb46-4f2f-a119-03a6102ad146\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:33:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97a290731225e8b998c044c9547db36b5af73889929db5fbc37b6b4170f5dbb7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9efdfabfba8f11d2bdeafad77520260137fcfff3ca51ae77afd4ee9b141f602\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3c1b4d37e11f8a4ee9539e5ea3cce77b7d382133b9e59ccd9c3e8aa2b308754\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"la
stState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52a1199a0611555765afe8fcf017742788488064c958d0c75c46d51ddc2ff9d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://316506ed1e9f91314d427e1b6c3a44ee58d0c4a3a8a93b97bdf098703a6a2f34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f1a6a669a0341baa2d7fb17bcebfe702f96d1192ca35c3f2d00e463778271d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f1a6a669a0341baa2d7fb17bcebfe702f96d1192ca35c3f2d00e463778271d6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:33:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d3a248e15466c89ce3b237a22ce54365004ea86dc1541e1ac44e028f91bf19f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",
\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d3a248e15466c89ce3b237a22ce54365004ea86dc1541e1ac44e028f91bf19f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:34:02Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f2e1f3a63445bad45029602f52863bfdf2fab495711e3318d68fee042530acb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f2e1f3a63445bad45029602f52863bfdf2fab495711e3318d68fee042530acb1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:34:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:33:56Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:38Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:38 crc kubenswrapper[4721]: I0128 18:34:38.993567 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"94f18835-9a2d-4427-bc71-e4cd48b94c19\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:33:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:33:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:33:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a345c3bb49064600db61f16588229b6877293710e6c21c15d91746c2f1d50b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://93950542dfa7bda5686ef6d8ff927d14baafd149893c732658e9aa3916be64ae\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://486a6284a1e3930aac13368062b8796b4c271214dd08cb60c7ae2ce7b4a45a05\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca3243d75b5ce4aff178d82c45c7d1f765013233b5736600151adb4801aa462b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ca3243d75b5ce4aff178d82c45c7d1f765013233b5736600151adb4801aa462b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T18:34:21Z\\\",\\\"message\\\":\\\"le observer\\\\nW0128 18:34:20.724978 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0128 18:34:20.725313 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 18:34:20.726636 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2540251349/tls.crt::/tmp/serving-cert-2540251349/tls.key\\\\\\\"\\\\nI0128 18:34:21.177830 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0128 18:34:21.196258 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0128 18:34:21.196291 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0128 18:34:21.196317 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0128 18:34:21.196323 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0128 18:34:21.230729 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0128 18:34:21.230760 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0128 18:34:21.230771 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 18:34:21.230781 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 18:34:21.230789 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0128 18:34:21.230795 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0128 18:34:21.230800 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0128 18:34:21.230804 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0128 18:34:21.233981 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T18:34:20Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://90f1279d8262e042527207dbe56a1d08ba554650510bebfc53bd24cd84532026\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:06Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1499c7b625ec721799e4cf54e3de39ab75a752bbb0450a8eeffcdb6259f8a585\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1499c7b625ec721799e4cf54e3de39ab75a752bbb0450a8eeffcdb6259f8a585\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:33:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:33:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:38Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:39 crc kubenswrapper[4721]: I0128 18:34:39.008745 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7vsph" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"942c0bcf-8f75-42e8-a5c0-af4c640eb13c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ce38a3c3aa72d0816ff32ec77af100dc6f2a8761362992f01b88f836c44f65f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s84mk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5222e9a5ac55f20acbe1105e84f22c96889f410246b7ef423236f5d3e216f43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d5222e9a5ac55f20acbe1105e84f22c96889f410246b7ef423236f5d3e216f43\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s84mk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e702f148f20abb9e5e9453eb1b9ae0cc094138beb1fa048753ff3b497517e942\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e702f148f20abb9e5e9453eb1b9ae0cc094138beb1fa048753ff3b497517e942\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s84mk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a55ef87212072ac9f6d05d27d54f7a3f13add617b1a07a2fd1e0a652fe15cb3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a55ef87212072ac9f6d05d27d54f7a3f13add617b1a07a2fd1e0a652fe15cb3a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:34:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s84mk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea7d40aa704ad5ddaaa471ab8606dcece38b229fd8affcce9bb1959e5c19f092\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ea7d40aa704ad5ddaaa471ab8606dcece38b229fd8affcce9bb1959e5c19f092\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:34:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s84mk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1829c2eed51af896c5ccfa25c1f23ea971ea9d54e9db53415095c5ea43ac4062\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1829c2eed51af896c5ccfa25c1f23ea971ea9d54e9db53415095c5ea43ac4062\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:34:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s84mk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b576c2dcc0ad83742dbc75f744d08eb8b881f9be4f9d16f4d4d3004ce174f24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b576c2dcc0ad83742dbc75f744d08eb8b881f9be4f9d16f4d4d3004ce174f24\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:34:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s84mk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:34:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7vsph\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:39Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:39 crc kubenswrapper[4721]: I0128 18:34:39.020989 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef61ab09457440960cf27586f65751f928ccc999db19fd16364262ce2449cd97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:39Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:39 crc kubenswrapper[4721]: I0128 18:34:39.027275 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:34:39 crc kubenswrapper[4721]: I0128 18:34:39.027324 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:34:39 crc kubenswrapper[4721]: I0128 18:34:39.027335 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:34:39 crc kubenswrapper[4721]: I0128 18:34:39.027360 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:34:39 crc kubenswrapper[4721]: I0128 18:34:39.027376 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:34:39Z","lastTransitionTime":"2026-01-28T18:34:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:34:39 crc kubenswrapper[4721]: I0128 18:34:39.033930 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-lf92l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"20d04cbd-fcf1-4d48-9cca-1dd29b13c938\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://709d01fbeba51f6f21b60d9fd60aa9b44ba9e4f2504c19b7fd8d279f8c13c1e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mvqr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:34:30Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-lf92l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:39Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:39 crc kubenswrapper[4721]: I0128 18:34:39.045855 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:24Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:39Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:39 crc kubenswrapper[4721]: I0128 18:34:39.058581 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa3af2d00ecbcf4d30c4c81191306dd45625f55032058b314ce2b91f6b2033e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:39Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:39 crc kubenswrapper[4721]: I0128 18:34:39.070063 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-76rx2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e3427a4-9a03-4a08-bf7f-7a5e96290ad6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9843dea4333a77dd5b0005984b9f8e7c7c993b28f89f5d6432477bfde3383339\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4cj5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bbd26672afb8ed608228f3f2101f430b1eaa5ccea0e8074fad597dec347c4522\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4cj5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:34:30Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-76rx2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:39Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:39 crc kubenswrapper[4721]: I0128 18:34:39.083546 4721 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:24Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:39Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:39 crc kubenswrapper[4721]: I0128 18:34:39.129890 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:34:39 crc kubenswrapper[4721]: I0128 18:34:39.129936 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:34:39 crc kubenswrapper[4721]: I0128 18:34:39.129948 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:34:39 crc kubenswrapper[4721]: I0128 18:34:39.129965 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:34:39 crc kubenswrapper[4721]: I0128 18:34:39.129978 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:34:39Z","lastTransitionTime":"2026-01-28T18:34:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:34:39 crc kubenswrapper[4721]: I0128 18:34:39.232742 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:34:39 crc kubenswrapper[4721]: I0128 18:34:39.232782 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:34:39 crc kubenswrapper[4721]: I0128 18:34:39.232794 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:34:39 crc kubenswrapper[4721]: I0128 18:34:39.232808 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:34:39 crc kubenswrapper[4721]: I0128 18:34:39.232820 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:34:39Z","lastTransitionTime":"2026-01-28T18:34:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:34:39 crc kubenswrapper[4721]: I0128 18:34:39.335295 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:34:39 crc kubenswrapper[4721]: I0128 18:34:39.335327 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:34:39 crc kubenswrapper[4721]: I0128 18:34:39.335336 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:34:39 crc kubenswrapper[4721]: I0128 18:34:39.335352 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:34:39 crc kubenswrapper[4721]: I0128 18:34:39.335363 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:34:39Z","lastTransitionTime":"2026-01-28T18:34:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:34:39 crc kubenswrapper[4721]: I0128 18:34:39.441362 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:34:39 crc kubenswrapper[4721]: I0128 18:34:39.441396 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:34:39 crc kubenswrapper[4721]: I0128 18:34:39.441408 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:34:39 crc kubenswrapper[4721]: I0128 18:34:39.441424 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:34:39 crc kubenswrapper[4721]: I0128 18:34:39.441436 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:34:39Z","lastTransitionTime":"2026-01-28T18:34:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:34:39 crc kubenswrapper[4721]: I0128 18:34:39.491237 4721 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-28 00:14:44.549227149 +0000 UTC Jan 28 18:34:39 crc kubenswrapper[4721]: I0128 18:34:39.543889 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:34:39 crc kubenswrapper[4721]: I0128 18:34:39.543930 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:34:39 crc kubenswrapper[4721]: I0128 18:34:39.543940 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:34:39 crc kubenswrapper[4721]: I0128 18:34:39.543955 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:34:39 crc kubenswrapper[4721]: I0128 18:34:39.543965 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:34:39Z","lastTransitionTime":"2026-01-28T18:34:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:34:39 crc kubenswrapper[4721]: I0128 18:34:39.646425 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:34:39 crc kubenswrapper[4721]: I0128 18:34:39.646460 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:34:39 crc kubenswrapper[4721]: I0128 18:34:39.646469 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:34:39 crc kubenswrapper[4721]: I0128 18:34:39.646484 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:34:39 crc kubenswrapper[4721]: I0128 18:34:39.646494 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:34:39Z","lastTransitionTime":"2026-01-28T18:34:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:34:39 crc kubenswrapper[4721]: I0128 18:34:39.748526 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:34:39 crc kubenswrapper[4721]: I0128 18:34:39.748885 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:34:39 crc kubenswrapper[4721]: I0128 18:34:39.748935 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:34:39 crc kubenswrapper[4721]: I0128 18:34:39.748959 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:34:39 crc kubenswrapper[4721]: I0128 18:34:39.748974 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:34:39Z","lastTransitionTime":"2026-01-28T18:34:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:34:39 crc kubenswrapper[4721]: I0128 18:34:39.851887 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:34:39 crc kubenswrapper[4721]: I0128 18:34:39.851948 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:34:39 crc kubenswrapper[4721]: I0128 18:34:39.851962 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:34:39 crc kubenswrapper[4721]: I0128 18:34:39.851982 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:34:39 crc kubenswrapper[4721]: I0128 18:34:39.851995 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:34:39Z","lastTransitionTime":"2026-01-28T18:34:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:34:39 crc kubenswrapper[4721]: I0128 18:34:39.954644 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:34:39 crc kubenswrapper[4721]: I0128 18:34:39.954681 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:34:39 crc kubenswrapper[4721]: I0128 18:34:39.954690 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:34:39 crc kubenswrapper[4721]: I0128 18:34:39.954707 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:34:39 crc kubenswrapper[4721]: I0128 18:34:39.954717 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:34:39Z","lastTransitionTime":"2026-01-28T18:34:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:34:40 crc kubenswrapper[4721]: I0128 18:34:40.057747 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:34:40 crc kubenswrapper[4721]: I0128 18:34:40.057789 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:34:40 crc kubenswrapper[4721]: I0128 18:34:40.057798 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:34:40 crc kubenswrapper[4721]: I0128 18:34:40.057814 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:34:40 crc kubenswrapper[4721]: I0128 18:34:40.057826 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:34:40Z","lastTransitionTime":"2026-01-28T18:34:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:34:40 crc kubenswrapper[4721]: I0128 18:34:40.159824 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:34:40 crc kubenswrapper[4721]: I0128 18:34:40.159864 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:34:40 crc kubenswrapper[4721]: I0128 18:34:40.159876 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:34:40 crc kubenswrapper[4721]: I0128 18:34:40.159892 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:34:40 crc kubenswrapper[4721]: I0128 18:34:40.159904 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:34:40Z","lastTransitionTime":"2026-01-28T18:34:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:34:40 crc kubenswrapper[4721]: I0128 18:34:40.261572 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:34:40 crc kubenswrapper[4721]: I0128 18:34:40.261614 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:34:40 crc kubenswrapper[4721]: I0128 18:34:40.261627 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:34:40 crc kubenswrapper[4721]: I0128 18:34:40.261646 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:34:40 crc kubenswrapper[4721]: I0128 18:34:40.261659 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:34:40Z","lastTransitionTime":"2026-01-28T18:34:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:34:40 crc kubenswrapper[4721]: I0128 18:34:40.322230 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 18:34:40 crc kubenswrapper[4721]: I0128 18:34:40.322316 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 18:34:40 crc kubenswrapper[4721]: E0128 18:34:40.322334 4721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 18:34:56.322311035 +0000 UTC m=+62.047616605 (durationBeforeRetry 16s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 18:34:40 crc kubenswrapper[4721]: I0128 18:34:40.322363 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 18:34:40 crc kubenswrapper[4721]: I0128 18:34:40.322393 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 18:34:40 crc kubenswrapper[4721]: E0128 18:34:40.322413 4721 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 28 18:34:40 crc kubenswrapper[4721]: E0128 18:34:40.322425 4721 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 28 18:34:40 crc kubenswrapper[4721]: E0128 18:34:40.322435 4721 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 28 18:34:40 crc kubenswrapper[4721]: E0128 18:34:40.322463 4721 configmap.go:193] Couldn't 
get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 28 18:34:40 crc kubenswrapper[4721]: E0128 18:34:40.322469 4721 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 28 18:34:40 crc kubenswrapper[4721]: E0128 18:34:40.322466 4721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-28 18:34:56.322459109 +0000 UTC m=+62.047764669 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 28 18:34:40 crc kubenswrapper[4721]: E0128 18:34:40.322489 4721 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 28 18:34:40 crc kubenswrapper[4721]: E0128 18:34:40.322503 4721 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 28 18:34:40 crc kubenswrapper[4721]: E0128 18:34:40.322505 4721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-28 18:34:56.322495061 +0000 UTC m=+62.047800621 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 28 18:34:40 crc kubenswrapper[4721]: I0128 18:34:40.322423 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 18:34:40 crc kubenswrapper[4721]: E0128 18:34:40.322535 4721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-28 18:34:56.322525131 +0000 UTC m=+62.047830691 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 28 18:34:40 crc kubenswrapper[4721]: E0128 18:34:40.322559 4721 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 28 18:34:40 crc kubenswrapper[4721]: E0128 18:34:40.322605 4721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-28 18:34:56.322590714 +0000 UTC m=+62.047896274 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 28 18:34:40 crc kubenswrapper[4721]: I0128 18:34:40.363745 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:34:40 crc kubenswrapper[4721]: I0128 18:34:40.363785 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:34:40 crc kubenswrapper[4721]: I0128 18:34:40.363794 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:34:40 crc kubenswrapper[4721]: I0128 18:34:40.363809 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:34:40 crc kubenswrapper[4721]: I0128 18:34:40.363817 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:34:40Z","lastTransitionTime":"2026-01-28T18:34:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:34:40 crc kubenswrapper[4721]: I0128 18:34:40.465892 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:34:40 crc kubenswrapper[4721]: I0128 18:34:40.465926 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:34:40 crc kubenswrapper[4721]: I0128 18:34:40.465935 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:34:40 crc kubenswrapper[4721]: I0128 18:34:40.465951 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:34:40 crc kubenswrapper[4721]: I0128 18:34:40.465959 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:34:40Z","lastTransitionTime":"2026-01-28T18:34:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:34:40 crc kubenswrapper[4721]: I0128 18:34:40.491895 4721 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-10 13:30:40.913079971 +0000 UTC Jan 28 18:34:40 crc kubenswrapper[4721]: I0128 18:34:40.528411 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 18:34:40 crc kubenswrapper[4721]: I0128 18:34:40.528446 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 18:34:40 crc kubenswrapper[4721]: I0128 18:34:40.528412 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 18:34:40 crc kubenswrapper[4721]: E0128 18:34:40.528549 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 18:34:40 crc kubenswrapper[4721]: E0128 18:34:40.528689 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 18:34:40 crc kubenswrapper[4721]: E0128 18:34:40.528769 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 18:34:40 crc kubenswrapper[4721]: I0128 18:34:40.529224 4721 scope.go:117] "RemoveContainer" containerID="ca3243d75b5ce4aff178d82c45c7d1f765013233b5736600151adb4801aa462b" Jan 28 18:34:40 crc kubenswrapper[4721]: I0128 18:34:40.568026 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:34:40 crc kubenswrapper[4721]: I0128 18:34:40.568107 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:34:40 crc kubenswrapper[4721]: I0128 18:34:40.568118 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:34:40 crc kubenswrapper[4721]: I0128 18:34:40.568133 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:34:40 crc kubenswrapper[4721]: I0128 18:34:40.568141 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:34:40Z","lastTransitionTime":"2026-01-28T18:34:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:34:40 crc kubenswrapper[4721]: I0128 18:34:40.670299 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:34:40 crc kubenswrapper[4721]: I0128 18:34:40.670345 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:34:40 crc kubenswrapper[4721]: I0128 18:34:40.670354 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:34:40 crc kubenswrapper[4721]: I0128 18:34:40.670369 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:34:40 crc kubenswrapper[4721]: I0128 18:34:40.670379 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:34:40Z","lastTransitionTime":"2026-01-28T18:34:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:34:40 crc kubenswrapper[4721]: I0128 18:34:40.773090 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:34:40 crc kubenswrapper[4721]: I0128 18:34:40.773126 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:34:40 crc kubenswrapper[4721]: I0128 18:34:40.773138 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:34:40 crc kubenswrapper[4721]: I0128 18:34:40.773156 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:34:40 crc kubenswrapper[4721]: I0128 18:34:40.773185 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:34:40Z","lastTransitionTime":"2026-01-28T18:34:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:34:40 crc kubenswrapper[4721]: I0128 18:34:40.876303 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:34:40 crc kubenswrapper[4721]: I0128 18:34:40.876334 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:34:40 crc kubenswrapper[4721]: I0128 18:34:40.876345 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:34:40 crc kubenswrapper[4721]: I0128 18:34:40.876361 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:34:40 crc kubenswrapper[4721]: I0128 18:34:40.876371 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:34:40Z","lastTransitionTime":"2026-01-28T18:34:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:34:40 crc kubenswrapper[4721]: I0128 18:34:40.978713 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:34:40 crc kubenswrapper[4721]: I0128 18:34:40.978767 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:34:40 crc kubenswrapper[4721]: I0128 18:34:40.978778 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:34:40 crc kubenswrapper[4721]: I0128 18:34:40.978798 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:34:40 crc kubenswrapper[4721]: I0128 18:34:40.978812 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:34:40Z","lastTransitionTime":"2026-01-28T18:34:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:34:41 crc kubenswrapper[4721]: I0128 18:34:41.081290 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:34:41 crc kubenswrapper[4721]: I0128 18:34:41.081335 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:34:41 crc kubenswrapper[4721]: I0128 18:34:41.081348 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:34:41 crc kubenswrapper[4721]: I0128 18:34:41.081368 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:34:41 crc kubenswrapper[4721]: I0128 18:34:41.081379 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:34:41Z","lastTransitionTime":"2026-01-28T18:34:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:34:41 crc kubenswrapper[4721]: I0128 18:34:41.183776 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:34:41 crc kubenswrapper[4721]: I0128 18:34:41.183818 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:34:41 crc kubenswrapper[4721]: I0128 18:34:41.183832 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:34:41 crc kubenswrapper[4721]: I0128 18:34:41.183849 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:34:41 crc kubenswrapper[4721]: I0128 18:34:41.183862 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:34:41Z","lastTransitionTime":"2026-01-28T18:34:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:34:41 crc kubenswrapper[4721]: I0128 18:34:41.290032 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:34:41 crc kubenswrapper[4721]: I0128 18:34:41.290095 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:34:41 crc kubenswrapper[4721]: I0128 18:34:41.290107 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:34:41 crc kubenswrapper[4721]: I0128 18:34:41.290128 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:34:41 crc kubenswrapper[4721]: I0128 18:34:41.290142 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:34:41Z","lastTransitionTime":"2026-01-28T18:34:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:34:41 crc kubenswrapper[4721]: I0128 18:34:41.393162 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:34:41 crc kubenswrapper[4721]: I0128 18:34:41.393216 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:34:41 crc kubenswrapper[4721]: I0128 18:34:41.393227 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:34:41 crc kubenswrapper[4721]: I0128 18:34:41.393242 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:34:41 crc kubenswrapper[4721]: I0128 18:34:41.393255 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:34:41Z","lastTransitionTime":"2026-01-28T18:34:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:34:41 crc kubenswrapper[4721]: I0128 18:34:41.492336 4721 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 03:31:37.942684115 +0000 UTC Jan 28 18:34:41 crc kubenswrapper[4721]: I0128 18:34:41.495581 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:34:41 crc kubenswrapper[4721]: I0128 18:34:41.495621 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:34:41 crc kubenswrapper[4721]: I0128 18:34:41.495637 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:34:41 crc kubenswrapper[4721]: I0128 18:34:41.495660 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:34:41 crc kubenswrapper[4721]: I0128 18:34:41.495671 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:34:41Z","lastTransitionTime":"2026-01-28T18:34:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:34:41 crc kubenswrapper[4721]: I0128 18:34:41.599146 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:34:41 crc kubenswrapper[4721]: I0128 18:34:41.599236 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:34:41 crc kubenswrapper[4721]: I0128 18:34:41.599253 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:34:41 crc kubenswrapper[4721]: I0128 18:34:41.599280 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:34:41 crc kubenswrapper[4721]: I0128 18:34:41.599298 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:34:41Z","lastTransitionTime":"2026-01-28T18:34:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:34:41 crc kubenswrapper[4721]: I0128 18:34:41.701817 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:34:41 crc kubenswrapper[4721]: I0128 18:34:41.701863 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:34:41 crc kubenswrapper[4721]: I0128 18:34:41.701872 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:34:41 crc kubenswrapper[4721]: I0128 18:34:41.701887 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:34:41 crc kubenswrapper[4721]: I0128 18:34:41.701899 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:34:41Z","lastTransitionTime":"2026-01-28T18:34:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:34:41 crc kubenswrapper[4721]: I0128 18:34:41.804038 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:34:41 crc kubenswrapper[4721]: I0128 18:34:41.804087 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:34:41 crc kubenswrapper[4721]: I0128 18:34:41.804100 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:34:41 crc kubenswrapper[4721]: I0128 18:34:41.804121 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:34:41 crc kubenswrapper[4721]: I0128 18:34:41.804132 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:34:41Z","lastTransitionTime":"2026-01-28T18:34:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:34:41 crc kubenswrapper[4721]: I0128 18:34:41.820763 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-wr282_70686e42-b434-4ff9-9753-cfc870beef82/ovnkube-controller/0.log" Jan 28 18:34:41 crc kubenswrapper[4721]: I0128 18:34:41.824076 4721 generic.go:334] "Generic (PLEG): container finished" podID="70686e42-b434-4ff9-9753-cfc870beef82" containerID="ae0f678963d4efdfa09099257ad96a3ba4457e2819e237234b4137fab9b67f69" exitCode=1 Jan 28 18:34:41 crc kubenswrapper[4721]: I0128 18:34:41.824156 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wr282" event={"ID":"70686e42-b434-4ff9-9753-cfc870beef82","Type":"ContainerDied","Data":"ae0f678963d4efdfa09099257ad96a3ba4457e2819e237234b4137fab9b67f69"} Jan 28 18:34:41 crc kubenswrapper[4721]: I0128 18:34:41.824957 4721 scope.go:117] "RemoveContainer" containerID="ae0f678963d4efdfa09099257ad96a3ba4457e2819e237234b4137fab9b67f69" Jan 28 18:34:41 crc kubenswrapper[4721]: I0128 18:34:41.825771 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Jan 28 18:34:41 crc kubenswrapper[4721]: I0128 18:34:41.827591 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"0339ccd2bd809dc715d0e334b596c5abdebca1af6ac5f7afefa81aa8baa470ed"} Jan 28 18:34:41 crc kubenswrapper[4721]: I0128 18:34:41.827977 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 28 18:34:41 crc kubenswrapper[4721]: I0128 18:34:41.846085 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:41Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:41 crc kubenswrapper[4721]: I0128 18:34:41.858546 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56147b9675ae31e5e8b4473346c69d90a1013ab084214a7fc34295f810b229a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5dc3277d9793a59f2e2a2b4d015782f383df20b594be435f894d24dbe5b837cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:41Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:41 crc kubenswrapper[4721]: I0128 18:34:41.876539 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wr282" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"70686e42-b434-4ff9-9753-cfc870beef82\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ac8a8ea8e677c233fbcb1ac69446f0251b2e390e84a39cd77c7ccf1a2041908\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7lkbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://989948e16979a2cd8568ed93334056da9feb5bd7226f4124428dcffc09f13d41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7lkbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2cc9ef8160c40e72ae736136ef051b06552caea2539fad7969c5f923eba0c2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7lkbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7df50ebf5999c8b004f27f801fc166377523c7b42171fec957ab82c88b529794\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7lkbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11610a0739792fa0417c894ad5b1c0d46d7188b585f8f4c1a1e7e66cd6d8bd46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7lkbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://44279a74a34d26b6eee05ac26d51a21492dddab4288ca5e09665c191cbacd90c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7lkbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae0f678963d4efdfa09099257ad96a3ba4457e28
19e237234b4137fab9b67f69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ae0f678963d4efdfa09099257ad96a3ba4457e2819e237234b4137fab9b67f69\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T18:34:40Z\\\",\\\"message\\\":\\\"nformers/factory.go:160\\\\nI0128 18:34:40.598846 6010 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0128 18:34:40.598844 6010 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0128 18:34:40.598886 6010 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0128 18:34:40.598914 6010 reflector.go:311] Stopping reflector *v1.UserDefinedNetwork (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/userdefinednetwork/v1/apis/informers/externalversions/factory.go:140\\\\nI0128 18:34:40.598929 6010 reflector.go:311] Stopping reflector *v1.ClusterUserDefinedNetwork (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/userdefinednetwork/v1/apis/informers/externalversions/factory.go:140\\\\nI0128 18:34:40.599009 6010 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0128 18:34:40.599206 6010 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from 
sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T18:34:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7lkbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://373ab38644697d3dbfe782ed66982ac1d5510d0a3bc39072c8113f71cc9e97f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7lkbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://733c81890010402d252a1945284a85a3278b603ede433530865f680a1be02477\\\",\\\"image\\\":\\\"quay.
io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://733c81890010402d252a1945284a85a3278b603ede433530865f680a1be02477\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7lkbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:34:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-wr282\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:41Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:41 crc kubenswrapper[4721]: I0128 18:34:41.893779 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rgqdt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c0a22020-3f34-4895-beec-2ed5d829ea79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d26691be7d95ffd613cc84f59222c67d29ab459d1c42ca07d20fe928d7fbc4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\
"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l86pm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:34:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rgqdt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:41Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:41 crc kubenswrapper[4721]: I0128 18:34:41.907036 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:34:41 crc kubenswrapper[4721]: I0128 18:34:41.907070 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:34:41 crc kubenswrapper[4721]: I0128 18:34:41.907079 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:34:41 crc kubenswrapper[4721]: I0128 18:34:41.907095 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:34:41 crc kubenswrapper[4721]: I0128 18:34:41.907106 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:34:41Z","lastTransitionTime":"2026-01-28T18:34:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:34:41 crc kubenswrapper[4721]: I0128 18:34:41.914370 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"50459cb7-bb46-4f2f-a119-03a6102ad146\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:33:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97a290731225e8b998c044c9547db36b5af73889929db5fbc37b6b4170f5dbb7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9efdfabfba8f11d2bdeafad77520260137fcfff3ca51ae77afd4ee9b141f602\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3c1b4d37e11f8a4ee9539e5ea3cce77b7d382133b9e59ccd9c3e8aa2b308754\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52a1199a0611555765afe8fcf017742788488064c958d0c75c46d51ddc2ff9d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://316506ed1e9f91314d427e1b6c3a44ee58d0c4a3a8a93b97bdf098703a6a2f34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f1a6a669a0341baa2d7fb17bcebfe702f96d1192ca35c3f2d00e463778271d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f1a6a669a0341baa2d7fb17bcebfe702f96d1192ca35c3f2d00e463778271d6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:33:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d3a248e15466c89ce3b237a22ce54365004ea86dc1541e1ac44e028f91bf19f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d3a248e15466c89ce3b237a22ce54365004ea86dc1541e1ac44e028f91bf19f\\\",\\\"exitCode\\\":0,\\\"finished
At\\\":\\\"2026-01-28T18:34:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:34:02Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f2e1f3a63445bad45029602f52863bfdf2fab495711e3318d68fee042530acb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f2e1f3a63445bad45029602f52863bfdf2fab495711e3318d68fee042530acb1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:34:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:33:56Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:41Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:41 crc kubenswrapper[4721]: I0128 18:34:41.927959 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"94f18835-9a2d-4427-bc71-e4cd48b94c19\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:33:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:33:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:33:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a345c3bb49064600db61f16588229b6877293710e6c21c15d91746c2f1d50b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://93950542dfa7bda5686ef6d8ff927d14baafd149893c732658e9aa3916be64ae\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://486a6284a1e3930aac13368062b8796b4c271214dd08cb60c7ae2ce7b4a45a05\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca3243d75b5ce4aff178d82c45c7d1f765013233b5736600151adb4801aa462b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ca3243d75b5ce4aff178d82c45c7d1f765013233b5736600151adb4801aa462b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T18:34:21Z\\\",\\\"message\\\":\\\"le observer\\\\nW0128 18:34:20.724978 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0128 18:34:20.725313 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 18:34:20.726636 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2540251349/tls.crt::/tmp/serving-cert-2540251349/tls.key\\\\\\\"\\\\nI0128 18:34:21.177830 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0128 18:34:21.196258 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0128 18:34:21.196291 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0128 18:34:21.196317 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0128 18:34:21.196323 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0128 18:34:21.230729 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0128 18:34:21.230760 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0128 18:34:21.230771 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 18:34:21.230781 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 18:34:21.230789 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0128 18:34:21.230795 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0128 18:34:21.230800 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0128 18:34:21.230804 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0128 18:34:21.233981 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T18:34:20Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://90f1279d8262e042527207dbe56a1d08ba554650510bebfc53bd24cd84532026\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:06Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1499c7b625ec721799e4cf54e3de39ab75a752bbb0450a8eeffcdb6259f8a585\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1499c7b625ec721799e4cf54e3de39ab75a752bbb0450a8eeffcdb6259f8a585\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:33:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:33:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:41Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:41 crc kubenswrapper[4721]: I0128 18:34:41.942478 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"86fc797f-6769-4801-8cb0-b9f25c9ec29b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:33:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:33:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e8c8cd370cc146f15325497afef171437a3c7c3d4e82cda99a321bf847399bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f712f75bf0603b358c696f1a62f2363254ab926615ab377291c7083308e3f2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:33:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6143e461a697375ca79df96494f8cf4a575e622428db241f50a52ae1c67acbc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e37a3261383445df2e4050fb0b3f92c0afc6379c71eb061475783a72cd37459f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:33:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:41Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:41 crc kubenswrapper[4721]: I0128 18:34:41.963339 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef61ab09457440960cf27586f65751f928ccc999db19fd16364262ce2449cd97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:41Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:41 crc kubenswrapper[4721]: I0128 18:34:41.977327 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-lf92l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"20d04cbd-fcf1-4d48-9cca-1dd29b13c938\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://709d01fbeba51f6f21b60d9fd60aa9b44ba9e4f2504c19b7fd8d279f8c13c1e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mvqr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:34:30Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-lf92l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:41Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:41 crc kubenswrapper[4721]: I0128 18:34:41.991679 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7vsph" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"942c0bcf-8f75-42e8-a5c0-af4c640eb13c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ce38a3c3aa72d0816ff32ec77af100dc6f2a8761362992f01b88f836c44f65f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s84mk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5222e9a5ac55f20acbe1105e84f22c96889f410246b7ef423236f5d3e216f43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d5222e9a5ac55f20acbe1105e84f22c96889f410246b7ef423236f5d3e216f43\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s84mk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e702f148f20abb9e5e9453eb1b9ae0cc094138beb1fa048753ff3b497517e942\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e702f148f20abb9e5e9453eb1b9ae0cc094138beb1fa048753ff3b497517e942\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s84mk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a55ef87212072ac9f6d05d27d54f7a3f13add617b1a07a2fd1e0a652fe15cb3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a55ef87212072ac9f6d05d27d54f7a3f13add617b1a07a2fd1e0a652fe15cb3a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:34:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s84mk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea7d40aa704ad5ddaaa471ab8606dcece38b229fd8affcce9bb1959e5c19f092\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ea7d40aa704ad5ddaaa471ab8606dcece38b229fd8affcce9bb1959e5c19f092\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:34:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s84mk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1829c2eed51af896c5ccfa25c1f23ea971ea9d54e9db53415095c5ea43ac4062\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1829c2eed51af896c5ccfa25c1f23ea971ea9d54e9db53415095c5ea43ac4062\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:34:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s84mk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b576c2dcc0ad83742dbc75f744d08eb8b881f9be4f9d16f4d4d3004ce174f24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b576c2dcc0ad83742dbc75f744d08eb8b881f9be4f9d16f4d4d3004ce174f24\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:34:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s84mk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:34:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7vsph\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:41Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:42 crc kubenswrapper[4721]: I0128 18:34:42.004753 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa3af2d00ecbcf4d30c4c81191306dd45625f55032058b314ce2b91f6b2033e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:42Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:42 crc kubenswrapper[4721]: I0128 18:34:42.008864 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:34:42 crc kubenswrapper[4721]: I0128 18:34:42.008901 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:34:42 crc kubenswrapper[4721]: I0128 18:34:42.008911 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:34:42 crc kubenswrapper[4721]: I0128 18:34:42.008928 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:34:42 crc kubenswrapper[4721]: I0128 18:34:42.008940 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:34:42Z","lastTransitionTime":"2026-01-28T18:34:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:34:42 crc kubenswrapper[4721]: I0128 18:34:42.015124 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-76rx2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e3427a4-9a03-4a08-bf7f-7a5e96290ad6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9843dea4333a77dd5b0005984b9f8e7c7c993b28f89f5d6432477bfde3383339\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4cj5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bbd26672afb8ed608228f3f2101f430b1eaa5ccea0e8074fad597dec347c4522\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4cj5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:34:30Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-76rx2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:42Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:42 crc kubenswrapper[4721]: I0128 18:34:42.026218 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:24Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:42Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:42 crc kubenswrapper[4721]: I0128 18:34:42.037615 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:42Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:42 crc kubenswrapper[4721]: I0128 18:34:42.049817 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-rk2l2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bdd30376-1599-4efc-bb55-7585e8702b60\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa0ff1b9df47dfd65597208cdd31f5cb8ce7f4c9c170d298db28d6f04da7b7a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\
\\"name\\\":\\\"kube-api-access-wknj5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:34:32Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-rk2l2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:42Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:42 crc kubenswrapper[4721]: I0128 18:34:42.059362 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-lf92l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"20d04cbd-fcf1-4d48-9cca-1dd29b13c938\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://709d01fbeba51f6f21b60d9fd60aa9b44ba9e4f2504c19b7fd8d279f8c13c1e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mvqr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:34:30Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-lf92l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:42Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:42 crc kubenswrapper[4721]: I0128 18:34:42.073082 4721 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-multus/multus-additional-cni-plugins-7vsph" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"942c0bcf-8f75-42e8-a5c0-af4c640eb13c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ce38a3c3aa72d0816ff32ec77af100dc6f2a8761362992f01b88f836c44f65f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s84mk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5222e9a5ac55f20acbe1105e84f22c96889f410246b7ef423236f5d3e216f43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d5222e9a5ac55f20acbe1105e84f22c96889f410246b7ef423236f5d3e216f43\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s84mk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e702f148f20abb9e5e9453eb1b9ae0cc094138beb1fa048753ff3b497517e942\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e702f148f20abb9e5e9453eb1b9ae0cc094138beb1fa048753ff3b497517e942\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s84mk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a55ef87212072ac9f6d05d27d54f7a3f13add617b1a07a2fd1e0a652fe15cb3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a55ef87212072ac9f6d05d27d54f7a3f13add617b1a07a2fd1e0a652fe15cb3a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:34:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s84mk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea7d40aa704ad5ddaaa471ab8606dcece38b229fd8affcce9bb1959e5c19f092\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ea7d40aa704ad5ddaaa471ab8606dcece38b229fd8affcce9bb1959e5c19f092\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:34:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\
\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s84mk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1829c2eed51af896c5ccfa25c1f23ea971ea9d54e9db53415095c5ea43ac4062\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1829c2eed51af896c5ccfa25c1f23ea971ea9d54e9db53415095c5ea43ac4062\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:34:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s84mk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b576c2dcc0ad83742dbc75f744d08eb8b881f9be4f9d16f4d4d3004ce174f24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b576c2dcc0ad83742dbc75f744d08eb8b881f9be4f9d16f4d4d3004ce174f24\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:34:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s84mk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:34:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7vsph\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:42Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:42 crc kubenswrapper[4721]: I0128 18:34:42.084633 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef61ab09457440960cf27586f65751f928ccc999db19fd16364262ce2449cd97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:42Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:42 crc kubenswrapper[4721]: I0128 18:34:42.095573 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:24Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:42Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:42 crc kubenswrapper[4721]: I0128 18:34:42.106232 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:24Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:42Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:42 crc kubenswrapper[4721]: I0128 18:34:42.110864 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:34:42 crc kubenswrapper[4721]: I0128 18:34:42.110929 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:34:42 crc kubenswrapper[4721]: I0128 18:34:42.110942 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:34:42 crc kubenswrapper[4721]: I0128 18:34:42.110960 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:34:42 crc kubenswrapper[4721]: I0128 18:34:42.110977 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:34:42Z","lastTransitionTime":"2026-01-28T18:34:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:34:42 crc kubenswrapper[4721]: I0128 18:34:42.117538 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa3af2d00ecbcf4d30c4c81191306dd45625f55032058b314ce2b91f6b2033e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:42Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:42 crc kubenswrapper[4721]: I0128 18:34:42.127318 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-76rx2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e3427a4-9a03-4a08-bf7f-7a5e96290ad6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9843dea4333a77dd5b0005984b9f8e7c7c993b28f89f5d6432477bfde3383339\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4cj5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bbd26672afb8ed608228f3f2101f430b1eaa5ccea0e8074fad597dec347c4522\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4cj5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:34:30Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-76rx2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:42Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:42 crc kubenswrapper[4721]: I0128 18:34:42.138053 4721 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-image-registry/node-ca-rk2l2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bdd30376-1599-4efc-bb55-7585e8702b60\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa0ff1b9df47dfd65597208cdd31f5cb8ce7f4c9c170d298db28d6f04da7b7a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wknj5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:34:32Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-rk2l2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:42Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:42 crc kubenswrapper[4721]: I0128 18:34:42.153975 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"94f18835-9a2d-4427-bc71-e4cd48b94c19\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:33:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:33:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:33:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a345c3bb49064600db61f16588229b6877293710e6c21c15d91746c2f1d50b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://93950542dfa7bda5686ef6d8ff927d14baafd149893c732658e9aa3916be64ae\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://486a6284a1e3930aac13368062b8796b4c271214dd08cb60c7ae2ce7b4a45a05\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0339ccd2bd809dc715d0e334b596c5abdebca1af6ac5f7afefa81aa8baa470ed\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\
\":\\\"cri-o://ca3243d75b5ce4aff178d82c45c7d1f765013233b5736600151adb4801aa462b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T18:34:21Z\\\",\\\"message\\\":\\\"le observer\\\\nW0128 18:34:20.724978 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0128 18:34:20.725313 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 18:34:20.726636 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2540251349/tls.crt::/tmp/serving-cert-2540251349/tls.key\\\\\\\"\\\\nI0128 18:34:21.177830 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0128 18:34:21.196258 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0128 18:34:21.196291 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0128 18:34:21.196317 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0128 18:34:21.196323 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0128 18:34:21.230729 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0128 18:34:21.230760 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0128 18:34:21.230771 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 18:34:21.230781 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 18:34:21.230789 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0128 18:34:21.230795 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0128 18:34:21.230800 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0128 18:34:21.230804 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0128 18:34:21.233981 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T18:34:20Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://90f1279d8262e042527207dbe56a1d08ba554650510bebfc53bd24cd84532026\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:06Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1499c7b625ec721799e4cf54e3de39ab75a752bbb0450a8eeffcdb6259f8a585\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1499c7b625ec721799e4cf54e3de39ab75a752bbb0450a8eeffcdb6259f8a585\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:33:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:33:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:42Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:42 crc kubenswrapper[4721]: I0128 18:34:42.165992 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"86fc797f-6769-4801-8cb0-b9f25c9ec29b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:33:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:33:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e8c8cd370cc146f15325497afef171437a3c7c3d4e82cda99a321bf847399bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f712f75bf0603b358c696f1a62f2363254ab926615ab377291c7083308e3f2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:33:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6143e461a697375ca79df96494f8cf4a575e622428db241f50a52ae1c67acbc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e37a3261383445df2e4050fb0b3f92c0afc6379c71eb061475783a72cd37459f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:33:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:42Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:42 crc kubenswrapper[4721]: I0128 18:34:42.178017 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:42Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:42 crc kubenswrapper[4721]: I0128 18:34:42.190362 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56147b9675ae31e5e8b4473346c69d90a1013ab084214a7fc34295f810b229a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5dc3277d9793a59f2e2a2b4d015782f383df20b594be435f894d24dbe5b837cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:42Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:42 crc kubenswrapper[4721]: I0128 18:34:42.208245 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wr282" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"70686e42-b434-4ff9-9753-cfc870beef82\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ac8a8ea8e677c233fbcb1ac69446f0251b2e390e84a39cd77c7ccf1a2041908\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7lkbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://989948e16979a2cd8568ed93334056da9feb5bd7226f4124428dcffc09f13d41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7lkbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2cc9ef8160c40e72ae736136ef051b06552caea2539fad7969c5f923eba0c2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7lkbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7df50ebf5999c8b004f27f801fc166377523c7b42171fec957ab82c88b529794\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7lkbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11610a0739792fa0417c894ad5b1c0d46d7188b585f8f4c1a1e7e66cd6d8bd46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7lkbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://44279a74a34d26b6eee05ac26d51a21492dddab4288ca5e09665c191cbacd90c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7lkbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae0f678963d4efdfa09099257ad96a3ba4457e28
19e237234b4137fab9b67f69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ae0f678963d4efdfa09099257ad96a3ba4457e2819e237234b4137fab9b67f69\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T18:34:40Z\\\",\\\"message\\\":\\\"nformers/factory.go:160\\\\nI0128 18:34:40.598846 6010 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0128 18:34:40.598844 6010 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0128 18:34:40.598886 6010 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0128 18:34:40.598914 6010 reflector.go:311] Stopping reflector *v1.UserDefinedNetwork (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/userdefinednetwork/v1/apis/informers/externalversions/factory.go:140\\\\nI0128 18:34:40.598929 6010 reflector.go:311] Stopping reflector *v1.ClusterUserDefinedNetwork (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/userdefinednetwork/v1/apis/informers/externalversions/factory.go:140\\\\nI0128 18:34:40.599009 6010 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0128 18:34:40.599206 6010 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from 
sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T18:34:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7lkbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://373ab38644697d3dbfe782ed66982ac1d5510d0a3bc39072c8113f71cc9e97f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7lkbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://733c81890010402d252a1945284a85a3278b603ede433530865f680a1be02477\\\",\\\"image\\\":\\\"quay.
io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://733c81890010402d252a1945284a85a3278b603ede433530865f680a1be02477\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7lkbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:34:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-wr282\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:42Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:42 crc kubenswrapper[4721]: I0128 18:34:42.213066 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:34:42 crc kubenswrapper[4721]: I0128 18:34:42.213104 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:34:42 crc kubenswrapper[4721]: I0128 18:34:42.213114 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:34:42 crc kubenswrapper[4721]: I0128 18:34:42.213132 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:34:42 crc kubenswrapper[4721]: I0128 18:34:42.213147 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:34:42Z","lastTransitionTime":"2026-01-28T18:34:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:34:42 crc kubenswrapper[4721]: I0128 18:34:42.223610 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rgqdt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c0a22020-3f34-4895-beec-2ed5d829ea79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d26691be7d95ffd613cc84f59222c67d29ab459d1c42ca07d20fe928d7fbc4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l86pm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:34:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rgqdt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:42Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:42 crc kubenswrapper[4721]: I0128 18:34:42.246271 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"50459cb7-bb46-4f2f-a119-03a6102ad146\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:33:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97a290731225e8b998c044c9547db36b5af73889929db5fbc37b6b4170f5dbb7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9efdfabfba8f11d2bdeafad77520260137fcfff3ca51ae77afd4ee9b141f602\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3c1b4d37e11f8a4ee9539e5ea3cce77b7d382133b9e59ccd9c3e8aa2b308754\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269
019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52a1199a0611555765afe8fcf017742788488064c958d0c75c46d51ddc2ff9d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://316506ed1e9f91314d427e1b6c3a44ee58d0c4a3a8a93b97bdf098703a6a2f34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f1a6a669a0341baa2d7fb17bcebfe702f96d1192ca35c3f2d00e463778271d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f1a6a669a0341baa2d7fb17bcebfe702f96d1192ca35c3f2d00e463778271d6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:33:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d3a248e15466c89ce3b237a22ce54365004ea86dc1541e1ac44e028f91bf19f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"
,\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d3a248e15466c89ce3b237a22ce54365004ea86dc1541e1ac44e028f91bf19f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:34:02Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f2e1f3a63445bad45029602f52863bfdf2fab495711e3318d68fee042530acb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f2e1f3a63445bad45029602f52863bfdf2fab495711e3318d68fee042530acb1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:34:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:33:56Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:42Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:42 crc kubenswrapper[4721]: I0128 18:34:42.315468 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:34:42 crc kubenswrapper[4721]: I0128 18:34:42.315500 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:34:42 crc kubenswrapper[4721]: I0128 18:34:42.315509 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:34:42 crc kubenswrapper[4721]: I0128 18:34:42.315522 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:34:42 crc kubenswrapper[4721]: I0128 18:34:42.315533 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:34:42Z","lastTransitionTime":"2026-01-28T18:34:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:34:42 crc kubenswrapper[4721]: I0128 18:34:42.417892 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:34:42 crc kubenswrapper[4721]: I0128 18:34:42.417932 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:34:42 crc kubenswrapper[4721]: I0128 18:34:42.417945 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:34:42 crc kubenswrapper[4721]: I0128 18:34:42.417961 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:34:42 crc kubenswrapper[4721]: I0128 18:34:42.417971 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:34:42Z","lastTransitionTime":"2026-01-28T18:34:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:34:42 crc kubenswrapper[4721]: I0128 18:34:42.493419 4721 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-14 12:57:57.35832847 +0000 UTC Jan 28 18:34:42 crc kubenswrapper[4721]: I0128 18:34:42.520543 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:34:42 crc kubenswrapper[4721]: I0128 18:34:42.520591 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:34:42 crc kubenswrapper[4721]: I0128 18:34:42.520600 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:34:42 crc kubenswrapper[4721]: I0128 18:34:42.520615 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:34:42 crc kubenswrapper[4721]: I0128 18:34:42.520625 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:34:42Z","lastTransitionTime":"2026-01-28T18:34:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:34:42 crc kubenswrapper[4721]: I0128 18:34:42.528115 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 18:34:42 crc kubenswrapper[4721]: I0128 18:34:42.528115 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 18:34:42 crc kubenswrapper[4721]: E0128 18:34:42.528401 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 18:34:42 crc kubenswrapper[4721]: E0128 18:34:42.528273 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 18:34:42 crc kubenswrapper[4721]: I0128 18:34:42.528130 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 18:34:42 crc kubenswrapper[4721]: E0128 18:34:42.528499 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 18:34:42 crc kubenswrapper[4721]: I0128 18:34:42.622668 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:34:42 crc kubenswrapper[4721]: I0128 18:34:42.622701 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:34:42 crc kubenswrapper[4721]: I0128 18:34:42.622710 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:34:42 crc kubenswrapper[4721]: I0128 18:34:42.622724 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:34:42 crc kubenswrapper[4721]: I0128 18:34:42.622734 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:34:42Z","lastTransitionTime":"2026-01-28T18:34:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:34:42 crc kubenswrapper[4721]: I0128 18:34:42.725057 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:34:42 crc kubenswrapper[4721]: I0128 18:34:42.725092 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:34:42 crc kubenswrapper[4721]: I0128 18:34:42.725100 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:34:42 crc kubenswrapper[4721]: I0128 18:34:42.725113 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:34:42 crc kubenswrapper[4721]: I0128 18:34:42.725123 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:34:42Z","lastTransitionTime":"2026-01-28T18:34:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:34:42 crc kubenswrapper[4721]: I0128 18:34:42.827718 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:34:42 crc kubenswrapper[4721]: I0128 18:34:42.827764 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:34:42 crc kubenswrapper[4721]: I0128 18:34:42.827775 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:34:42 crc kubenswrapper[4721]: I0128 18:34:42.827792 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:34:42 crc kubenswrapper[4721]: I0128 18:34:42.827804 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:34:42Z","lastTransitionTime":"2026-01-28T18:34:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:34:42 crc kubenswrapper[4721]: I0128 18:34:42.831117 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-wr282_70686e42-b434-4ff9-9753-cfc870beef82/ovnkube-controller/0.log" Jan 28 18:34:42 crc kubenswrapper[4721]: I0128 18:34:42.833658 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wr282" event={"ID":"70686e42-b434-4ff9-9753-cfc870beef82","Type":"ContainerStarted","Data":"9e167117f8dd91a71f9983b9f3516e8162cf03f390ef2a1a8478fd5dd6df2dba"} Jan 28 18:34:42 crc kubenswrapper[4721]: I0128 18:34:42.844928 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-rk2l2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bdd30376-1599-4efc-bb55-7585e8702b60\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa0ff1b9df47dfd65597208cdd31f5cb8ce7f4c9c170d298db28d6f04da7b7a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wknj5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:34:32Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-rk2l2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:42Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:42 crc kubenswrapper[4721]: I0128 18:34:42.858335 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"86fc797f-6769-4801-8cb0-b9f25c9ec29b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:33:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:33:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e8c8cd370cc146f15325497afef171437a3c7c3d4e82cda99a321bf847399bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f712f75bf0603b358c696f1a62f2363254ab926615ab377291c7083308e3f2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:33:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6143e461a697375ca79df96494f8cf4a575e622428db241f50a52ae1c67acbc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e37a3261383445df2e4050fb0b3f92c0afc6379c71eb061475783a72cd37459f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:33:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:42Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:42 crc kubenswrapper[4721]: I0128 18:34:42.869459 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:42Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:42 crc kubenswrapper[4721]: I0128 18:34:42.880612 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56147b9675ae31e5e8b4473346c69d90a1013ab084214a7fc34295f810b229a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5dc3277d9793a59f2e2a2b4d015782f383df20b594be435f894d24dbe5b837cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:42Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:42 crc kubenswrapper[4721]: I0128 18:34:42.897973 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wr282" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"70686e42-b434-4ff9-9753-cfc870beef82\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ac8a8ea8e677c233fbcb1ac69446f0251b2e390e84a39cd77c7ccf1a2041908\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7lkbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://989948e16979a2cd8568ed93334056da9feb5bd7226f4124428dcffc09f13d41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7lkbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2cc9ef8160c40e72ae736136ef051b06552caea2539fad7969c5f923eba0c2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7lkbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7df50ebf5999c8b004f27f801fc166377523c7b42171fec957ab82c88b529794\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7lkbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11610a0739792fa0417c894ad5b1c0d46d7188b585f8f4c1a1e7e66cd6d8bd46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7lkbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://44279a74a34d26b6eee05ac26d51a21492dddab4288ca5e09665c191cbacd90c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7lkbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e167117f8dd91a71f9983b9f3516e8162cf03f3
90ef2a1a8478fd5dd6df2dba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ae0f678963d4efdfa09099257ad96a3ba4457e2819e237234b4137fab9b67f69\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T18:34:40Z\\\",\\\"message\\\":\\\"nformers/factory.go:160\\\\nI0128 18:34:40.598846 6010 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0128 18:34:40.598844 6010 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0128 18:34:40.598886 6010 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0128 18:34:40.598914 6010 reflector.go:311] Stopping reflector *v1.UserDefinedNetwork (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/userdefinednetwork/v1/apis/informers/externalversions/factory.go:140\\\\nI0128 18:34:40.598929 6010 reflector.go:311] Stopping reflector *v1.ClusterUserDefinedNetwork (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/userdefinednetwork/v1/apis/informers/externalversions/factory.go:140\\\\nI0128 18:34:40.599009 6010 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0128 18:34:40.599206 6010 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from 
sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T18:34:37Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7lkbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://373ab38644697d3dbfe782ed66982ac1d5510d0a3bc39072c8113f71cc9e97f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7lkbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\
\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://733c81890010402d252a1945284a85a3278b603ede433530865f680a1be02477\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://733c81890010402d252a1945284a85a3278b603ede433530865f680a1be02477\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7lkbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:34:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-wr282\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:42Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:42 crc kubenswrapper[4721]: I0128 18:34:42.910378 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rgqdt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c0a22020-3f34-4895-beec-2ed5d829ea79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d26691be7d95ffd613cc84f59222c67d29ab459d1c42ca07d20fe928d7fbc4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l86pm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:34:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rgqdt\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:42Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:42 crc kubenswrapper[4721]: I0128 18:34:42.926538 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"50459cb7-bb46-4f2f-a119-03a6102ad146\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:33:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97a290731225e8b998c044c9547db36b5af73889929db5fbc37b6b4170f5dbb7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9efdfabfba8f11d2bdeafad77520260137fcfff3ca51ae77afd4ee9b141f602\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3c1b4d37e11f8a4ee9539e5ea3cce77b7d382133b9e59ccd9c3e8aa2b308754\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"la
stState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52a1199a0611555765afe8fcf017742788488064c958d0c75c46d51ddc2ff9d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://316506ed1e9f91314d427e1b6c3a44ee58d0c4a3a8a93b97bdf098703a6a2f34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f1a6a669a0341baa2d7fb17bcebfe702f96d1192ca35c3f2d00e463778271d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f1a6a669a0341baa2d7fb17bcebfe702f96d1192ca35c3f2d00e463778271d6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:33:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d3a248e15466c89ce3b237a22ce54365004ea86dc1541e1ac44e028f91bf19f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",
\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d3a248e15466c89ce3b237a22ce54365004ea86dc1541e1ac44e028f91bf19f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:34:02Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f2e1f3a63445bad45029602f52863bfdf2fab495711e3318d68fee042530acb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f2e1f3a63445bad45029602f52863bfdf2fab495711e3318d68fee042530acb1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:34:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:33:56Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:42Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:42 crc kubenswrapper[4721]: I0128 18:34:42.929977 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:34:42 crc kubenswrapper[4721]: I0128 18:34:42.930017 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:34:42 crc kubenswrapper[4721]: I0128 18:34:42.930029 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:34:42 crc kubenswrapper[4721]: I0128 18:34:42.930050 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:34:42 crc kubenswrapper[4721]: I0128 18:34:42.930061 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:34:42Z","lastTransitionTime":"2026-01-28T18:34:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:34:42 crc kubenswrapper[4721]: I0128 18:34:42.939712 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"94f18835-9a2d-4427-bc71-e4cd48b94c19\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:33:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:33:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:33:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a345c3bb49064600db61f16588229b6877293710e6c21c15d91746c2f1d50b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://93950542dfa7bda5686ef6d8ff927d14baafd149893c732658e9aa3916be64ae\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://486a6284a1e3930aac13368062b8796b4c271214dd08cb60c7ae2ce7b4a45a05\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0339ccd2bd809dc715d0e334b596c5abdebca1af6ac5f7afefa81aa8baa470ed\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ca3243d75b5ce4aff178d82c45c7d1f765013233b5736600151adb4801aa462b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T18:34:21Z\\\",\\\"message\\\":\\\"le observer\\\\nW0128 18:34:20.724978 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0128 18:34:20.725313 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 18:34:20.726636 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2540251349/tls.crt::/tmp/serving-cert-2540251349/tls.key\\\\\\\"\\\\nI0128 18:34:21.177830 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0128 18:34:21.196258 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0128 18:34:21.196291 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0128 18:34:21.196317 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0128 18:34:21.196323 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0128 18:34:21.230729 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0128 18:34:21.230760 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0128 18:34:21.230771 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 18:34:21.230781 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 18:34:21.230789 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0128 18:34:21.230795 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0128 18:34:21.230800 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0128 18:34:21.230804 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0128 18:34:21.233981 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T18:34:20Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://90f1279d8262e042527207dbe56a1d08ba554650510bebfc53bd24cd84532026\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:06Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1499c7b625ec721799e4cf54e3de39ab75a752bbb0450a8eeffcdb6259f8a585\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1499c7b625ec721799e4cf54e3de39ab75a752bbb0450a8eeffcdb6259f8a585\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:33:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:33:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:42Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:42 crc kubenswrapper[4721]: I0128 18:34:42.952335 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7vsph" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"942c0bcf-8f75-42e8-a5c0-af4c640eb13c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ce38a3c3aa72d0816ff32ec77af100dc6f2a8761362992f01b88f836c44f65f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s84mk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5222e9a5ac55f20acbe1105e84f22c96889f410246b7ef423236f5d3e216f43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d5222e9a5ac55f20acbe1105e84f22c96889f410246b7ef423236f5d3e216f43\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s84mk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e702f148f20abb9e5e9453eb1b9ae0cc094138beb1fa048753ff3b497517e942\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e702f148f20abb9e5e9453eb1b9ae0cc094138beb1fa048753ff3b497517e942\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s84mk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a55ef87212072ac9f6d05d27d54f7a3f13add617b1a07a2fd1e0a652fe15cb3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a55ef87212072ac9f6d05d27d54f7a3f13add617b1a07a2fd1e0a652fe15cb3a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:34:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s84mk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea7d40aa704ad5ddaaa471ab8606dcece38b229fd8affcce9bb1959e5c19f092\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ea7d40aa704ad5ddaaa471ab8606dcece38b229fd8affcce9bb1959e5c19f092\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:34:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s84mk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1829c2eed51af896c5ccfa25c1f23ea971ea9d54e9db53415095c5ea43ac4062\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1829c2eed51af896c5ccfa25c1f23ea971ea9d54e9db53415095c5ea43ac4062\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:34:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s84mk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b576c2dcc0ad83742dbc75f744d08eb8b881f9be4f9d16f4d4d3004ce174f24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b576c2dcc0ad83742dbc75f744d08eb8b881f9be4f9d16f4d4d3004ce174f24\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:34:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s84mk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:34:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7vsph\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:42Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:42 crc kubenswrapper[4721]: I0128 18:34:42.965953 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef61ab09457440960cf27586f65751f928ccc999db19fd16364262ce2449cd97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:42Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:42 crc kubenswrapper[4721]: I0128 18:34:42.974769 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-lf92l" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"20d04cbd-fcf1-4d48-9cca-1dd29b13c938\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://709d01fbeba51f6f21b60d9fd60aa9b44ba9e4f2504c19b7fd8d279f8c13c1e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mvqr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:34:30Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-lf92l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:42Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:42 crc kubenswrapper[4721]: I0128 18:34:42.985480 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:42Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:42 crc kubenswrapper[4721]: I0128 18:34:42.996047 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa3af2d00ecbcf4d30c4c81191306dd45625f55032058b314ce2b91f6b2033e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:42Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:43 crc kubenswrapper[4721]: I0128 18:34:43.005512 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-76rx2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e3427a4-9a03-4a08-bf7f-7a5e96290ad6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9843dea4333a77dd5b0005984b9f8e7c7c993b28f89f5d6432477bfde3383339\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4cj5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bbd26672afb8ed608228f3f2101f430b1eaa5ccea0e8074fad597dec347c4522\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4cj5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\
\\":\\\"2026-01-28T18:34:30Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-76rx2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:43Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:43 crc kubenswrapper[4721]: I0128 18:34:43.017072 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:24Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:43Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:43 crc kubenswrapper[4721]: I0128 18:34:43.032674 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:34:43 crc kubenswrapper[4721]: I0128 18:34:43.032700 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:34:43 crc kubenswrapper[4721]: I0128 18:34:43.032708 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:34:43 crc kubenswrapper[4721]: I0128 18:34:43.032721 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:34:43 crc kubenswrapper[4721]: I0128 18:34:43.032729 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:34:43Z","lastTransitionTime":"2026-01-28T18:34:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:34:43 crc kubenswrapper[4721]: I0128 18:34:43.135057 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:34:43 crc kubenswrapper[4721]: I0128 18:34:43.135097 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:34:43 crc kubenswrapper[4721]: I0128 18:34:43.135113 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:34:43 crc kubenswrapper[4721]: I0128 18:34:43.135130 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:34:43 crc kubenswrapper[4721]: I0128 18:34:43.135142 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:34:43Z","lastTransitionTime":"2026-01-28T18:34:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:34:43 crc kubenswrapper[4721]: I0128 18:34:43.237393 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:34:43 crc kubenswrapper[4721]: I0128 18:34:43.237433 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:34:43 crc kubenswrapper[4721]: I0128 18:34:43.237444 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:34:43 crc kubenswrapper[4721]: I0128 18:34:43.237463 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:34:43 crc kubenswrapper[4721]: I0128 18:34:43.237474 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:34:43Z","lastTransitionTime":"2026-01-28T18:34:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:34:43 crc kubenswrapper[4721]: I0128 18:34:43.340360 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:34:43 crc kubenswrapper[4721]: I0128 18:34:43.340410 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:34:43 crc kubenswrapper[4721]: I0128 18:34:43.340424 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:34:43 crc kubenswrapper[4721]: I0128 18:34:43.340442 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:34:43 crc kubenswrapper[4721]: I0128 18:34:43.340456 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:34:43Z","lastTransitionTime":"2026-01-28T18:34:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:34:43 crc kubenswrapper[4721]: I0128 18:34:43.424602 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-x8hw8"] Jan 28 18:34:43 crc kubenswrapper[4721]: I0128 18:34:43.424999 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-x8hw8" Jan 28 18:34:43 crc kubenswrapper[4721]: I0128 18:34:43.426713 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Jan 28 18:34:43 crc kubenswrapper[4721]: I0128 18:34:43.427702 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Jan 28 18:34:43 crc kubenswrapper[4721]: I0128 18:34:43.441461 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef61ab09457440960cf27586f65751f928ccc999db19fd16364262ce2449cd97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:43Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:43 crc kubenswrapper[4721]: I0128 18:34:43.442933 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:34:43 crc kubenswrapper[4721]: I0128 18:34:43.442968 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:34:43 crc kubenswrapper[4721]: I0128 18:34:43.442979 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:34:43 crc kubenswrapper[4721]: I0128 18:34:43.442994 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:34:43 crc kubenswrapper[4721]: I0128 
18:34:43.443004 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:34:43Z","lastTransitionTime":"2026-01-28T18:34:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:34:43 crc kubenswrapper[4721]: I0128 18:34:43.450958 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-lf92l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"20d04cbd-fcf1-4d48-9cca-1dd29b13c938\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://709d01fbeba51f6f21b60d9fd60aa9b44ba9e4f2504c19b7fd8d279f8c13c1e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mvqr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:34:30Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-lf92l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:43Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:43 crc kubenswrapper[4721]: I0128 18:34:43.463943 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7vsph" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"942c0bcf-8f75-42e8-a5c0-af4c640eb13c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ce38a3c3aa72d0816ff32ec77af100dc6f2a8761362992f01b88f836c44f65f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s84mk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5222e9a5ac55f20acbe1105e84f22c96889f410246b7ef423236f5d3e216f43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d5222e9a5ac55f20acbe1105e84f22c96889f410246b7ef423236f5d3e216f43\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s84mk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e702f148f20abb9e5e9453eb1b9ae0cc094138beb1fa048753ff3b497517e942\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e702f148f20abb9e5e9453eb1b9ae0cc094138beb1fa048753ff3b497517e942\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s84mk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a55ef87212072ac9f6d05d27d54f7a3f13add617b1a07a2fd1e0a652fe15cb3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a55ef87212072ac9f6d05d27d54f7a3f13add617b1a07a2fd1e0a652fe15cb3a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:34:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s84mk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea7d40aa704ad5ddaaa471ab8606dcece38b229fd8affcce9bb1959e5c19f092\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ea7d40aa704ad5ddaaa471ab8606dcece38b229fd8affcce9bb1959e5c19f092\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:34:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s84mk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1829c2eed51af896c5ccfa25c1f23ea971ea9d54e9db53415095c5ea43ac4062\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1829c2eed51af896c5ccfa25c1f23ea971ea9d54e9db53415095c5ea43ac4062\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:34:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s84mk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b576c2dcc0ad83742dbc75f744d08eb8b881f9be4f9d16f4d4d3004ce174f24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b576c2dcc0ad83742dbc75f744d08eb8b881f9be4f9d16f4d4d3004ce174f24\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:34:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s84mk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:34:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7vsph\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:43Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:43 crc kubenswrapper[4721]: I0128 18:34:43.478621 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:24Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:43Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:43 crc kubenswrapper[4721]: I0128 18:34:43.489907 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:43Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:43 crc kubenswrapper[4721]: I0128 18:34:43.494190 4721 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-21 03:50:33.366851224 +0000 UTC Jan 28 18:34:43 crc kubenswrapper[4721]: I0128 18:34:43.500319 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa3af2d00ecbcf4d30c4c81191306dd45625f55032058b314ce2b91f6b2033e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:43Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:43 crc kubenswrapper[4721]: I0128 18:34:43.510589 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-76rx2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e3427a4-9a03-4a08-bf7f-7a5e96290ad6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9843dea4333a77dd5b0005984b9f8e7c7c993b28f89f5d6432477bfde3383339\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4cj5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bbd26672afb8ed608228f3f2101f430b1eaa5ccea0e8074fad597dec347c4522\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4cj5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:34:30Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-76rx2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:43Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:43 crc kubenswrapper[4721]: I0128 18:34:43.519805 4721 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-image-registry/node-ca-rk2l2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bdd30376-1599-4efc-bb55-7585e8702b60\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa0ff1b9df47dfd65597208cdd31f5cb8ce7f4c9c170d298db28d6f04da7b7a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wknj5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:34:32Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-rk2l2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:43Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:43 crc kubenswrapper[4721]: I0128 18:34:43.528573 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-x8hw8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8ac7e75e-c5bb-4b57-b2ba-9ebe8b8fbd88\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqp9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqp9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:34:43Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-x8hw8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:43Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:43 crc kubenswrapper[4721]: I0128 18:34:43.545238 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:34:43 crc 
kubenswrapper[4721]: I0128 18:34:43.545275 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:34:43 crc kubenswrapper[4721]: I0128 18:34:43.545286 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:34:43 crc kubenswrapper[4721]: I0128 18:34:43.545301 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:34:43 crc kubenswrapper[4721]: I0128 18:34:43.545310 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:34:43Z","lastTransitionTime":"2026-01-28T18:34:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:34:43 crc kubenswrapper[4721]: I0128 18:34:43.546580 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wr282" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"70686e42-b434-4ff9-9753-cfc870beef82\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ac8a8ea8e677c233fbcb1ac69446f0251b2e390e84a39cd77c7ccf1a2041908\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7lkbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://989948e16979a2cd8568ed93334056da9feb5bd7226f4124428dcffc09f13d41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7lkbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2cc9ef8160c40e72ae736136ef051b06552caea2539fad7969c5f923eba0c2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7lkbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7df50ebf5999c8b004f27f801fc166377523c7b42171fec957ab82c88b529794\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7lkbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11610a0739792fa0417c894ad5b1c0d46d7188b585f8f4c1a1e7e66cd6d8bd46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7lkbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://44279a74a34d26b6eee05ac26d51a21492dddab4288ca5e09665c191cbacd90c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7lkbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e167117f8dd91a71f9983b9f3516e8162cf03f3
90ef2a1a8478fd5dd6df2dba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ae0f678963d4efdfa09099257ad96a3ba4457e2819e237234b4137fab9b67f69\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T18:34:40Z\\\",\\\"message\\\":\\\"nformers/factory.go:160\\\\nI0128 18:34:40.598846 6010 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0128 18:34:40.598844 6010 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0128 18:34:40.598886 6010 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0128 18:34:40.598914 6010 reflector.go:311] Stopping reflector *v1.UserDefinedNetwork (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/userdefinednetwork/v1/apis/informers/externalversions/factory.go:140\\\\nI0128 18:34:40.598929 6010 reflector.go:311] Stopping reflector *v1.ClusterUserDefinedNetwork (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/userdefinednetwork/v1/apis/informers/externalversions/factory.go:140\\\\nI0128 18:34:40.599009 6010 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0128 18:34:40.599206 6010 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from 
sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T18:34:37Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7lkbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://373ab38644697d3dbfe782ed66982ac1d5510d0a3bc39072c8113f71cc9e97f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7lkbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\
\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://733c81890010402d252a1945284a85a3278b603ede433530865f680a1be02477\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://733c81890010402d252a1945284a85a3278b603ede433530865f680a1be02477\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7lkbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:34:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-wr282\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:43Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:43 crc kubenswrapper[4721]: I0128 18:34:43.553020 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lqp9h\" (UniqueName: \"kubernetes.io/projected/8ac7e75e-c5bb-4b57-b2ba-9ebe8b8fbd88-kube-api-access-lqp9h\") pod \"ovnkube-control-plane-749d76644c-x8hw8\" (UID: \"8ac7e75e-c5bb-4b57-b2ba-9ebe8b8fbd88\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-x8hw8" Jan 28 18:34:43 crc kubenswrapper[4721]: I0128 18:34:43.553060 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/8ac7e75e-c5bb-4b57-b2ba-9ebe8b8fbd88-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-x8hw8\" (UID: \"8ac7e75e-c5bb-4b57-b2ba-9ebe8b8fbd88\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-x8hw8" Jan 28 18:34:43 crc kubenswrapper[4721]: I0128 18:34:43.553095 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/8ac7e75e-c5bb-4b57-b2ba-9ebe8b8fbd88-env-overrides\") pod \"ovnkube-control-plane-749d76644c-x8hw8\" (UID: \"8ac7e75e-c5bb-4b57-b2ba-9ebe8b8fbd88\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-x8hw8" Jan 28 18:34:43 crc kubenswrapper[4721]: I0128 18:34:43.553123 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/8ac7e75e-c5bb-4b57-b2ba-9ebe8b8fbd88-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-x8hw8\" (UID: \"8ac7e75e-c5bb-4b57-b2ba-9ebe8b8fbd88\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-x8hw8" Jan 28 18:34:43 crc kubenswrapper[4721]: I0128 
18:34:43.557252 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rgqdt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c0a22020-3f34-4895-beec-2ed5d829ea79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d26691be7d95ffd613cc84f59222c67d29ab459d1c42ca07d20fe928d7fbc4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l86pm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2
026-01-28T18:34:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rgqdt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:43Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:43 crc kubenswrapper[4721]: I0128 18:34:43.574937 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"50459cb7-bb46-4f2f-a119-03a6102ad146\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:33:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97a290731225e8b998c044c9547db36b5af73889929db5fbc37b6b4170f5dbb7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9efdfabfba8f11d2bdeafad77520260137fcfff3ca51ae77afd4ee9b141f602\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3c1b4d37e11f8a4ee9539e5ea3cce77b7d382133b9e59ccd9c3e8aa2b308754\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52a1199a0611555765afe8fcf017742788488064c958d0c75c46d51ddc2ff9d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://316506ed1e9f91314d427e1b6c3a44ee58d0c4a3a8a93b97bdf098703a6a2f34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f1a6a669a0341baa2d7fb17bcebfe702f96d1192ca35c3f2d00e463778271d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f1a6a669a0341baa2d7fb17bcebfe702f96d1192ca35c3f2d00e463778271d6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:33:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d3a248e15466c89ce3b237a22ce54365004ea86dc1541e1ac44e028f91bf19f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877
441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d3a248e15466c89ce3b237a22ce54365004ea86dc1541e1ac44e028f91bf19f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:34:02Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f2e1f3a63445bad45029602f52863bfdf2fab495711e3318d68fee042530acb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f2e1f3a63445bad45029602f52863bfdf2fab495711e3318d68fee042530acb1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:34:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:33:56Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:43Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:43 crc kubenswrapper[4721]: I0128 18:34:43.587134 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"94f18835-9a2d-4427-bc71-e4cd48b94c19\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:33:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:33:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:33:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a345c3bb49064600db61f16588229b6877293710e6c21c15d91746c2f1d50b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://93950542dfa7bda5686ef6d8ff927d14baafd149893c732658e9aa3916be64ae\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://486a6284a1e3930aac13368062b8796b4c271214dd08cb60c7ae2ce7b4a45a05\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0339ccd2bd809dc715d0e334b596c5abdebca1af6ac5f7afefa81aa8baa470ed\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ca3243d75b5ce4aff178d82c45c7d1f765013233b5736600151adb4801aa462b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T18:34:21Z\\\",\\\"message\\\":\\\"le observer\\\\nW0128 18:34:20.724978 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0128 18:34:20.725313 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 18:34:20.726636 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2540251349/tls.crt::/tmp/serving-cert-2540251349/tls.key\\\\\\\"\\\\nI0128 18:34:21.177830 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0128 18:34:21.196258 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0128 18:34:21.196291 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0128 18:34:21.196317 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0128 18:34:21.196323 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0128 18:34:21.230729 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0128 18:34:21.230760 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0128 18:34:21.230771 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 18:34:21.230781 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 18:34:21.230789 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0128 18:34:21.230795 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0128 18:34:21.230800 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0128 18:34:21.230804 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0128 18:34:21.233981 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T18:34:20Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://90f1279d8262e042527207dbe56a1d08ba554650510bebfc53bd24cd84532026\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:06Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1499c7b625ec721799e4cf54e3de39ab75a752bbb0450a8eeffcdb6259f8a585\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1499c7b625ec721799e4cf54e3de39ab75a752bbb0450a8eeffcdb6259f8a585\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:33:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:33:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:43Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:43 crc kubenswrapper[4721]: I0128 18:34:43.599577 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"86fc797f-6769-4801-8cb0-b9f25c9ec29b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:33:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:33:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e8c8cd370cc146f15325497afef171437a3c7c3d4e82cda99a321bf847399bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f712f75bf0603b358c696f1a62f2363254ab926615ab377291c7083308e3f2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:33:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6143e461a697375ca79df96494f8cf4a575e622428db241f50a52ae1c67acbc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e37a3261383445df2e4050fb0b3f92c0afc6379c71eb061475783a72cd37459f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:33:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:43Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:43 crc kubenswrapper[4721]: I0128 18:34:43.611252 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:43Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:43 crc kubenswrapper[4721]: I0128 18:34:43.626266 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56147b9675ae31e5e8b4473346c69d90a1013ab084214a7fc34295f810b229a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5dc3277d9793a59f2e2a2b4d015782f383df20b594be435f894d24dbe5b837cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:43Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:43 crc kubenswrapper[4721]: I0128 18:34:43.647714 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:34:43 crc kubenswrapper[4721]: I0128 18:34:43.647745 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:34:43 crc kubenswrapper[4721]: I0128 18:34:43.647754 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:34:43 crc kubenswrapper[4721]: I0128 18:34:43.647769 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:34:43 crc kubenswrapper[4721]: I0128 18:34:43.647780 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:34:43Z","lastTransitionTime":"2026-01-28T18:34:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:34:43 crc kubenswrapper[4721]: I0128 18:34:43.654256 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lqp9h\" (UniqueName: \"kubernetes.io/projected/8ac7e75e-c5bb-4b57-b2ba-9ebe8b8fbd88-kube-api-access-lqp9h\") pod \"ovnkube-control-plane-749d76644c-x8hw8\" (UID: \"8ac7e75e-c5bb-4b57-b2ba-9ebe8b8fbd88\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-x8hw8" Jan 28 18:34:43 crc kubenswrapper[4721]: I0128 18:34:43.654290 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/8ac7e75e-c5bb-4b57-b2ba-9ebe8b8fbd88-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-x8hw8\" (UID: \"8ac7e75e-c5bb-4b57-b2ba-9ebe8b8fbd88\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-x8hw8" Jan 28 18:34:43 crc kubenswrapper[4721]: I0128 18:34:43.654314 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/8ac7e75e-c5bb-4b57-b2ba-9ebe8b8fbd88-env-overrides\") pod \"ovnkube-control-plane-749d76644c-x8hw8\" (UID: \"8ac7e75e-c5bb-4b57-b2ba-9ebe8b8fbd88\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-x8hw8" Jan 28 18:34:43 crc kubenswrapper[4721]: I0128 18:34:43.654335 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/8ac7e75e-c5bb-4b57-b2ba-9ebe8b8fbd88-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-x8hw8\" (UID: \"8ac7e75e-c5bb-4b57-b2ba-9ebe8b8fbd88\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-x8hw8" Jan 28 18:34:43 crc kubenswrapper[4721]: I0128 18:34:43.654979 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/8ac7e75e-c5bb-4b57-b2ba-9ebe8b8fbd88-env-overrides\") pod \"ovnkube-control-plane-749d76644c-x8hw8\" (UID: \"8ac7e75e-c5bb-4b57-b2ba-9ebe8b8fbd88\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-x8hw8" Jan 28 18:34:43 crc kubenswrapper[4721]: I0128 18:34:43.655040 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/8ac7e75e-c5bb-4b57-b2ba-9ebe8b8fbd88-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-x8hw8\" (UID: \"8ac7e75e-c5bb-4b57-b2ba-9ebe8b8fbd88\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-x8hw8" Jan 28 18:34:43 crc kubenswrapper[4721]: I0128 18:34:43.660130 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/8ac7e75e-c5bb-4b57-b2ba-9ebe8b8fbd88-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-x8hw8\" (UID: \"8ac7e75e-c5bb-4b57-b2ba-9ebe8b8fbd88\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-x8hw8" Jan 28 18:34:43 crc kubenswrapper[4721]: I0128 18:34:43.669095 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lqp9h\" (UniqueName: \"kubernetes.io/projected/8ac7e75e-c5bb-4b57-b2ba-9ebe8b8fbd88-kube-api-access-lqp9h\") pod \"ovnkube-control-plane-749d76644c-x8hw8\" (UID: \"8ac7e75e-c5bb-4b57-b2ba-9ebe8b8fbd88\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-x8hw8" Jan 28 18:34:43 crc kubenswrapper[4721]: I0128 18:34:43.739237 4721 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-x8hw8" Jan 28 18:34:43 crc kubenswrapper[4721]: I0128 18:34:43.749764 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:34:43 crc kubenswrapper[4721]: I0128 18:34:43.749800 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:34:43 crc kubenswrapper[4721]: I0128 18:34:43.749813 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:34:43 crc kubenswrapper[4721]: I0128 18:34:43.749834 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:34:43 crc kubenswrapper[4721]: I0128 18:34:43.749848 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:34:43Z","lastTransitionTime":"2026-01-28T18:34:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:34:43 crc kubenswrapper[4721]: I0128 18:34:43.838276 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-wr282_70686e42-b434-4ff9-9753-cfc870beef82/ovnkube-controller/1.log" Jan 28 18:34:43 crc kubenswrapper[4721]: I0128 18:34:43.838841 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-wr282_70686e42-b434-4ff9-9753-cfc870beef82/ovnkube-controller/0.log" Jan 28 18:34:43 crc kubenswrapper[4721]: I0128 18:34:43.841368 4721 generic.go:334] "Generic (PLEG): container finished" podID="70686e42-b434-4ff9-9753-cfc870beef82" containerID="9e167117f8dd91a71f9983b9f3516e8162cf03f390ef2a1a8478fd5dd6df2dba" exitCode=1 Jan 28 18:34:43 crc kubenswrapper[4721]: I0128 18:34:43.841389 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wr282" event={"ID":"70686e42-b434-4ff9-9753-cfc870beef82","Type":"ContainerDied","Data":"9e167117f8dd91a71f9983b9f3516e8162cf03f390ef2a1a8478fd5dd6df2dba"} Jan 28 18:34:43 crc kubenswrapper[4721]: I0128 18:34:43.841434 4721 scope.go:117] "RemoveContainer" containerID="ae0f678963d4efdfa09099257ad96a3ba4457e2819e237234b4137fab9b67f69" Jan 28 18:34:43 crc kubenswrapper[4721]: I0128 18:34:43.842204 4721 scope.go:117] "RemoveContainer" containerID="9e167117f8dd91a71f9983b9f3516e8162cf03f390ef2a1a8478fd5dd6df2dba" Jan 28 18:34:43 crc kubenswrapper[4721]: E0128 18:34:43.842372 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-wr282_openshift-ovn-kubernetes(70686e42-b434-4ff9-9753-cfc870beef82)\"" pod="openshift-ovn-kubernetes/ovnkube-node-wr282" podUID="70686e42-b434-4ff9-9753-cfc870beef82" Jan 28 18:34:43 crc kubenswrapper[4721]: I0128 18:34:43.851579 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:34:43 crc kubenswrapper[4721]: I0128 18:34:43.851605 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:34:43 crc 
kubenswrapper[4721]: I0128 18:34:43.851616 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:34:43 crc kubenswrapper[4721]: I0128 18:34:43.851630 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:34:43 crc kubenswrapper[4721]: I0128 18:34:43.851641 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:34:43Z","lastTransitionTime":"2026-01-28T18:34:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:34:43 crc kubenswrapper[4721]: I0128 18:34:43.855344 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-rk2l2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bdd30376-1599-4efc-bb55-7585e8702b60\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa0ff1b9df47dfd65597208cdd31f5cb8ce7f4c9c170d298db28d6f04da7b7a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wknj5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:34:32Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-rk2l2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:43Z is after 2025-08-24T17:21:41Z" Jan 
28 18:34:43 crc kubenswrapper[4721]: I0128 18:34:43.866081 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-x8hw8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8ac7e75e-c5bb-4b57-b2ba-9ebe8b8fbd88\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqp9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqp9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:34:43Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-x8hw8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:43Z is after 
2025-08-24T17:21:41Z" Jan 28 18:34:43 crc kubenswrapper[4721]: W0128 18:34:43.875879 4721 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8ac7e75e_c5bb_4b57_b2ba_9ebe8b8fbd88.slice/crio-b17639a403368a9ce90dd446bae79d710309dacb12b4852d540a7cbe20a5892e WatchSource:0}: Error finding container b17639a403368a9ce90dd446bae79d710309dacb12b4852d540a7cbe20a5892e: Status 404 returned error can't find the container with id b17639a403368a9ce90dd446bae79d710309dacb12b4852d540a7cbe20a5892e Jan 28 18:34:43 crc kubenswrapper[4721]: I0128 18:34:43.881278 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rgqdt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c0a22020-3f34-4895-beec-2ed5d829ea79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d26691be7d95ffd613cc84f59222c67d29ab459d1c42ca07d20fe928d7fbc4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\
"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l86pm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:34:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rgqdt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:43Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:43 crc kubenswrapper[4721]: I0128 18:34:43.905506 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"50459cb7-bb46-4f2f-a119-03a6102ad146\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:33:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97a290731225e8b998c044c9547db36b5af73889929db5fbc37b6b4170f5dbb7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9efdfabfba8f11d2bdeafad77520260137fcfff3ca51ae77afd4ee9b141f602\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\
":{\\\"startedAt\\\":\\\"2026-01-28T18:34:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3c1b4d37e11f8a4ee9539e5ea3cce77b7d382133b9e59ccd9c3e8aa2b308754\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52a1199a0611555765afe8fcf017742788488064c958d0c75c46d51ddc2ff9d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://316506ed1e9f91314d427e1b6c3a44ee58d0c4a3a8a93b97bdf098703a6a2f34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f1a6a669a0341baa2d7fb17bcebfe702f96d1192ca35c3f2d00e463778271d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f1a6a669a0341baa2d7fb17bcebfe7
02f96d1192ca35c3f2d00e463778271d6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:33:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d3a248e15466c89ce3b237a22ce54365004ea86dc1541e1ac44e028f91bf19f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d3a248e15466c89ce3b237a22ce54365004ea86dc1541e1ac44e028f91bf19f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:34:02Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f2e1f3a63445bad45029602f52863bfdf2fab495711e3318d68fee042530acb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f2e1f3a63445bad45029602f52863bfdf2fab495711e3318d68fee042530acb1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:34:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:33:56Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:43Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:43 crc kubenswrapper[4721]: I0128 18:34:43.920335 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"94f18835-9a2d-4427-bc71-e4cd48b94c19\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:33:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:33:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:33:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a345c3bb49064600db61f16588229b6877293710e6c21c15d91746c2f1d50b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://93950542dfa7bda5686ef6d8ff927d14baafd149893c732658e9aa3916be64ae\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://486a6284a1e3930aac13368062b8796b4c271214dd08cb60c7ae2ce7b4a45a05\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0339ccd2bd809dc715d0e334b596c5abdebca1af6ac5f7afefa81aa8baa470ed\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\
\":\\\"cri-o://ca3243d75b5ce4aff178d82c45c7d1f765013233b5736600151adb4801aa462b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T18:34:21Z\\\",\\\"message\\\":\\\"le observer\\\\nW0128 18:34:20.724978 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0128 18:34:20.725313 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 18:34:20.726636 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2540251349/tls.crt::/tmp/serving-cert-2540251349/tls.key\\\\\\\"\\\\nI0128 18:34:21.177830 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0128 18:34:21.196258 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0128 18:34:21.196291 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0128 18:34:21.196317 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0128 18:34:21.196323 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0128 18:34:21.230729 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0128 18:34:21.230760 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0128 18:34:21.230771 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 18:34:21.230781 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 18:34:21.230789 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0128 18:34:21.230795 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0128 18:34:21.230800 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0128 18:34:21.230804 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0128 18:34:21.233981 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T18:34:20Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://90f1279d8262e042527207dbe56a1d08ba554650510bebfc53bd24cd84532026\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:06Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1499c7b625ec721799e4cf54e3de39ab75a752bbb0450a8eeffcdb6259f8a585\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1499c7b625ec721799e4cf54e3de39ab75a752bbb0450a8eeffcdb6259f8a585\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:33:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:33:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:43Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:43 crc kubenswrapper[4721]: I0128 18:34:43.934057 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"86fc797f-6769-4801-8cb0-b9f25c9ec29b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:33:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:33:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e8c8cd370cc146f15325497afef171437a3c7c3d4e82cda99a321bf847399bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f712f75bf0603b358c696f1a62f2363254ab926615ab377291c7083308e3f2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:33:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6143e461a697375ca79df96494f8cf4a575e622428db241f50a52ae1c67acbc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e37a3261383445df2e4050fb0b3f92c0afc6379c71eb061475783a72cd37459f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:33:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:43Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:43 crc kubenswrapper[4721]: I0128 18:34:43.948292 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:43Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:43 crc kubenswrapper[4721]: I0128 18:34:43.954646 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:34:43 crc kubenswrapper[4721]: I0128 18:34:43.954718 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:34:43 crc kubenswrapper[4721]: I0128 18:34:43.954819 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:34:43 crc kubenswrapper[4721]: I0128 18:34:43.954834 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:34:43 crc kubenswrapper[4721]: I0128 18:34:43.954843 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:34:43Z","lastTransitionTime":"2026-01-28T18:34:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:34:43 crc kubenswrapper[4721]: I0128 18:34:43.964011 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56147b9675ae31e5e8b4473346c69d90a1013ab084214a7fc34295f810b229a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5dc3277d9793a59f2e2a2b4d015782f383df20b594be435f894d24dbe5b837cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:43Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:43 crc kubenswrapper[4721]: I0128 18:34:43.983568 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wr282" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"70686e42-b434-4ff9-9753-cfc870beef82\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ac8a8ea8e677c233fbcb1ac69446f0251b2e390e84a39cd77c7ccf1a2041908\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7lkbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://989948e16979a2cd8568ed93334056da9feb5bd7226f4124428dcffc09f13d41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7lkbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2cc9ef8160c40e72ae736136ef051b06552caea2539fad7969c5f923eba0c2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7lkbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7df50ebf5999c8b004f27f801fc166377523c7b42171fec957ab82c88b529794\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7lkbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11610a0739792fa0417c894ad5b1c0d46d7188b585f8f4c1a1e7e66cd6d8bd46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7lkbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://44279a74a34d26b6eee05ac26d51a21492dddab4288ca5e09665c191cbacd90c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7lkbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e167117f8dd91a71f9983b9f3516e8162cf03f390ef2a1a8478fd5dd6df2dba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ae0f678963d4efdfa09099257ad96a3ba4457e2819e237234b4137fab9b67f69\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T18:34:40Z\\\",\\\"message\\\":\\\"nformers/factory.go:160\\\\nI0128 18:34:40.598846 6010 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0128 18:34:40.598844 6010 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0128 18:34:40.598886 6010 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0128 18:34:40.598914 6010 reflector.go:311] Stopping reflector *v1.UserDefinedNetwork (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/userdefinednetwork/v1/apis/informers/externalversions/factory.go:140\\\\nI0128 18:34:40.598929 6010 reflector.go:311] Stopping reflector *v1.ClusterUserDefinedNetwork (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/userdefinednetwork/v1/apis/informers/externalversions/factory.go:140\\\\nI0128 18:34:40.599009 6010 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0128 18:34:40.599206 6010 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T18:34:37Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e167117f8dd91a71f9983b9f3516e8162cf03f390ef2a1a8478fd5dd6df2dba\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T18:34:42Z\\\",\\\"message\\\":\\\"4:42.840611 6198 obj_retry.go:386] Retry successful for *v1.Pod openshift-machine-config-operator/machine-config-daemon-76rx2 after 0 failed attempt(s)\\\\nI0128 18:34:42.840616 6198 
default_network_controller.go:776] Recording success event on pod openshift-machine-config-operator/machine-config-daemon-76rx2\\\\nI0128 18:34:42.840504 6198 obj_retry.go:386] Retry successful for *v1.Pod openshift-etcd/etcd-crc after 0 failed attempt(s)\\\\nI0128 18:34:42.840624 6198 default_network_controller.go:776] Recording success event on pod openshift-etcd/etcd-crc\\\\nI0128 18:34:42.840480 6198 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-diagnostics/network-check-target-xd92c\\\\nI0128 18:34:42.840632 6198 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-diagnostics/network-check-target-xd92c\\\\nI0128 18:34:42.840636 6198 ovn.go:134] Ensuring zone local for Pod openshift-network-diagnostics/network-check-target-xd92c in node crc\\\\nF0128 18:34:42.840640 6198 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped a\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T18:34:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7lkbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://373ab38644697d3dbfe782ed66982ac1d5510d0a3bc39072c8113f71cc9e97f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshi
ft-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7lkbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://733c81890010402d252a1945284a85a3278b603ede433530865f680a1be02477\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://733c81890010402d252a1945284a85a3278b603ede433530865f680a1be02477\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7lkbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:34:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-wr282\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:43Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:43 crc kubenswrapper[4721]: I0128 18:34:43.997976 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef61ab09457440960cf27586f65751f928ccc999db19fd16364262ce2449cd97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:43Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:44 crc kubenswrapper[4721]: I0128 18:34:44.007602 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-lf92l" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"20d04cbd-fcf1-4d48-9cca-1dd29b13c938\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://709d01fbeba51f6f21b60d9fd60aa9b44ba9e4f2504c19b7fd8d279f8c13c1e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mvqr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:34:30Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-lf92l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:44Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:44 crc kubenswrapper[4721]: I0128 18:34:44.021441 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7vsph" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"942c0bcf-8f75-42e8-a5c0-af4c640eb13c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ce38a3c3aa72d0816ff32ec77af100dc6f2a8761362992f01b88f836c44f65f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s84mk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5222e9a5ac55f20acbe1105e84f22c96889f410246b7ef423236f5d3e216f43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d5222e9a5ac55f20acbe1105e84f22c96889f410246b7ef423236f5d3e216f43\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s84mk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e702f148f20abb9e5e9453eb1b9ae0cc094138beb1fa048753ff3b497517e942\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e702f148f20abb9e5e9453eb1b9ae0cc094138beb1fa048753ff3b497517e942\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s84mk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a55ef87212072ac9f6d05d27d54f7a3f13add617b1a07a2fd1e0a652fe15cb3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a55ef87212072ac9f6d05d27d54f7a3f13add617b1a07a2fd1e0a652fe15cb3a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:34:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s84mk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea7d40aa704ad5ddaaa471ab8606dcece38b229fd8affcce9bb1959e5c19f092\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ea7d40aa704ad5ddaaa471ab8606dcece38b229fd8affcce9bb1959e5c19f092\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:34:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s84mk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1829c2eed51af896c5ccfa25c1f23ea971ea9d54e9db53415095c5ea43ac4062\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1829c2eed51af896c5ccfa25c1f23ea971ea9d54e9db53415095c5ea43ac4062\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:34:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s84mk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b576c2dcc0ad83742dbc75f744d08eb8b881f9be4f9d16f4d4d3004ce174f24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b576c2dcc0ad83742dbc75f744d08eb8b881f9be4f9d16f4d4d3004ce174f24\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:34:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s84mk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:34:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7vsph\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:44Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:44 crc kubenswrapper[4721]: I0128 18:34:44.032998 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:24Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:44Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:44 crc kubenswrapper[4721]: I0128 18:34:44.043370 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:44Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:44 crc kubenswrapper[4721]: I0128 18:34:44.054453 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa3af2d00ecbcf4d30c4c81191306dd45625f55032058b314ce2b91f6b2033e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:44Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:44 crc kubenswrapper[4721]: I0128 18:34:44.057013 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:34:44 crc kubenswrapper[4721]: I0128 18:34:44.057045 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:34:44 crc kubenswrapper[4721]: I0128 18:34:44.057055 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:34:44 crc kubenswrapper[4721]: I0128 18:34:44.057070 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:34:44 crc kubenswrapper[4721]: I0128 18:34:44.057080 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:34:44Z","lastTransitionTime":"2026-01-28T18:34:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:34:44 crc kubenswrapper[4721]: I0128 18:34:44.064864 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-76rx2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e3427a4-9a03-4a08-bf7f-7a5e96290ad6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9843dea4333a77dd5b0005984b9f8e7c7c993b28f89f5d6432477bfde3383339\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"k
ube-api-access-d4cj5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bbd26672afb8ed608228f3f2101f430b1eaa5ccea0e8074fad597dec347c4522\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4cj5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:34:30Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-76rx2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:44Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:44 crc kubenswrapper[4721]: I0128 18:34:44.158617 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:34:44 crc kubenswrapper[4721]: I0128 18:34:44.158645 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:34:44 crc kubenswrapper[4721]: I0128 18:34:44.158653 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:34:44 crc kubenswrapper[4721]: I0128 18:34:44.158668 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:34:44 crc kubenswrapper[4721]: I0128 18:34:44.158677 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:34:44Z","lastTransitionTime":"2026-01-28T18:34:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:34:44 crc kubenswrapper[4721]: I0128 18:34:44.261406 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:34:44 crc kubenswrapper[4721]: I0128 18:34:44.261760 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:34:44 crc kubenswrapper[4721]: I0128 18:34:44.261844 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:34:44 crc kubenswrapper[4721]: I0128 18:34:44.261926 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:34:44 crc kubenswrapper[4721]: I0128 18:34:44.262000 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:34:44Z","lastTransitionTime":"2026-01-28T18:34:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:34:44 crc kubenswrapper[4721]: I0128 18:34:44.365006 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:34:44 crc kubenswrapper[4721]: I0128 18:34:44.365057 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:34:44 crc kubenswrapper[4721]: I0128 18:34:44.365072 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:34:44 crc kubenswrapper[4721]: I0128 18:34:44.365091 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:34:44 crc kubenswrapper[4721]: I0128 18:34:44.365103 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:34:44Z","lastTransitionTime":"2026-01-28T18:34:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:34:44 crc kubenswrapper[4721]: I0128 18:34:44.466995 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:34:44 crc kubenswrapper[4721]: I0128 18:34:44.467118 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:34:44 crc kubenswrapper[4721]: I0128 18:34:44.467131 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:34:44 crc kubenswrapper[4721]: I0128 18:34:44.467228 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:34:44 crc kubenswrapper[4721]: I0128 18:34:44.467307 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:34:44Z","lastTransitionTime":"2026-01-28T18:34:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:34:44 crc kubenswrapper[4721]: I0128 18:34:44.494371 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-jqvck"] Jan 28 18:34:44 crc kubenswrapper[4721]: I0128 18:34:44.494849 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jqvck" Jan 28 18:34:44 crc kubenswrapper[4721]: E0128 18:34:44.494915 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jqvck" podUID="f3440038-c980-4fb4-be99-235515ec221c" Jan 28 18:34:44 crc kubenswrapper[4721]: I0128 18:34:44.495210 4721 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-23 12:37:31.122021087 +0000 UTC Jan 28 18:34:44 crc kubenswrapper[4721]: I0128 18:34:44.508199 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef61ab09457440960cf27586f65751f928ccc999db19fd16364262ce2449cd97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:44Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:44 crc kubenswrapper[4721]: I0128 18:34:44.518630 4721 status_manager.go:875] "Failed to update status for 
pod" pod="openshift-dns/node-resolver-lf92l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"20d04cbd-fcf1-4d48-9cca-1dd29b13c938\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://709d01fbeba51f6f21b60d9fd60aa9b44ba9e4f2504c19b7fd8d279f8c13c1e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mvqr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:34:30Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-lf92l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:44Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:44 crc kubenswrapper[4721]: I0128 18:34:44.528482 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 18:34:44 crc kubenswrapper[4721]: I0128 18:34:44.528522 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 18:34:44 crc kubenswrapper[4721]: E0128 18:34:44.528615 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 18:34:44 crc kubenswrapper[4721]: E0128 18:34:44.528660 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 18:34:44 crc kubenswrapper[4721]: I0128 18:34:44.528964 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 18:34:44 crc kubenswrapper[4721]: E0128 18:34:44.529186 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 18:34:44 crc kubenswrapper[4721]: I0128 18:34:44.534261 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7vsph" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"942c0bcf-8f75-42e8-a5c0-af4c640eb13c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ce38a3c3aa72d0816ff32ec77af100dc6f2a8761362992f01b88f836c44f65f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s84mk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5222e9a5ac55f20acbe1105e84f22c96889f410246b7ef423236f5d3e216f43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d5222e9a5ac55f20acbe1105e84f22c96889f410246b7ef423236f5d3e216f43\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s84mk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e702f148f20abb9e5e9453eb1b9ae0cc094138beb1fa048753ff3b497517e942\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e702f148f20abb9e5e9453eb1b9ae0cc094138beb1fa048753ff3b497517e942\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s84mk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a55ef87212072ac9f6d05d27d54f7a3f13add617b1a07a2fd1e0a652fe15cb3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a55ef87212072ac9f6d05d27d54f7a3f13add617b1a07a2fd1e0a652fe15cb3a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:34:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name
\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s84mk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea7d40aa704ad5ddaaa471ab8606dcece38b229fd8affcce9bb1959e5c19f092\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ea7d40aa704ad5ddaaa471ab8606dcece38b229fd8affcce9bb1959e5c19f092\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:34:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s84mk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1829c2eed51af896c5ccfa25c1f23ea971ea9d54e9db53415095c5ea43ac4062\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1829c2eed51af896c5ccfa25c1f23ea971ea9d54e9db53415095c5ea43ac4062\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:34:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s84mk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b576c2dcc0ad83742dbc75f744d08eb8b881f9be4f9d16f4d4d3004ce174f24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b576c2dcc0ad83742dbc75f744
d08eb8b881f9be4f9d16f4d4d3004ce174f24\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:34:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s84mk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:34:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7vsph\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:44Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:44 crc kubenswrapper[4721]: I0128 18:34:44.544538 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-76rx2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e3427a4-9a03-4a08-bf7f-7a5e96290ad6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9843dea4333a77dd5b0005984b9f8e7c7c993b28f89f5d6432477bfde3383339\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4cj5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bbd26672afb8ed608228f3f2101f430b1eaa5ccea0e8074fad597dec347c4522\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/o
cp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4cj5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:34:30Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-76rx2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:44Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:44 crc kubenswrapper[4721]: I0128 18:34:44.555525 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:24Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:44Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:44 crc kubenswrapper[4721]: I0128 18:34:44.561049 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f3440038-c980-4fb4-be99-235515ec221c-metrics-certs\") pod \"network-metrics-daemon-jqvck\" (UID: \"f3440038-c980-4fb4-be99-235515ec221c\") " pod="openshift-multus/network-metrics-daemon-jqvck" Jan 28 18:34:44 crc kubenswrapper[4721]: I0128 18:34:44.561136 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-96np9\" (UniqueName: \"kubernetes.io/projected/f3440038-c980-4fb4-be99-235515ec221c-kube-api-access-96np9\") pod \"network-metrics-daemon-jqvck\" (UID: \"f3440038-c980-4fb4-be99-235515ec221c\") " pod="openshift-multus/network-metrics-daemon-jqvck" Jan 28 18:34:44 crc kubenswrapper[4721]: I0128 18:34:44.567095 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:24Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:44Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:44 crc kubenswrapper[4721]: I0128 18:34:44.569362 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:34:44 crc kubenswrapper[4721]: I0128 18:34:44.569402 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:34:44 crc kubenswrapper[4721]: I0128 18:34:44.569410 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:34:44 crc kubenswrapper[4721]: I0128 18:34:44.569424 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:34:44 crc kubenswrapper[4721]: I0128 18:34:44.569433 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:34:44Z","lastTransitionTime":"2026-01-28T18:34:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:34:44 crc kubenswrapper[4721]: I0128 18:34:44.578832 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa3af2d00ecbcf4d30c4c81191306dd45625f55032058b314ce2b91f6b2033e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:44Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:44 crc kubenswrapper[4721]: I0128 18:34:44.590317 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-rk2l2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bdd30376-1599-4efc-bb55-7585e8702b60\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa0ff1b9df47dfd65597208cdd31f5cb8ce7f4c9c170d298db28d6f04da7b7a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wknj5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:34:32Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-rk2l2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:44Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:44 crc kubenswrapper[4721]: I0128 18:34:44.602439 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-x8hw8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8ac7e75e-c5bb-4b57-b2ba-9ebe8b8fbd88\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqp9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqp9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:34:43Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-x8hw8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:44Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:44 crc kubenswrapper[4721]: I0128 18:34:44.614739 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-jqvck" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f3440038-c980-4fb4-be99-235515ec221c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:44Z\\\",\\\"message\\\":\\\"containers with unready 
status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96np9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96np9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:34:44Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-jqvck\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:44Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:44 crc kubenswrapper[4721]: I0128 18:34:44.628867 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56147b9675ae31e5e8b4473346c69d90a1013ab084214a7fc34295f810b229a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5dc3277d9793a59f2e2a2b4d015782f383df20b594be435f894d24dbe5b837cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:44Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:44 crc kubenswrapper[4721]: I0128 18:34:44.650547 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wr282" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"70686e42-b434-4ff9-9753-cfc870beef82\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ac8a8ea8e677c233fbcb1ac69446f0251b2e390e84a39cd77c7ccf1a2041908\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7lkbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://989948e16979a2cd8568ed93334056da9feb5bd7226f4124428dcffc09f13d41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7lkbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2cc9ef8160c40e72ae736136ef051b06552caea2539fad7969c5f923eba0c2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7lkbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7df50ebf5999c8b004f27f801fc166377523c7b42171fec957ab82c88b529794\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7lkbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11610a0739792fa0417c894ad5b1c0d46d7188b585f8f4c1a1e7e66cd6d8bd46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7lkbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://44279a74a34d26b6eee05ac26d51a21492dddab4288ca5e09665c191cbacd90c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7lkbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e167117f8dd91a71f9983b9f3516e8162cf03f390ef2a1a8478fd5dd6df2dba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ae0f678963d4efdfa09099257ad96a3ba4457e2819e237234b4137fab9b67f69\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T18:34:40Z\\\",\\\"message\\\":\\\"nformers/factory.go:160\\\\nI0128 18:34:40.598846 6010 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0128 18:34:40.598844 6010 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0128 18:34:40.598886 6010 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0128 18:34:40.598914 6010 reflector.go:311] Stopping reflector *v1.UserDefinedNetwork (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/userdefinednetwork/v1/apis/informers/externalversions/factory.go:140\\\\nI0128 18:34:40.598929 6010 reflector.go:311] Stopping reflector *v1.ClusterUserDefinedNetwork (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/userdefinednetwork/v1/apis/informers/externalversions/factory.go:140\\\\nI0128 18:34:40.599009 6010 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0128 18:34:40.599206 6010 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T18:34:37Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e167117f8dd91a71f9983b9f3516e8162cf03f390ef2a1a8478fd5dd6df2dba\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T18:34:42Z\\\",\\\"message\\\":\\\"4:42.840611 6198 obj_retry.go:386] Retry successful for *v1.Pod openshift-machine-config-operator/machine-config-daemon-76rx2 after 0 failed attempt(s)\\\\nI0128 18:34:42.840616 6198 
default_network_controller.go:776] Recording success event on pod openshift-machine-config-operator/machine-config-daemon-76rx2\\\\nI0128 18:34:42.840504 6198 obj_retry.go:386] Retry successful for *v1.Pod openshift-etcd/etcd-crc after 0 failed attempt(s)\\\\nI0128 18:34:42.840624 6198 default_network_controller.go:776] Recording success event on pod openshift-etcd/etcd-crc\\\\nI0128 18:34:42.840480 6198 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-diagnostics/network-check-target-xd92c\\\\nI0128 18:34:42.840632 6198 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-diagnostics/network-check-target-xd92c\\\\nI0128 18:34:42.840636 6198 ovn.go:134] Ensuring zone local for Pod openshift-network-diagnostics/network-check-target-xd92c in node crc\\\\nF0128 18:34:42.840640 6198 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped a\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T18:34:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7lkbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://373ab38644697d3dbfe782ed66982ac1d5510d0a3bc39072c8113f71cc9e97f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshi
ft-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7lkbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://733c81890010402d252a1945284a85a3278b603ede433530865f680a1be02477\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://733c81890010402d252a1945284a85a3278b603ede433530865f680a1be02477\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7lkbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:34:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-wr282\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:44Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:44 crc kubenswrapper[4721]: I0128 18:34:44.661963 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f3440038-c980-4fb4-be99-235515ec221c-metrics-certs\") pod \"network-metrics-daemon-jqvck\" (UID: \"f3440038-c980-4fb4-be99-235515ec221c\") " pod="openshift-multus/network-metrics-daemon-jqvck" Jan 28 18:34:44 crc kubenswrapper[4721]: I0128 18:34:44.662049 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-96np9\" (UniqueName: \"kubernetes.io/projected/f3440038-c980-4fb4-be99-235515ec221c-kube-api-access-96np9\") pod \"network-metrics-daemon-jqvck\" (UID: \"f3440038-c980-4fb4-be99-235515ec221c\") " pod="openshift-multus/network-metrics-daemon-jqvck" Jan 28 18:34:44 crc kubenswrapper[4721]: E0128 18:34:44.662185 4721 secret.go:188] Couldn't get secret 
openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 28 18:34:44 crc kubenswrapper[4721]: E0128 18:34:44.662296 4721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f3440038-c980-4fb4-be99-235515ec221c-metrics-certs podName:f3440038-c980-4fb4-be99-235515ec221c nodeName:}" failed. No retries permitted until 2026-01-28 18:34:45.16227813 +0000 UTC m=+50.887583690 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/f3440038-c980-4fb4-be99-235515ec221c-metrics-certs") pod "network-metrics-daemon-jqvck" (UID: "f3440038-c980-4fb4-be99-235515ec221c") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 28 18:34:44 crc kubenswrapper[4721]: I0128 18:34:44.662878 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rgqdt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c0a22020-3f34-4895-beec-2ed5d829ea79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d26691be7d95ffd613cc84f59222c67d29ab459d1c42ca07d20fe928d7fbc4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\
":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l86pm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:34:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rgqdt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:44Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:44 crc kubenswrapper[4721]: I0128 18:34:44.671405 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:34:44 crc kubenswrapper[4721]: I0128 18:34:44.671438 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:34:44 crc kubenswrapper[4721]: I0128 18:34:44.671446 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:34:44 crc kubenswrapper[4721]: I0128 18:34:44.671459 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:34:44 crc kubenswrapper[4721]: I0128 18:34:44.671468 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:34:44Z","lastTransitionTime":"2026-01-28T18:34:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:34:44 crc kubenswrapper[4721]: I0128 18:34:44.679903 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-96np9\" (UniqueName: \"kubernetes.io/projected/f3440038-c980-4fb4-be99-235515ec221c-kube-api-access-96np9\") pod \"network-metrics-daemon-jqvck\" (UID: \"f3440038-c980-4fb4-be99-235515ec221c\") " pod="openshift-multus/network-metrics-daemon-jqvck" Jan 28 18:34:44 crc kubenswrapper[4721]: I0128 18:34:44.681838 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"50459cb7-bb46-4f2f-a119-03a6102ad146\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:33:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97a290731225e8b998c044c9547db36b5af73889929db5fbc37b6b4170f5dbb7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9efdfabfba8f11d2bdeafad77520260137fcfff3ca51ae77afd4ee9b141f602\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3c1b4d37e11f8a4ee9539e5ea3cce77b7d382133b9e59ccd9c3e8aa2b308754\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f584
08f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52a1199a0611555765afe8fcf017742788488064c958d0c75c46d51ddc2ff9d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://316506ed1e9f91314d427e1b6c3a44ee58d0c4a3a8a93b97bdf098703a6a2f34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f1a6a669a0341baa2d7fb17bcebfe702f96d1192ca35c3f2d00e463778271d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f1a6a669a0341baa2d7fb17bcebfe702f96d1192ca35c3f2d00e463778271d6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:33:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d3a248e15466c89ce3b237a22ce54365004ea86dc1541e1ac44e028f91bf19f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshi
ft-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d3a248e15466c89ce3b237a22ce54365004ea86dc1541e1ac44e028f91bf19f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:34:02Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f2e1f3a63445bad45029602f52863bfdf2fab495711e3318d68fee042530acb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f2e1f3a63445bad45029602f52863bfdf2fab495711e3318d68fee042530acb1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:34:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:33:56Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:44Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:44 crc kubenswrapper[4721]: I0128 18:34:44.696536 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"94f18835-9a2d-4427-bc71-e4cd48b94c19\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:33:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:33:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:33:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a345c3bb49064600db61f16588229b6877293710e6c21c15d91746c2f1d50b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://93950542dfa7bda5686ef6d8ff927d14baafd149893c732658e9aa3916be64ae\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://486a6284a1e3930aac13368062b8796b4c271214dd08cb60c7ae2ce7b4a45a05\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0339ccd2bd809dc715d0e334b596c5abdebca1af6ac5f7afefa81aa8baa470ed\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ca3243d75b5ce4aff178d82c45c7d1f765013233b5736600151adb4801aa462b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T18:34:21Z\\\",\\\"message\\\":\\\"le observer\\\\nW0128 18:34:20.724978 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0128 18:34:20.725313 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 18:34:20.726636 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2540251349/tls.crt::/tmp/serving-cert-2540251349/tls.key\\\\\\\"\\\\nI0128 18:34:21.177830 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0128 18:34:21.196258 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0128 18:34:21.196291 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0128 18:34:21.196317 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0128 18:34:21.196323 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0128 18:34:21.230729 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0128 18:34:21.230760 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0128 18:34:21.230771 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 18:34:21.230781 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 18:34:21.230789 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0128 18:34:21.230795 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0128 18:34:21.230800 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0128 18:34:21.230804 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0128 18:34:21.233981 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T18:34:20Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://90f1279d8262e042527207dbe56a1d08ba554650510bebfc53bd24cd84532026\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:06Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1499c7b625ec721799e4cf54e3de39ab75a752bbb0450a8eeffcdb6259f8a585\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1499c7b625ec721799e4cf54e3de39ab75a752bbb0450a8eeffcdb6259f8a585\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:33:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:33:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:44Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:44 crc kubenswrapper[4721]: I0128 18:34:44.708796 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"86fc797f-6769-4801-8cb0-b9f25c9ec29b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:33:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:33:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e8c8cd370cc146f15325497afef171437a3c7c3d4e82cda99a321bf847399bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f712f75bf0603b358c696f1a62f2363254ab926615ab377291c7083308e3f2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:33:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6143e461a697375ca79df96494f8cf4a575e622428db241f50a52ae1c67acbc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e37a3261383445df2e4050fb0b3f92c0afc6379c71eb061475783a72cd37459f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:33:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:44Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:44 crc kubenswrapper[4721]: I0128 18:34:44.723819 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:44Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:44 crc kubenswrapper[4721]: I0128 18:34:44.774008 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:34:44 crc kubenswrapper[4721]: I0128 18:34:44.774057 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:34:44 crc kubenswrapper[4721]: I0128 18:34:44.774071 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:34:44 crc kubenswrapper[4721]: I0128 18:34:44.774089 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:34:44 crc kubenswrapper[4721]: I0128 18:34:44.774100 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:34:44Z","lastTransitionTime":"2026-01-28T18:34:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:34:44 crc kubenswrapper[4721]: I0128 18:34:44.847761 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-wr282_70686e42-b434-4ff9-9753-cfc870beef82/ovnkube-controller/1.log" Jan 28 18:34:44 crc kubenswrapper[4721]: I0128 18:34:44.851490 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-x8hw8" event={"ID":"8ac7e75e-c5bb-4b57-b2ba-9ebe8b8fbd88","Type":"ContainerStarted","Data":"3a3c1211b73ca96ac22854f0cb677a0088a679ad56b104ea6b8e0871884a3a71"} Jan 28 18:34:44 crc kubenswrapper[4721]: I0128 18:34:44.851541 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-x8hw8" event={"ID":"8ac7e75e-c5bb-4b57-b2ba-9ebe8b8fbd88","Type":"ContainerStarted","Data":"cba8dd293cf5ae7b0c987c6ee3b24da02d2687ee54292da92e28ca627ed3eaca"} Jan 28 18:34:44 crc kubenswrapper[4721]: I0128 18:34:44.851559 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-x8hw8" event={"ID":"8ac7e75e-c5bb-4b57-b2ba-9ebe8b8fbd88","Type":"ContainerStarted","Data":"b17639a403368a9ce90dd446bae79d710309dacb12b4852d540a7cbe20a5892e"} Jan 28 18:34:44 crc kubenswrapper[4721]: I0128 18:34:44.865083 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef61ab09457440960cf27586f65751f928ccc999db19fd16364262ce2449cd97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet 
valid: current time 2026-01-28T18:34:44Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:44 crc kubenswrapper[4721]: I0128 18:34:44.875524 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-lf92l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"20d04cbd-fcf1-4d48-9cca-1dd29b13c938\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://709d01fbeba51f6f21b60d9fd60aa9b44ba9e4f2504c19b7fd8d279f8c13c1e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mvqr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:34:30Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-lf92l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:44Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:44 crc kubenswrapper[4721]: I0128 18:34:44.875807 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:34:44 crc kubenswrapper[4721]: I0128 18:34:44.875896 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:34:44 crc kubenswrapper[4721]: I0128 18:34:44.875905 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:34:44 crc kubenswrapper[4721]: I0128 18:34:44.875919 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:34:44 crc kubenswrapper[4721]: I0128 18:34:44.875929 4721 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:34:44Z","lastTransitionTime":"2026-01-28T18:34:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:34:44 crc kubenswrapper[4721]: I0128 18:34:44.889860 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7vsph" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"942c0bcf-8f75-42e8-a5c0-af4c640eb13c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ce38a3c3aa72d0816ff32ec77af100dc6f2a8761362992f01b88f836c44f65f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s84mk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5222e9a5ac55f20acbe1105e84f22c96889f410246b7ef423236f5d3e216f43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d5222e9a5ac55f20acbe1105e84f22c96889f410246b7ef423236f5d3e216f43\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",
\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s84mk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e702f148f20abb9e5e9453eb1b9ae0cc094138beb1fa048753ff3b497517e942\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e702f148f20abb9e5e9453eb1b9ae0cc094138beb1fa048753ff3b497517e942\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s84mk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a55ef87212072ac9f6d05d27d54f7a3f13add617b1a07a2fd1e0a652fe15cb3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a55ef87212072ac9f6d05d27d54f7a3f13add617b1a07a2fd1e0a652fe15cb3a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:34:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s84mk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea7d40aa704ad5ddaaa471ab8606dcece38b229fd8affcce9bb1959e5c19f092\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"co
ntainerID\\\":\\\"cri-o://ea7d40aa704ad5ddaaa471ab8606dcece38b229fd8affcce9bb1959e5c19f092\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:34:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s84mk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1829c2eed51af896c5ccfa25c1f23ea971ea9d54e9db53415095c5ea43ac4062\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1829c2eed51af896c5ccfa25c1f23ea971ea9d54e9db53415095c5ea43ac4062\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:34:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s84mk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b576c2dcc0ad83742dbc75f744d08eb8b881f9be4f9d16f4d4d3004ce174f24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b576c2dcc0ad83742dbc75f744d08eb8b881f9be4f9d16f4d4d3004ce174f24\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:34:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s84mk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:34:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7vsph\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to 
call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:44Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:44 crc kubenswrapper[4721]: I0128 18:34:44.901275 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:24Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:44Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:44 crc kubenswrapper[4721]: I0128 18:34:44.913659 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:24Z\\\",\\\"message\\\":\\\"containers with 
unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:44Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:44 crc kubenswrapper[4721]: I0128 18:34:44.925695 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa3af2d00ecbcf4d30c4c81191306dd45625f55032058b314ce2b91f6b2033e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:44Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:44 crc kubenswrapper[4721]: I0128 18:34:44.936753 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-76rx2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e3427a4-9a03-4a08-bf7f-7a5e96290ad6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9843dea4333a77dd5b0005984b9f8e7c7c993b28f89f5d6432477bfde3383339\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4cj5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bbd26672afb8ed608228f3f2101f430b1eaa5ccea0e8074fad597dec347c4522\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4cj5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:34:30Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-76rx2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:44Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:44 crc kubenswrapper[4721]: I0128 18:34:44.946162 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-rk2l2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bdd30376-1599-4efc-bb55-7585e8702b60\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa0ff1b9df47dfd65597208cdd31f5cb8ce7f4c9c170d298db28d6f04da7b7a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wknj5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:34:32Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-rk2l2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:44Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:44 crc kubenswrapper[4721]: I0128 18:34:44.956667 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-x8hw8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8ac7e75e-c5bb-4b57-b2ba-9ebe8b8fbd88\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cba8dd293cf5ae7b0c987c6ee3b24da02d2687ee54292da92e28ca627ed3eaca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqp9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a3c1211b73ca96ac22854f0cb677a0088a679ad56b104ea6b8e0871884a3a71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqp9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:34:43Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-x8hw8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:44Z is after 2025-08-24T17:21:41Z" Jan 28 
18:34:44 crc kubenswrapper[4721]: I0128 18:34:44.967551 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-jqvck" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f3440038-c980-4fb4-be99-235515ec221c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96np9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96np9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:34:44Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-jqvck\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:44Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:44 crc kubenswrapper[4721]: I0128 18:34:44.978218 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:34:44 crc kubenswrapper[4721]: I0128 18:34:44.978260 4721 kubelet_node_status.go:724] "Recording event message 
for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:34:44 crc kubenswrapper[4721]: I0128 18:34:44.978272 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:34:44 crc kubenswrapper[4721]: I0128 18:34:44.978291 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:34:44 crc kubenswrapper[4721]: I0128 18:34:44.978301 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:34:44Z","lastTransitionTime":"2026-01-28T18:34:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:34:44 crc kubenswrapper[4721]: I0128 18:34:44.980650 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rgqdt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c0a22020-3f34-4895-beec-2ed5d829ea79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d26691be7d95ffd613cc84f59222c67d29ab459d1c42ca07d20fe928d7fbc4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\
\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l86pm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:34:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rgqdt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:44Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:45 crc kubenswrapper[4721]: I0128 18:34:44.999894 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"50459cb7-bb46-4f2f-a119-03a6102ad146\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:33:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97a290731225e8b998c044c9547db36b5af73889929db5fbc37b6b4170f5dbb7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9efdfabfba8f11d2bdeafad77520260137fcfff3ca51ae77afd4ee9b141f602\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9b
e8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3c1b4d37e11f8a4ee9539e5ea3cce77b7d382133b9e59ccd9c3e8aa2b308754\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52a1199a0611555765afe8fcf017742788488064c958d0c75c46d51ddc2ff9d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://316506ed1e9f91314d427e1b6c3a44ee58d0c4a3a8a93b97bdf098703a6a2f34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f1a6a669a0341baa2d7fb17bcebfe702f96d1192ca35c3f2d00e463778271d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.
io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f1a6a669a0341baa2d7fb17bcebfe702f96d1192ca35c3f2d00e463778271d6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:33:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d3a248e15466c89ce3b237a22ce54365004ea86dc1541e1ac44e028f91bf19f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d3a248e15466c89ce3b237a22ce54365004ea86dc1541e1ac44e028f91bf19f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:34:02Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f2e1f3a63445bad45029602f52863bfdf2fab495711e3318d68fee042530acb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f2e1f3a63445bad45029602f52863bfdf2fab495711e3318d68fee042530acb1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:34:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:33:56Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:44Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:45 crc kubenswrapper[4721]: I0128 18:34:45.014281 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"94f18835-9a2d-4427-bc71-e4cd48b94c19\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:33:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:33:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:33:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a345c3bb49064600db61f16588229b6877293710e6c21c15d91746c2f1d50b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://93950542dfa7bda5686ef6d8ff927d14baafd149893c732658e9aa3916be64ae\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://486a6284a1e3930aac13368062b8796b4c271214dd08cb60c7ae2ce7b4a45a05\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0339ccd2bd809dc715d0e334b596c5abdebca1af6ac5f7afefa81aa8baa470ed\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ca3243d75b5ce4aff178d82c45c7d1f765013233b5736600151adb4801aa462b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T18:34:21Z\\\",\\\"message\\\":\\\"le observer\\\\nW0128 18:34:20.724978 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0128 18:34:20.725313 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 18:34:20.726636 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2540251349/tls.crt::/tmp/serving-cert-2540251349/tls.key\\\\\\\"\\\\nI0128 18:34:21.177830 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0128 18:34:21.196258 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0128 18:34:21.196291 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0128 18:34:21.196317 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0128 18:34:21.196323 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0128 18:34:21.230729 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0128 18:34:21.230760 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0128 18:34:21.230771 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 18:34:21.230781 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 18:34:21.230789 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0128 18:34:21.230795 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0128 18:34:21.230800 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0128 18:34:21.230804 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0128 18:34:21.233981 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T18:34:20Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://90f1279d8262e042527207dbe56a1d08ba554650510bebfc53bd24cd84532026\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:06Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1499c7b625ec721799e4cf54e3de39ab75a752bbb0450a8eeffcdb6259f8a585\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1499c7b625ec721799e4cf54e3de39ab75a752bbb0450a8eeffcdb6259f8a585\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:33:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:33:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:45Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:45 crc kubenswrapper[4721]: I0128 18:34:45.026430 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"86fc797f-6769-4801-8cb0-b9f25c9ec29b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:33:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:33:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e8c8cd370cc146f15325497afef171437a3c7c3d4e82cda99a321bf847399bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f712f75bf0603b358c696f1a62f2363254ab926615ab377291c7083308e3f2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:33:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6143e461a697375ca79df96494f8cf4a575e622428db241f50a52ae1c67acbc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e37a3261383445df2e4050fb0b3f92c0afc6379c71eb061475783a72cd37459f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:33:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:45Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:45 crc kubenswrapper[4721]: I0128 18:34:45.037872 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:45Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:45 crc kubenswrapper[4721]: I0128 18:34:45.051470 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56147b9675ae31e5e8b4473346c69d90a1013ab084214a7fc34295f810b229a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5dc3277d9793a59f2e2a2b4d015782f383df20b594be435f894d24dbe5b837cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:45Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:45 crc kubenswrapper[4721]: I0128 18:34:45.070490 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wr282" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"70686e42-b434-4ff9-9753-cfc870beef82\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ac8a8ea8e677c233fbcb1ac69446f0251b2e390e84a39cd77c7ccf1a2041908\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7lkbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://989948e16979a2cd8568ed93334056da9feb5bd7226f4124428dcffc09f13d41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7lkbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2cc9ef8160c40e72ae736136ef051b06552caea2539fad7969c5f923eba0c2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7lkbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7df50ebf5999c8b004f27f801fc166377523c7b42171fec957ab82c88b529794\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7lkbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11610a0739792fa0417c894ad5b1c0d46d7188b585f8f4c1a1e7e66cd6d8bd46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7lkbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://44279a74a34d26b6eee05ac26d51a21492dddab4288ca5e09665c191cbacd90c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7lkbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e167117f8dd91a71f9983b9f3516e8162cf03f3
90ef2a1a8478fd5dd6df2dba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ae0f678963d4efdfa09099257ad96a3ba4457e2819e237234b4137fab9b67f69\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T18:34:40Z\\\",\\\"message\\\":\\\"nformers/factory.go:160\\\\nI0128 18:34:40.598846 6010 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0128 18:34:40.598844 6010 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0128 18:34:40.598886 6010 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0128 18:34:40.598914 6010 reflector.go:311] Stopping reflector *v1.UserDefinedNetwork (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/userdefinednetwork/v1/apis/informers/externalversions/factory.go:140\\\\nI0128 18:34:40.598929 6010 reflector.go:311] Stopping reflector *v1.ClusterUserDefinedNetwork (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/userdefinednetwork/v1/apis/informers/externalversions/factory.go:140\\\\nI0128 18:34:40.599009 6010 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0128 18:34:40.599206 6010 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T18:34:37Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e167117f8dd91a71f9983b9f3516e8162cf03f390ef2a1a8478fd5dd6df2dba\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T18:34:42Z\\\",\\\"message\\\":\\\"4:42.840611 6198 obj_retry.go:386] Retry successful for *v1.Pod openshift-machine-config-operator/machine-config-daemon-76rx2 after 0 failed attempt(s)\\\\nI0128 18:34:42.840616 6198 default_network_controller.go:776] Recording success event on pod openshift-machine-config-operator/machine-config-daemon-76rx2\\\\nI0128 18:34:42.840504 6198 obj_retry.go:386] Retry successful for *v1.Pod openshift-etcd/etcd-crc after 0 failed attempt(s)\\\\nI0128 18:34:42.840624 6198 default_network_controller.go:776] Recording success event on pod openshift-etcd/etcd-crc\\\\nI0128 18:34:42.840480 6198 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-diagnostics/network-check-target-xd92c\\\\nI0128 18:34:42.840632 6198 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-diagnostics/network-check-target-xd92c\\\\nI0128 18:34:42.840636 6198 ovn.go:134] Ensuring zone local for Pod openshift-network-diagnostics/network-check-target-xd92c in node crc\\\\nF0128 18:34:42.840640 6198 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin 
network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped a\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T18:34:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7lkbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://373ab38644697d3dbfe782ed66982ac1d5510d0a3bc39072c8113f71cc9e97f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7lkbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://733c81890010402d252a194
5284a85a3278b603ede433530865f680a1be02477\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://733c81890010402d252a1945284a85a3278b603ede433530865f680a1be02477\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7lkbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:34:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-wr282\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:45Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:45 crc kubenswrapper[4721]: I0128 18:34:45.080987 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:34:45 crc kubenswrapper[4721]: I0128 18:34:45.081030 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:34:45 crc kubenswrapper[4721]: I0128 18:34:45.081042 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:34:45 crc kubenswrapper[4721]: I0128 18:34:45.081058 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:34:45 crc kubenswrapper[4721]: I0128 18:34:45.081068 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:34:45Z","lastTransitionTime":"2026-01-28T18:34:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:34:45 crc kubenswrapper[4721]: I0128 18:34:45.166719 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f3440038-c980-4fb4-be99-235515ec221c-metrics-certs\") pod \"network-metrics-daemon-jqvck\" (UID: \"f3440038-c980-4fb4-be99-235515ec221c\") " pod="openshift-multus/network-metrics-daemon-jqvck" Jan 28 18:34:45 crc kubenswrapper[4721]: E0128 18:34:45.166833 4721 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 28 18:34:45 crc kubenswrapper[4721]: E0128 18:34:45.166988 4721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f3440038-c980-4fb4-be99-235515ec221c-metrics-certs podName:f3440038-c980-4fb4-be99-235515ec221c nodeName:}" failed. No retries permitted until 2026-01-28 18:34:46.16697178 +0000 UTC m=+51.892277340 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/f3440038-c980-4fb4-be99-235515ec221c-metrics-certs") pod "network-metrics-daemon-jqvck" (UID: "f3440038-c980-4fb4-be99-235515ec221c") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 28 18:34:45 crc kubenswrapper[4721]: I0128 18:34:45.183149 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:34:45 crc kubenswrapper[4721]: I0128 18:34:45.183213 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:34:45 crc kubenswrapper[4721]: I0128 18:34:45.183225 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:34:45 crc kubenswrapper[4721]: I0128 18:34:45.183241 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:34:45 crc kubenswrapper[4721]: I0128 18:34:45.183252 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:34:45Z","lastTransitionTime":"2026-01-28T18:34:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:34:45 crc kubenswrapper[4721]: I0128 18:34:45.285835 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:34:45 crc kubenswrapper[4721]: I0128 18:34:45.285874 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:34:45 crc kubenswrapper[4721]: I0128 18:34:45.285884 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:34:45 crc kubenswrapper[4721]: I0128 18:34:45.285899 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:34:45 crc kubenswrapper[4721]: I0128 18:34:45.285910 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:34:45Z","lastTransitionTime":"2026-01-28T18:34:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:34:45 crc kubenswrapper[4721]: I0128 18:34:45.387648 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:34:45 crc kubenswrapper[4721]: I0128 18:34:45.387683 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:34:45 crc kubenswrapper[4721]: I0128 18:34:45.387692 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:34:45 crc kubenswrapper[4721]: I0128 18:34:45.387708 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:34:45 crc kubenswrapper[4721]: I0128 18:34:45.387718 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:34:45Z","lastTransitionTime":"2026-01-28T18:34:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:34:45 crc kubenswrapper[4721]: I0128 18:34:45.490457 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:34:45 crc kubenswrapper[4721]: I0128 18:34:45.490502 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:34:45 crc kubenswrapper[4721]: I0128 18:34:45.490510 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:34:45 crc kubenswrapper[4721]: I0128 18:34:45.490525 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:34:45 crc kubenswrapper[4721]: I0128 18:34:45.490534 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:34:45Z","lastTransitionTime":"2026-01-28T18:34:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:34:45 crc kubenswrapper[4721]: I0128 18:34:45.495667 4721 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-12 06:46:37.360200904 +0000 UTC Jan 28 18:34:45 crc kubenswrapper[4721]: I0128 18:34:45.543132 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef61ab09457440960cf27586f65751f928ccc999db19fd16364262ce2449cd97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:45Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:45 crc kubenswrapper[4721]: I0128 18:34:45.553657 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-lf92l" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"20d04cbd-fcf1-4d48-9cca-1dd29b13c938\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://709d01fbeba51f6f21b60d9fd60aa9b44ba9e4f2504c19b7fd8d279f8c13c1e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mvqr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:34:30Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-lf92l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:45Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:45 crc kubenswrapper[4721]: I0128 18:34:45.568420 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7vsph" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"942c0bcf-8f75-42e8-a5c0-af4c640eb13c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ce38a3c3aa72d0816ff32ec77af100dc6f2a8761362992f01b88f836c44f65f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s84mk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5222e9a5ac55f20acbe1105e84f22c96889f410246b7ef423236f5d3e216f43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d5222e9a5ac55f20acbe1105e84f22c96889f410246b7ef423236f5d3e216f43\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s84mk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e702f148f20abb9e5e9453eb1b9ae0cc094138beb1fa048753ff3b497517e942\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e702f148f20abb9e5e9453eb1b9ae0cc094138beb1fa048753ff3b497517e942\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s84mk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a55ef87212072ac9f6d05d27d54f7a3f13add617b1a07a2fd1e0a652fe15cb3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a55ef87212072ac9f6d05d27d54f7a3f13add617b1a07a2fd1e0a652fe15cb3a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:34:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s84mk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea7d40aa704ad5ddaaa471ab8606dcece38b229fd8affcce9bb1959e5c19f092\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ea7d40aa704ad5ddaaa471ab8606dcece38b229fd8affcce9bb1959e5c19f092\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:34:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s84mk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1829c2eed51af896c5ccfa25c1f23ea971ea9d54e9db53415095c5ea43ac4062\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1829c2eed51af896c5ccfa25c1f23ea971ea9d54e9db53415095c5ea43ac4062\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:34:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s84mk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b576c2dcc0ad83742dbc75f744d08eb8b881f9be4f9d16f4d4d3004ce174f24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b576c2dcc0ad83742dbc75f744d08eb8b881f9be4f9d16f4d4d3004ce174f24\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:34:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s84mk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:34:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7vsph\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:45Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:45 crc kubenswrapper[4721]: I0128 18:34:45.581423 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa3af2d00ecbcf4d30c4c81191306dd45625f55032058b314ce2b91f6b2033e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:45Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:45 crc kubenswrapper[4721]: I0128 18:34:45.592978 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:34:45 crc kubenswrapper[4721]: I0128 18:34:45.593032 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:34:45 crc kubenswrapper[4721]: I0128 18:34:45.593042 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:34:45 crc kubenswrapper[4721]: I0128 18:34:45.593056 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:34:45 crc kubenswrapper[4721]: I0128 18:34:45.593066 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:34:45Z","lastTransitionTime":"2026-01-28T18:34:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:34:45 crc kubenswrapper[4721]: I0128 18:34:45.594011 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-76rx2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e3427a4-9a03-4a08-bf7f-7a5e96290ad6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9843dea4333a77dd5b0005984b9f8e7c7c993b28f89f5d6432477bfde3383339\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4cj5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bbd26672afb8ed608228f3f2101f430b1eaa5ccea0e8074fad597dec347c4522\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4cj5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:34:30Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-76rx2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:45Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:45 crc kubenswrapper[4721]: I0128 18:34:45.605270 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:24Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:45Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:45 crc kubenswrapper[4721]: I0128 18:34:45.618261 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:45Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:45 crc kubenswrapper[4721]: I0128 18:34:45.628590 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-jqvck" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f3440038-c980-4fb4-be99-235515ec221c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96np9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96np9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:34:44Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-jqvck\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:45Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:45 crc kubenswrapper[4721]: I0128 18:34:45.637695 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-rk2l2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bdd30376-1599-4efc-bb55-7585e8702b60\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa0ff1b9df47dfd65597208cdd31f5cb8ce7f4c9c170d298db28d6f04da7b7a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wknj5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:34:32Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-rk2l2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:45Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:45 crc kubenswrapper[4721]: I0128 18:34:45.648910 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-x8hw8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8ac7e75e-c5bb-4b57-b2ba-9ebe8b8fbd88\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cba8dd293cf5ae7b0c987c6ee3b24da02d2687ee54292da92e28ca627ed3eaca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqp9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a3c1211b73ca96ac22854f0cb677a0088a679ad56b104ea6b8e0871884a3a71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqp9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:34:43Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-x8hw8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:45Z is after 2025-08-24T17:21:41Z" Jan 28 
18:34:45 crc kubenswrapper[4721]: I0128 18:34:45.660609 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:45Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:45 crc kubenswrapper[4721]: I0128 18:34:45.673103 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56147b9675ae31e5e8b4473346c69d90a1013ab084214a7fc34295f810b229a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5dc3277d9793a59f2e2a2b4d015782f383df20b594be435f894d24dbe5b837cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:45Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:45 crc kubenswrapper[4721]: I0128 18:34:45.691567 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wr282" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"70686e42-b434-4ff9-9753-cfc870beef82\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ac8a8ea8e677c233fbcb1ac69446f0251b2e390e84a39cd77c7ccf1a2041908\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7lkbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://989948e16979a2cd8568ed93334056da9feb5bd7226f4124428dcffc09f13d41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7lkbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2cc9ef8160c40e72ae736136ef051b06552caea2539fad7969c5f923eba0c2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7lkbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7df50ebf5999c8b004f27f801fc166377523c7b42171fec957ab82c88b529794\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7lkbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11610a0739792fa0417c894ad5b1c0d46d7188b585f8f4c1a1e7e66cd6d8bd46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7lkbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://44279a74a34d26b6eee05ac26d51a21492dddab4288ca5e09665c191cbacd90c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7lkbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e167117f8dd91a71f9983b9f3516e8162cf03f390ef2a1a8478fd5dd6df2dba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ae0f678963d4efdfa09099257ad96a3ba4457e2819e237234b4137fab9b67f69\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T18:34:40Z\\\",\\\"message\\\":\\\"nformers/factory.go:160\\\\nI0128 18:34:40.598846 6010 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0128 18:34:40.598844 6010 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0128 18:34:40.598886 6010 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0128 18:34:40.598914 6010 reflector.go:311] Stopping reflector *v1.UserDefinedNetwork (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/userdefinednetwork/v1/apis/informers/externalversions/factory.go:140\\\\nI0128 18:34:40.598929 6010 reflector.go:311] Stopping reflector *v1.ClusterUserDefinedNetwork (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/userdefinednetwork/v1/apis/informers/externalversions/factory.go:140\\\\nI0128 18:34:40.599009 6010 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0128 18:34:40.599206 6010 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T18:34:37Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e167117f8dd91a71f9983b9f3516e8162cf03f390ef2a1a8478fd5dd6df2dba\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T18:34:42Z\\\",\\\"message\\\":\\\"4:42.840611 6198 obj_retry.go:386] Retry successful for *v1.Pod openshift-machine-config-operator/machine-config-daemon-76rx2 after 0 failed attempt(s)\\\\nI0128 18:34:42.840616 6198 
default_network_controller.go:776] Recording success event on pod openshift-machine-config-operator/machine-config-daemon-76rx2\\\\nI0128 18:34:42.840504 6198 obj_retry.go:386] Retry successful for *v1.Pod openshift-etcd/etcd-crc after 0 failed attempt(s)\\\\nI0128 18:34:42.840624 6198 default_network_controller.go:776] Recording success event on pod openshift-etcd/etcd-crc\\\\nI0128 18:34:42.840480 6198 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-diagnostics/network-check-target-xd92c\\\\nI0128 18:34:42.840632 6198 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-diagnostics/network-check-target-xd92c\\\\nI0128 18:34:42.840636 6198 ovn.go:134] Ensuring zone local for Pod openshift-network-diagnostics/network-check-target-xd92c in node crc\\\\nF0128 18:34:42.840640 6198 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped a\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T18:34:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7lkbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://373ab38644697d3dbfe782ed66982ac1d5510d0a3bc39072c8113f71cc9e97f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshi
ft-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7lkbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://733c81890010402d252a1945284a85a3278b603ede433530865f680a1be02477\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://733c81890010402d252a1945284a85a3278b603ede433530865f680a1be02477\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7lkbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:34:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-wr282\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:45Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:45 crc kubenswrapper[4721]: I0128 18:34:45.695080 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:34:45 crc kubenswrapper[4721]: I0128 18:34:45.695116 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:34:45 crc kubenswrapper[4721]: I0128 18:34:45.695130 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:34:45 crc kubenswrapper[4721]: I0128 18:34:45.695146 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:34:45 crc kubenswrapper[4721]: I0128 18:34:45.695157 4721 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:34:45Z","lastTransitionTime":"2026-01-28T18:34:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:34:45 crc kubenswrapper[4721]: I0128 18:34:45.703992 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rgqdt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c0a22020-3f34-4895-beec-2ed5d829ea79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d26691be7d95ffd613cc84f59222c67d29ab459d1c42ca07d20fe928d7fbc4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube
rnetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l86pm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:34:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rgqdt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:45Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:45 crc kubenswrapper[4721]: I0128 18:34:45.726786 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"50459cb7-bb46-4f2f-a119-03a6102ad146\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:33:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97a290731225e8b998c044c9547db36b5af73889929db5fbc37b6b4170f5dbb7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9efdfabfba8f11d2bdeafad77520260137fcfff3ca51ae77afd4ee9b141f602\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\
\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3c1b4d37e11f8a4ee9539e5ea3cce77b7d382133b9e59ccd9c3e8aa2b308754\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52a1199a0611555765afe8fcf017742788488064c958d0c75c46d51ddc2ff9d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://316506ed1e9f91314d427e1b6c3a44ee58d0c4a3a8a93b97bdf098703a6a2f34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f1a6a669a0341baa2d7fb17bcebfe702f96d1192ca35c3f2d00e463778271d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f1a6a669a0341baa2d7fb17bcebfe702f96d1192ca35c3f2d00e463778271d6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:33
:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d3a248e15466c89ce3b237a22ce54365004ea86dc1541e1ac44e028f91bf19f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d3a248e15466c89ce3b237a22ce54365004ea86dc1541e1ac44e028f91bf19f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:34:02Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f2e1f3a63445bad45029602f52863bfdf2fab495711e3318d68fee042530acb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f2e1f3a63445bad45029602f52863bfdf2fab495711e3318d68fee042530acb1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:34:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:33:56Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:45Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:45 crc kubenswrapper[4721]: I0128 18:34:45.740091 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"94f18835-9a2d-4427-bc71-e4cd48b94c19\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:33:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:33:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:33:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a345c3bb49064600db61f16588229b6877293710e6c21c15d91746c2f1d50b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://93950542dfa7bda5686ef6d8ff927d14baafd149893c732658e9aa3916be64ae\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://486a6284a1e3930aac13368062b8796b4c271214dd08cb60c7ae2ce7b4a45a05\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0339ccd2bd809dc715d0e334b596c5abdebca1af6ac5f7afefa81aa8baa470ed\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ca3243d75b5ce4aff178d82c45c7d1f765013233b5736600151adb4801aa462b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T18:34:21Z\\\",\\\"message\\\":\\\"le observer\\\\nW0128 18:34:20.724978 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0128 18:34:20.725313 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 18:34:20.726636 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2540251349/tls.crt::/tmp/serving-cert-2540251349/tls.key\\\\\\\"\\\\nI0128 18:34:21.177830 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0128 18:34:21.196258 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0128 18:34:21.196291 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0128 18:34:21.196317 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0128 18:34:21.196323 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0128 18:34:21.230729 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0128 18:34:21.230760 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0128 18:34:21.230771 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 18:34:21.230781 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 18:34:21.230789 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0128 18:34:21.230795 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0128 18:34:21.230800 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0128 18:34:21.230804 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0128 18:34:21.233981 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T18:34:20Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://90f1279d8262e042527207dbe56a1d08ba554650510bebfc53bd24cd84532026\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:06Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1499c7b625ec721799e4cf54e3de39ab75a752bbb0450a8eeffcdb6259f8a585\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1499c7b625ec721799e4cf54e3de39ab75a752bbb0450a8eeffcdb6259f8a585\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:33:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:33:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:45Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:45 crc kubenswrapper[4721]: I0128 18:34:45.765633 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"86fc797f-6769-4801-8cb0-b9f25c9ec29b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:33:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:33:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e8c8cd370cc146f15325497afef171437a3c7c3d4e82cda99a321bf847399bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f712f75bf0603b358c696f1a62f2363254ab926615ab377291c7083308e3f2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:33:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6143e461a697375ca79df96494f8cf4a575e622428db241f50a52ae1c67acbc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e37a3261383445df2e4050fb0b3f92c0afc6379c71eb061475783a72cd37459f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:33:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:45Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:45 crc kubenswrapper[4721]: I0128 18:34:45.796951 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:34:45 crc kubenswrapper[4721]: I0128 18:34:45.796992 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:34:45 crc kubenswrapper[4721]: I0128 18:34:45.797005 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:34:45 crc kubenswrapper[4721]: I0128 18:34:45.797021 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:34:45 crc kubenswrapper[4721]: I0128 18:34:45.797032 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:34:45Z","lastTransitionTime":"2026-01-28T18:34:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:34:45 crc kubenswrapper[4721]: I0128 18:34:45.898941 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:34:45 crc kubenswrapper[4721]: I0128 18:34:45.898981 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:34:45 crc kubenswrapper[4721]: I0128 18:34:45.898992 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:34:45 crc kubenswrapper[4721]: I0128 18:34:45.899005 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:34:45 crc kubenswrapper[4721]: I0128 18:34:45.899014 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:34:45Z","lastTransitionTime":"2026-01-28T18:34:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:34:46 crc kubenswrapper[4721]: I0128 18:34:46.001588 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:34:46 crc kubenswrapper[4721]: I0128 18:34:46.001934 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:34:46 crc kubenswrapper[4721]: I0128 18:34:46.001945 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:34:46 crc kubenswrapper[4721]: I0128 18:34:46.001962 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:34:46 crc kubenswrapper[4721]: I0128 18:34:46.001972 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:34:46Z","lastTransitionTime":"2026-01-28T18:34:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:34:46 crc kubenswrapper[4721]: I0128 18:34:46.094494 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:34:46 crc kubenswrapper[4721]: I0128 18:34:46.094539 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:34:46 crc kubenswrapper[4721]: I0128 18:34:46.094548 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:34:46 crc kubenswrapper[4721]: I0128 18:34:46.094563 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:34:46 crc kubenswrapper[4721]: I0128 18:34:46.094572 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:34:46Z","lastTransitionTime":"2026-01-28T18:34:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:34:46 crc kubenswrapper[4721]: E0128 18:34:46.107388 4721 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:34:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:46Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:34:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:46Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:34:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:46Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:34:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:46Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7e2b6ea9-0dbd-4f62-9b1e-c7fac2eb6b3f\\\",\\\"systemUUID\\\":\\\"09e691cb-0cac-419d-a3e2-104cada8c62f\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:46Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:46 crc kubenswrapper[4721]: I0128 18:34:46.112259 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:34:46 crc kubenswrapper[4721]: I0128 18:34:46.112300 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 28 18:34:46 crc kubenswrapper[4721]: I0128 18:34:46.112308 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:34:46 crc kubenswrapper[4721]: I0128 18:34:46.112322 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:34:46 crc kubenswrapper[4721]: I0128 18:34:46.112332 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:34:46Z","lastTransitionTime":"2026-01-28T18:34:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:34:46 crc kubenswrapper[4721]: E0128 18:34:46.125351 4721 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:34:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:46Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:34:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:46Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:34:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:46Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:34:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:46Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7e2b6ea9-0dbd-4f62-9b1e-c7fac2eb6b3f\\\",\\\"systemUUID\\\":\\\"09e691cb-0cac-419d-a3e2-104cada8c62f\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:46Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:46 crc kubenswrapper[4721]: I0128 18:34:46.129145 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:34:46 crc kubenswrapper[4721]: I0128 18:34:46.129217 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 28 18:34:46 crc kubenswrapper[4721]: I0128 18:34:46.129232 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:34:46 crc kubenswrapper[4721]: I0128 18:34:46.129251 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:34:46 crc kubenswrapper[4721]: I0128 18:34:46.129263 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:34:46Z","lastTransitionTime":"2026-01-28T18:34:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:34:46 crc kubenswrapper[4721]: E0128 18:34:46.141512 4721 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:34:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:46Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:34:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:46Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:34:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:46Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:34:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:46Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7e2b6ea9-0dbd-4f62-9b1e-c7fac2eb6b3f\\\",\\\"systemUUID\\\":\\\"09e691cb-0cac-419d-a3e2-104cada8c62f\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:46Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:46 crc kubenswrapper[4721]: I0128 18:34:46.144810 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:34:46 crc kubenswrapper[4721]: I0128 18:34:46.145064 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 28 18:34:46 crc kubenswrapper[4721]: I0128 18:34:46.145193 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:34:46 crc kubenswrapper[4721]: I0128 18:34:46.145294 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:34:46 crc kubenswrapper[4721]: I0128 18:34:46.145373 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:34:46Z","lastTransitionTime":"2026-01-28T18:34:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:34:46 crc kubenswrapper[4721]: E0128 18:34:46.157483 4721 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:34:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:46Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:34:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:46Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:34:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:46Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:34:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:46Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7e2b6ea9-0dbd-4f62-9b1e-c7fac2eb6b3f\\\",\\\"systemUUID\\\":\\\"09e691cb-0cac-419d-a3e2-104cada8c62f\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:46Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:46 crc kubenswrapper[4721]: I0128 18:34:46.160903 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:34:46 crc kubenswrapper[4721]: I0128 18:34:46.160931 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 28 18:34:46 crc kubenswrapper[4721]: I0128 18:34:46.160939 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:34:46 crc kubenswrapper[4721]: I0128 18:34:46.160953 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:34:46 crc kubenswrapper[4721]: I0128 18:34:46.160963 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:34:46Z","lastTransitionTime":"2026-01-28T18:34:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:34:46 crc kubenswrapper[4721]: E0128 18:34:46.175197 4721 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:34:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:46Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:34:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:46Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:34:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:46Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:34:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:46Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7e2b6ea9-0dbd-4f62-9b1e-c7fac2eb6b3f\\\",\\\"systemUUID\\\":\\\"09e691cb-0cac-419d-a3e2-104cada8c62f\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:46Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:46 crc kubenswrapper[4721]: E0128 18:34:46.175586 4721 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 28 18:34:46 crc kubenswrapper[4721]: I0128 18:34:46.177371 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" 
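All four status-patch retries above fail identically (the repeated patch payloads, byte-for-byte the same as the 18:34:46.125351 attempt, are elided as [...]), and the final record confirms the kubelet gave up after exhausting its retry budget: the node.network-node-identity.openshift.io webhook at https://127.0.0.1:9743 presents a serving certificate that expired 2025-08-24T17:21:41Z, while the node clock reads 2026-01-28T18:34:46Z. A minimal sketch to confirm the expiry from the node itself follows; it assumes Python 3 with the third-party cryptography package (an assumption, not part of the log), and deliberately disables verification so the expired certificate can still be fetched and inspected.

import socket
import ssl

from cryptography import x509

# Webhook endpoint taken from the kubelet errors above.
HOST, PORT = "127.0.0.1", 9743

ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
ctx.check_hostname = False
ctx.verify_mode = ssl.CERT_NONE  # accept the expired cert so it can be read

with socket.create_connection((HOST, PORT), timeout=5) as sock:
    with ctx.wrap_socket(sock, server_hostname=HOST) as tls:
        # getpeercert() returns {} when verification is off; ask for DER bytes.
        der = tls.getpeercert(binary_form=True)

cert = x509.load_der_x509_certificate(der)
print("subject:  ", cert.subject.rfc4514_string())
print("notBefore:", cert.not_valid_before)
print("notAfter: ", cert.not_valid_after)  # expect 2025-08-24 17:21:41 here

If the printed notAfter matches the date in the error, that certificate has to be rotated before node status updates can succeed; the log itself does not show how this cluster rotates it.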
(UniqueName: \"kubernetes.io/secret/f3440038-c980-4fb4-be99-235515ec221c-metrics-certs\") pod \"network-metrics-daemon-jqvck\" (UID: \"f3440038-c980-4fb4-be99-235515ec221c\") " pod="openshift-multus/network-metrics-daemon-jqvck" Jan 28 18:34:46 crc kubenswrapper[4721]: E0128 18:34:46.177501 4721 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 28 18:34:46 crc kubenswrapper[4721]: E0128 18:34:46.177556 4721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f3440038-c980-4fb4-be99-235515ec221c-metrics-certs podName:f3440038-c980-4fb4-be99-235515ec221c nodeName:}" failed. No retries permitted until 2026-01-28 18:34:48.177540158 +0000 UTC m=+53.902845718 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/f3440038-c980-4fb4-be99-235515ec221c-metrics-certs") pod "network-metrics-daemon-jqvck" (UID: "f3440038-c980-4fb4-be99-235515ec221c") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 28 18:34:46 crc kubenswrapper[4721]: I0128 18:34:46.177775 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:34:46 crc kubenswrapper[4721]: I0128 18:34:46.177850 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:34:46 crc kubenswrapper[4721]: I0128 18:34:46.177868 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:34:46 crc kubenswrapper[4721]: I0128 18:34:46.177892 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:34:46 crc kubenswrapper[4721]: I0128 18:34:46.177909 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:34:46Z","lastTransitionTime":"2026-01-28T18:34:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:34:46 crc kubenswrapper[4721]: I0128 18:34:46.280071 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:34:46 crc kubenswrapper[4721]: I0128 18:34:46.280115 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:34:46 crc kubenswrapper[4721]: I0128 18:34:46.280124 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:34:46 crc kubenswrapper[4721]: I0128 18:34:46.280138 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:34:46 crc kubenswrapper[4721]: I0128 18:34:46.280147 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:34:46Z","lastTransitionTime":"2026-01-28T18:34:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
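The mount failure above is distinct from the webhook certificate problem: the kubelet's local object cache has no entry yet for the openshift-multus/metrics-daemon-secret secret, so volume setup is parked for a 2s retry. Whether the secret actually exists on the API server can be checked with a sketch like the following, assuming the official kubernetes Python client and a reachable admin kubeconfig (both assumptions, not shown in the log).

from kubernetes import client, config
from kubernetes.client.rest import ApiException

config.load_kube_config()  # or load_incluster_config() when run in a pod
v1 = client.CoreV1Api()
try:
    # Secret name and namespace taken from the kubelet error above.
    sec = v1.read_namespaced_secret("metrics-daemon-secret", "openshift-multus")
    print("secret exists; data keys:", sorted((sec.data or {}).keys()))
except ApiException as e:
    print("secret lookup failed:", e.status, e.reason)

If the secret exists server-side, the "not registered" error points at the kubelet's view of the object not being populated yet rather than at a missing object.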
Has your network provider started?"} Jan 28 18:34:46 crc kubenswrapper[4721]: I0128 18:34:46.382576 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:34:46 crc kubenswrapper[4721]: I0128 18:34:46.382614 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:34:46 crc kubenswrapper[4721]: I0128 18:34:46.382623 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:34:46 crc kubenswrapper[4721]: I0128 18:34:46.382638 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:34:46 crc kubenswrapper[4721]: I0128 18:34:46.382648 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:34:46Z","lastTransitionTime":"2026-01-28T18:34:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:34:46 crc kubenswrapper[4721]: I0128 18:34:46.485153 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:34:46 crc kubenswrapper[4721]: I0128 18:34:46.485273 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:34:46 crc kubenswrapper[4721]: I0128 18:34:46.485295 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:34:46 crc kubenswrapper[4721]: I0128 18:34:46.485326 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:34:46 crc kubenswrapper[4721]: I0128 18:34:46.485349 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:34:46Z","lastTransitionTime":"2026-01-28T18:34:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:34:46 crc kubenswrapper[4721]: I0128 18:34:46.495950 4721 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-20 18:16:52.156772745 +0000 UTC Jan 28 18:34:46 crc kubenswrapper[4721]: I0128 18:34:46.528053 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 18:34:46 crc kubenswrapper[4721]: E0128 18:34:46.528190 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 18:34:46 crc kubenswrapper[4721]: I0128 18:34:46.528247 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 18:34:46 crc kubenswrapper[4721]: E0128 18:34:46.528286 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 18:34:46 crc kubenswrapper[4721]: I0128 18:34:46.528335 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jqvck" Jan 28 18:34:46 crc kubenswrapper[4721]: E0128 18:34:46.528385 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jqvck" podUID="f3440038-c980-4fb4-be99-235515ec221c" Jan 28 18:34:46 crc kubenswrapper[4721]: I0128 18:34:46.529000 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 18:34:46 crc kubenswrapper[4721]: E0128 18:34:46.529125 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 18:34:46 crc kubenswrapper[4721]: I0128 18:34:46.588487 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:34:46 crc kubenswrapper[4721]: I0128 18:34:46.588562 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:34:46 crc kubenswrapper[4721]: I0128 18:34:46.588572 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:34:46 crc kubenswrapper[4721]: I0128 18:34:46.588587 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:34:46 crc kubenswrapper[4721]: I0128 18:34:46.588596 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:34:46Z","lastTransitionTime":"2026-01-28T18:34:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:34:46 crc kubenswrapper[4721]: I0128 18:34:46.690703 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:34:46 crc kubenswrapper[4721]: I0128 18:34:46.690744 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:34:46 crc kubenswrapper[4721]: I0128 18:34:46.690752 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:34:46 crc kubenswrapper[4721]: I0128 18:34:46.690766 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:34:46 crc kubenswrapper[4721]: I0128 18:34:46.690776 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:34:46Z","lastTransitionTime":"2026-01-28T18:34:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:34:46 crc kubenswrapper[4721]: I0128 18:34:46.793493 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:34:46 crc kubenswrapper[4721]: I0128 18:34:46.793521 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:34:46 crc kubenswrapper[4721]: I0128 18:34:46.793529 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:34:46 crc kubenswrapper[4721]: I0128 18:34:46.793541 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:34:46 crc kubenswrapper[4721]: I0128 18:34:46.793550 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:34:46Z","lastTransitionTime":"2026-01-28T18:34:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:34:46 crc kubenswrapper[4721]: I0128 18:34:46.895990 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:34:46 crc kubenswrapper[4721]: I0128 18:34:46.896025 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:34:46 crc kubenswrapper[4721]: I0128 18:34:46.896036 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:34:46 crc kubenswrapper[4721]: I0128 18:34:46.896053 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:34:46 crc kubenswrapper[4721]: I0128 18:34:46.896064 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:34:46Z","lastTransitionTime":"2026-01-28T18:34:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:34:46 crc kubenswrapper[4721]: I0128 18:34:46.998543 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:34:46 crc kubenswrapper[4721]: I0128 18:34:46.998586 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:34:46 crc kubenswrapper[4721]: I0128 18:34:46.998596 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:34:46 crc kubenswrapper[4721]: I0128 18:34:46.998612 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:34:46 crc kubenswrapper[4721]: I0128 18:34:46.998624 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:34:46Z","lastTransitionTime":"2026-01-28T18:34:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:34:47 crc kubenswrapper[4721]: I0128 18:34:47.101442 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:34:47 crc kubenswrapper[4721]: I0128 18:34:47.101484 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:34:47 crc kubenswrapper[4721]: I0128 18:34:47.101495 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:34:47 crc kubenswrapper[4721]: I0128 18:34:47.101510 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:34:47 crc kubenswrapper[4721]: I0128 18:34:47.101519 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:34:47Z","lastTransitionTime":"2026-01-28T18:34:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:34:47 crc kubenswrapper[4721]: I0128 18:34:47.203721 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:34:47 crc kubenswrapper[4721]: I0128 18:34:47.203767 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:34:47 crc kubenswrapper[4721]: I0128 18:34:47.203777 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:34:47 crc kubenswrapper[4721]: I0128 18:34:47.203791 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:34:47 crc kubenswrapper[4721]: I0128 18:34:47.203800 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:34:47Z","lastTransitionTime":"2026-01-28T18:34:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:34:47 crc kubenswrapper[4721]: I0128 18:34:47.306025 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:34:47 crc kubenswrapper[4721]: I0128 18:34:47.306073 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:34:47 crc kubenswrapper[4721]: I0128 18:34:47.306082 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:34:47 crc kubenswrapper[4721]: I0128 18:34:47.306098 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:34:47 crc kubenswrapper[4721]: I0128 18:34:47.306107 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:34:47Z","lastTransitionTime":"2026-01-28T18:34:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:34:47 crc kubenswrapper[4721]: I0128 18:34:47.408394 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:34:47 crc kubenswrapper[4721]: I0128 18:34:47.408434 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:34:47 crc kubenswrapper[4721]: I0128 18:34:47.408443 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:34:47 crc kubenswrapper[4721]: I0128 18:34:47.408462 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:34:47 crc kubenswrapper[4721]: I0128 18:34:47.408481 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:34:47Z","lastTransitionTime":"2026-01-28T18:34:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:34:47 crc kubenswrapper[4721]: I0128 18:34:47.496239 4721 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-09 05:04:29.663797308 +0000 UTC Jan 28 18:34:47 crc kubenswrapper[4721]: I0128 18:34:47.510723 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:34:47 crc kubenswrapper[4721]: I0128 18:34:47.510763 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:34:47 crc kubenswrapper[4721]: I0128 18:34:47.510771 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:34:47 crc kubenswrapper[4721]: I0128 18:34:47.510787 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:34:47 crc kubenswrapper[4721]: I0128 18:34:47.510797 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:34:47Z","lastTransitionTime":"2026-01-28T18:34:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:34:47 crc kubenswrapper[4721]: I0128 18:34:47.613690 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:34:47 crc kubenswrapper[4721]: I0128 18:34:47.614024 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:34:47 crc kubenswrapper[4721]: I0128 18:34:47.614125 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:34:47 crc kubenswrapper[4721]: I0128 18:34:47.614247 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:34:47 crc kubenswrapper[4721]: I0128 18:34:47.614356 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:34:47Z","lastTransitionTime":"2026-01-28T18:34:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:34:47 crc kubenswrapper[4721]: I0128 18:34:47.717481 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:34:47 crc kubenswrapper[4721]: I0128 18:34:47.717516 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:34:47 crc kubenswrapper[4721]: I0128 18:34:47.717528 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:34:47 crc kubenswrapper[4721]: I0128 18:34:47.717544 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:34:47 crc kubenswrapper[4721]: I0128 18:34:47.717555 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:34:47Z","lastTransitionTime":"2026-01-28T18:34:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:34:47 crc kubenswrapper[4721]: I0128 18:34:47.819555 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:34:47 crc kubenswrapper[4721]: I0128 18:34:47.819594 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:34:47 crc kubenswrapper[4721]: I0128 18:34:47.819602 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:34:47 crc kubenswrapper[4721]: I0128 18:34:47.819614 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:34:47 crc kubenswrapper[4721]: I0128 18:34:47.819623 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:34:47Z","lastTransitionTime":"2026-01-28T18:34:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:34:47 crc kubenswrapper[4721]: I0128 18:34:47.922489 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:34:47 crc kubenswrapper[4721]: I0128 18:34:47.922547 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:34:47 crc kubenswrapper[4721]: I0128 18:34:47.922558 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:34:47 crc kubenswrapper[4721]: I0128 18:34:47.922575 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:34:47 crc kubenswrapper[4721]: I0128 18:34:47.922587 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:34:47Z","lastTransitionTime":"2026-01-28T18:34:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:34:48 crc kubenswrapper[4721]: I0128 18:34:48.025154 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:34:48 crc kubenswrapper[4721]: I0128 18:34:48.025252 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:34:48 crc kubenswrapper[4721]: I0128 18:34:48.025267 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:34:48 crc kubenswrapper[4721]: I0128 18:34:48.025295 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:34:48 crc kubenswrapper[4721]: I0128 18:34:48.025311 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:34:48Z","lastTransitionTime":"2026-01-28T18:34:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:34:48 crc kubenswrapper[4721]: I0128 18:34:48.127905 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:34:48 crc kubenswrapper[4721]: I0128 18:34:48.127966 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:34:48 crc kubenswrapper[4721]: I0128 18:34:48.127976 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:34:48 crc kubenswrapper[4721]: I0128 18:34:48.127996 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:34:48 crc kubenswrapper[4721]: I0128 18:34:48.128008 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:34:48Z","lastTransitionTime":"2026-01-28T18:34:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:34:48 crc kubenswrapper[4721]: I0128 18:34:48.195986 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f3440038-c980-4fb4-be99-235515ec221c-metrics-certs\") pod \"network-metrics-daemon-jqvck\" (UID: \"f3440038-c980-4fb4-be99-235515ec221c\") " pod="openshift-multus/network-metrics-daemon-jqvck" Jan 28 18:34:48 crc kubenswrapper[4721]: E0128 18:34:48.196385 4721 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 28 18:34:48 crc kubenswrapper[4721]: E0128 18:34:48.196456 4721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f3440038-c980-4fb4-be99-235515ec221c-metrics-certs podName:f3440038-c980-4fb4-be99-235515ec221c nodeName:}" failed. No retries permitted until 2026-01-28 18:34:52.196436632 +0000 UTC m=+57.921742192 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/f3440038-c980-4fb4-be99-235515ec221c-metrics-certs") pod "network-metrics-daemon-jqvck" (UID: "f3440038-c980-4fb4-be99-235515ec221c") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 28 18:34:48 crc kubenswrapper[4721]: I0128 18:34:48.230926 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:34:48 crc kubenswrapper[4721]: I0128 18:34:48.230974 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:34:48 crc kubenswrapper[4721]: I0128 18:34:48.230992 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:34:48 crc kubenswrapper[4721]: I0128 18:34:48.231009 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:34:48 crc kubenswrapper[4721]: I0128 18:34:48.231021 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:34:48Z","lastTransitionTime":"2026-01-28T18:34:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:34:48 crc kubenswrapper[4721]: I0128 18:34:48.334045 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:34:48 crc kubenswrapper[4721]: I0128 18:34:48.334100 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:34:48 crc kubenswrapper[4721]: I0128 18:34:48.334118 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:34:48 crc kubenswrapper[4721]: I0128 18:34:48.334142 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:34:48 crc kubenswrapper[4721]: I0128 18:34:48.334159 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:34:48Z","lastTransitionTime":"2026-01-28T18:34:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:34:48 crc kubenswrapper[4721]: I0128 18:34:48.437117 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:34:48 crc kubenswrapper[4721]: I0128 18:34:48.437192 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:34:48 crc kubenswrapper[4721]: I0128 18:34:48.437208 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:34:48 crc kubenswrapper[4721]: I0128 18:34:48.437228 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:34:48 crc kubenswrapper[4721]: I0128 18:34:48.437240 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:34:48Z","lastTransitionTime":"2026-01-28T18:34:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:34:48 crc kubenswrapper[4721]: I0128 18:34:48.497245 4721 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-17 17:04:27.148410882 +0000 UTC Jan 28 18:34:48 crc kubenswrapper[4721]: I0128 18:34:48.527868 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jqvck" Jan 28 18:34:48 crc kubenswrapper[4721]: I0128 18:34:48.527966 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 18:34:48 crc kubenswrapper[4721]: I0128 18:34:48.528016 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 18:34:48 crc kubenswrapper[4721]: I0128 18:34:48.527885 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 18:34:48 crc kubenswrapper[4721]: E0128 18:34:48.528107 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jqvck" podUID="f3440038-c980-4fb4-be99-235515ec221c" Jan 28 18:34:48 crc kubenswrapper[4721]: E0128 18:34:48.528271 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 18:34:48 crc kubenswrapper[4721]: E0128 18:34:48.528347 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 18:34:48 crc kubenswrapper[4721]: E0128 18:34:48.528463 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 18:34:48 crc kubenswrapper[4721]: I0128 18:34:48.539521 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:34:48 crc kubenswrapper[4721]: I0128 18:34:48.539592 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:34:48 crc kubenswrapper[4721]: I0128 18:34:48.539638 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:34:48 crc kubenswrapper[4721]: I0128 18:34:48.539667 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:34:48 crc kubenswrapper[4721]: I0128 18:34:48.539682 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:34:48Z","lastTransitionTime":"2026-01-28T18:34:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:34:48 crc kubenswrapper[4721]: I0128 18:34:48.642565 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:34:48 crc kubenswrapper[4721]: I0128 18:34:48.642621 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:34:48 crc kubenswrapper[4721]: I0128 18:34:48.642633 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:34:48 crc kubenswrapper[4721]: I0128 18:34:48.642654 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:34:48 crc kubenswrapper[4721]: I0128 18:34:48.642668 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:34:48Z","lastTransitionTime":"2026-01-28T18:34:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:34:48 crc kubenswrapper[4721]: I0128 18:34:48.746289 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:34:48 crc kubenswrapper[4721]: I0128 18:34:48.746330 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:34:48 crc kubenswrapper[4721]: I0128 18:34:48.746339 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:34:48 crc kubenswrapper[4721]: I0128 18:34:48.746369 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:34:48 crc kubenswrapper[4721]: I0128 18:34:48.746381 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:34:48Z","lastTransitionTime":"2026-01-28T18:34:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:34:48 crc kubenswrapper[4721]: I0128 18:34:48.848384 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:34:48 crc kubenswrapper[4721]: I0128 18:34:48.848435 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:34:48 crc kubenswrapper[4721]: I0128 18:34:48.848447 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:34:48 crc kubenswrapper[4721]: I0128 18:34:48.848467 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:34:48 crc kubenswrapper[4721]: I0128 18:34:48.848481 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:34:48Z","lastTransitionTime":"2026-01-28T18:34:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:34:48 crc kubenswrapper[4721]: I0128 18:34:48.950852 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:34:48 crc kubenswrapper[4721]: I0128 18:34:48.950964 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:34:48 crc kubenswrapper[4721]: I0128 18:34:48.950976 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:34:48 crc kubenswrapper[4721]: I0128 18:34:48.950994 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:34:48 crc kubenswrapper[4721]: I0128 18:34:48.951006 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:34:48Z","lastTransitionTime":"2026-01-28T18:34:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:34:49 crc kubenswrapper[4721]: I0128 18:34:49.052893 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:34:49 crc kubenswrapper[4721]: I0128 18:34:49.052955 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:34:49 crc kubenswrapper[4721]: I0128 18:34:49.052965 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:34:49 crc kubenswrapper[4721]: I0128 18:34:49.052983 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:34:49 crc kubenswrapper[4721]: I0128 18:34:49.053028 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:34:49Z","lastTransitionTime":"2026-01-28T18:34:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:34:49 crc kubenswrapper[4721]: I0128 18:34:49.155226 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:34:49 crc kubenswrapper[4721]: I0128 18:34:49.155275 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:34:49 crc kubenswrapper[4721]: I0128 18:34:49.155286 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:34:49 crc kubenswrapper[4721]: I0128 18:34:49.155306 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:34:49 crc kubenswrapper[4721]: I0128 18:34:49.155316 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:34:49Z","lastTransitionTime":"2026-01-28T18:34:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:34:49 crc kubenswrapper[4721]: I0128 18:34:49.258122 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:34:49 crc kubenswrapper[4721]: I0128 18:34:49.258182 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:34:49 crc kubenswrapper[4721]: I0128 18:34:49.258191 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:34:49 crc kubenswrapper[4721]: I0128 18:34:49.258206 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:34:49 crc kubenswrapper[4721]: I0128 18:34:49.258216 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:34:49Z","lastTransitionTime":"2026-01-28T18:34:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:34:49 crc kubenswrapper[4721]: I0128 18:34:49.360448 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:34:49 crc kubenswrapper[4721]: I0128 18:34:49.360504 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:34:49 crc kubenswrapper[4721]: I0128 18:34:49.360517 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:34:49 crc kubenswrapper[4721]: I0128 18:34:49.360533 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:34:49 crc kubenswrapper[4721]: I0128 18:34:49.360548 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:34:49Z","lastTransitionTime":"2026-01-28T18:34:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:34:49 crc kubenswrapper[4721]: I0128 18:34:49.463071 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:34:49 crc kubenswrapper[4721]: I0128 18:34:49.463122 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:34:49 crc kubenswrapper[4721]: I0128 18:34:49.463132 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:34:49 crc kubenswrapper[4721]: I0128 18:34:49.463152 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:34:49 crc kubenswrapper[4721]: I0128 18:34:49.463163 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:34:49Z","lastTransitionTime":"2026-01-28T18:34:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:34:49 crc kubenswrapper[4721]: I0128 18:34:49.497417 4721 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-05 14:54:11.156621753 +0000 UTC Jan 28 18:34:49 crc kubenswrapper[4721]: I0128 18:34:49.565548 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:34:49 crc kubenswrapper[4721]: I0128 18:34:49.565601 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:34:49 crc kubenswrapper[4721]: I0128 18:34:49.565614 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:34:49 crc kubenswrapper[4721]: I0128 18:34:49.565633 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:34:49 crc kubenswrapper[4721]: I0128 18:34:49.565652 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:34:49Z","lastTransitionTime":"2026-01-28T18:34:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:34:49 crc kubenswrapper[4721]: I0128 18:34:49.661595 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 28 18:34:49 crc kubenswrapper[4721]: I0128 18:34:49.668492 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:34:49 crc kubenswrapper[4721]: I0128 18:34:49.668548 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:34:49 crc kubenswrapper[4721]: I0128 18:34:49.668558 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:34:49 crc kubenswrapper[4721]: I0128 18:34:49.668573 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:34:49 crc kubenswrapper[4721]: I0128 18:34:49.668581 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:34:49Z","lastTransitionTime":"2026-01-28T18:34:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:34:49 crc kubenswrapper[4721]: I0128 18:34:49.669405 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Jan 28 18:34:49 crc kubenswrapper[4721]: I0128 18:34:49.677603 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef61ab09457440960cf27586f65751f928ccc999db19fd16364262ce2449cd97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:49Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:49 crc kubenswrapper[4721]: I0128 18:34:49.691685 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-lf92l" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"20d04cbd-fcf1-4d48-9cca-1dd29b13c938\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://709d01fbeba51f6f21b60d9fd60aa9b44ba9e4f2504c19b7fd8d279f8c13c1e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mvqr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:34:30Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-lf92l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:49Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:49 crc kubenswrapper[4721]: I0128 18:34:49.709797 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7vsph" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"942c0bcf-8f75-42e8-a5c0-af4c640eb13c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ce38a3c3aa72d0816ff32ec77af100dc6f2a8761362992f01b88f836c44f65f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s84mk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5222e9a5ac55f20acbe1105e84f22c96889f410246b7ef423236f5d3e216f43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d5222e9a5ac55f20acbe1105e84f22c96889f410246b7ef423236f5d3e216f43\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s84mk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e702f148f20abb9e5e9453eb1b9ae0cc094138beb1fa048753ff3b497517e942\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e702f148f20abb9e5e9453eb1b9ae0cc094138beb1fa048753ff3b497517e942\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s84mk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a55ef87212072ac9f6d05d27d54f7a3f13add617b1a07a2fd1e0a652fe15cb3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a55ef87212072ac9f6d05d27d54f7a3f13add617b1a07a2fd1e0a652fe15cb3a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:34:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s84mk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea7d40aa704ad5ddaaa471ab8606dcece38b229fd8affcce9bb1959e5c19f092\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ea7d40aa704ad5ddaaa471ab8606dcece38b229fd8affcce9bb1959e5c19f092\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:34:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s84mk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1829c2eed51af896c5ccfa25c1f23ea971ea9d54e9db53415095c5ea43ac4062\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1829c2eed51af896c5ccfa25c1f23ea971ea9d54e9db53415095c5ea43ac4062\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:34:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s84mk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b576c2dcc0ad83742dbc75f744d08eb8b881f9be4f9d16f4d4d3004ce174f24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b576c2dcc0ad83742dbc75f744d08eb8b881f9be4f9d16f4d4d3004ce174f24\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:34:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s84mk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:34:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7vsph\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:49Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:49 crc kubenswrapper[4721]: I0128 18:34:49.723770 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa3af2d00ecbcf4d30c4c81191306dd45625f55032058b314ce2b91f6b2033e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:49Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:49 crc kubenswrapper[4721]: I0128 18:34:49.734942 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-76rx2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e3427a4-9a03-4a08-bf7f-7a5e96290ad6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9843dea4333a77dd5b0005984b9f8e7c7c993b28f89f5d6432477bfde3383339\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4cj5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bbd26672afb8ed608228f3f2101f430b1eaa5ccea0e8074fad597dec347c4522\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4cj5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:34:30Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-76rx2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:49Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:49 crc kubenswrapper[4721]: I0128 18:34:49.750782 4721 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:24Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:49Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:49 crc kubenswrapper[4721]: I0128 18:34:49.763882 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:49Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:49 crc kubenswrapper[4721]: I0128 18:34:49.771109 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:34:49 crc kubenswrapper[4721]: I0128 18:34:49.771148 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:34:49 crc kubenswrapper[4721]: I0128 18:34:49.771160 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:34:49 crc kubenswrapper[4721]: I0128 18:34:49.771195 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:34:49 crc kubenswrapper[4721]: I0128 18:34:49.771207 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:34:49Z","lastTransitionTime":"2026-01-28T18:34:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:34:49 crc kubenswrapper[4721]: I0128 18:34:49.781336 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-jqvck" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f3440038-c980-4fb4-be99-235515ec221c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96np9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96np9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:34:44Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-jqvck\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:49Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:49 crc kubenswrapper[4721]: I0128 18:34:49.793590 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-rk2l2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bdd30376-1599-4efc-bb55-7585e8702b60\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa0ff1b9df47dfd65597208cdd31f5cb8ce7f4c9c170d298db28d6f04da7b7a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wknj5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:34:32Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-rk2l2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:49Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:49 crc kubenswrapper[4721]: I0128 18:34:49.812968 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-x8hw8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8ac7e75e-c5bb-4b57-b2ba-9ebe8b8fbd88\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cba8dd293cf5ae7b0c987c6ee3b24da02d2687ee54292da92e28ca627ed3eaca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqp9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a3c1211b73ca96ac22854f0cb677a0088a679ad56b104ea6b8e0871884a3a71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqp9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:34:43Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-x8hw8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:49Z is after 2025-08-24T17:21:41Z" Jan 28 
18:34:49 crc kubenswrapper[4721]: I0128 18:34:49.826355 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:49Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:49 crc kubenswrapper[4721]: I0128 18:34:49.837517 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56147b9675ae31e5e8b4473346c69d90a1013ab084214a7fc34295f810b229a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5dc3277d9793a59f2e2a2b4d015782f383df20b594be435f894d24dbe5b837cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:49Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:49 crc kubenswrapper[4721]: I0128 18:34:49.855983 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wr282" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"70686e42-b434-4ff9-9753-cfc870beef82\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ac8a8ea8e677c233fbcb1ac69446f0251b2e390e84a39cd77c7ccf1a2041908\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7lkbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://989948e16979a2cd8568ed93334056da9feb5bd7226f4124428dcffc09f13d41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7lkbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2cc9ef8160c40e72ae736136ef051b06552caea2539fad7969c5f923eba0c2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7lkbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7df50ebf5999c8b004f27f801fc166377523c7b42171fec957ab82c88b529794\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7lkbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11610a0739792fa0417c894ad5b1c0d46d7188b585f8f4c1a1e7e66cd6d8bd46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7lkbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://44279a74a34d26b6eee05ac26d51a21492dddab4288ca5e09665c191cbacd90c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7lkbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e167117f8dd91a71f9983b9f3516e8162cf03f390ef2a1a8478fd5dd6df2dba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ae0f678963d4efdfa09099257ad96a3ba4457e2819e237234b4137fab9b67f69\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T18:34:40Z\\\",\\\"message\\\":\\\"nformers/factory.go:160\\\\nI0128 18:34:40.598846 6010 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0128 18:34:40.598844 6010 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0128 18:34:40.598886 6010 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0128 18:34:40.598914 6010 reflector.go:311] Stopping reflector *v1.UserDefinedNetwork (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/userdefinednetwork/v1/apis/informers/externalversions/factory.go:140\\\\nI0128 18:34:40.598929 6010 reflector.go:311] Stopping reflector *v1.ClusterUserDefinedNetwork (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/userdefinednetwork/v1/apis/informers/externalversions/factory.go:140\\\\nI0128 18:34:40.599009 6010 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0128 18:34:40.599206 6010 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T18:34:37Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e167117f8dd91a71f9983b9f3516e8162cf03f390ef2a1a8478fd5dd6df2dba\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T18:34:42Z\\\",\\\"message\\\":\\\"4:42.840611 6198 obj_retry.go:386] Retry successful for *v1.Pod openshift-machine-config-operator/machine-config-daemon-76rx2 after 0 failed attempt(s)\\\\nI0128 18:34:42.840616 6198 
default_network_controller.go:776] Recording success event on pod openshift-machine-config-operator/machine-config-daemon-76rx2\\\\nI0128 18:34:42.840504 6198 obj_retry.go:386] Retry successful for *v1.Pod openshift-etcd/etcd-crc after 0 failed attempt(s)\\\\nI0128 18:34:42.840624 6198 default_network_controller.go:776] Recording success event on pod openshift-etcd/etcd-crc\\\\nI0128 18:34:42.840480 6198 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-diagnostics/network-check-target-xd92c\\\\nI0128 18:34:42.840632 6198 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-diagnostics/network-check-target-xd92c\\\\nI0128 18:34:42.840636 6198 ovn.go:134] Ensuring zone local for Pod openshift-network-diagnostics/network-check-target-xd92c in node crc\\\\nF0128 18:34:42.840640 6198 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped a\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T18:34:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7lkbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://373ab38644697d3dbfe782ed66982ac1d5510d0a3bc39072c8113f71cc9e97f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshi
ft-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7lkbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://733c81890010402d252a1945284a85a3278b603ede433530865f680a1be02477\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://733c81890010402d252a1945284a85a3278b603ede433530865f680a1be02477\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7lkbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:34:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-wr282\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:49Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:49 crc kubenswrapper[4721]: I0128 18:34:49.872259 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rgqdt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c0a22020-3f34-4895-beec-2ed5d829ea79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d26691be7d95ffd613cc84f59222c67d29ab459d1c42ca07d20fe928d7fbc4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l86pm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:34:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rgqdt\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:49Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:49 crc kubenswrapper[4721]: I0128 18:34:49.873697 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:34:49 crc kubenswrapper[4721]: I0128 18:34:49.873764 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:34:49 crc kubenswrapper[4721]: I0128 18:34:49.873775 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:34:49 crc kubenswrapper[4721]: I0128 18:34:49.873790 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:34:49 crc kubenswrapper[4721]: I0128 18:34:49.873799 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:34:49Z","lastTransitionTime":"2026-01-28T18:34:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:34:49 crc kubenswrapper[4721]: I0128 18:34:49.895007 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"50459cb7-bb46-4f2f-a119-03a6102ad146\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:33:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97a290731225e8b998c044c9547db36b5af73889929db5fbc37b6b4170f5dbb7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9efdfabfba8f11d2bdeafad77520260137fcfff3ca51ae
77afd4ee9b141f602\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3c1b4d37e11f8a4ee9539e5ea3cce77b7d382133b9e59ccd9c3e8aa2b308754\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52a1199a0611555765afe8fcf017742788488064c958d0c75c46d51ddc2ff9d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://316506ed1e9f91314d427e1b6c3a44ee58d0c4a3a8a93b97bdf098703a6a2f34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f1a6a669a0341baa2d7fb17bcebfe702f96d1192ca35c3f2d00e463778271d6\\\",\\\"image\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f1a6a669a0341baa2d7fb17bcebfe702f96d1192ca35c3f2d00e463778271d6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:33:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d3a248e15466c89ce3b237a22ce54365004ea86dc1541e1ac44e028f91bf19f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d3a248e15466c89ce3b237a22ce54365004ea86dc1541e1ac44e028f91bf19f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:34:02Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f2e1f3a63445bad45029602f52863bfdf2fab495711e3318d68fee042530acb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f2e1f3a63445bad45029602f52863bfdf2fab495711e3318d68fee042530acb1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:34:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:33:56Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:49Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:49 crc kubenswrapper[4721]: I0128 18:34:49.908550 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"94f18835-9a2d-4427-bc71-e4cd48b94c19\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:33:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:33:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:33:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a345c3bb49064600db61f16588229b6877293710e6c21c15d91746c2f1d50b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://93950542dfa7bda5686ef6d8ff927d14baafd149893c732658e9aa3916be64ae\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://486a6284a1e3930aac13368062b8796b4c271214dd08cb60c7ae2ce7b4a45a05\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0339ccd2bd809dc715d0e334b596c5abdebca1af6ac5f7afefa81aa8baa470ed\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ca3243d75b5ce4aff178d82c45c7d1f765013233b5736600151adb4801aa462b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T18:34:21Z\\\",\\\"message\\\":\\\"le observer\\\\nW0128 18:34:20.724978 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0128 18:34:20.725313 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 18:34:20.726636 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2540251349/tls.crt::/tmp/serving-cert-2540251349/tls.key\\\\\\\"\\\\nI0128 18:34:21.177830 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0128 18:34:21.196258 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0128 18:34:21.196291 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0128 18:34:21.196317 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0128 18:34:21.196323 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0128 18:34:21.230729 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0128 18:34:21.230760 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0128 18:34:21.230771 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 18:34:21.230781 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 18:34:21.230789 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0128 18:34:21.230795 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0128 18:34:21.230800 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0128 18:34:21.230804 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0128 18:34:21.233981 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T18:34:20Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://90f1279d8262e042527207dbe56a1d08ba554650510bebfc53bd24cd84532026\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:06Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1499c7b625ec721799e4cf54e3de39ab75a752bbb0450a8eeffcdb6259f8a585\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1499c7b625ec721799e4cf54e3de39ab75a752bbb0450a8eeffcdb6259f8a585\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:33:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:33:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:49Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:49 crc kubenswrapper[4721]: I0128 18:34:49.920959 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"86fc797f-6769-4801-8cb0-b9f25c9ec29b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:33:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:33:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e8c8cd370cc146f15325497afef171437a3c7c3d4e82cda99a321bf847399bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f712f75bf0603b358c696f1a62f2363254ab926615ab377291c7083308e3f2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:33:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6143e461a697375ca79df96494f8cf4a575e622428db241f50a52ae1c67acbc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e37a3261383445df2e4050fb0b3f92c0afc6379c71eb061475783a72cd37459f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:33:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:49Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:49 crc kubenswrapper[4721]: I0128 18:34:49.976302 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:34:49 crc kubenswrapper[4721]: I0128 18:34:49.976362 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:34:49 crc kubenswrapper[4721]: I0128 18:34:49.976374 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:34:49 crc kubenswrapper[4721]: I0128 18:34:49.976389 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:34:49 crc kubenswrapper[4721]: I0128 18:34:49.976399 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:34:49Z","lastTransitionTime":"2026-01-28T18:34:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:34:50 crc kubenswrapper[4721]: I0128 18:34:50.078998 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:34:50 crc kubenswrapper[4721]: I0128 18:34:50.079042 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:34:50 crc kubenswrapper[4721]: I0128 18:34:50.079052 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:34:50 crc kubenswrapper[4721]: I0128 18:34:50.079069 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:34:50 crc kubenswrapper[4721]: I0128 18:34:50.079079 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:34:50Z","lastTransitionTime":"2026-01-28T18:34:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:34:50 crc kubenswrapper[4721]: I0128 18:34:50.181084 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:34:50 crc kubenswrapper[4721]: I0128 18:34:50.181127 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:34:50 crc kubenswrapper[4721]: I0128 18:34:50.181136 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:34:50 crc kubenswrapper[4721]: I0128 18:34:50.181150 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:34:50 crc kubenswrapper[4721]: I0128 18:34:50.181163 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:34:50Z","lastTransitionTime":"2026-01-28T18:34:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:34:50 crc kubenswrapper[4721]: I0128 18:34:50.283777 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:34:50 crc kubenswrapper[4721]: I0128 18:34:50.283814 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:34:50 crc kubenswrapper[4721]: I0128 18:34:50.283824 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:34:50 crc kubenswrapper[4721]: I0128 18:34:50.283837 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:34:50 crc kubenswrapper[4721]: I0128 18:34:50.283848 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:34:50Z","lastTransitionTime":"2026-01-28T18:34:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:34:50 crc kubenswrapper[4721]: I0128 18:34:50.386524 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:34:50 crc kubenswrapper[4721]: I0128 18:34:50.386569 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:34:50 crc kubenswrapper[4721]: I0128 18:34:50.386578 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:34:50 crc kubenswrapper[4721]: I0128 18:34:50.386594 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:34:50 crc kubenswrapper[4721]: I0128 18:34:50.386607 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:34:50Z","lastTransitionTime":"2026-01-28T18:34:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:34:50 crc kubenswrapper[4721]: I0128 18:34:50.489026 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:34:50 crc kubenswrapper[4721]: I0128 18:34:50.489080 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:34:50 crc kubenswrapper[4721]: I0128 18:34:50.489089 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:34:50 crc kubenswrapper[4721]: I0128 18:34:50.489105 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:34:50 crc kubenswrapper[4721]: I0128 18:34:50.489115 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:34:50Z","lastTransitionTime":"2026-01-28T18:34:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:34:50 crc kubenswrapper[4721]: I0128 18:34:50.497669 4721 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-20 15:37:54.508514791 +0000 UTC Jan 28 18:34:50 crc kubenswrapper[4721]: I0128 18:34:50.528312 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 18:34:50 crc kubenswrapper[4721]: I0128 18:34:50.528378 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jqvck" Jan 28 18:34:50 crc kubenswrapper[4721]: E0128 18:34:50.528448 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 18:34:50 crc kubenswrapper[4721]: I0128 18:34:50.528502 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 18:34:50 crc kubenswrapper[4721]: E0128 18:34:50.528670 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jqvck" podUID="f3440038-c980-4fb4-be99-235515ec221c" Jan 28 18:34:50 crc kubenswrapper[4721]: I0128 18:34:50.528701 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 18:34:50 crc kubenswrapper[4721]: E0128 18:34:50.528787 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 18:34:50 crc kubenswrapper[4721]: E0128 18:34:50.528878 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 18:34:50 crc kubenswrapper[4721]: I0128 18:34:50.591066 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:34:50 crc kubenswrapper[4721]: I0128 18:34:50.591096 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:34:50 crc kubenswrapper[4721]: I0128 18:34:50.591105 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:34:50 crc kubenswrapper[4721]: I0128 18:34:50.591119 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:34:50 crc kubenswrapper[4721]: I0128 18:34:50.591127 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:34:50Z","lastTransitionTime":"2026-01-28T18:34:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:34:50 crc kubenswrapper[4721]: I0128 18:34:50.693598 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:34:50 crc kubenswrapper[4721]: I0128 18:34:50.693638 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:34:50 crc kubenswrapper[4721]: I0128 18:34:50.693647 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:34:50 crc kubenswrapper[4721]: I0128 18:34:50.693661 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:34:50 crc kubenswrapper[4721]: I0128 18:34:50.693671 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:34:50Z","lastTransitionTime":"2026-01-28T18:34:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:34:50 crc kubenswrapper[4721]: I0128 18:34:50.795689 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:34:50 crc kubenswrapper[4721]: I0128 18:34:50.795738 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:34:50 crc kubenswrapper[4721]: I0128 18:34:50.795749 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:34:50 crc kubenswrapper[4721]: I0128 18:34:50.795765 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:34:50 crc kubenswrapper[4721]: I0128 18:34:50.795776 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:34:50Z","lastTransitionTime":"2026-01-28T18:34:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:34:50 crc kubenswrapper[4721]: I0128 18:34:50.898061 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:34:50 crc kubenswrapper[4721]: I0128 18:34:50.898091 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:34:50 crc kubenswrapper[4721]: I0128 18:34:50.898101 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:34:50 crc kubenswrapper[4721]: I0128 18:34:50.898114 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:34:50 crc kubenswrapper[4721]: I0128 18:34:50.898123 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:34:50Z","lastTransitionTime":"2026-01-28T18:34:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:34:51 crc kubenswrapper[4721]: I0128 18:34:51.000465 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:34:51 crc kubenswrapper[4721]: I0128 18:34:51.000523 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:34:51 crc kubenswrapper[4721]: I0128 18:34:51.000534 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:34:51 crc kubenswrapper[4721]: I0128 18:34:51.000557 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:34:51 crc kubenswrapper[4721]: I0128 18:34:51.000569 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:34:51Z","lastTransitionTime":"2026-01-28T18:34:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:34:51 crc kubenswrapper[4721]: I0128 18:34:51.103072 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:34:51 crc kubenswrapper[4721]: I0128 18:34:51.103118 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:34:51 crc kubenswrapper[4721]: I0128 18:34:51.103130 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:34:51 crc kubenswrapper[4721]: I0128 18:34:51.103147 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:34:51 crc kubenswrapper[4721]: I0128 18:34:51.103157 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:34:51Z","lastTransitionTime":"2026-01-28T18:34:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:34:51 crc kubenswrapper[4721]: I0128 18:34:51.205664 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:34:51 crc kubenswrapper[4721]: I0128 18:34:51.205692 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:34:51 crc kubenswrapper[4721]: I0128 18:34:51.205700 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:34:51 crc kubenswrapper[4721]: I0128 18:34:51.205713 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:34:51 crc kubenswrapper[4721]: I0128 18:34:51.205722 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:34:51Z","lastTransitionTime":"2026-01-28T18:34:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:34:51 crc kubenswrapper[4721]: I0128 18:34:51.308785 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:34:51 crc kubenswrapper[4721]: I0128 18:34:51.308835 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:34:51 crc kubenswrapper[4721]: I0128 18:34:51.308844 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:34:51 crc kubenswrapper[4721]: I0128 18:34:51.308858 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:34:51 crc kubenswrapper[4721]: I0128 18:34:51.308870 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:34:51Z","lastTransitionTime":"2026-01-28T18:34:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:34:51 crc kubenswrapper[4721]: I0128 18:34:51.410727 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:34:51 crc kubenswrapper[4721]: I0128 18:34:51.410789 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:34:51 crc kubenswrapper[4721]: I0128 18:34:51.410802 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:34:51 crc kubenswrapper[4721]: I0128 18:34:51.410822 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:34:51 crc kubenswrapper[4721]: I0128 18:34:51.410835 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:34:51Z","lastTransitionTime":"2026-01-28T18:34:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:34:51 crc kubenswrapper[4721]: I0128 18:34:51.498322 4721 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-27 22:00:07.369249548 +0000 UTC Jan 28 18:34:51 crc kubenswrapper[4721]: I0128 18:34:51.513140 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:34:51 crc kubenswrapper[4721]: I0128 18:34:51.513209 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:34:51 crc kubenswrapper[4721]: I0128 18:34:51.513224 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:34:51 crc kubenswrapper[4721]: I0128 18:34:51.513243 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:34:51 crc kubenswrapper[4721]: I0128 18:34:51.513257 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:34:51Z","lastTransitionTime":"2026-01-28T18:34:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:34:51 crc kubenswrapper[4721]: I0128 18:34:51.616658 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:34:51 crc kubenswrapper[4721]: I0128 18:34:51.616707 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:34:51 crc kubenswrapper[4721]: I0128 18:34:51.616719 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:34:51 crc kubenswrapper[4721]: I0128 18:34:51.616735 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:34:51 crc kubenswrapper[4721]: I0128 18:34:51.616748 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:34:51Z","lastTransitionTime":"2026-01-28T18:34:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:34:51 crc kubenswrapper[4721]: I0128 18:34:51.719057 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:34:51 crc kubenswrapper[4721]: I0128 18:34:51.719102 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:34:51 crc kubenswrapper[4721]: I0128 18:34:51.719114 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:34:51 crc kubenswrapper[4721]: I0128 18:34:51.719127 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:34:51 crc kubenswrapper[4721]: I0128 18:34:51.719137 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:34:51Z","lastTransitionTime":"2026-01-28T18:34:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:34:51 crc kubenswrapper[4721]: I0128 18:34:51.821673 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:34:51 crc kubenswrapper[4721]: I0128 18:34:51.821707 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:34:51 crc kubenswrapper[4721]: I0128 18:34:51.821715 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:34:51 crc kubenswrapper[4721]: I0128 18:34:51.821729 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:34:51 crc kubenswrapper[4721]: I0128 18:34:51.821740 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:34:51Z","lastTransitionTime":"2026-01-28T18:34:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:34:51 crc kubenswrapper[4721]: I0128 18:34:51.924745 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:34:51 crc kubenswrapper[4721]: I0128 18:34:51.924793 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:34:51 crc kubenswrapper[4721]: I0128 18:34:51.924804 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:34:51 crc kubenswrapper[4721]: I0128 18:34:51.924821 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:34:51 crc kubenswrapper[4721]: I0128 18:34:51.924833 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:34:51Z","lastTransitionTime":"2026-01-28T18:34:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:34:52 crc kubenswrapper[4721]: I0128 18:34:52.027612 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:34:52 crc kubenswrapper[4721]: I0128 18:34:52.027648 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:34:52 crc kubenswrapper[4721]: I0128 18:34:52.027782 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:34:52 crc kubenswrapper[4721]: I0128 18:34:52.027807 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:34:52 crc kubenswrapper[4721]: I0128 18:34:52.028050 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:34:52Z","lastTransitionTime":"2026-01-28T18:34:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:34:52 crc kubenswrapper[4721]: I0128 18:34:52.130384 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:34:52 crc kubenswrapper[4721]: I0128 18:34:52.130417 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:34:52 crc kubenswrapper[4721]: I0128 18:34:52.130427 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:34:52 crc kubenswrapper[4721]: I0128 18:34:52.130440 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:34:52 crc kubenswrapper[4721]: I0128 18:34:52.130449 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:34:52Z","lastTransitionTime":"2026-01-28T18:34:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:34:52 crc kubenswrapper[4721]: I0128 18:34:52.232374 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:34:52 crc kubenswrapper[4721]: I0128 18:34:52.232410 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:34:52 crc kubenswrapper[4721]: I0128 18:34:52.232418 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:34:52 crc kubenswrapper[4721]: I0128 18:34:52.232431 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:34:52 crc kubenswrapper[4721]: I0128 18:34:52.232440 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:34:52Z","lastTransitionTime":"2026-01-28T18:34:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:34:52 crc kubenswrapper[4721]: I0128 18:34:52.238745 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f3440038-c980-4fb4-be99-235515ec221c-metrics-certs\") pod \"network-metrics-daemon-jqvck\" (UID: \"f3440038-c980-4fb4-be99-235515ec221c\") " pod="openshift-multus/network-metrics-daemon-jqvck" Jan 28 18:34:52 crc kubenswrapper[4721]: E0128 18:34:52.238903 4721 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 28 18:34:52 crc kubenswrapper[4721]: E0128 18:34:52.239004 4721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f3440038-c980-4fb4-be99-235515ec221c-metrics-certs podName:f3440038-c980-4fb4-be99-235515ec221c nodeName:}" failed. No retries permitted until 2026-01-28 18:35:00.238984602 +0000 UTC m=+65.964290152 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/f3440038-c980-4fb4-be99-235515ec221c-metrics-certs") pod "network-metrics-daemon-jqvck" (UID: "f3440038-c980-4fb4-be99-235515ec221c") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 28 18:34:52 crc kubenswrapper[4721]: I0128 18:34:52.335114 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:34:52 crc kubenswrapper[4721]: I0128 18:34:52.335153 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:34:52 crc kubenswrapper[4721]: I0128 18:34:52.335165 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:34:52 crc kubenswrapper[4721]: I0128 18:34:52.335212 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:34:52 crc kubenswrapper[4721]: I0128 18:34:52.335223 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:34:52Z","lastTransitionTime":"2026-01-28T18:34:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:34:52 crc kubenswrapper[4721]: I0128 18:34:52.437407 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:34:52 crc kubenswrapper[4721]: I0128 18:34:52.437461 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:34:52 crc kubenswrapper[4721]: I0128 18:34:52.437474 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:34:52 crc kubenswrapper[4721]: I0128 18:34:52.437491 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:34:52 crc kubenswrapper[4721]: I0128 18:34:52.437502 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:34:52Z","lastTransitionTime":"2026-01-28T18:34:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:34:52 crc kubenswrapper[4721]: I0128 18:34:52.498549 4721 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-17 12:04:09.012483793 +0000 UTC Jan 28 18:34:52 crc kubenswrapper[4721]: I0128 18:34:52.528534 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 18:34:52 crc kubenswrapper[4721]: I0128 18:34:52.528548 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 18:34:52 crc kubenswrapper[4721]: I0128 18:34:52.528565 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jqvck" Jan 28 18:34:52 crc kubenswrapper[4721]: E0128 18:34:52.528866 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 18:34:52 crc kubenswrapper[4721]: E0128 18:34:52.528742 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 18:34:52 crc kubenswrapper[4721]: E0128 18:34:52.528914 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-jqvck" podUID="f3440038-c980-4fb4-be99-235515ec221c" Jan 28 18:34:52 crc kubenswrapper[4721]: I0128 18:34:52.528639 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 18:34:52 crc kubenswrapper[4721]: E0128 18:34:52.528982 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 18:34:52 crc kubenswrapper[4721]: I0128 18:34:52.539519 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:34:52 crc kubenswrapper[4721]: I0128 18:34:52.539570 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:34:52 crc kubenswrapper[4721]: I0128 18:34:52.539585 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:34:52 crc kubenswrapper[4721]: I0128 18:34:52.539606 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:34:52 crc kubenswrapper[4721]: I0128 18:34:52.539620 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:34:52Z","lastTransitionTime":"2026-01-28T18:34:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:34:52 crc kubenswrapper[4721]: I0128 18:34:52.641746 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:34:52 crc kubenswrapper[4721]: I0128 18:34:52.642013 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:34:52 crc kubenswrapper[4721]: I0128 18:34:52.642127 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:34:52 crc kubenswrapper[4721]: I0128 18:34:52.642243 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:34:52 crc kubenswrapper[4721]: I0128 18:34:52.642308 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:34:52Z","lastTransitionTime":"2026-01-28T18:34:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:34:52 crc kubenswrapper[4721]: I0128 18:34:52.744244 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:34:52 crc kubenswrapper[4721]: I0128 18:34:52.744283 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:34:52 crc kubenswrapper[4721]: I0128 18:34:52.744294 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:34:52 crc kubenswrapper[4721]: I0128 18:34:52.744311 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:34:52 crc kubenswrapper[4721]: I0128 18:34:52.744324 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:34:52Z","lastTransitionTime":"2026-01-28T18:34:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:34:52 crc kubenswrapper[4721]: I0128 18:34:52.846314 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:34:52 crc kubenswrapper[4721]: I0128 18:34:52.846372 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:34:52 crc kubenswrapper[4721]: I0128 18:34:52.846384 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:34:52 crc kubenswrapper[4721]: I0128 18:34:52.846403 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:34:52 crc kubenswrapper[4721]: I0128 18:34:52.846416 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:34:52Z","lastTransitionTime":"2026-01-28T18:34:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:34:52 crc kubenswrapper[4721]: I0128 18:34:52.948820 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:34:52 crc kubenswrapper[4721]: I0128 18:34:52.948884 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:34:52 crc kubenswrapper[4721]: I0128 18:34:52.948895 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:34:52 crc kubenswrapper[4721]: I0128 18:34:52.948919 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:34:52 crc kubenswrapper[4721]: I0128 18:34:52.948937 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:34:52Z","lastTransitionTime":"2026-01-28T18:34:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:34:53 crc kubenswrapper[4721]: I0128 18:34:53.051291 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:34:53 crc kubenswrapper[4721]: I0128 18:34:53.051356 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:34:53 crc kubenswrapper[4721]: I0128 18:34:53.051366 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:34:53 crc kubenswrapper[4721]: I0128 18:34:53.051386 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:34:53 crc kubenswrapper[4721]: I0128 18:34:53.051410 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:34:53Z","lastTransitionTime":"2026-01-28T18:34:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:34:53 crc kubenswrapper[4721]: I0128 18:34:53.153527 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:34:53 crc kubenswrapper[4721]: I0128 18:34:53.153568 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:34:53 crc kubenswrapper[4721]: I0128 18:34:53.153584 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:34:53 crc kubenswrapper[4721]: I0128 18:34:53.153600 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:34:53 crc kubenswrapper[4721]: I0128 18:34:53.153609 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:34:53Z","lastTransitionTime":"2026-01-28T18:34:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:34:53 crc kubenswrapper[4721]: I0128 18:34:53.256031 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:34:53 crc kubenswrapper[4721]: I0128 18:34:53.256067 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:34:53 crc kubenswrapper[4721]: I0128 18:34:53.256078 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:34:53 crc kubenswrapper[4721]: I0128 18:34:53.256092 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:34:53 crc kubenswrapper[4721]: I0128 18:34:53.256104 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:34:53Z","lastTransitionTime":"2026-01-28T18:34:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:34:53 crc kubenswrapper[4721]: I0128 18:34:53.358515 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:34:53 crc kubenswrapper[4721]: I0128 18:34:53.358556 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:34:53 crc kubenswrapper[4721]: I0128 18:34:53.358590 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:34:53 crc kubenswrapper[4721]: I0128 18:34:53.358610 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:34:53 crc kubenswrapper[4721]: I0128 18:34:53.358620 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:34:53Z","lastTransitionTime":"2026-01-28T18:34:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:34:53 crc kubenswrapper[4721]: I0128 18:34:53.460780 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:34:53 crc kubenswrapper[4721]: I0128 18:34:53.460816 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:34:53 crc kubenswrapper[4721]: I0128 18:34:53.460825 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:34:53 crc kubenswrapper[4721]: I0128 18:34:53.460856 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:34:53 crc kubenswrapper[4721]: I0128 18:34:53.460866 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:34:53Z","lastTransitionTime":"2026-01-28T18:34:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
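[annotation] The five-line pattern above (four "Recording event message" entries followed by one "Node became not ready" write from setters.go:603) repeats roughly every 100 ms for as long as the CNI configuration directory stays empty. A minimal sketch for verifying that cadence offline, assuming this journal has been saved to a plain-text file whose name is passed on the command line (the saved file and the offline workflow are assumptions, not part of the log):

    import re, sys
    from datetime import datetime

    # timestamp of every "Node became not ready" write (setters.go:603)
    pat = re.compile(r"I0128 (\d{2}:\d{2}:\d{2}\.\d{6}) \d+ setters\.go:603")
    times = [datetime.strptime(m.group(1), "%H:%M:%S.%f")
             for m in pat.finditer(open(sys.argv[1]).read())]
    for earlier, later in zip(times, times[1:]):
        print((later - earlier).total_seconds())  # expect values near 0.1 s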
Jan 28 18:34:53 crc kubenswrapper[4721]: I0128 18:34:53.499698 4721 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-15 13:51:49.418986816 +0000 UTC
Jan 28 18:34:53 crc kubenswrapper[4721]: I0128 18:34:53.562890 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 18:34:53 crc kubenswrapper[4721]: I0128 18:34:53.562932 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 18:34:53 crc kubenswrapper[4721]: I0128 18:34:53.562942 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 18:34:53 crc kubenswrapper[4721]: I0128 18:34:53.562954 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 18:34:53 crc kubenswrapper[4721]: I0128 18:34:53.562964 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:34:53Z","lastTransitionTime":"2026-01-28T18:34:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 18:34:53 crc kubenswrapper[4721]: I0128 18:34:53.665413 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 18:34:53 crc kubenswrapper[4721]: I0128 18:34:53.665463 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 18:34:53 crc kubenswrapper[4721]: I0128 18:34:53.665477 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 18:34:53 crc kubenswrapper[4721]: I0128 18:34:53.665495 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 18:34:53 crc kubenswrapper[4721]: I0128 18:34:53.665505 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:34:53Z","lastTransitionTime":"2026-01-28T18:34:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 18:34:53 crc kubenswrapper[4721]: I0128 18:34:53.767696 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 18:34:53 crc kubenswrapper[4721]: I0128 18:34:53.767735 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 18:34:53 crc kubenswrapper[4721]: I0128 18:34:53.767744 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 18:34:53 crc kubenswrapper[4721]: I0128 18:34:53.767759 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 18:34:53 crc kubenswrapper[4721]: I0128 18:34:53.767769 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:34:53Z","lastTransitionTime":"2026-01-28T18:34:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 18:34:53 crc kubenswrapper[4721]: I0128 18:34:53.869983 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 18:34:53 crc kubenswrapper[4721]: I0128 18:34:53.870065 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 18:34:53 crc kubenswrapper[4721]: I0128 18:34:53.870090 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 18:34:53 crc kubenswrapper[4721]: I0128 18:34:53.870121 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 18:34:53 crc kubenswrapper[4721]: I0128 18:34:53.870147 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:34:53Z","lastTransitionTime":"2026-01-28T18:34:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 18:34:53 crc kubenswrapper[4721]: I0128 18:34:53.972799 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 18:34:53 crc kubenswrapper[4721]: I0128 18:34:53.972841 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 18:34:53 crc kubenswrapper[4721]: I0128 18:34:53.972852 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 18:34:53 crc kubenswrapper[4721]: I0128 18:34:53.972869 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 18:34:53 crc kubenswrapper[4721]: I0128 18:34:53.972879 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:34:53Z","lastTransitionTime":"2026-01-28T18:34:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 18:34:54 crc kubenswrapper[4721]: I0128 18:34:54.075387 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 18:34:54 crc kubenswrapper[4721]: I0128 18:34:54.075435 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 18:34:54 crc kubenswrapper[4721]: I0128 18:34:54.075444 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 18:34:54 crc kubenswrapper[4721]: I0128 18:34:54.075460 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 18:34:54 crc kubenswrapper[4721]: I0128 18:34:54.075471 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:34:54Z","lastTransitionTime":"2026-01-28T18:34:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 18:34:54 crc kubenswrapper[4721]: I0128 18:34:54.177905 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 18:34:54 crc kubenswrapper[4721]: I0128 18:34:54.177951 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 18:34:54 crc kubenswrapper[4721]: I0128 18:34:54.177962 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 18:34:54 crc kubenswrapper[4721]: I0128 18:34:54.177979 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 18:34:54 crc kubenswrapper[4721]: I0128 18:34:54.177992 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:34:54Z","lastTransitionTime":"2026-01-28T18:34:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 18:34:54 crc kubenswrapper[4721]: I0128 18:34:54.280804 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 18:34:54 crc kubenswrapper[4721]: I0128 18:34:54.280856 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 18:34:54 crc kubenswrapper[4721]: I0128 18:34:54.280867 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 18:34:54 crc kubenswrapper[4721]: I0128 18:34:54.280883 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 18:34:54 crc kubenswrapper[4721]: I0128 18:34:54.280894 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:34:54Z","lastTransitionTime":"2026-01-28T18:34:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
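[annotation] Every status write above carries the identical Ready=False condition: reason KubeletNotReady, because no CNI configuration file exists under /etc/kubernetes/cni/net.d/. A minimal sketch for confirming the node condition from outside, assuming the official kubernetes Python client and a kubeconfig that can reach this cluster (both are assumptions; neither appears in the log):

    from kubernetes import client, config

    config.load_kube_config()  # assumes a working kubeconfig for this cluster
    node = client.CoreV1Api().read_node("crc")  # node name taken from the entries above
    for cond in node.status.conditions:
        if cond.type == "Ready":
            # while the CNI config is missing, expect: False KubeletNotReady "container runtime network not ready: ..."
            print(cond.status, cond.reason, cond.message)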
Jan 28 18:34:54 crc kubenswrapper[4721]: I0128 18:34:54.382539 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 18:34:54 crc kubenswrapper[4721]: I0128 18:34:54.382572 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 18:34:54 crc kubenswrapper[4721]: I0128 18:34:54.382584 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 18:34:54 crc kubenswrapper[4721]: I0128 18:34:54.382599 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 18:34:54 crc kubenswrapper[4721]: I0128 18:34:54.382611 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:34:54Z","lastTransitionTime":"2026-01-28T18:34:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 18:34:54 crc kubenswrapper[4721]: I0128 18:34:54.484779 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 18:34:54 crc kubenswrapper[4721]: I0128 18:34:54.484820 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 18:34:54 crc kubenswrapper[4721]: I0128 18:34:54.484828 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 18:34:54 crc kubenswrapper[4721]: I0128 18:34:54.484843 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 18:34:54 crc kubenswrapper[4721]: I0128 18:34:54.484853 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:34:54Z","lastTransitionTime":"2026-01-28T18:34:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 18:34:54 crc kubenswrapper[4721]: I0128 18:34:54.500258 4721 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-24 17:15:05.259959994 +0000 UTC
Jan 28 18:34:54 crc kubenswrapper[4721]: I0128 18:34:54.528456 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 28 18:34:54 crc kubenswrapper[4721]: I0128 18:34:54.528550 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 28 18:34:54 crc kubenswrapper[4721]: I0128 18:34:54.528678 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jqvck"
Jan 28 18:34:54 crc kubenswrapper[4721]: I0128 18:34:54.528684 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
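[annotation] The rotation deadline just logged (2025-11-24) already lies more than two months in the past relative to the node clock (2026-01-28). The same clock jump surfaces in the entries that follow: every pod status patch is rejected because the pod.network-node-identity.openshift.io webhook at 127.0.0.1:9743 presents a serving certificate that expired 2025-08-24T17:21:41Z. A minimal sketch for reading that certificate's validity window, assuming it runs where 127.0.0.1:9743 is reachable (i.e. on the node) and that the third-party cryptography package is installed:

    import ssl
    from cryptography import x509

    # endpoint taken from the webhook errors below; no verification is performed,
    # so an already-expired certificate can still be downloaded and inspected
    pem = ssl.get_server_certificate(("127.0.0.1", 9743))
    cert = x509.load_pem_x509_certificate(pem.encode())
    # per the x509 errors in this log, expect not_valid_after = 2025-08-24 17:21:41
    print(cert.not_valid_before, cert.not_valid_after)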
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 18:34:54 crc kubenswrapper[4721]: E0128 18:34:54.528685 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 18:34:54 crc kubenswrapper[4721]: E0128 18:34:54.529923 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jqvck" podUID="f3440038-c980-4fb4-be99-235515ec221c" Jan 28 18:34:54 crc kubenswrapper[4721]: E0128 18:34:54.530356 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 18:34:54 crc kubenswrapper[4721]: E0128 18:34:54.530399 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 18:34:54 crc kubenswrapper[4721]: I0128 18:34:54.531254 4721 scope.go:117] "RemoveContainer" containerID="9e167117f8dd91a71f9983b9f3516e8162cf03f390ef2a1a8478fd5dd6df2dba" Jan 28 18:34:54 crc kubenswrapper[4721]: I0128 18:34:54.544551 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:24Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:54Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:54 crc kubenswrapper[4721]: I0128 18:34:54.559205 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:24Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:54Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:54 crc kubenswrapper[4721]: I0128 18:34:54.571138 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa3af2d00ecbcf4d30c4c81191306dd45625f55032058b314ce2b91f6b2033e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:54Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:54 crc kubenswrapper[4721]: I0128 18:34:54.582734 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-76rx2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e3427a4-9a03-4a08-bf7f-7a5e96290ad6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9843dea4333a77dd5b0005984b9f8e7c7c993b28f89f5d6432477bfde3383339\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4cj5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bbd26672afb8ed608228f3f2101f430b1eaa5ccea0e8074fad597dec347c4522\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4cj5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:34:30Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-76rx2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:54Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:54 crc kubenswrapper[4721]: I0128 18:34:54.587209 4721 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:34:54 crc kubenswrapper[4721]: I0128 18:34:54.587241 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:34:54 crc kubenswrapper[4721]: I0128 18:34:54.587250 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:34:54 crc kubenswrapper[4721]: I0128 18:34:54.587264 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:34:54 crc kubenswrapper[4721]: I0128 18:34:54.587273 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:34:54Z","lastTransitionTime":"2026-01-28T18:34:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:34:54 crc kubenswrapper[4721]: I0128 18:34:54.594862 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e670fafb-703c-4cc9-b670-d25ae62d87a0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:33:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f97092be25e90c4a15af043397b1cbcefdb3a3511a80a046496bef807abc8f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8351f80de5ab5d11c5a87270e69a8ebd20b3a804671e20b991f1fc77ba27bae8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\"
:\\\"2026-01-28T18:34:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8465df12048ab0feaba16e1935fa17feb4fe967ab3e4ef37981bed51ff77911\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47389981052e7ab8530932e01c95d9b178f2c55b21e03b460d3f02dddbcf4830\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://47389981052e7ab8530932e01c95d9b178f2c55b21e03b460d3f02dddbcf4830\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:33:58Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:33:56Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:54Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:54 crc kubenswrapper[4721]: I0128 18:34:54.605071 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-rk2l2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bdd30376-1599-4efc-bb55-7585e8702b60\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa0ff1b9df47dfd65597208cdd31f5cb8ce7f4c9c170d298db28d6f04da7b7a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wknj5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:34:32Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-rk2l2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:54Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:54 crc kubenswrapper[4721]: I0128 18:34:54.615039 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-x8hw8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8ac7e75e-c5bb-4b57-b2ba-9ebe8b8fbd88\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cba8dd293cf5ae7b0c987c6ee3b24da02d2687ee54292da92e28ca627ed3eaca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqp9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a3c1211b73ca96ac22854f0cb677a0088a679ad56b104ea6b8e0871884a3a71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqp9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:34:43Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-x8hw8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:54Z is after 2025-08-24T17:21:41Z" Jan 28 
18:34:54 crc kubenswrapper[4721]: I0128 18:34:54.624395 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-jqvck" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f3440038-c980-4fb4-be99-235515ec221c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96np9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96np9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:34:44Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-jqvck\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:54Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:54 crc kubenswrapper[4721]: I0128 18:34:54.636008 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"94f18835-9a2d-4427-bc71-e4cd48b94c19\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:33:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:33:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:33:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a345c3bb49064600db61f16588229b6877293710e6c21c15d91746c2f1d50b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://93950542dfa7bda5686ef6d8ff927d14baafd149893c732658e9aa3916be64ae\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://486a6284a1e3930aac13368062b8796b4c271214dd08cb60c7ae2ce7b4a45a05\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0339ccd2bd809dc715d0e334b596c5abdebca1af6ac5f7afefa81aa8baa470ed\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ca3243d75b5ce4aff178d82c45c7d1f765013233b5736600151adb4801aa462b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T18:34:21Z\\\",\\\"message\\\":\\\"le observer\\\\nW0128 18:34:20.724978 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0128 18:34:20.725313 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 18:34:20.726636 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2540251349/tls.crt::/tmp/serving-cert-2540251349/tls.key\\\\\\\"\\\\nI0128 18:34:21.177830 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0128 18:34:21.196258 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0128 18:34:21.196291 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0128 18:34:21.196317 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0128 18:34:21.196323 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0128 18:34:21.230729 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0128 18:34:21.230760 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0128 18:34:21.230771 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 18:34:21.230781 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 18:34:21.230789 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0128 18:34:21.230795 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0128 18:34:21.230800 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0128 18:34:21.230804 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0128 18:34:21.233981 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T18:34:20Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://90f1279d8262e042527207dbe56a1d08ba554650510bebfc53bd24cd84532026\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:06Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1499c7b625ec721799e4cf54e3de39ab75a752bbb0450a8eeffcdb6259f8a585\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1499c7b625ec721799e4cf54e3de39ab75a752bbb0450a8eeffcdb6259f8a585\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:33:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:33:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:54Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:54 crc kubenswrapper[4721]: I0128 18:34:54.648771 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"86fc797f-6769-4801-8cb0-b9f25c9ec29b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:33:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:33:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e8c8cd370cc146f15325497afef171437a3c7c3d4e82cda99a321bf847399bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f712f75bf0603b358c696f1a62f2363254ab926615ab377291c7083308e3f2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:33:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6143e461a697375ca79df96494f8cf4a575e622428db241f50a52ae1c67acbc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e37a3261383445df2e4050fb0b3f92c0afc6379c71eb061475783a72cd37459f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:33:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:54Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:54 crc kubenswrapper[4721]: I0128 18:34:54.660937 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:54Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:54 crc kubenswrapper[4721]: I0128 18:34:54.674129 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56147b9675ae31e5e8b4473346c69d90a1013ab084214a7fc34295f810b229a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5dc3277d9793a59f2e2a2b4d015782f383df20b594be435f894d24dbe5b837cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:54Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:54 crc kubenswrapper[4721]: I0128 18:34:54.689985 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:34:54 crc kubenswrapper[4721]: I0128 18:34:54.690040 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:34:54 crc kubenswrapper[4721]: I0128 18:34:54.690052 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:34:54 crc kubenswrapper[4721]: I0128 18:34:54.690068 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:34:54 crc kubenswrapper[4721]: I0128 18:34:54.690098 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:34:54Z","lastTransitionTime":"2026-01-28T18:34:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:34:54 crc kubenswrapper[4721]: I0128 18:34:54.697027 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wr282" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"70686e42-b434-4ff9-9753-cfc870beef82\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ac8a8ea8e677c233fbcb1ac69446f0251b2e390e84a39cd77c7ccf1a2041908\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7lkbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://989948e16979a2cd8568ed93334056da9feb5bd7226f4124428dcffc09f13d41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7lkbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://b2cc9ef8160c40e72ae736136ef051b06552caea2539fad7969c5f923eba0c2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7lkbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7df50ebf5999c8b004f27f801fc166377523c7b42171fec957ab82c88b529794\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7lkbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11610a0739792fa0417c894ad5b1c0d46d7188b585f8f4c1a1e7e66cd6d8bd46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7lkbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://44279a74a34d26b6eee05ac26d51a21492dddab4288ca5e09665c191cbacd90c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7lkbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e167117f8dd91a71f9983b9f3516e8162cf03f390ef2a1a8478fd5dd6df2dba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e167117f8dd91a71f9983b9f3516e8162cf03f390ef2a1a8478fd5dd6df2dba\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T18:34:42Z\\\",\\\"message\\\":\\\"4:42.840611 6198 obj_retry.go:386] Retry successful for *v1.Pod openshift-machine-config-operator/machine-config-daemon-76rx2 after 0 failed attempt(s)\\\\nI0128 18:34:42.840616 6198 default_network_controller.go:776] Recording success event on pod openshift-machine-config-operator/machine-config-daemon-76rx2\\\\nI0128 18:34:42.840504 6198 obj_retry.go:386] Retry successful for *v1.Pod openshift-etcd/etcd-crc after 0 failed attempt(s)\\\\nI0128 18:34:42.840624 6198 default_network_controller.go:776] Recording success event on pod openshift-etcd/etcd-crc\\\\nI0128 18:34:42.840480 6198 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-diagnostics/network-check-target-xd92c\\\\nI0128 18:34:42.840632 6198 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-diagnostics/network-check-target-xd92c\\\\nI0128 18:34:42.840636 6198 ovn.go:134] Ensuring zone local for Pod openshift-network-diagnostics/network-check-target-xd92c in node crc\\\\nF0128 18:34:42.840640 6198 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped a\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T18:34:41Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-wr282_openshift-ovn-kubernetes(70686e42-b434-4ff9-9753-cfc870beef82)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7lkbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://373ab38644697d3dbfe782ed66982ac1d5510d0a3bc39072c8113f71cc9e97f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7lkbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://733c81890010402d252a1945284a85a3278b603ede433530865f680a1be02477\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://733c81890010402d252a1945284a85a3278b603ede433530865f680a1be02477\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7lkbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:34:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-wr282\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:54Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:54 crc kubenswrapper[4721]: I0128 18:34:54.710058 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rgqdt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c0a22020-3f34-4895-beec-2ed5d829ea79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d26691be7d95ffd613cc84f59222c67d29ab459d1c42ca07d20fe928d7fbc4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-
cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l86pm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:34:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rgqdt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:54Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:54 crc kubenswrapper[4721]: I0128 18:34:54.730066 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"50459cb7-bb46-4f2f-a119-03a6102ad146\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:33:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97a290731225e8b998c044c9547db36b5af73889929db5fbc37b6b4170f5dbb7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9efdfabfba8f11d2bdeafad77520260137fcfff3ca51ae77afd4ee9b141f602\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3c1b4d37e11f8a4ee9539e5ea3cce77b7d382133b9e59ccd9c3e8aa2b308754\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52a1199a0611555765afe8fcf01774278848806
4c958d0c75c46d51ddc2ff9d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://316506ed1e9f91314d427e1b6c3a44ee58d0c4a3a8a93b97bdf098703a6a2f34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f1a6a669a0341baa2d7fb17bcebfe702f96d1192ca35c3f2d00e463778271d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f1a6a669a0341baa2d7fb17bcebfe702f96d1192ca35c3f2d00e463778271d6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:33:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d3a248e15466c89ce3b237a22ce54365004ea86dc1541e1ac44e028f91bf19f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d3a248e15466c89ce3b237a22ce54365004ea86dc1541e1ac44e028f91bf19f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:34:02Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f2e1f3a63445bad45029602f52863bfdf2fab495711e3318d68fee042530acb1\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f2e1f3a63445bad45029602f52863bfdf2fab495711e3318d68fee042530acb1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:34:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:33:56Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:54Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:54 crc kubenswrapper[4721]: I0128 18:34:54.741072 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-lf92l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"20d04cbd-fcf1-4d48-9cca-1dd29b13c938\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://709d01fbeba51f6f21b60d9fd60aa9b44ba9e4f2504c19b7fd8d279f8c13c1e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mvqr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\
\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:34:30Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-lf92l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:54Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:54 crc kubenswrapper[4721]: I0128 18:34:54.754699 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7vsph" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"942c0bcf-8f75-42e8-a5c0-af4c640eb13c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ce38a3c3aa72d0816ff32ec77af100dc6f2a8761362992f01b88f836c44f65f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s84mk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5222e9a5ac55f20acbe1105e84f22c96889f410246b7ef423236f5d3e216f43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d5222e9a5ac55f20acbe1105e84f22c96889f410246b7ef423236f5d3e216f43\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:34:31Z\\\"}},\\\"volumeMounts\\\"
:[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s84mk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e702f148f20abb9e5e9453eb1b9ae0cc094138beb1fa048753ff3b497517e942\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e702f148f20abb9e5e9453eb1b9ae0cc094138beb1fa048753ff3b497517e942\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s84mk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a55ef87212072ac9f6d05d27d54f7a3f13add617b1a07a2fd1e0a652fe15cb3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a55ef87212072ac9f6d05d27d54f7a3f13add617b1a07a2fd1e0a652fe15cb3a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:34:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s84mk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea7d40aa704ad5ddaaa471ab8606dcece38b229fd8affcce9bb1959e5c19f092\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f
567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ea7d40aa704ad5ddaaa471ab8606dcece38b229fd8affcce9bb1959e5c19f092\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:34:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s84mk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1829c2eed51af896c5ccfa25c1f23ea971ea9d54e9db53415095c5ea43ac4062\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1829c2eed51af896c5ccfa25c1f23ea971ea9d54e9db53415095c5ea43ac4062\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:34:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s84mk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b576c2dcc0ad83742dbc75f744d08eb8b881f9be4f9d16f4d4d3004ce174f24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b576c2dcc0ad83742dbc75f744d08eb8b881f9be4f9d16f4d4d3004ce174f24\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:34:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s84mk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11
\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:34:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7vsph\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:54Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:54 crc kubenswrapper[4721]: I0128 18:34:54.768099 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef61ab09457440960cf27586f65751f928ccc999db19fd16364262ce2449cd97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:54Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:54 crc kubenswrapper[4721]: I0128 18:34:54.792506 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:34:54 crc kubenswrapper[4721]: I0128 18:34:54.792564 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:34:54 crc kubenswrapper[4721]: I0128 18:34:54.792576 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:34:54 crc kubenswrapper[4721]: I0128 18:34:54.792593 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:34:54 crc kubenswrapper[4721]: I0128 18:34:54.792604 4721 setters.go:603] "Node became 
not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:34:54Z","lastTransitionTime":"2026-01-28T18:34:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:34:54 crc kubenswrapper[4721]: I0128 18:34:54.883429 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-wr282_70686e42-b434-4ff9-9753-cfc870beef82/ovnkube-controller/1.log" Jan 28 18:34:54 crc kubenswrapper[4721]: I0128 18:34:54.885524 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wr282" event={"ID":"70686e42-b434-4ff9-9753-cfc870beef82","Type":"ContainerStarted","Data":"932e160b9acb81fd545498d2b471f3ae2cec8716bfa875350287f72b78516dd6"} Jan 28 18:34:54 crc kubenswrapper[4721]: I0128 18:34:54.885911 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-wr282" Jan 28 18:34:54 crc kubenswrapper[4721]: I0128 18:34:54.895110 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:34:54 crc kubenswrapper[4721]: I0128 18:34:54.895184 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:34:54 crc kubenswrapper[4721]: I0128 18:34:54.895199 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:34:54 crc kubenswrapper[4721]: I0128 18:34:54.895217 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:34:54 crc kubenswrapper[4721]: I0128 18:34:54.895228 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:34:54Z","lastTransitionTime":"2026-01-28T18:34:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:34:54 crc kubenswrapper[4721]: I0128 18:34:54.899219 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef61ab09457440960cf27586f65751f928ccc999db19fd16364262ce2449cd97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:54Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:54 crc kubenswrapper[4721]: I0128 18:34:54.909906 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-lf92l" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"20d04cbd-fcf1-4d48-9cca-1dd29b13c938\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://709d01fbeba51f6f21b60d9fd60aa9b44ba9e4f2504c19b7fd8d279f8c13c1e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mvqr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:34:30Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-lf92l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:54Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:54 crc kubenswrapper[4721]: I0128 18:34:54.925792 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7vsph" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"942c0bcf-8f75-42e8-a5c0-af4c640eb13c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ce38a3c3aa72d0816ff32ec77af100dc6f2a8761362992f01b88f836c44f65f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s84mk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5222e9a5ac55f20acbe1105e84f22c96889f410246b7ef423236f5d3e216f43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d5222e9a5ac55f20acbe1105e84f22c96889f410246b7ef423236f5d3e216f43\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s84mk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e702f148f20abb9e5e9453eb1b9ae0cc094138beb1fa048753ff3b497517e942\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e702f148f20abb9e5e9453eb1b9ae0cc094138beb1fa048753ff3b497517e942\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s84mk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a55ef87212072ac9f6d05d27d54f7a3f13add617b1a07a2fd1e0a652fe15cb3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a55ef87212072ac9f6d05d27d54f7a3f13add617b1a07a2fd1e0a652fe15cb3a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:34:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s84mk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea7d40aa704ad5ddaaa471ab8606dcece38b229fd8affcce9bb1959e5c19f092\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ea7d40aa704ad5ddaaa471ab8606dcece38b229fd8affcce9bb1959e5c19f092\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:34:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s84mk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1829c2eed51af896c5ccfa25c1f23ea971ea9d54e9db53415095c5ea43ac4062\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1829c2eed51af896c5ccfa25c1f23ea971ea9d54e9db53415095c5ea43ac4062\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:34:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s84mk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b576c2dcc0ad83742dbc75f744d08eb8b881f9be4f9d16f4d4d3004ce174f24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b576c2dcc0ad83742dbc75f744d08eb8b881f9be4f9d16f4d4d3004ce174f24\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:34:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s84mk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:34:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7vsph\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:54Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:54 crc kubenswrapper[4721]: I0128 18:34:54.935815 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e670fafb-703c-4cc9-b670-d25ae62d87a0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:33:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f97092be25e90c4a15af043397b1cbcefdb3a3511a80a046496bef807abc8f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8351f80de5ab5d11c5a87270e69a8ebd20b3a804671e20b991f1fc77ba27bae8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8465df12048ab0feaba16e1935fa17feb4fe967ab3e4ef37981bed51ff77911\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47389981052e7ab8530932e01c95d9b178f2c55b21e03b460d3f02dddbcf4830\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://47389981052e7ab8530932e01c95d9b178f2c55b21e03b460d3f02dddbcf4830\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:33:58Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:33:56Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:54Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:54 crc kubenswrapper[4721]: I0128 18:34:54.947425 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:24Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:54Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:54 crc kubenswrapper[4721]: I0128 18:34:54.959419 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:24Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:54Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:54 crc kubenswrapper[4721]: I0128 18:34:54.972297 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa3af2d00ecbcf4d30c4c81191306dd45625f55032058b314ce2b91f6b2033e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:54Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:54 crc kubenswrapper[4721]: I0128 18:34:54.985680 4721 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-76rx2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e3427a4-9a03-4a08-bf7f-7a5e96290ad6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9843dea4333a77dd5b0005984b9f8e7c7c993b28f89f5d6432477bfde3383339\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4cj5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bbd26672afb8ed608228f3f2101f430b1eaa5ccea0e8074fad597dec347c4522\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4cj5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:34:30Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-76rx2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:54Z is after 2025-08-24T17:21:41Z" Jan 28 
18:34:54 crc kubenswrapper[4721]: I0128 18:34:54.996389 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-rk2l2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bdd30376-1599-4efc-bb55-7585e8702b60\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa0ff1b9df47dfd65597208cdd31f5cb8ce7f4c9c170d298db28d6f04da7b7a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wknj5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:34:32Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-rk2l2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:54Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:54 crc kubenswrapper[4721]: I0128 18:34:54.997730 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:34:54 crc kubenswrapper[4721]: I0128 18:34:54.997766 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:34:54 crc kubenswrapper[4721]: I0128 18:34:54.997778 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:34:54 crc kubenswrapper[4721]: I0128 18:34:54.997795 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:34:54 crc kubenswrapper[4721]: I0128 18:34:54.997807 4721 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:34:54Z","lastTransitionTime":"2026-01-28T18:34:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:34:55 crc kubenswrapper[4721]: I0128 18:34:55.007260 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-x8hw8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8ac7e75e-c5bb-4b57-b2ba-9ebe8b8fbd88\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cba8dd293cf5ae7b0c987c6ee3b24da02d2687ee54292da92e28ca627ed3eaca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqp9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a3c1211b73ca96ac22854f0cb677a0088a679ad56b104ea6b8e0871884a3a71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqp9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"i
p\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:34:43Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-x8hw8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:55Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:55 crc kubenswrapper[4721]: I0128 18:34:55.021559 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-jqvck" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f3440038-c980-4fb4-be99-235515ec221c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96np9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96np9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:34:44Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-jqvck\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:55Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:55 crc kubenswrapper[4721]: I0128 18:34:55.034989 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rgqdt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c0a22020-3f34-4895-beec-2ed5d829ea79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d26691be7d95ffd613cc84f59222c67d29ab459d1c42ca07d20fe928d7fbc4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/ser
viceaccount\\\",\\\"name\\\":\\\"kube-api-access-l86pm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:34:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rgqdt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:55Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:55 crc kubenswrapper[4721]: I0128 18:34:55.054624 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"50459cb7-bb46-4f2f-a119-03a6102ad146\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:33:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97a290731225e8b998c044c9547db36b5af73889929db5fbc37b6b4170f5dbb7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9efdfabfba8f11d2bdeafad77520260137fcfff3ca51ae77afd4ee9b141f602\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\"
:\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3c1b4d37e11f8a4ee9539e5ea3cce77b7d382133b9e59ccd9c3e8aa2b308754\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52a1199a0611555765afe8fcf017742788488064c958d0c75c46d51ddc2ff9d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://316506ed1e9f91314d427e1b6c3a44ee58d0c4a3a8a93b97bdf098703a6a2f34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f1a6a669a0341baa2d7fb17bcebfe702f96d1192ca35c3f2d00e463778271d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f1a6a669a0341baa2d7fb17bcebfe702f96d1192ca35c3f2d00e463778271d6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:33:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd
\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d3a248e15466c89ce3b237a22ce54365004ea86dc1541e1ac44e028f91bf19f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d3a248e15466c89ce3b237a22ce54365004ea86dc1541e1ac44e028f91bf19f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:34:02Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f2e1f3a63445bad45029602f52863bfdf2fab495711e3318d68fee042530acb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f2e1f3a63445bad45029602f52863bfdf2fab495711e3318d68fee042530acb1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:34:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:33:56Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:55Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:55 crc kubenswrapper[4721]: I0128 18:34:55.069454 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"94f18835-9a2d-4427-bc71-e4cd48b94c19\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:33:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:33:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:33:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a345c3bb49064600db61f16588229b6877293710e6c21c15d91746c2f1d50b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://93950542dfa7bda5686ef6d8ff927d14baafd149893c732658e9aa3916be64ae\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://486a6284a1e3930aac13368062b8796b4c271214dd08cb60c7ae2ce7b4a45a05\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0339ccd2bd809dc715d0e334b596c5abdebca1af6ac5f7afefa81aa8baa470ed\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ca3243d75b5ce4aff178d82c45c7d1f765013233b5736600151adb4801aa462b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T18:34:21Z\\\",\\\"message\\\":\\\"le observer\\\\nW0128 18:34:20.724978 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0128 18:34:20.725313 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 18:34:20.726636 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2540251349/tls.crt::/tmp/serving-cert-2540251349/tls.key\\\\\\\"\\\\nI0128 18:34:21.177830 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0128 18:34:21.196258 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0128 18:34:21.196291 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0128 18:34:21.196317 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0128 18:34:21.196323 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0128 18:34:21.230729 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0128 18:34:21.230760 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0128 18:34:21.230771 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 18:34:21.230781 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 18:34:21.230789 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0128 18:34:21.230795 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0128 18:34:21.230800 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0128 18:34:21.230804 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0128 18:34:21.233981 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T18:34:20Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://90f1279d8262e042527207dbe56a1d08ba554650510bebfc53bd24cd84532026\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:06Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1499c7b625ec721799e4cf54e3de39ab75a752bbb0450a8eeffcdb6259f8a585\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1499c7b625ec721799e4cf54e3de39ab75a752bbb0450a8eeffcdb6259f8a585\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:33:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:33:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:55Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:55 crc kubenswrapper[4721]: I0128 18:34:55.083735 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"86fc797f-6769-4801-8cb0-b9f25c9ec29b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:33:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:33:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e8c8cd370cc146f15325497afef171437a3c7c3d4e82cda99a321bf847399bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f712f75bf0603b358c696f1a62f2363254ab926615ab377291c7083308e3f2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:33:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6143e461a697375ca79df96494f8cf4a575e622428db241f50a52ae1c67acbc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e37a3261383445df2e4050fb0b3f92c0afc6379c71eb061475783a72cd37459f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:33:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:55Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:55 crc kubenswrapper[4721]: I0128 18:34:55.099643 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:34:55 crc kubenswrapper[4721]: I0128 18:34:55.099681 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:34:55 crc kubenswrapper[4721]: I0128 18:34:55.099691 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:34:55 crc kubenswrapper[4721]: I0128 18:34:55.099708 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:34:55 crc kubenswrapper[4721]: I0128 18:34:55.099717 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:34:55Z","lastTransitionTime":"2026-01-28T18:34:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:34:55 crc kubenswrapper[4721]: I0128 18:34:55.100658 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:55Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:55 crc kubenswrapper[4721]: I0128 18:34:55.113319 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56147b9675ae31e5e8b4473346c69d90a1013ab084214a7fc34295f810b229a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5dc3277d9793a59f2e2a2b4d015782f383df20b594be435f894d24dbe5b837cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:55Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:55 crc kubenswrapper[4721]: I0128 18:34:55.132637 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wr282" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"70686e42-b434-4ff9-9753-cfc870beef82\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ac8a8ea8e677c233fbcb1ac69446f0251b2e390e84a39cd77c7ccf1a2041908\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7lkbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://989948e16979a2cd8568ed93334056da9feb5bd7226f4124428dcffc09f13d41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7lkbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2cc9ef8160c40e72ae736136ef051b06552caea2539fad7969c5f923eba0c2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7lkbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7df50ebf5999c8b004f27f801fc166377523c7b42171fec957ab82c88b529794\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7lkbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11610a0739792fa0417c894ad5b1c0d46d7188b585f8f4c1a1e7e66cd6d8bd46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7lkbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://44279a74a34d26b6eee05ac26d51a21492dddab4288ca5e09665c191cbacd90c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7lkbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://932e160b9acb81fd545498d2b471f3ae2cec8716bfa875350287f72b78516dd6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e167117f8dd91a71f9983b9f3516e8162cf03f390ef2a1a8478fd5dd6df2dba\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T18:34:42Z\\\",\\\"message\\\":\\\"4:42.840611 6198 obj_retry.go:386] Retry successful for *v1.Pod openshift-machine-config-operator/machine-config-daemon-76rx2 after 0 failed attempt(s)\\\\nI0128 18:34:42.840616 6198 default_network_controller.go:776] Recording success event on pod openshift-machine-config-operator/machine-config-daemon-76rx2\\\\nI0128 18:34:42.840504 6198 obj_retry.go:386] Retry successful for *v1.Pod openshift-etcd/etcd-crc after 0 failed attempt(s)\\\\nI0128 18:34:42.840624 6198 default_network_controller.go:776] Recording success event on pod openshift-etcd/etcd-crc\\\\nI0128 18:34:42.840480 6198 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-diagnostics/network-check-target-xd92c\\\\nI0128 18:34:42.840632 6198 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-diagnostics/network-check-target-xd92c\\\\nI0128 18:34:42.840636 6198 ovn.go:134] Ensuring zone local for Pod openshift-network-diagnostics/network-check-target-xd92c in node crc\\\\nF0128 18:34:42.840640 6198 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped 
a\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T18:34:41Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7lkbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://373ab38644697d3dbfe782ed66982ac1d5510d0a3bc39072c8113f71cc9e97f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7lkbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"cont
ainerID\\\":\\\"cri-o://733c81890010402d252a1945284a85a3278b603ede433530865f680a1be02477\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://733c81890010402d252a1945284a85a3278b603ede433530865f680a1be02477\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7lkbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:34:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-wr282\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:55Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:55 crc kubenswrapper[4721]: I0128 18:34:55.201680 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:34:55 crc kubenswrapper[4721]: I0128 18:34:55.201711 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:34:55 crc kubenswrapper[4721]: I0128 18:34:55.201719 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:34:55 crc kubenswrapper[4721]: I0128 18:34:55.201733 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:34:55 crc kubenswrapper[4721]: I0128 18:34:55.201743 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:34:55Z","lastTransitionTime":"2026-01-28T18:34:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:34:55 crc kubenswrapper[4721]: I0128 18:34:55.304619 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:34:55 crc kubenswrapper[4721]: I0128 18:34:55.304655 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:34:55 crc kubenswrapper[4721]: I0128 18:34:55.304665 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:34:55 crc kubenswrapper[4721]: I0128 18:34:55.304680 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:34:55 crc kubenswrapper[4721]: I0128 18:34:55.304692 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:34:55Z","lastTransitionTime":"2026-01-28T18:34:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:34:55 crc kubenswrapper[4721]: I0128 18:34:55.407027 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:34:55 crc kubenswrapper[4721]: I0128 18:34:55.407311 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:34:55 crc kubenswrapper[4721]: I0128 18:34:55.407418 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:34:55 crc kubenswrapper[4721]: I0128 18:34:55.407502 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:34:55 crc kubenswrapper[4721]: I0128 18:34:55.407613 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:34:55Z","lastTransitionTime":"2026-01-28T18:34:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:34:55 crc kubenswrapper[4721]: I0128 18:34:55.500832 4721 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-25 14:22:47.674308049 +0000 UTC Jan 28 18:34:55 crc kubenswrapper[4721]: I0128 18:34:55.509233 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:34:55 crc kubenswrapper[4721]: I0128 18:34:55.509276 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:34:55 crc kubenswrapper[4721]: I0128 18:34:55.509289 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:34:55 crc kubenswrapper[4721]: I0128 18:34:55.509307 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:34:55 crc kubenswrapper[4721]: I0128 18:34:55.509318 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:34:55Z","lastTransitionTime":"2026-01-28T18:34:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:34:55 crc kubenswrapper[4721]: I0128 18:34:55.543689 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef61ab09457440960cf27586f65751f928ccc999db19fd16364262ce2449cd97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:55Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:55 crc kubenswrapper[4721]: I0128 18:34:55.554081 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-lf92l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"20d04cbd-fcf1-4d48-9cca-1dd29b13c938\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://709d01fbeba51f6f21b60d9fd60aa9b44ba9e4f2504c19b7fd8d279f8c13c1e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mvqr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:34:30Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-lf92l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:55Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:55 crc kubenswrapper[4721]: I0128 18:34:55.568844 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7vsph" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"942c0bcf-8f75-42e8-a5c0-af4c640eb13c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ce38a3c3aa72d0816ff32ec77af100dc6f2a8761362992f01b88f836c44f65f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s84mk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5222e9a5ac55f20acbe1105e84f22c96889f410246b7ef423236f5d3e216f43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d5222e9a5ac55f20acbe1105e84f22c96889f410246b7ef423236f5d3e216f43\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s84mk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e702f148f20abb9e5e9453eb1b9ae0cc094138beb1fa048753ff3b497517e942\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e702f148f20abb9e5e9453eb1b9ae0cc094138beb1fa048753ff3b497517e942\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s84mk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a55ef87212072ac9f6d05d27d54f7a3f13add617b1a07a2fd1e0a652fe15cb3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a55ef87212072ac9f6d05d27d54f7a3f13add617b1a07a2fd1e0a652fe15cb3a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:34:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s84mk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea7d40aa704ad5ddaaa471ab8606dcece38b229fd8affcce9bb1959e5c19f092\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ea7d40aa704ad5ddaaa471ab8606dcece38b229fd8affcce9bb1959e5c19f092\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:34:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s84mk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1829c2eed51af896c5ccfa25c1f23ea971ea9d54e9db53415095c5ea43ac4062\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1829c2eed51af896c5ccfa25c1f23ea971ea9d54e9db53415095c5ea43ac4062\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:34:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s84mk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b576c2dcc0ad83742dbc75f744d08eb8b881f9be4f9d16f4d4d3004ce174f24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b576c2dcc0ad83742dbc75f744d08eb8b881f9be4f9d16f4d4d3004ce174f24\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:34:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s84mk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:34:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7vsph\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:55Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:55 crc kubenswrapper[4721]: I0128 18:34:55.580674 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e670fafb-703c-4cc9-b670-d25ae62d87a0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:33:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f97092be25e90c4a15af043397b1cbcefdb3a3511a80a046496bef807abc8f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8351f80de5ab5d11c5a87270e69a8ebd20b3a804671e20b991f1fc77ba27bae8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8465df12048ab0feaba16e1935fa17feb4fe967ab3e4ef37981bed51ff77911\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47389981052e7ab8530932e01c95d9b178f2c55b21e03b460d3f02dddbcf4830\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://47389981052e7ab8530932e01c95d9b178f2c55b21e03b460d3f02dddbcf4830\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:33:58Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:33:56Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:55Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:55 crc kubenswrapper[4721]: I0128 18:34:55.594113 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:24Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:55Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:55 crc kubenswrapper[4721]: I0128 18:34:55.606463 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:24Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:55Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:55 crc kubenswrapper[4721]: I0128 18:34:55.611354 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:34:55 crc kubenswrapper[4721]: I0128 18:34:55.611392 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:34:55 crc kubenswrapper[4721]: I0128 18:34:55.611410 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:34:55 crc kubenswrapper[4721]: I0128 18:34:55.611427 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:34:55 crc kubenswrapper[4721]: I0128 18:34:55.611439 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:34:55Z","lastTransitionTime":"2026-01-28T18:34:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:34:55 crc kubenswrapper[4721]: I0128 18:34:55.619264 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa3af2d00ecbcf4d30c4c81191306dd45625f55032058b314ce2b91f6b2033e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:55Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:55 crc kubenswrapper[4721]: I0128 18:34:55.630637 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-76rx2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e3427a4-9a03-4a08-bf7f-7a5e96290ad6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9843dea4333a77dd5b0005984b9f8e7c7c993b28f89f5d6432477bfde3383339\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4cj5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bbd26672afb8ed608228f3f2101f430b1eaa5ccea0e8074fad597dec347c4522\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4cj5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:34:30Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-76rx2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:55Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:55 crc kubenswrapper[4721]: I0128 18:34:55.640786 4721 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-image-registry/node-ca-rk2l2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bdd30376-1599-4efc-bb55-7585e8702b60\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa0ff1b9df47dfd65597208cdd31f5cb8ce7f4c9c170d298db28d6f04da7b7a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wknj5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:34:32Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-rk2l2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:55Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:55 crc kubenswrapper[4721]: I0128 18:34:55.654319 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-x8hw8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8ac7e75e-c5bb-4b57-b2ba-9ebe8b8fbd88\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cba8dd293cf5ae7b0c987c6ee3b24da02d2687ee54292da92e28ca627ed3eaca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqp9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a3c1211b73ca96ac22854f0cb677a0088a679ad56b104ea6b8e0871884a3a71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqp9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:34:43Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-x8hw8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:55Z is after 2025-08-24T17:21:41Z" Jan 28 
18:34:55 crc kubenswrapper[4721]: I0128 18:34:55.666546 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-jqvck" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f3440038-c980-4fb4-be99-235515ec221c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96np9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96np9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:34:44Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-jqvck\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:55Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:55 crc kubenswrapper[4721]: I0128 18:34:55.688858 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"50459cb7-bb46-4f2f-a119-03a6102ad146\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:33:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97a290731225e8b998c044c9547db36b5af73889929db5fbc37b6b4170f5dbb7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9efdfabfba8f11d2bdeafad77520260137fcfff3ca51ae77afd4ee9b141f602\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3c1b4d37e11f8a4ee9539e5ea3cce77b7d382133b9e59ccd9c3e8aa2b308754\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52a1199a0611555765afe8fcf01774278848806
4c958d0c75c46d51ddc2ff9d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://316506ed1e9f91314d427e1b6c3a44ee58d0c4a3a8a93b97bdf098703a6a2f34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f1a6a669a0341baa2d7fb17bcebfe702f96d1192ca35c3f2d00e463778271d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f1a6a669a0341baa2d7fb17bcebfe702f96d1192ca35c3f2d00e463778271d6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:33:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d3a248e15466c89ce3b237a22ce54365004ea86dc1541e1ac44e028f91bf19f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d3a248e15466c89ce3b237a22ce54365004ea86dc1541e1ac44e028f91bf19f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:34:02Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f2e1f3a63445bad45029602f52863bfdf2fab495711e3318d68fee042530acb1\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f2e1f3a63445bad45029602f52863bfdf2fab495711e3318d68fee042530acb1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:34:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:33:56Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:55Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:55 crc kubenswrapper[4721]: I0128 18:34:55.702778 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"94f18835-9a2d-4427-bc71-e4cd48b94c19\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:33:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:33:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:33:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a345c3bb49064600db61f16588229b6877293710e6c21c15d91746c2f1d50b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://93950542dfa7bda5686ef6d8ff927d14baafd149893c732658e9aa3916be64ae\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://486a6284a1e3930aac13368062b8796b4c271214dd08cb60c7ae2ce7b4a45a05\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0339ccd2bd809dc715d0e334b596c5abdebca1af6ac5f7afefa81aa8baa470ed\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ca3243d75b5ce4aff178d82c45c7d1f765013233b5736600151adb4801aa462b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T18:34:21Z\\\",\\\"message\\\":\\\"le observer\\\\nW0128 18:34:20.724978 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0128 18:34:20.725313 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 18:34:20.726636 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2540251349/tls.crt::/tmp/serving-cert-2540251349/tls.key\\\\\\\"\\\\nI0128 18:34:21.177830 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0128 18:34:21.196258 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0128 18:34:21.196291 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0128 18:34:21.196317 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0128 18:34:21.196323 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0128 18:34:21.230729 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0128 18:34:21.230760 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0128 18:34:21.230771 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 18:34:21.230781 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 18:34:21.230789 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0128 18:34:21.230795 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0128 18:34:21.230800 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0128 18:34:21.230804 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0128 18:34:21.233981 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T18:34:20Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://90f1279d8262e042527207dbe56a1d08ba554650510bebfc53bd24cd84532026\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:06Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1499c7b625ec721799e4cf54e3de39ab75a752bbb0450a8eeffcdb6259f8a585\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1499c7b625ec721799e4cf54e3de39ab75a752bbb0450a8eeffcdb6259f8a585\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:33:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:33:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:55Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:55 crc kubenswrapper[4721]: I0128 18:34:55.713598 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:34:55 crc kubenswrapper[4721]: I0128 18:34:55.713624 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:34:55 crc kubenswrapper[4721]: I0128 18:34:55.713632 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:34:55 crc kubenswrapper[4721]: I0128 18:34:55.713646 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:34:55 crc kubenswrapper[4721]: I0128 18:34:55.713655 4721 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:34:55Z","lastTransitionTime":"2026-01-28T18:34:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:34:55 crc kubenswrapper[4721]: I0128 18:34:55.714441 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"86fc797f-6769-4801-8cb0-b9f25c9ec29b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:33:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:33:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e8c8cd370cc146f15325497afef171437a3c7c3d4e82cda99a321bf847399bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f712f75bf0603b358c696f1a62f2363254ab926615ab377291c7083308e3f2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:33:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6143e461a697375ca79df96494f8cf4a575e622428db241f50a52ae1c67acbc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastS
tate\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e37a3261383445df2e4050fb0b3f92c0afc6379c71eb061475783a72cd37459f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:33:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:55Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:55 crc kubenswrapper[4721]: I0128 18:34:55.726348 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:55Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:55 crc kubenswrapper[4721]: I0128 18:34:55.736253 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56147b9675ae31e5e8b4473346c69d90a1013ab084214a7fc34295f810b229a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5dc3277d9793a59f2e2a2b4d015782f383df20b594be435f894d24dbe5b837cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:55Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:55 crc kubenswrapper[4721]: I0128 18:34:55.754281 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wr282" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"70686e42-b434-4ff9-9753-cfc870beef82\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ac8a8ea8e677c233fbcb1ac69446f0251b2e390e84a39cd77c7ccf1a2041908\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7lkbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://989948e16979a2cd8568ed93334056da9feb5bd7226f4124428dcffc09f13d41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7lkbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2cc9ef8160c40e72ae736136ef051b06552caea2539fad7969c5f923eba0c2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7lkbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7df50ebf5999c8b004f27f801fc166377523c7b42171fec957ab82c88b529794\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7lkbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11610a0739792fa0417c894ad5b1c0d46d7188b585f8f4c1a1e7e66cd6d8bd46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7lkbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://44279a74a34d26b6eee05ac26d51a21492dddab4288ca5e09665c191cbacd90c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7lkbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://932e160b9acb81fd545498d2b471f3ae2cec8716
bfa875350287f72b78516dd6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e167117f8dd91a71f9983b9f3516e8162cf03f390ef2a1a8478fd5dd6df2dba\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T18:34:42Z\\\",\\\"message\\\":\\\"4:42.840611 6198 obj_retry.go:386] Retry successful for *v1.Pod openshift-machine-config-operator/machine-config-daemon-76rx2 after 0 failed attempt(s)\\\\nI0128 18:34:42.840616 6198 default_network_controller.go:776] Recording success event on pod openshift-machine-config-operator/machine-config-daemon-76rx2\\\\nI0128 18:34:42.840504 6198 obj_retry.go:386] Retry successful for *v1.Pod openshift-etcd/etcd-crc after 0 failed attempt(s)\\\\nI0128 18:34:42.840624 6198 default_network_controller.go:776] Recording success event on pod openshift-etcd/etcd-crc\\\\nI0128 18:34:42.840480 6198 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-diagnostics/network-check-target-xd92c\\\\nI0128 18:34:42.840632 6198 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-diagnostics/network-check-target-xd92c\\\\nI0128 18:34:42.840636 6198 ovn.go:134] Ensuring zone local for Pod openshift-network-diagnostics/network-check-target-xd92c in node crc\\\\nF0128 18:34:42.840640 6198 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped 
a\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T18:34:41Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7lkbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://373ab38644697d3dbfe782ed66982ac1d5510d0a3bc39072c8113f71cc9e97f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7lkbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"cont
ainerID\\\":\\\"cri-o://733c81890010402d252a1945284a85a3278b603ede433530865f680a1be02477\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://733c81890010402d252a1945284a85a3278b603ede433530865f680a1be02477\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7lkbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:34:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-wr282\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:55Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:55 crc kubenswrapper[4721]: I0128 18:34:55.765656 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rgqdt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c0a22020-3f34-4895-beec-2ed5d829ea79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d26691be7d95ffd613cc84f59222c67d29ab459d1c42ca07d20fe928d7fbc4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l86pm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:34:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rgqdt\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:55Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:55 crc kubenswrapper[4721]: I0128 18:34:55.816122 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:34:55 crc kubenswrapper[4721]: I0128 18:34:55.816214 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:34:55 crc kubenswrapper[4721]: I0128 18:34:55.816227 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:34:55 crc kubenswrapper[4721]: I0128 18:34:55.816244 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:34:55 crc kubenswrapper[4721]: I0128 18:34:55.816255 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:34:55Z","lastTransitionTime":"2026-01-28T18:34:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:34:55 crc kubenswrapper[4721]: I0128 18:34:55.890131 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-wr282_70686e42-b434-4ff9-9753-cfc870beef82/ovnkube-controller/2.log" Jan 28 18:34:55 crc kubenswrapper[4721]: I0128 18:34:55.890763 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-wr282_70686e42-b434-4ff9-9753-cfc870beef82/ovnkube-controller/1.log" Jan 28 18:34:55 crc kubenswrapper[4721]: I0128 18:34:55.893537 4721 generic.go:334] "Generic (PLEG): container finished" podID="70686e42-b434-4ff9-9753-cfc870beef82" containerID="932e160b9acb81fd545498d2b471f3ae2cec8716bfa875350287f72b78516dd6" exitCode=1 Jan 28 18:34:55 crc kubenswrapper[4721]: I0128 18:34:55.893580 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wr282" event={"ID":"70686e42-b434-4ff9-9753-cfc870beef82","Type":"ContainerDied","Data":"932e160b9acb81fd545498d2b471f3ae2cec8716bfa875350287f72b78516dd6"} Jan 28 18:34:55 crc kubenswrapper[4721]: I0128 18:34:55.893622 4721 scope.go:117] "RemoveContainer" containerID="9e167117f8dd91a71f9983b9f3516e8162cf03f390ef2a1a8478fd5dd6df2dba" Jan 28 18:34:55 crc kubenswrapper[4721]: I0128 18:34:55.894281 4721 scope.go:117] "RemoveContainer" containerID="932e160b9acb81fd545498d2b471f3ae2cec8716bfa875350287f72b78516dd6" Jan 28 18:34:55 crc kubenswrapper[4721]: E0128 18:34:55.894426 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-wr282_openshift-ovn-kubernetes(70686e42-b434-4ff9-9753-cfc870beef82)\"" pod="openshift-ovn-kubernetes/ovnkube-node-wr282" podUID="70686e42-b434-4ff9-9753-cfc870beef82" Jan 28 18:34:55 crc kubenswrapper[4721]: I0128 18:34:55.906887 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-rk2l2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bdd30376-1599-4efc-bb55-7585e8702b60\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa0ff1b9df47dfd65597208cdd31f5cb8ce7f4c9c170d298db28d6f04da7b7a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wknj5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:34:32Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-rk2l2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:55Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:55 crc kubenswrapper[4721]: I0128 18:34:55.918110 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:34:55 crc kubenswrapper[4721]: I0128 18:34:55.918189 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:34:55 crc kubenswrapper[4721]: I0128 18:34:55.918202 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:34:55 crc kubenswrapper[4721]: I0128 18:34:55.918219 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:34:55 crc kubenswrapper[4721]: I0128 18:34:55.918230 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:34:55Z","lastTransitionTime":"2026-01-28T18:34:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:34:55 crc kubenswrapper[4721]: I0128 18:34:55.919004 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-x8hw8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8ac7e75e-c5bb-4b57-b2ba-9ebe8b8fbd88\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cba8dd293cf5ae7b0c987c6ee3b24da02d2687ee54292da92e28ca627ed3eaca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqp9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a3c1211b73ca96ac22854f0cb677a0088a679ad56b104ea6b8e0871884a3a71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqp9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:34:43Z\\\"}}\" 
for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-x8hw8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:55Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:55 crc kubenswrapper[4721]: I0128 18:34:55.929142 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-jqvck" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f3440038-c980-4fb4-be99-235515ec221c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96np9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96np9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:34:44Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-jqvck\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2026-01-28T18:34:55Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:55 crc kubenswrapper[4721]: I0128 18:34:55.941457 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rgqdt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c0a22020-3f34-4895-beec-2ed5d829ea79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d26691be7d95ffd613cc84f59222c67d29ab459d1c42ca07d20fe928d7fbc4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l86pm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168
.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:34:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rgqdt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:55Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:55 crc kubenswrapper[4721]: I0128 18:34:55.960101 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"50459cb7-bb46-4f2f-a119-03a6102ad146\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:33:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97a290731225e8b998c044c9547db36b5af73889929db5fbc37b6b4170f5dbb7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9efdfabfba8f11d2bdeafad77520260137fcfff3ca51ae77afd4ee9b141f602\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3c1b4d37e11f8a4ee9539e5ea3cce77b7d382133b9e59ccd9c3e8aa2b308754\\\",\\\"image\
\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52a1199a0611555765afe8fcf017742788488064c958d0c75c46d51ddc2ff9d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://316506ed1e9f91314d427e1b6c3a44ee58d0c4a3a8a93b97bdf098703a6a2f34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f1a6a669a0341baa2d7fb17bcebfe702f96d1192ca35c3f2d00e463778271d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f1a6a669a0341baa2d7fb17bcebfe702f96d1192ca35c3f2d00e463778271d6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:33:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d3a248e15466c89ce3b237a22ce54365004ea86dc1541e1ac44e028f91bf19f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d3a248e15466c89ce3b237a22ce54365004ea86dc1541e1ac44e028f91bf19f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:34:02Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f2e1f3a63445bad45029602f52863bfdf2fab495711e3318d68fee042530acb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f2e1f3a63445bad45029602f52863bfdf2fab495711e3318d68fee042530acb1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:34:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:33:56Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:55Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:55 crc kubenswrapper[4721]: I0128 18:34:55.977075 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"94f18835-9a2d-4427-bc71-e4cd48b94c19\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:33:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:33:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:33:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a345c3bb49064600db61f16588229b6877293710e6c21c15d91746c2f1d50b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://93950542dfa7bda5686ef6d8ff927d14baafd149893c732658e9aa3916be64ae\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://486a6284a1e3930aac13368062b8796b4c271214dd08cb60c7ae2ce7b4a45a05\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0339ccd2bd809dc715d0e334b596c5abdebca1af6ac5f7afefa81aa8baa470ed\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ca3243d75b5ce4aff178d82c45c7d1f765013233b5736600151adb4801aa462b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T18:34:21Z\\\",\\\"message\\\":\\\"le observer\\\\nW0128 18:34:20.724978 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0128 18:34:20.725313 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 18:34:20.726636 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2540251349/tls.crt::/tmp/serving-cert-2540251349/tls.key\\\\\\\"\\\\nI0128 18:34:21.177830 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0128 18:34:21.196258 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0128 18:34:21.196291 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0128 18:34:21.196317 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0128 18:34:21.196323 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0128 18:34:21.230729 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0128 18:34:21.230760 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0128 18:34:21.230771 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 18:34:21.230781 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 18:34:21.230789 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0128 18:34:21.230795 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0128 18:34:21.230800 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0128 18:34:21.230804 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0128 18:34:21.233981 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T18:34:20Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://90f1279d8262e042527207dbe56a1d08ba554650510bebfc53bd24cd84532026\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:06Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1499c7b625ec721799e4cf54e3de39ab75a752bbb0450a8eeffcdb6259f8a585\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1499c7b625ec721799e4cf54e3de39ab75a752bbb0450a8eeffcdb6259f8a585\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:33:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:33:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:55Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:55 crc kubenswrapper[4721]: I0128 18:34:55.989608 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"86fc797f-6769-4801-8cb0-b9f25c9ec29b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:33:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:33:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e8c8cd370cc146f15325497afef171437a3c7c3d4e82cda99a321bf847399bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f712f75bf0603b358c696f1a62f2363254ab926615ab377291c7083308e3f2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:33:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6143e461a697375ca79df96494f8cf4a575e622428db241f50a52ae1c67acbc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e37a3261383445df2e4050fb0b3f92c0afc6379c71eb061475783a72cd37459f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:33:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:55Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:55 crc kubenswrapper[4721]: I0128 18:34:55.999233 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:55Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:56 crc kubenswrapper[4721]: I0128 18:34:56.009643 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56147b9675ae31e5e8b4473346c69d90a1013ab084214a7fc34295f810b229a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5dc3277d9793a59f2e2a2b4d015782f383df20b594be435f894d24dbe5b837cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:56Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:56 crc kubenswrapper[4721]: I0128 18:34:56.020330 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:34:56 crc kubenswrapper[4721]: I0128 18:34:56.020364 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:34:56 crc kubenswrapper[4721]: I0128 18:34:56.020374 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:34:56 crc kubenswrapper[4721]: I0128 18:34:56.020390 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:34:56 crc kubenswrapper[4721]: I0128 18:34:56.020400 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:34:56Z","lastTransitionTime":"2026-01-28T18:34:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:34:56 crc kubenswrapper[4721]: I0128 18:34:56.026246 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wr282" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"70686e42-b434-4ff9-9753-cfc870beef82\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ac8a8ea8e677c233fbcb1ac69446f0251b2e390e84a39cd77c7ccf1a2041908\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7lkbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://989948e16979a2cd8568ed93334056da9feb5bd7226f4124428dcffc09f13d41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7lkbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://b2cc9ef8160c40e72ae736136ef051b06552caea2539fad7969c5f923eba0c2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7lkbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7df50ebf5999c8b004f27f801fc166377523c7b42171fec957ab82c88b529794\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7lkbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11610a0739792fa0417c894ad5b1c0d46d7188b585f8f4c1a1e7e66cd6d8bd46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7lkbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://44279a74a34d26b6eee05ac26d51a21492dddab4288ca5e09665c191cbacd90c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7lkbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://932e160b9acb81fd545498d2b471f3ae2cec8716bfa875350287f72b78516dd6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e167117f8dd91a71f9983b9f3516e8162cf03f390ef2a1a8478fd5dd6df2dba\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T18:34:42Z\\\",\\\"message\\\":\\\"4:42.840611 6198 obj_retry.go:386] Retry successful for *v1.Pod openshift-machine-config-operator/machine-config-daemon-76rx2 after 0 failed attempt(s)\\\\nI0128 18:34:42.840616 6198 default_network_controller.go:776] Recording success event on pod openshift-machine-config-operator/machine-config-daemon-76rx2\\\\nI0128 18:34:42.840504 6198 obj_retry.go:386] Retry successful for *v1.Pod openshift-etcd/etcd-crc after 0 failed attempt(s)\\\\nI0128 18:34:42.840624 6198 default_network_controller.go:776] Recording success event on pod openshift-etcd/etcd-crc\\\\nI0128 18:34:42.840480 6198 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-diagnostics/network-check-target-xd92c\\\\nI0128 18:34:42.840632 6198 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-diagnostics/network-check-target-xd92c\\\\nI0128 18:34:42.840636 6198 ovn.go:134] Ensuring zone local for Pod openshift-network-diagnostics/network-check-target-xd92c in node crc\\\\nF0128 18:34:42.840640 6198 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped 
a\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T18:34:41Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://932e160b9acb81fd545498d2b471f3ae2cec8716bfa875350287f72b78516dd6\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T18:34:55Z\\\",\\\"message\\\":\\\"n.org/owner:openshift-dns/dns-default]} name:Service_openshift-dns/dns-default_TCP_node_router+switch_crc options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.10:53: 10.217.4.10:9154:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {be9dcc9e-c16a-4962-a6d2-4adeb0b929c4}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-dns/dns-default]} name:Service_openshift-dns/dns-default_UDP_node_router+switch_crc options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[udp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.10:53:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {4c1be812-05d3-4f45-91b5-a853a5c8de71}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0128 18:34:55.473455 6405 obj_retry.go:386] Retry successful for *v1.Pod openshift-multus/multus-additional-cni-plugins-7vsph after 0 failed attempt(s)\\\\nI0128 18:34:55.474632 6405 default_network_controller.go:776] Recording success event on pod 
openshift-multus/m\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T18:34:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7lkbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://373ab38644697d3dbfe782ed66982ac1d5510d0a3bc39072c8113f71cc9e97f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7lkbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://733c81890010402d252a1945284a85a3278b603ede433530865f680a1be02477\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174
f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://733c81890010402d252a1945284a85a3278b603ede433530865f680a1be02477\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7lkbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:34:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-wr282\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:56Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:56 crc kubenswrapper[4721]: I0128 18:34:56.038532 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef61ab09457440960cf27586f65751f928ccc999db19fd16364262ce2449cd97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": 
failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:56Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:56 crc kubenswrapper[4721]: I0128 18:34:56.047719 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-lf92l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"20d04cbd-fcf1-4d48-9cca-1dd29b13c938\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://709d01fbeba51f6f21b60d9fd60aa9b44ba9e4f2504c19b7fd8d279f8c13c1e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mvqr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:34:30Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-lf92l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:56Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:56 crc kubenswrapper[4721]: I0128 18:34:56.060569 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7vsph" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"942c0bcf-8f75-42e8-a5c0-af4c640eb13c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ce38a3c3aa72d0816ff32ec77af100dc6f2a8761362992f01b88f836c44f65f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s84mk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5222e9a5ac55f20acbe1105e84f22c96889f410246b7ef423236f5d3e216f43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d5222e9a5ac55f20acbe1105e84f22c96889f410246b7ef423236f5d3e216f43\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s84mk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e702f148f20abb9e5e9453eb1b9ae0cc094138beb1fa048753ff3b497517e942\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e702f148f20abb9e5e9453eb1b9ae0cc094138beb1fa048753ff3b497517e942\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s84mk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a55ef87212072ac9f6d05d27d54f7a3f13add617b1a07a2fd1e0a652fe15cb3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a55ef87212072ac9f6d05d27d54f7a3f13add617b1a07a2fd1e0a652fe15cb3a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:34:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s84mk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea7d40aa704ad5ddaaa471ab8606dcece38b229fd8affcce9bb1959e5c19f092\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ea7d40aa704ad5ddaaa471ab8606dcece38b229fd8affcce9bb1959e5c19f092\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:34:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s84mk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1829c2eed51af896c5ccfa25c1f23ea971ea9d54e9db53415095c5ea43ac4062\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1829c2eed51af896c5ccfa25c1f23ea971ea9d54e9db53415095c5ea43ac4062\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:34:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s84mk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b576c2dcc0ad83742dbc75f744d08eb8b881f9be4f9d16f4d4d3004ce174f24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b576c2dcc0ad83742dbc75f744d08eb8b881f9be4f9d16f4d4d3004ce174f24\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:34:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s84mk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:34:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7vsph\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:56Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:56 crc kubenswrapper[4721]: I0128 18:34:56.071280 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e670fafb-703c-4cc9-b670-d25ae62d87a0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:33:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f97092be25e90c4a15af043397b1cbcefdb3a3511a80a046496bef807abc8f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8351f80de5ab5d11c5a87270e69a8ebd20b3a804671e20b991f1fc77ba27bae8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8465df12048ab0feaba16e1935fa17feb4fe967ab3e4ef37981bed51ff77911\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47389981052e7ab8530932e01c95d9b178f2c55b21e03b460d3f02dddbcf4830\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://47389981052e7ab8530932e01c95d9b178f2c55b21e03b460d3f02dddbcf4830\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:33:58Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:33:56Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:56Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:56 crc kubenswrapper[4721]: I0128 18:34:56.082552 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:24Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:56Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:56 crc kubenswrapper[4721]: I0128 18:34:56.094941 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:24Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:56Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:56 crc kubenswrapper[4721]: I0128 18:34:56.106877 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa3af2d00ecbcf4d30c4c81191306dd45625f55032058b314ce2b91f6b2033e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:56Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:56 crc kubenswrapper[4721]: I0128 18:34:56.119445 4721 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-76rx2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e3427a4-9a03-4a08-bf7f-7a5e96290ad6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9843dea4333a77dd5b0005984b9f8e7c7c993b28f89f5d6432477bfde3383339\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4cj5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bbd26672afb8ed608228f3f2101f430b1eaa5ccea0e8074fad597dec347c4522\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4cj5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:34:30Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-76rx2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:56Z is after 2025-08-24T17:21:41Z" Jan 28 
18:34:56 crc kubenswrapper[4721]: I0128 18:34:56.122033 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:34:56 crc kubenswrapper[4721]: I0128 18:34:56.122073 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:34:56 crc kubenswrapper[4721]: I0128 18:34:56.122085 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:34:56 crc kubenswrapper[4721]: I0128 18:34:56.122100 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:34:56 crc kubenswrapper[4721]: I0128 18:34:56.122111 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:34:56Z","lastTransitionTime":"2026-01-28T18:34:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:34:56 crc kubenswrapper[4721]: I0128 18:34:56.224824 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:34:56 crc kubenswrapper[4721]: I0128 18:34:56.224881 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:34:56 crc kubenswrapper[4721]: I0128 18:34:56.224893 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:34:56 crc kubenswrapper[4721]: I0128 18:34:56.224911 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:34:56 crc kubenswrapper[4721]: I0128 18:34:56.224921 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:34:56Z","lastTransitionTime":"2026-01-28T18:34:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:34:56 crc kubenswrapper[4721]: I0128 18:34:56.327855 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:34:56 crc kubenswrapper[4721]: I0128 18:34:56.327909 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:34:56 crc kubenswrapper[4721]: I0128 18:34:56.327921 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:34:56 crc kubenswrapper[4721]: I0128 18:34:56.327941 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:34:56 crc kubenswrapper[4721]: I0128 18:34:56.327953 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:34:56Z","lastTransitionTime":"2026-01-28T18:34:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:34:56 crc kubenswrapper[4721]: I0128 18:34:56.375905 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 18:34:56 crc kubenswrapper[4721]: I0128 18:34:56.376065 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 18:34:56 crc kubenswrapper[4721]: I0128 18:34:56.376111 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 18:34:56 crc kubenswrapper[4721]: I0128 18:34:56.376205 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 18:34:56 crc kubenswrapper[4721]: I0128 18:34:56.376239 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 18:34:56 crc kubenswrapper[4721]: E0128 18:34:56.376283 4721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 18:35:28.376248601 +0000 UTC m=+94.101554201 (durationBeforeRetry 32s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 18:34:56 crc kubenswrapper[4721]: E0128 18:34:56.376335 4721 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 28 18:34:56 crc kubenswrapper[4721]: E0128 18:34:56.376356 4721 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 28 18:34:56 crc kubenswrapper[4721]: E0128 18:34:56.376403 4721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-28 18:35:28.376388306 +0000 UTC m=+94.101693866 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 28 18:34:56 crc kubenswrapper[4721]: E0128 18:34:56.376403 4721 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 28 18:34:56 crc kubenswrapper[4721]: E0128 18:34:56.376430 4721 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 28 18:34:56 crc kubenswrapper[4721]: E0128 18:34:56.376433 4721 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 28 18:34:56 crc kubenswrapper[4721]: E0128 18:34:56.376459 4721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-28 18:35:28.376451138 +0000 UTC m=+94.101756808 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 28 18:34:56 crc kubenswrapper[4721]: E0128 18:34:56.376311 4721 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 28 18:34:56 crc kubenswrapper[4721]: E0128 18:34:56.376492 4721 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 28 18:34:56 crc kubenswrapper[4721]: E0128 18:34:56.376501 4721 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 28 18:34:56 crc kubenswrapper[4721]: E0128 18:34:56.376514 4721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-28 18:35:28.376495209 +0000 UTC m=+94.101800769 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 28 18:34:56 crc kubenswrapper[4721]: E0128 18:34:56.376531 4721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-28 18:35:28.37652302 +0000 UTC m=+94.101828730 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 28 18:34:56 crc kubenswrapper[4721]: I0128 18:34:56.396404 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:34:56 crc kubenswrapper[4721]: I0128 18:34:56.396440 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:34:56 crc kubenswrapper[4721]: I0128 18:34:56.396453 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:34:56 crc kubenswrapper[4721]: I0128 18:34:56.396472 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:34:56 crc kubenswrapper[4721]: I0128 18:34:56.396483 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:34:56Z","lastTransitionTime":"2026-01-28T18:34:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:34:56 crc kubenswrapper[4721]: E0128 18:34:56.409398 4721 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:34:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:34:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:56Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:34:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:34:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:56Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7e2b6ea9-0dbd-4f62-9b1e-c7fac2eb6b3f\\\",\\\"systemUUID\\\":\\\"09e691cb-0cac-419d-a3e2-104cada8c62f\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:56Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:56 crc kubenswrapper[4721]: I0128 18:34:56.412468 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:34:56 crc kubenswrapper[4721]: I0128 18:34:56.412504 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 28 18:34:56 crc kubenswrapper[4721]: I0128 18:34:56.412515 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:34:56 crc kubenswrapper[4721]: I0128 18:34:56.412530 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:34:56 crc kubenswrapper[4721]: I0128 18:34:56.412540 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:34:56Z","lastTransitionTime":"2026-01-28T18:34:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:34:56 crc kubenswrapper[4721]: E0128 18:34:56.423921 4721 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status [status patch payload elided; byte-identical to the 18:34:56.409398 entry above] for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:56Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:56 crc kubenswrapper[4721]: I0128 18:34:56.427880 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:34:56 crc kubenswrapper[4721]: I0128 18:34:56.427949 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
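
The mount and unmount failures earlier in this excerpt all end with "No retries permitted until ... (durationBeforeRetry 32s)": the kubelet backs off exponentially on a failing volume operation, roughly doubling the wait after each consecutive failure. A minimal sketch of that doubling schedule, assuming the conventional constants (500ms initial delay, capped at about 2m2s) rather than quoting the kubelet's actual code:

package main

import (
	"fmt"
	"time"
)

// Sketch of the doubling retry delay seen in the log
// ("durationBeforeRetry 32s"). The constants are assumptions
// modeled on kubelet's exponential backoff for volume operations:
// start at 500ms, double on every consecutive failure, cap at ~2m2s.
const (
	initialDelay = 500 * time.Millisecond
	maxDelay     = 2*time.Minute + 2*time.Second
)

func delayBeforeRetry(consecutiveFailures int) time.Duration {
	d := initialDelay
	for i := 1; i < consecutiveFailures; i++ {
		d *= 2
		if d >= maxDelay {
			return maxDelay
		}
	}
	return d
}

func main() {
	for n := 1; n <= 9; n++ {
		fmt.Printf("failure #%d -> wait %v\n", n, delayBeforeRetry(n))
	}
	// With these assumed constants, failure #7 prints 32s,
	// matching the durationBeforeRetry in the entries above.
}
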
Jan 28 18:34:56 crc kubenswrapper[4721]: I0128 18:34:56.427964 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:34:56 crc kubenswrapper[4721]: I0128 18:34:56.427983 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:34:56 crc kubenswrapper[4721]: I0128 18:34:56.427994 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:34:56Z","lastTransitionTime":"2026-01-28T18:34:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:34:56 crc kubenswrapper[4721]: E0128 18:34:56.440040 4721 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status [status patch payload elided; byte-identical to the 18:34:56.409398 entry above] for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:56Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:56 crc kubenswrapper[4721]: I0128 18:34:56.443385 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:34:56 crc kubenswrapper[4721]: I0128 18:34:56.443433 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
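
The UnmountVolume.TearDown failure above reports "driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers": after a kubelet restart, a CSI node plugin must re-register before any of its volumes can be torn down, and registration happens through a UNIX socket the plugin places under /var/lib/kubelet/plugins_registry. An illustrative listing of what has registered so far (not kubelet code; the socket-name matching is an assumption):

package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

// Illustration only: CSI node plugins announce themselves to the
// kubelet via registration sockets in plugins_registry. If no socket
// for kubevirt.io.hostpath-provisioner exists yet, TearDown fails
// with "not found in the list of registered CSI drivers".
func main() {
	const registry = "/var/lib/kubelet/plugins_registry"
	entries, err := os.ReadDir(registry)
	if err != nil {
		fmt.Fprintln(os.Stderr, "cannot read registry:", err)
		os.Exit(1)
	}
	found := false
	for _, e := range entries {
		if strings.HasSuffix(e.Name(), ".sock") {
			fmt.Println("registered plugin socket:", filepath.Join(registry, e.Name()))
			if strings.Contains(e.Name(), "kubevirt.io.hostpath-provisioner") {
				found = true
			}
		}
	}
	if !found {
		fmt.Println("kubevirt.io.hostpath-provisioner not registered yet")
	}
}
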
Jan 28 18:34:56 crc kubenswrapper[4721]: I0128 18:34:56.443442 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:34:56 crc kubenswrapper[4721]: I0128 18:34:56.443460 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:34:56 crc kubenswrapper[4721]: I0128 18:34:56.443472 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:34:56Z","lastTransitionTime":"2026-01-28T18:34:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:34:56 crc kubenswrapper[4721]: E0128 18:34:56.454301 4721 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status [status patch payload elided; byte-identical to the 18:34:56.409398 entry above] for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:56Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:56 crc kubenswrapper[4721]: I0128 18:34:56.457938 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:34:56 crc kubenswrapper[4721]: I0128 18:34:56.458001 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
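
Every status patch in this excerpt is rejected for the same root cause: the serving certificate behind the node.network-node-identity.openshift.io webhook at 127.0.0.1:9743 expired on 2025-08-24T17:21:41Z, while the node clock reads 2026-01-28, so the kubelet's TLS handshake fails before any patch is applied. A small sketch for inspecting the certificate the endpoint actually presents (illustrative only; InsecureSkipVerify is deliberate so the handshake gets far enough to read an already-expired certificate):

package main

import (
	"crypto/tls"
	"fmt"
	"time"
)

// Dial the webhook endpoint from the node and print the validity
// window of the certificate it serves.
func main() {
	conn, err := tls.Dial("tcp", "127.0.0.1:9743", &tls.Config{InsecureSkipVerify: true})
	if err != nil {
		fmt.Println("dial failed:", err)
		return
	}
	defer conn.Close()

	cert := conn.ConnectionState().PeerCertificates[0]
	fmt.Println("subject:  ", cert.Subject)
	fmt.Println("notBefore:", cert.NotBefore)
	fmt.Println("notAfter: ", cert.NotAfter)
	if time.Now().After(cert.NotAfter) {
		fmt.Println("certificate is expired, matching the x509 error in the log")
	}
}
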
Jan 28 18:34:56 crc kubenswrapper[4721]: I0128 18:34:56.458012 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:34:56 crc kubenswrapper[4721]: I0128 18:34:56.458027 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:34:56 crc kubenswrapper[4721]: I0128 18:34:56.458037 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:34:56Z","lastTransitionTime":"2026-01-28T18:34:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:34:56 crc kubenswrapper[4721]: E0128 18:34:56.471207 4721 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status [status patch payload elided; byte-identical to the 18:34:56.409398 entry above as far as it extends; the captured log ends partway through this entry]
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7e2b6ea9-0dbd-4f62-9b1e-c7fac2eb6b3f\\\",\\\"systemUUID\\\":\\\"09e691cb-0cac-419d-a3e2-104cada8c62f\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:56Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:56 crc kubenswrapper[4721]: E0128 18:34:56.471370 4721 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 28 18:34:56 crc kubenswrapper[4721]: I0128 18:34:56.472818 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 28 18:34:56 crc kubenswrapper[4721]: I0128 18:34:56.472868 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:34:56 crc kubenswrapper[4721]: I0128 18:34:56.472879 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:34:56 crc kubenswrapper[4721]: I0128 18:34:56.472894 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:34:56 crc kubenswrapper[4721]: I0128 18:34:56.472923 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:34:56Z","lastTransitionTime":"2026-01-28T18:34:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:34:56 crc kubenswrapper[4721]: I0128 18:34:56.501500 4721 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-08 01:27:16.021235376 +0000 UTC Jan 28 18:34:56 crc kubenswrapper[4721]: I0128 18:34:56.528007 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 18:34:56 crc kubenswrapper[4721]: I0128 18:34:56.528053 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jqvck" Jan 28 18:34:56 crc kubenswrapper[4721]: I0128 18:34:56.528063 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 18:34:56 crc kubenswrapper[4721]: I0128 18:34:56.528034 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 18:34:56 crc kubenswrapper[4721]: E0128 18:34:56.528134 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 18:34:56 crc kubenswrapper[4721]: E0128 18:34:56.528269 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jqvck" podUID="f3440038-c980-4fb4-be99-235515ec221c" Jan 28 18:34:56 crc kubenswrapper[4721]: E0128 18:34:56.528346 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 18:34:56 crc kubenswrapper[4721]: E0128 18:34:56.528425 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 18:34:56 crc kubenswrapper[4721]: I0128 18:34:56.575091 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:34:56 crc kubenswrapper[4721]: I0128 18:34:56.575125 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:34:56 crc kubenswrapper[4721]: I0128 18:34:56.575133 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:34:56 crc kubenswrapper[4721]: I0128 18:34:56.575145 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:34:56 crc kubenswrapper[4721]: I0128 18:34:56.575154 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:34:56Z","lastTransitionTime":"2026-01-28T18:34:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:34:56 crc kubenswrapper[4721]: I0128 18:34:56.677673 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:34:56 crc kubenswrapper[4721]: I0128 18:34:56.677718 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:34:56 crc kubenswrapper[4721]: I0128 18:34:56.677732 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:34:56 crc kubenswrapper[4721]: I0128 18:34:56.677749 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:34:56 crc kubenswrapper[4721]: I0128 18:34:56.677762 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:34:56Z","lastTransitionTime":"2026-01-28T18:34:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:34:56 crc kubenswrapper[4721]: I0128 18:34:56.780022 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:34:56 crc kubenswrapper[4721]: I0128 18:34:56.780069 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:34:56 crc kubenswrapper[4721]: I0128 18:34:56.780079 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:34:56 crc kubenswrapper[4721]: I0128 18:34:56.780099 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:34:56 crc kubenswrapper[4721]: I0128 18:34:56.780111 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:34:56Z","lastTransitionTime":"2026-01-28T18:34:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:34:56 crc kubenswrapper[4721]: I0128 18:34:56.882237 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:34:56 crc kubenswrapper[4721]: I0128 18:34:56.882275 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:34:56 crc kubenswrapper[4721]: I0128 18:34:56.882292 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:34:56 crc kubenswrapper[4721]: I0128 18:34:56.882312 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:34:56 crc kubenswrapper[4721]: I0128 18:34:56.882332 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:34:56Z","lastTransitionTime":"2026-01-28T18:34:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:34:56 crc kubenswrapper[4721]: I0128 18:34:56.899487 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-wr282_70686e42-b434-4ff9-9753-cfc870beef82/ovnkube-controller/2.log" Jan 28 18:34:56 crc kubenswrapper[4721]: I0128 18:34:56.902386 4721 scope.go:117] "RemoveContainer" containerID="932e160b9acb81fd545498d2b471f3ae2cec8716bfa875350287f72b78516dd6" Jan 28 18:34:56 crc kubenswrapper[4721]: E0128 18:34:56.902534 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-wr282_openshift-ovn-kubernetes(70686e42-b434-4ff9-9753-cfc870beef82)\"" pod="openshift-ovn-kubernetes/ovnkube-node-wr282" podUID="70686e42-b434-4ff9-9753-cfc870beef82" Jan 28 18:34:56 crc kubenswrapper[4721]: I0128 18:34:56.925055 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wr282" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"70686e42-b434-4ff9-9753-cfc870beef82\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ac8a8ea8e677c233fbcb1ac69446f0251b2e390e84a39cd77c7ccf1a2041908\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7lkbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://989948e16979a2cd8568ed93334056da9feb5bd7226f4124428dcffc09f13d41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7lkbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2cc9ef8160c40e72ae736136ef051b06552caea2539fad7969c5f923eba0c2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7lkbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7df50ebf5999c8b004f27f801fc166377523c7b42171fec957ab82c88b529794\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7lkbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11610a0739792fa0417c894ad5b1c0d46d7188b585f8f4c1a1e7e66cd6d8bd46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7lkbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://44279a74a34d26b6eee05ac26d51a21492dddab4288ca5e09665c191cbacd90c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7lkbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://932e160b9acb81fd545498d2b471f3ae2cec8716
bfa875350287f72b78516dd6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://932e160b9acb81fd545498d2b471f3ae2cec8716bfa875350287f72b78516dd6\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T18:34:55Z\\\",\\\"message\\\":\\\"n.org/owner:openshift-dns/dns-default]} name:Service_openshift-dns/dns-default_TCP_node_router+switch_crc options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.10:53: 10.217.4.10:9154:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {be9dcc9e-c16a-4962-a6d2-4adeb0b929c4}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-dns/dns-default]} name:Service_openshift-dns/dns-default_UDP_node_router+switch_crc options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[udp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.10:53:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {4c1be812-05d3-4f45-91b5-a853a5c8de71}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0128 18:34:55.473455 6405 obj_retry.go:386] Retry successful for *v1.Pod openshift-multus/multus-additional-cni-plugins-7vsph after 0 failed attempt(s)\\\\nI0128 18:34:55.474632 6405 default_network_controller.go:776] Recording success event on pod openshift-multus/m\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T18:34:54Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-wr282_openshift-ovn-kubernetes(70686e42-b434-4ff9-9753-cfc870beef82)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7lkbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://373ab38644697d3dbfe782ed66982ac1d5510d0a3bc39072c8113f71cc9e97f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7lkbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://733c81890010402d252a1945284a85a3278b603ede433530865f680a1be02477\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://733c81890010402d252a1945284a85a3278b603ede433530865f680a1be02477\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7lkbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:34:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-wr282\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:56Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:56 crc kubenswrapper[4721]: I0128 18:34:56.939382 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rgqdt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c0a22020-3f34-4895-beec-2ed5d829ea79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d26691be7d95ffd613cc84f59222c67d29ab459d1c42ca07d20fe928d7fbc4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-
cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l86pm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:34:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rgqdt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:56Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:56 crc kubenswrapper[4721]: I0128 18:34:56.959755 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"50459cb7-bb46-4f2f-a119-03a6102ad146\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:33:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97a290731225e8b998c044c9547db36b5af73889929db5fbc37b6b4170f5dbb7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9efdfabfba8f11d2bdeafad77520260137fcfff3ca51ae77afd4ee9b141f602\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3c1b4d37e11f8a4ee9539e5ea3cce77b7d382133b9e59ccd9c3e8aa2b308754\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52a1199a0611555765afe8fcf01774278848806
4c958d0c75c46d51ddc2ff9d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://316506ed1e9f91314d427e1b6c3a44ee58d0c4a3a8a93b97bdf098703a6a2f34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f1a6a669a0341baa2d7fb17bcebfe702f96d1192ca35c3f2d00e463778271d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f1a6a669a0341baa2d7fb17bcebfe702f96d1192ca35c3f2d00e463778271d6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:33:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d3a248e15466c89ce3b237a22ce54365004ea86dc1541e1ac44e028f91bf19f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d3a248e15466c89ce3b237a22ce54365004ea86dc1541e1ac44e028f91bf19f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:34:02Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f2e1f3a63445bad45029602f52863bfdf2fab495711e3318d68fee042530acb1\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f2e1f3a63445bad45029602f52863bfdf2fab495711e3318d68fee042530acb1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:34:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:33:56Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:56Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:56 crc kubenswrapper[4721]: I0128 18:34:56.974664 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"94f18835-9a2d-4427-bc71-e4cd48b94c19\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:33:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:33:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:33:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a345c3bb49064600db61f16588229b6877293710e6c21c15d91746c2f1d50b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://93950542dfa7bda5686ef6d8ff927d14baafd149893c732658e9aa3916be64ae\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://486a6284a1e3930aac13368062b8796b4c271214dd08cb60c7ae2ce7b4a45a05\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0339ccd2bd809dc715d0e334b596c5abdebca1af6ac5f7afefa81aa8baa470ed\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ca3243d75b5ce4aff178d82c45c7d1f765013233b5736600151adb4801aa462b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T18:34:21Z\\\",\\\"message\\\":\\\"le observer\\\\nW0128 18:34:20.724978 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0128 18:34:20.725313 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 18:34:20.726636 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2540251349/tls.crt::/tmp/serving-cert-2540251349/tls.key\\\\\\\"\\\\nI0128 18:34:21.177830 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0128 18:34:21.196258 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0128 18:34:21.196291 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0128 18:34:21.196317 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0128 18:34:21.196323 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0128 18:34:21.230729 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0128 18:34:21.230760 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0128 18:34:21.230771 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 18:34:21.230781 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 18:34:21.230789 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0128 18:34:21.230795 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0128 18:34:21.230800 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0128 18:34:21.230804 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0128 18:34:21.233981 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T18:34:20Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://90f1279d8262e042527207dbe56a1d08ba554650510bebfc53bd24cd84532026\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:06Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1499c7b625ec721799e4cf54e3de39ab75a752bbb0450a8eeffcdb6259f8a585\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1499c7b625ec721799e4cf54e3de39ab75a752bbb0450a8eeffcdb6259f8a585\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:33:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:33:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:56Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:56 crc kubenswrapper[4721]: I0128 18:34:56.985014 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:34:56 crc kubenswrapper[4721]: I0128 18:34:56.985115 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:34:56 crc kubenswrapper[4721]: I0128 18:34:56.985464 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:34:56 crc kubenswrapper[4721]: I0128 18:34:56.985655 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:34:56 crc kubenswrapper[4721]: I0128 18:34:56.985672 4721 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:34:56Z","lastTransitionTime":"2026-01-28T18:34:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:34:56 crc kubenswrapper[4721]: I0128 18:34:56.988393 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"86fc797f-6769-4801-8cb0-b9f25c9ec29b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:33:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:33:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e8c8cd370cc146f15325497afef171437a3c7c3d4e82cda99a321bf847399bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f712f75bf0603b358c696f1a62f2363254ab926615ab377291c7083308e3f2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:33:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6143e461a697375ca79df96494f8cf4a575e622428db241f50a52ae1c67acbc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastS
tate\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e37a3261383445df2e4050fb0b3f92c0afc6379c71eb061475783a72cd37459f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:33:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:56Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:57 crc kubenswrapper[4721]: I0128 18:34:57.001007 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:56Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:57 crc kubenswrapper[4721]: I0128 18:34:57.013318 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56147b9675ae31e5e8b4473346c69d90a1013ab084214a7fc34295f810b229a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5dc3277d9793a59f2e2a2b4d015782f383df20b594be435f894d24dbe5b837cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:57Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:57 crc kubenswrapper[4721]: I0128 18:34:57.025768 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef61ab09457440960cf27586f65751f928ccc999db19fd16364262ce2449cd97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:57Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:57 crc kubenswrapper[4721]: I0128 18:34:57.035421 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-lf92l" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"20d04cbd-fcf1-4d48-9cca-1dd29b13c938\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://709d01fbeba51f6f21b60d9fd60aa9b44ba9e4f2504c19b7fd8d279f8c13c1e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mvqr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:34:30Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-lf92l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:57Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:57 crc kubenswrapper[4721]: I0128 18:34:57.048587 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7vsph" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"942c0bcf-8f75-42e8-a5c0-af4c640eb13c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ce38a3c3aa72d0816ff32ec77af100dc6f2a8761362992f01b88f836c44f65f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s84mk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5222e9a5ac55f20acbe1105e84f22c96889f410246b7ef423236f5d3e216f43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d5222e9a5ac55f20acbe1105e84f22c96889f410246b7ef423236f5d3e216f43\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s84mk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e702f148f20abb9e5e9453eb1b9ae0cc094138beb1fa048753ff3b497517e942\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e702f148f20abb9e5e9453eb1b9ae0cc094138beb1fa048753ff3b497517e942\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s84mk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a55ef87212072ac9f6d05d27d54f7a3f13add617b1a07a2fd1e0a652fe15cb3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a55ef87212072ac9f6d05d27d54f7a3f13add617b1a07a2fd1e0a652fe15cb3a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:34:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s84mk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea7d40aa704ad5ddaaa471ab8606dcece38b229fd8affcce9bb1959e5c19f092\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ea7d40aa704ad5ddaaa471ab8606dcece38b229fd8affcce9bb1959e5c19f092\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:34:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s84mk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1829c2eed51af896c5ccfa25c1f23ea971ea9d54e9db53415095c5ea43ac4062\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1829c2eed51af896c5ccfa25c1f23ea971ea9d54e9db53415095c5ea43ac4062\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:34:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s84mk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b576c2dcc0ad83742dbc75f744d08eb8b881f9be4f9d16f4d4d3004ce174f24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b576c2dcc0ad83742dbc75f744d08eb8b881f9be4f9d16f4d4d3004ce174f24\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:34:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s84mk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:34:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7vsph\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:57Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:57 crc kubenswrapper[4721]: I0128 18:34:57.069923 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e670fafb-703c-4cc9-b670-d25ae62d87a0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:33:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f97092be25e90c4a15af043397b1cbcefdb3a3511a80a046496bef807abc8f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8351f80de5ab5d11c5a87270e69a8ebd20b3a804671e20b991f1fc77ba27bae8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8465df12048ab0feaba16e1935fa17feb4fe967ab3e4ef37981bed51ff77911\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47389981052e7ab8530932e01c95d9b178f2c55b21e03b460d3f02dddbcf4830\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://47389981052e7ab8530932e01c95d9b178f2c55b21e03b460d3f02dddbcf4830\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:33:58Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:33:56Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:57Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:57 crc kubenswrapper[4721]: I0128 18:34:57.084222 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:24Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:57Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:57 crc kubenswrapper[4721]: I0128 18:34:57.087541 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:34:57 crc kubenswrapper[4721]: I0128 18:34:57.087587 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:34:57 crc kubenswrapper[4721]: I0128 18:34:57.087600 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:34:57 crc kubenswrapper[4721]: I0128 18:34:57.087619 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:34:57 crc kubenswrapper[4721]: I0128 18:34:57.087630 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:34:57Z","lastTransitionTime":"2026-01-28T18:34:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:34:57 crc kubenswrapper[4721]: I0128 18:34:57.096913 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:24Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:57Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:57 crc kubenswrapper[4721]: I0128 18:34:57.108088 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa3af2d00ecbcf4d30c4c81191306dd45625f55032058b314ce2b91f6b2033e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:57Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:57 crc kubenswrapper[4721]: I0128 18:34:57.119374 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-76rx2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e3427a4-9a03-4a08-bf7f-7a5e96290ad6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9843dea4333a77dd5b0005984b9f8e7c7c993b28f89f5d6432477bfde3383339\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4cj5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bbd26672afb8ed608228f3f2101f430b1eaa5ccea0e8074fad597dec347c4522\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4cj5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:34:30Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-76rx2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:57Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:57 crc kubenswrapper[4721]: I0128 18:34:57.132831 4721 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-image-registry/node-ca-rk2l2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bdd30376-1599-4efc-bb55-7585e8702b60\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa0ff1b9df47dfd65597208cdd31f5cb8ce7f4c9c170d298db28d6f04da7b7a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wknj5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:34:32Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-rk2l2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:57Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:57 crc kubenswrapper[4721]: I0128 18:34:57.146748 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-x8hw8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8ac7e75e-c5bb-4b57-b2ba-9ebe8b8fbd88\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cba8dd293cf5ae7b0c987c6ee3b24da02d2687ee54292da92e28ca627ed3eaca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqp9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a3c1211b73ca96ac22854f0cb677a0088a679ad56b104ea6b8e0871884a3a71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqp9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:34:43Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-x8hw8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:57Z is after 2025-08-24T17:21:41Z" Jan 28 
18:34:57 crc kubenswrapper[4721]: I0128 18:34:57.158449 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-jqvck" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f3440038-c980-4fb4-be99-235515ec221c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96np9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96np9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:34:44Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-jqvck\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:57Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:57 crc kubenswrapper[4721]: I0128 18:34:57.190905 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:34:57 crc kubenswrapper[4721]: I0128 18:34:57.190962 4721 kubelet_node_status.go:724] "Recording event message 
for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:34:57 crc kubenswrapper[4721]: I0128 18:34:57.190971 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:34:57 crc kubenswrapper[4721]: I0128 18:34:57.190993 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:34:57 crc kubenswrapper[4721]: I0128 18:34:57.191004 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:34:57Z","lastTransitionTime":"2026-01-28T18:34:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:34:57 crc kubenswrapper[4721]: I0128 18:34:57.293345 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:34:57 crc kubenswrapper[4721]: I0128 18:34:57.293388 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:34:57 crc kubenswrapper[4721]: I0128 18:34:57.293403 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:34:57 crc kubenswrapper[4721]: I0128 18:34:57.293425 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:34:57 crc kubenswrapper[4721]: I0128 18:34:57.293438 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:34:57Z","lastTransitionTime":"2026-01-28T18:34:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:34:57 crc kubenswrapper[4721]: I0128 18:34:57.395360 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:34:57 crc kubenswrapper[4721]: I0128 18:34:57.395406 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:34:57 crc kubenswrapper[4721]: I0128 18:34:57.395419 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:34:57 crc kubenswrapper[4721]: I0128 18:34:57.395435 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:34:57 crc kubenswrapper[4721]: I0128 18:34:57.395445 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:34:57Z","lastTransitionTime":"2026-01-28T18:34:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:34:57 crc kubenswrapper[4721]: I0128 18:34:57.497877 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:34:57 crc kubenswrapper[4721]: I0128 18:34:57.498250 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:34:57 crc kubenswrapper[4721]: I0128 18:34:57.498262 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:34:57 crc kubenswrapper[4721]: I0128 18:34:57.498278 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:34:57 crc kubenswrapper[4721]: I0128 18:34:57.498288 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:34:57Z","lastTransitionTime":"2026-01-28T18:34:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:34:57 crc kubenswrapper[4721]: I0128 18:34:57.502097 4721 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-09 09:09:42.540242579 +0000 UTC Jan 28 18:34:57 crc kubenswrapper[4721]: I0128 18:34:57.601081 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:34:57 crc kubenswrapper[4721]: I0128 18:34:57.601162 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:34:57 crc kubenswrapper[4721]: I0128 18:34:57.601195 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:34:57 crc kubenswrapper[4721]: I0128 18:34:57.601212 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:34:57 crc kubenswrapper[4721]: I0128 18:34:57.601222 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:34:57Z","lastTransitionTime":"2026-01-28T18:34:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:34:57 crc kubenswrapper[4721]: I0128 18:34:57.703870 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:34:57 crc kubenswrapper[4721]: I0128 18:34:57.703917 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:34:57 crc kubenswrapper[4721]: I0128 18:34:57.703927 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:34:57 crc kubenswrapper[4721]: I0128 18:34:57.703944 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:34:57 crc kubenswrapper[4721]: I0128 18:34:57.703955 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:34:57Z","lastTransitionTime":"2026-01-28T18:34:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:34:57 crc kubenswrapper[4721]: I0128 18:34:57.805695 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:34:57 crc kubenswrapper[4721]: I0128 18:34:57.805754 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:34:57 crc kubenswrapper[4721]: I0128 18:34:57.805770 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:34:57 crc kubenswrapper[4721]: I0128 18:34:57.805789 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:34:57 crc kubenswrapper[4721]: I0128 18:34:57.805802 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:34:57Z","lastTransitionTime":"2026-01-28T18:34:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:34:57 crc kubenswrapper[4721]: I0128 18:34:57.908892 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:34:57 crc kubenswrapper[4721]: I0128 18:34:57.908948 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:34:57 crc kubenswrapper[4721]: I0128 18:34:57.908961 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:34:57 crc kubenswrapper[4721]: I0128 18:34:57.908978 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:34:57 crc kubenswrapper[4721]: I0128 18:34:57.908990 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:34:57Z","lastTransitionTime":"2026-01-28T18:34:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:34:57 crc kubenswrapper[4721]: I0128 18:34:57.976506 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 28 18:34:57 crc kubenswrapper[4721]: I0128 18:34:57.996694 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wr282" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"70686e42-b434-4ff9-9753-cfc870beef82\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ac8a8ea8e677c233fbcb1ac69446f0251b2e390e84a39cd77c7ccf1a2041908\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7lkbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://989948e16979a2cd8568ed93334056da9feb5bd7226f4124428dcffc09f13d41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\
\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7lkbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2cc9ef8160c40e72ae736136ef051b06552caea2539fad7969c5f923eba0c2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7lkbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7df50ebf5999c8b004f27f801fc166377523c7b42171fec957ab82c88b529794\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7lkbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11610a0739792fa0417c894ad5b1c0d46d7188b585f8f4c1a1e7e66cd6d8bd46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7lkbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://44279a74a34d26b6eee05ac26d51a21492dddab4288ca5e09
665c191cbacd90c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7lkbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://932e160b9acb81fd545498d2b471f3ae2cec8716bfa875350287f72b78516dd6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://932e160b9acb81fd545498d2b471f3ae2cec8716bfa875350287f72b78516dd6\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T18:34:55Z\\\",\\\"message\\\":\\\"n.org/owner:openshift-dns/dns-default]} name:Service_openshift-dns/dns-default_TCP_node_router+switch_crc options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.10:53: 10.217.4.10:9154:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {be9dcc9e-c16a-4962-a6d2-4adeb0b929c4}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-dns/dns-default]} name:Service_openshift-dns/dns-default_UDP_node_router+switch_crc options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[udp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.10:53:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {4c1be812-05d3-4f45-91b5-a853a5c8de71}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0128 18:34:55.473455 6405 obj_retry.go:386] Retry successful for *v1.Pod openshift-multus/multus-additional-cni-plugins-7vsph after 0 failed attempt(s)\\\\nI0128 18:34:55.474632 6405 default_network_controller.go:776] Recording success event on pod 
openshift-multus/m\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T18:34:54Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-wr282_openshift-ovn-kubernetes(70686e42-b434-4ff9-9753-cfc870beef82)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7lkbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://373ab38644697d3dbfe782ed66982ac1d5510d0a3bc39072c8113f71cc9e97f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7lkbp\\\",\\\"readOnly\\\":tru
e,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://733c81890010402d252a1945284a85a3278b603ede433530865f680a1be02477\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://733c81890010402d252a1945284a85a3278b603ede433530865f680a1be02477\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7lkbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:34:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-wr282\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:57Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:58 crc kubenswrapper[4721]: I0128 18:34:58.010930 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rgqdt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c0a22020-3f34-4895-beec-2ed5d829ea79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d26691be7d95ffd613cc84f59222c67d29ab459d1c42ca07d20fe928d7fbc4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l86pm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:34:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rgqdt\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:58Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:58 crc kubenswrapper[4721]: I0128 18:34:58.012356 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:34:58 crc kubenswrapper[4721]: I0128 18:34:58.012414 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:34:58 crc kubenswrapper[4721]: I0128 18:34:58.012425 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:34:58 crc kubenswrapper[4721]: I0128 18:34:58.012439 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:34:58 crc kubenswrapper[4721]: I0128 18:34:58.012448 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:34:58Z","lastTransitionTime":"2026-01-28T18:34:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:34:58 crc kubenswrapper[4721]: I0128 18:34:58.032261 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"50459cb7-bb46-4f2f-a119-03a6102ad146\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:33:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97a290731225e8b998c044c9547db36b5af73889929db5fbc37b6b4170f5dbb7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9efdfabfba8f11d2bdeafad77520260137fcfff3ca51ae
77afd4ee9b141f602\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3c1b4d37e11f8a4ee9539e5ea3cce77b7d382133b9e59ccd9c3e8aa2b308754\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52a1199a0611555765afe8fcf017742788488064c958d0c75c46d51ddc2ff9d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://316506ed1e9f91314d427e1b6c3a44ee58d0c4a3a8a93b97bdf098703a6a2f34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f1a6a669a0341baa2d7fb17bcebfe702f96d1192ca35c3f2d00e463778271d6\\\",\\\"image\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f1a6a669a0341baa2d7fb17bcebfe702f96d1192ca35c3f2d00e463778271d6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:33:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d3a248e15466c89ce3b237a22ce54365004ea86dc1541e1ac44e028f91bf19f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d3a248e15466c89ce3b237a22ce54365004ea86dc1541e1ac44e028f91bf19f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:34:02Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f2e1f3a63445bad45029602f52863bfdf2fab495711e3318d68fee042530acb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f2e1f3a63445bad45029602f52863bfdf2fab495711e3318d68fee042530acb1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:34:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:33:56Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:58Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:58 crc kubenswrapper[4721]: I0128 18:34:58.046790 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"94f18835-9a2d-4427-bc71-e4cd48b94c19\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:33:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a345c3bb49064600db61f16588229b6877293710e6c21c15d91746c2f1d50b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://93950542dfa7bda5686ef6d8ff927d14baafd149893c732658e9aa3916be64ae\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://486a6284a1e3930aac13368062b8796b4c271214dd08cb60c7ae2ce7b4a45a05\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0339ccd2bd809dc715d0e334b596c5abdebca1af6ac5f7afefa81aa8baa470ed\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ca3243d75b5ce4aff178d82c45c7d1f765013233b5736600151adb4801aa462b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T18:34:21Z\\\",\\\"message\\\":\\\"le observer\\\\nW0128 18:34:20.724978 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0128 18:34:20.725313 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 18:34:20.726636 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2540251349/tls.crt::/tmp/serving-cert-2540251349/tls.key\\\\\\\"\\\\nI0128 18:34:21.177830 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0128 18:34:21.196258 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0128 18:34:21.196291 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0128 18:34:21.196317 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0128 18:34:21.196323 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0128 18:34:21.230729 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0128 18:34:21.230760 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0128 18:34:21.230771 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 18:34:21.230781 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 18:34:21.230789 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0128 18:34:21.230795 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0128 18:34:21.230800 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0128 18:34:21.230804 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0128 18:34:21.233981 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T18:34:20Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://90f1279d8262e042527207dbe56a1d08ba554650510bebfc53bd24cd84532026\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:06Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1499c7b625ec721799e4cf54e3de39ab75a752bbb0450a8eeffcdb6259f8a585\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1499c7b625ec721799e4cf54e3de39ab75a752bbb0450a8eeffcdb6259f8a585\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:33:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:33:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:58Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:58 crc kubenswrapper[4721]: I0128 18:34:58.062306 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"86fc797f-6769-4801-8cb0-b9f25c9ec29b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:33:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:33:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e8c8cd370cc146f15325497afef171437a3c7c3d4e82cda99a321bf847399bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f712f75bf0603b358c696f1a62f2363254ab926615ab377291c7083308e3f2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:33:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6143e461a697375ca79df96494f8cf4a575e622428db241f50a52ae1c67acbc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e37a3261383445df2e4050fb0b3f92c0afc6379c71eb061475783a72cd37459f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:33:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:58Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:58 crc kubenswrapper[4721]: I0128 18:34:58.076042 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:58Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:58 crc kubenswrapper[4721]: I0128 18:34:58.088378 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56147b9675ae31e5e8b4473346c69d90a1013ab084214a7fc34295f810b229a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5dc3277d9793a59f2e2a2b4d015782f383df20b594be435f894d24dbe5b837cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:58Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:58 crc kubenswrapper[4721]: I0128 18:34:58.101559 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef61ab09457440960cf27586f65751f928ccc999db19fd16364262ce2449cd97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:58Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:58 crc kubenswrapper[4721]: I0128 18:34:58.111515 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-lf92l" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"20d04cbd-fcf1-4d48-9cca-1dd29b13c938\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://709d01fbeba51f6f21b60d9fd60aa9b44ba9e4f2504c19b7fd8d279f8c13c1e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mvqr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:34:30Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-lf92l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:58Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:58 crc kubenswrapper[4721]: I0128 18:34:58.115101 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:34:58 crc kubenswrapper[4721]: I0128 18:34:58.115134 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:34:58 crc kubenswrapper[4721]: I0128 18:34:58.115145 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:34:58 crc kubenswrapper[4721]: I0128 18:34:58.115236 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:34:58 crc kubenswrapper[4721]: I0128 18:34:58.115482 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:34:58Z","lastTransitionTime":"2026-01-28T18:34:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:34:58 crc kubenswrapper[4721]: I0128 18:34:58.126606 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7vsph" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"942c0bcf-8f75-42e8-a5c0-af4c640eb13c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ce38a3c3aa72d0816ff32ec77af100dc6f2a8761362992f01b88f836c44f65f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s84mk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5222e9a5ac55f20acbe1105e84f22c96889f410246b7ef423236f5d3e216f43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d5222e9a5ac55f20acbe1105e84f22c96889f410246b7ef423236f5d3e216f43\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s84mk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e702f148
f20abb9e5e9453eb1b9ae0cc094138beb1fa048753ff3b497517e942\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e702f148f20abb9e5e9453eb1b9ae0cc094138beb1fa048753ff3b497517e942\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s84mk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a55ef87212072ac9f6d05d27d54f7a3f13add617b1a07a2fd1e0a652fe15cb3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a55ef87212072ac9f6d05d27d54f7a3f13add617b1a07a2fd1e0a652fe15cb3a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:34:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s84mk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea7d40aa704ad5ddaaa471ab8606dcece38b229fd8affcce9bb1959e5c19f092\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ea7d40aa704ad5ddaaa471ab8606dcece38b229fd8affcce9bb1959e5c19f092\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:34:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/e
ntrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s84mk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1829c2eed51af896c5ccfa25c1f23ea971ea9d54e9db53415095c5ea43ac4062\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1829c2eed51af896c5ccfa25c1f23ea971ea9d54e9db53415095c5ea43ac4062\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:34:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s84mk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b576c2dcc0ad83742dbc75f744d08eb8b881f9be4f9d16f4d4d3004ce174f24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b576c2dcc0ad83742dbc75f744d08eb8b881f9be4f9d16f4d4d3004ce174f24\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:34:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s84mk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:34:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7vsph\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:58Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:58 crc kubenswrapper[4721]: I0128 18:34:58.138386 4721 
status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e670fafb-703c-4cc9-b670-d25ae62d87a0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:33:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f97092be25e90c4a15af043397b1cbcefdb3a3511a80a046496bef807abc8f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8351f80de5ab5d11c5a87270e69a8ebd20b3a804671e20b991f1fc77ba27bae8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8465df12048ab0feaba16e1935fa17feb4fe967ab3e4ef37981bed51ff77911\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\
"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47389981052e7ab8530932e01c95d9b178f2c55b21e03b460d3f02dddbcf4830\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://47389981052e7ab8530932e01c95d9b178f2c55b21e03b460d3f02dddbcf4830\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:33:58Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:33:56Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:58Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:58 crc kubenswrapper[4721]: I0128 18:34:58.153020 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:24Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:58Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:58 crc kubenswrapper[4721]: I0128 18:34:58.166067 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:24Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:58Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:58 crc kubenswrapper[4721]: I0128 18:34:58.177825 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa3af2d00ecbcf4d30c4c81191306dd45625f55032058b314ce2b91f6b2033e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:58Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:58 crc kubenswrapper[4721]: I0128 18:34:58.191756 4721 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-76rx2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e3427a4-9a03-4a08-bf7f-7a5e96290ad6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9843dea4333a77dd5b0005984b9f8e7c7c993b28f89f5d6432477bfde3383339\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4cj5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bbd26672afb8ed608228f3f2101f430b1eaa5ccea0e8074fad597dec347c4522\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4cj5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:34:30Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-76rx2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:58Z is after 2025-08-24T17:21:41Z" Jan 28 
18:34:58 crc kubenswrapper[4721]: I0128 18:34:58.203588 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-rk2l2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bdd30376-1599-4efc-bb55-7585e8702b60\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa0ff1b9df47dfd65597208cdd31f5cb8ce7f4c9c170d298db28d6f04da7b7a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wknj5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:34:32Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-rk2l2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:58Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:58 crc kubenswrapper[4721]: I0128 18:34:58.216532 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-x8hw8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8ac7e75e-c5bb-4b57-b2ba-9ebe8b8fbd88\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cba8dd293cf5ae7b0c987c6ee3b24da02d2687ee54292da92e28ca627ed3eaca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqp9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a3c1211b73ca96ac22854f0cb677a0088a679ad56b104ea6b8e0871884a3a71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqp9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:34:43Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-x8hw8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:58Z is after 2025-08-24T17:21:41Z" Jan 28 
18:34:58 crc kubenswrapper[4721]: I0128 18:34:58.217501 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:34:58 crc kubenswrapper[4721]: I0128 18:34:58.217537 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:34:58 crc kubenswrapper[4721]: I0128 18:34:58.217549 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:34:58 crc kubenswrapper[4721]: I0128 18:34:58.217568 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:34:58 crc kubenswrapper[4721]: I0128 18:34:58.217580 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:34:58Z","lastTransitionTime":"2026-01-28T18:34:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:34:58 crc kubenswrapper[4721]: I0128 18:34:58.229560 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-jqvck" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f3440038-c980-4fb4-be99-235515ec221c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96np9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96np9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:34:44Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-jqvck\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:34:58Z is after 2025-08-24T17:21:41Z" Jan 28 18:34:58 crc kubenswrapper[4721]: I0128 18:34:58.320374 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:34:58 crc kubenswrapper[4721]: I0128 18:34:58.320401 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:34:58 crc kubenswrapper[4721]: I0128 18:34:58.320409 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:34:58 crc kubenswrapper[4721]: I0128 18:34:58.320421 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:34:58 crc kubenswrapper[4721]: I0128 18:34:58.320430 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:34:58Z","lastTransitionTime":"2026-01-28T18:34:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:34:58 crc kubenswrapper[4721]: I0128 18:34:58.423080 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:34:58 crc kubenswrapper[4721]: I0128 18:34:58.423115 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:34:58 crc kubenswrapper[4721]: I0128 18:34:58.423126 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:34:58 crc kubenswrapper[4721]: I0128 18:34:58.423141 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:34:58 crc kubenswrapper[4721]: I0128 18:34:58.423155 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:34:58Z","lastTransitionTime":"2026-01-28T18:34:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:34:58 crc kubenswrapper[4721]: I0128 18:34:58.502774 4721 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-05 07:45:57.848526505 +0000 UTC Jan 28 18:34:58 crc kubenswrapper[4721]: I0128 18:34:58.525730 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:34:58 crc kubenswrapper[4721]: I0128 18:34:58.525778 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:34:58 crc kubenswrapper[4721]: I0128 18:34:58.525788 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:34:58 crc kubenswrapper[4721]: I0128 18:34:58.525805 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:34:58 crc kubenswrapper[4721]: I0128 18:34:58.525819 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:34:58Z","lastTransitionTime":"2026-01-28T18:34:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:34:58 crc kubenswrapper[4721]: I0128 18:34:58.528341 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 18:34:58 crc kubenswrapper[4721]: I0128 18:34:58.528391 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 18:34:58 crc kubenswrapper[4721]: I0128 18:34:58.528391 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jqvck" Jan 28 18:34:58 crc kubenswrapper[4721]: I0128 18:34:58.528466 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 18:34:58 crc kubenswrapper[4721]: E0128 18:34:58.528555 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 18:34:58 crc kubenswrapper[4721]: E0128 18:34:58.528617 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 18:34:58 crc kubenswrapper[4721]: E0128 18:34:58.528708 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jqvck" podUID="f3440038-c980-4fb4-be99-235515ec221c" Jan 28 18:34:58 crc kubenswrapper[4721]: E0128 18:34:58.528754 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 18:34:58 crc kubenswrapper[4721]: I0128 18:34:58.628077 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:34:58 crc kubenswrapper[4721]: I0128 18:34:58.628115 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:34:58 crc kubenswrapper[4721]: I0128 18:34:58.628125 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:34:58 crc kubenswrapper[4721]: I0128 18:34:58.628138 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:34:58 crc kubenswrapper[4721]: I0128 18:34:58.628147 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:34:58Z","lastTransitionTime":"2026-01-28T18:34:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:34:58 crc kubenswrapper[4721]: I0128 18:34:58.730617 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:34:58 crc kubenswrapper[4721]: I0128 18:34:58.730651 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:34:58 crc kubenswrapper[4721]: I0128 18:34:58.730660 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:34:58 crc kubenswrapper[4721]: I0128 18:34:58.730673 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:34:58 crc kubenswrapper[4721]: I0128 18:34:58.730683 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:34:58Z","lastTransitionTime":"2026-01-28T18:34:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:34:58 crc kubenswrapper[4721]: I0128 18:34:58.833410 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:34:58 crc kubenswrapper[4721]: I0128 18:34:58.833445 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:34:58 crc kubenswrapper[4721]: I0128 18:34:58.833455 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:34:58 crc kubenswrapper[4721]: I0128 18:34:58.833473 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:34:58 crc kubenswrapper[4721]: I0128 18:34:58.833486 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:34:58Z","lastTransitionTime":"2026-01-28T18:34:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:34:58 crc kubenswrapper[4721]: I0128 18:34:58.936140 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:34:58 crc kubenswrapper[4721]: I0128 18:34:58.936199 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:34:58 crc kubenswrapper[4721]: I0128 18:34:58.936209 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:34:58 crc kubenswrapper[4721]: I0128 18:34:58.936224 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:34:58 crc kubenswrapper[4721]: I0128 18:34:58.936270 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:34:58Z","lastTransitionTime":"2026-01-28T18:34:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:34:59 crc kubenswrapper[4721]: I0128 18:34:59.038851 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:34:59 crc kubenswrapper[4721]: I0128 18:34:59.038892 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:34:59 crc kubenswrapper[4721]: I0128 18:34:59.038903 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:34:59 crc kubenswrapper[4721]: I0128 18:34:59.038918 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:34:59 crc kubenswrapper[4721]: I0128 18:34:59.038932 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:34:59Z","lastTransitionTime":"2026-01-28T18:34:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:34:59 crc kubenswrapper[4721]: I0128 18:34:59.141782 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:34:59 crc kubenswrapper[4721]: I0128 18:34:59.141821 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:34:59 crc kubenswrapper[4721]: I0128 18:34:59.141830 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:34:59 crc kubenswrapper[4721]: I0128 18:34:59.141845 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:34:59 crc kubenswrapper[4721]: I0128 18:34:59.141854 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:34:59Z","lastTransitionTime":"2026-01-28T18:34:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:34:59 crc kubenswrapper[4721]: I0128 18:34:59.244634 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:34:59 crc kubenswrapper[4721]: I0128 18:34:59.244666 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:34:59 crc kubenswrapper[4721]: I0128 18:34:59.244677 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:34:59 crc kubenswrapper[4721]: I0128 18:34:59.244693 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:34:59 crc kubenswrapper[4721]: I0128 18:34:59.244703 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:34:59Z","lastTransitionTime":"2026-01-28T18:34:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:34:59 crc kubenswrapper[4721]: I0128 18:34:59.346854 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:34:59 crc kubenswrapper[4721]: I0128 18:34:59.346892 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:34:59 crc kubenswrapper[4721]: I0128 18:34:59.346902 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:34:59 crc kubenswrapper[4721]: I0128 18:34:59.346916 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:34:59 crc kubenswrapper[4721]: I0128 18:34:59.346936 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:34:59Z","lastTransitionTime":"2026-01-28T18:34:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:34:59 crc kubenswrapper[4721]: I0128 18:34:59.449329 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:34:59 crc kubenswrapper[4721]: I0128 18:34:59.449384 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:34:59 crc kubenswrapper[4721]: I0128 18:34:59.449396 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:34:59 crc kubenswrapper[4721]: I0128 18:34:59.449416 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:34:59 crc kubenswrapper[4721]: I0128 18:34:59.449429 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:34:59Z","lastTransitionTime":"2026-01-28T18:34:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Jan 28 18:34:59 crc kubenswrapper[4721]: I0128 18:34:59.503230 4721 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-12 23:22:20.127626074 +0000 UTC
Jan 28 18:34:59 crc kubenswrapper[4721]: I0128 18:34:59.551852 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 18:34:59 crc kubenswrapper[4721]: I0128 18:34:59.551929 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 18:34:59 crc kubenswrapper[4721]: I0128 18:34:59.551944 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 18:34:59 crc kubenswrapper[4721]: I0128 18:34:59.551962 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 18:34:59 crc kubenswrapper[4721]: I0128 18:34:59.551974 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:34:59Z","lastTransitionTime":"2026-01-28T18:34:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 18:34:59 crc kubenswrapper[4721]: I0128 18:34:59.654252 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 18:34:59 crc kubenswrapper[4721]: I0128 18:34:59.654288 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 18:34:59 crc kubenswrapper[4721]: I0128 18:34:59.654296 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 18:34:59 crc kubenswrapper[4721]: I0128 18:34:59.654316 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 18:34:59 crc kubenswrapper[4721]: I0128 18:34:59.654329 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:34:59Z","lastTransitionTime":"2026-01-28T18:34:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
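
[Editor's note] The certificate_manager.go:356 entry above logs a different rotation deadline every second even though the certificate itself has not changed. As I understand the upstream client-go certificate manager, the deadline is drawn at a jittered point roughly 70 to 90 percent of the way through the certificate's validity window, and because every drawn deadline here lands in the past, the manager retries and re-rolls the jitter on each tick. A minimal sketch under that assumption; notAfter matches the log entry, while notBefore (a one-year lifetime) is invented for illustration:

```python
import random
from datetime import datetime, timedelta

# notAfter is taken from the certificate_manager.go:356 entry above;
# notBefore is an assumed one-year lifetime, for illustration only.
not_after = datetime.fromisoformat("2026-02-24 05:53:03+00:00")
not_before = not_after - timedelta(days=365)

def rotation_deadline() -> datetime:
    # Assumed client-go scheme: notBefore + (0.7 + 0.2*rand) * lifetime.
    lifetime = not_after - not_before
    return not_before + lifetime * (0.7 + 0.2 * random.random())

for _ in range(3):
    print(rotation_deadline())  # a different deadline on every evaluation
```
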
Jan 28 18:34:59 crc kubenswrapper[4721]: I0128 18:34:59.756379 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 18:34:59 crc kubenswrapper[4721]: I0128 18:34:59.756418 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 18:34:59 crc kubenswrapper[4721]: I0128 18:34:59.756429 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 18:34:59 crc kubenswrapper[4721]: I0128 18:34:59.756444 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 18:34:59 crc kubenswrapper[4721]: I0128 18:34:59.756455 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:34:59Z","lastTransitionTime":"2026-01-28T18:34:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 18:34:59 crc kubenswrapper[4721]: I0128 18:34:59.858966 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 18:34:59 crc kubenswrapper[4721]: I0128 18:34:59.859000 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 18:34:59 crc kubenswrapper[4721]: I0128 18:34:59.859011 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 18:34:59 crc kubenswrapper[4721]: I0128 18:34:59.859027 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 18:34:59 crc kubenswrapper[4721]: I0128 18:34:59.859038 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:34:59Z","lastTransitionTime":"2026-01-28T18:34:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 18:34:59 crc kubenswrapper[4721]: I0128 18:34:59.961284 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 18:34:59 crc kubenswrapper[4721]: I0128 18:34:59.961315 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 18:34:59 crc kubenswrapper[4721]: I0128 18:34:59.961323 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 18:34:59 crc kubenswrapper[4721]: I0128 18:34:59.961337 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 18:34:59 crc kubenswrapper[4721]: I0128 18:34:59.961347 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:34:59Z","lastTransitionTime":"2026-01-28T18:34:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:35:00 crc kubenswrapper[4721]: I0128 18:35:00.063940 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:00 crc kubenswrapper[4721]: I0128 18:35:00.063975 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:00 crc kubenswrapper[4721]: I0128 18:35:00.063983 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:00 crc kubenswrapper[4721]: I0128 18:35:00.063997 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:00 crc kubenswrapper[4721]: I0128 18:35:00.064006 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:00Z","lastTransitionTime":"2026-01-28T18:35:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:35:00 crc kubenswrapper[4721]: I0128 18:35:00.165804 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:00 crc kubenswrapper[4721]: I0128 18:35:00.165842 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:00 crc kubenswrapper[4721]: I0128 18:35:00.165852 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:00 crc kubenswrapper[4721]: I0128 18:35:00.165867 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:00 crc kubenswrapper[4721]: I0128 18:35:00.165878 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:00Z","lastTransitionTime":"2026-01-28T18:35:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:35:00 crc kubenswrapper[4721]: I0128 18:35:00.268602 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:00 crc kubenswrapper[4721]: I0128 18:35:00.268632 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:00 crc kubenswrapper[4721]: I0128 18:35:00.268643 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:00 crc kubenswrapper[4721]: I0128 18:35:00.268660 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:00 crc kubenswrapper[4721]: I0128 18:35:00.268672 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:00Z","lastTransitionTime":"2026-01-28T18:35:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 28 18:35:00 crc kubenswrapper[4721]: I0128 18:35:00.314640 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f3440038-c980-4fb4-be99-235515ec221c-metrics-certs\") pod \"network-metrics-daemon-jqvck\" (UID: \"f3440038-c980-4fb4-be99-235515ec221c\") " pod="openshift-multus/network-metrics-daemon-jqvck"
Jan 28 18:35:00 crc kubenswrapper[4721]: E0128 18:35:00.314832 4721 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Jan 28 18:35:00 crc kubenswrapper[4721]: E0128 18:35:00.314894 4721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f3440038-c980-4fb4-be99-235515ec221c-metrics-certs podName:f3440038-c980-4fb4-be99-235515ec221c nodeName:}" failed. No retries permitted until 2026-01-28 18:35:16.31487599 +0000 UTC m=+82.040181550 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/f3440038-c980-4fb4-be99-235515ec221c-metrics-certs") pod "network-metrics-daemon-jqvck" (UID: "f3440038-c980-4fb4-be99-235515ec221c") : object "openshift-multus"/"metrics-daemon-secret" not registered
Jan 28 18:35:00 crc kubenswrapper[4721]: I0128 18:35:00.370740 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 18:35:00 crc kubenswrapper[4721]: I0128 18:35:00.370780 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 18:35:00 crc kubenswrapper[4721]: I0128 18:35:00.370790 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 18:35:00 crc kubenswrapper[4721]: I0128 18:35:00.370803 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 18:35:00 crc kubenswrapper[4721]: I0128 18:35:00.370812 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:00Z","lastTransitionTime":"2026-01-28T18:35:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
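
[Editor's note] The nestedpendingoperations.go:348 entry above shows kubelet's volume retry backoff: each consecutive MountVolume.SetUp failure roughly doubles the wait before the next attempt (the logged durationBeforeRetry), and the 16s seen here is consistent with a handful of failures in a row under an assumed 500 ms initial delay and a cap around two minutes. A hedged sketch of that doubling scheme; flaky_mount is a hypothetical stand-in for the failing SetUp call, and the constants are assumptions, not confirmed kubelet values:

```python
import time

INITIAL_DELAY_S, MAX_DELAY_S = 0.5, 122.0  # assumed defaults

def mount_with_backoff(mount_volume) -> None:
    delay = INITIAL_DELAY_S
    while True:
        try:
            mount_volume()
            return
        except RuntimeError as err:
            print(f"mount failed: {err}; no retries permitted for {delay}s")
            time.sleep(delay)
            delay = min(delay * 2, MAX_DELAY_S)  # 0.5, 1, 2, 4, 8, 16, ...

attempts = 0
def flaky_mount():
    # Fails three times, then succeeds, to exercise the backoff loop.
    global attempts
    attempts += 1
    if attempts < 4:
        raise RuntimeError('object "openshift-multus"/"metrics-daemon-secret" not registered')

mount_with_backoff(flaky_mount)
```
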
Jan 28 18:35:00 crc kubenswrapper[4721]: I0128 18:35:00.473082 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 18:35:00 crc kubenswrapper[4721]: I0128 18:35:00.473115 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 18:35:00 crc kubenswrapper[4721]: I0128 18:35:00.473126 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 18:35:00 crc kubenswrapper[4721]: I0128 18:35:00.473140 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 18:35:00 crc kubenswrapper[4721]: I0128 18:35:00.473149 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:00Z","lastTransitionTime":"2026-01-28T18:35:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 18:35:00 crc kubenswrapper[4721]: I0128 18:35:00.503844 4721 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-15 17:58:35.56053643 +0000 UTC
Jan 28 18:35:00 crc kubenswrapper[4721]: I0128 18:35:00.528494 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 28 18:35:00 crc kubenswrapper[4721]: I0128 18:35:00.528559 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 28 18:35:00 crc kubenswrapper[4721]: E0128 18:35:00.528623 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 28 18:35:00 crc kubenswrapper[4721]: I0128 18:35:00.528502 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 28 18:35:00 crc kubenswrapper[4721]: I0128 18:35:00.528516 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jqvck"
Jan 28 18:35:00 crc kubenswrapper[4721]: E0128 18:35:00.528701 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 18:35:00 crc kubenswrapper[4721]: E0128 18:35:00.528866 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jqvck" podUID="f3440038-c980-4fb4-be99-235515ec221c" Jan 28 18:35:00 crc kubenswrapper[4721]: E0128 18:35:00.528886 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 18:35:00 crc kubenswrapper[4721]: I0128 18:35:00.575816 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:00 crc kubenswrapper[4721]: I0128 18:35:00.575875 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:00 crc kubenswrapper[4721]: I0128 18:35:00.575884 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:00 crc kubenswrapper[4721]: I0128 18:35:00.575900 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:00 crc kubenswrapper[4721]: I0128 18:35:00.575912 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:00Z","lastTransitionTime":"2026-01-28T18:35:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:35:00 crc kubenswrapper[4721]: I0128 18:35:00.678060 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:00 crc kubenswrapper[4721]: I0128 18:35:00.678109 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:00 crc kubenswrapper[4721]: I0128 18:35:00.678119 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:00 crc kubenswrapper[4721]: I0128 18:35:00.678136 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:00 crc kubenswrapper[4721]: I0128 18:35:00.678150 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:00Z","lastTransitionTime":"2026-01-28T18:35:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 28 18:35:00 crc kubenswrapper[4721]: I0128 18:35:00.780565 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 18:35:00 crc kubenswrapper[4721]: I0128 18:35:00.780590 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 18:35:00 crc kubenswrapper[4721]: I0128 18:35:00.780600 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 18:35:00 crc kubenswrapper[4721]: I0128 18:35:00.780612 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 18:35:00 crc kubenswrapper[4721]: I0128 18:35:00.780621 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:00Z","lastTransitionTime":"2026-01-28T18:35:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 18:35:00 crc kubenswrapper[4721]: I0128 18:35:00.882988 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 18:35:00 crc kubenswrapper[4721]: I0128 18:35:00.883044 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 18:35:00 crc kubenswrapper[4721]: I0128 18:35:00.883057 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 18:35:00 crc kubenswrapper[4721]: I0128 18:35:00.883078 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 18:35:00 crc kubenswrapper[4721]: I0128 18:35:00.883091 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:00Z","lastTransitionTime":"2026-01-28T18:35:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 18:35:00 crc kubenswrapper[4721]: I0128 18:35:00.984933 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 18:35:00 crc kubenswrapper[4721]: I0128 18:35:00.984999 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 18:35:00 crc kubenswrapper[4721]: I0128 18:35:00.985008 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 18:35:00 crc kubenswrapper[4721]: I0128 18:35:00.985023 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 18:35:00 crc kubenswrapper[4721]: I0128 18:35:00.985032 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:00Z","lastTransitionTime":"2026-01-28T18:35:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
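
[Editor's note] Every Ready=False condition in this stream traces back to the same check: the container runtime finds no network configuration under /etc/kubernetes/cni/net.d/. A rough approximation of that readiness probe is sketched below; the directory path is taken from the log message itself, while the extension set (.conf/.conflist/.json) is libcni's usual convention and an assumption of this note:

```python
from pathlib import Path

# Path copied from the "no CNI configuration file" message above.
CNI_CONF_DIR = Path("/etc/kubernetes/cni/net.d")

def network_ready() -> bool:
    # Assumed probe: any libcni-style config file makes the network "ready".
    if not CNI_CONF_DIR.is_dir():
        return False
    return any(p.suffix in {".conf", ".conflist", ".json"}
               for p in sorted(CNI_CONF_DIR.iterdir()))

if not network_ready():
    print(f"NetworkReady=false: no CNI configuration file in {CNI_CONF_DIR}")
```

Once the network operator writes a config file into that directory, this probe flips and the NodeNotReady spam stops.
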
Jan 28 18:35:01 crc kubenswrapper[4721]: I0128 18:35:01.087136 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 18:35:01 crc kubenswrapper[4721]: I0128 18:35:01.087188 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 18:35:01 crc kubenswrapper[4721]: I0128 18:35:01.087200 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 18:35:01 crc kubenswrapper[4721]: I0128 18:35:01.087217 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 18:35:01 crc kubenswrapper[4721]: I0128 18:35:01.087227 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:01Z","lastTransitionTime":"2026-01-28T18:35:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 18:35:01 crc kubenswrapper[4721]: I0128 18:35:01.190152 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 18:35:01 crc kubenswrapper[4721]: I0128 18:35:01.190220 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 18:35:01 crc kubenswrapper[4721]: I0128 18:35:01.190234 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 18:35:01 crc kubenswrapper[4721]: I0128 18:35:01.190251 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 18:35:01 crc kubenswrapper[4721]: I0128 18:35:01.190264 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:01Z","lastTransitionTime":"2026-01-28T18:35:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 18:35:01 crc kubenswrapper[4721]: I0128 18:35:01.292939 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 18:35:01 crc kubenswrapper[4721]: I0128 18:35:01.292981 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 18:35:01 crc kubenswrapper[4721]: I0128 18:35:01.292994 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 18:35:01 crc kubenswrapper[4721]: I0128 18:35:01.293016 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 18:35:01 crc kubenswrapper[4721]: I0128 18:35:01.293027 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:01Z","lastTransitionTime":"2026-01-28T18:35:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:35:01 crc kubenswrapper[4721]: I0128 18:35:01.395205 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:01 crc kubenswrapper[4721]: I0128 18:35:01.395240 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:01 crc kubenswrapper[4721]: I0128 18:35:01.395249 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:01 crc kubenswrapper[4721]: I0128 18:35:01.395265 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:01 crc kubenswrapper[4721]: I0128 18:35:01.395286 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:01Z","lastTransitionTime":"2026-01-28T18:35:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:35:01 crc kubenswrapper[4721]: I0128 18:35:01.497718 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:01 crc kubenswrapper[4721]: I0128 18:35:01.497757 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:01 crc kubenswrapper[4721]: I0128 18:35:01.497768 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:01 crc kubenswrapper[4721]: I0128 18:35:01.497782 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:01 crc kubenswrapper[4721]: I0128 18:35:01.497791 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:01Z","lastTransitionTime":"2026-01-28T18:35:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:35:01 crc kubenswrapper[4721]: I0128 18:35:01.504978 4721 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-11 14:48:54.644443192 +0000 UTC Jan 28 18:35:01 crc kubenswrapper[4721]: I0128 18:35:01.600334 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:01 crc kubenswrapper[4721]: I0128 18:35:01.600376 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:01 crc kubenswrapper[4721]: I0128 18:35:01.600386 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:01 crc kubenswrapper[4721]: I0128 18:35:01.600401 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:01 crc kubenswrapper[4721]: I0128 18:35:01.600413 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:01Z","lastTransitionTime":"2026-01-28T18:35:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:35:01 crc kubenswrapper[4721]: I0128 18:35:01.702624 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:01 crc kubenswrapper[4721]: I0128 18:35:01.702669 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:01 crc kubenswrapper[4721]: I0128 18:35:01.702680 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:01 crc kubenswrapper[4721]: I0128 18:35:01.702696 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:01 crc kubenswrapper[4721]: I0128 18:35:01.702708 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:01Z","lastTransitionTime":"2026-01-28T18:35:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:35:01 crc kubenswrapper[4721]: I0128 18:35:01.805819 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:01 crc kubenswrapper[4721]: I0128 18:35:01.805867 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:01 crc kubenswrapper[4721]: I0128 18:35:01.805879 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:01 crc kubenswrapper[4721]: I0128 18:35:01.805899 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:01 crc kubenswrapper[4721]: I0128 18:35:01.805910 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:01Z","lastTransitionTime":"2026-01-28T18:35:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:35:01 crc kubenswrapper[4721]: I0128 18:35:01.908610 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:01 crc kubenswrapper[4721]: I0128 18:35:01.908649 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:01 crc kubenswrapper[4721]: I0128 18:35:01.908657 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:01 crc kubenswrapper[4721]: I0128 18:35:01.908671 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:01 crc kubenswrapper[4721]: I0128 18:35:01.908681 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:01Z","lastTransitionTime":"2026-01-28T18:35:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:35:02 crc kubenswrapper[4721]: I0128 18:35:02.010634 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:02 crc kubenswrapper[4721]: I0128 18:35:02.010673 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:02 crc kubenswrapper[4721]: I0128 18:35:02.010682 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:02 crc kubenswrapper[4721]: I0128 18:35:02.010697 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:02 crc kubenswrapper[4721]: I0128 18:35:02.010709 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:02Z","lastTransitionTime":"2026-01-28T18:35:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:35:02 crc kubenswrapper[4721]: I0128 18:35:02.112970 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:02 crc kubenswrapper[4721]: I0128 18:35:02.113014 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:02 crc kubenswrapper[4721]: I0128 18:35:02.113026 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:02 crc kubenswrapper[4721]: I0128 18:35:02.113043 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:02 crc kubenswrapper[4721]: I0128 18:35:02.113052 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:02Z","lastTransitionTime":"2026-01-28T18:35:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:35:02 crc kubenswrapper[4721]: I0128 18:35:02.215603 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:02 crc kubenswrapper[4721]: I0128 18:35:02.215639 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:02 crc kubenswrapper[4721]: I0128 18:35:02.215647 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:02 crc kubenswrapper[4721]: I0128 18:35:02.215662 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:02 crc kubenswrapper[4721]: I0128 18:35:02.215671 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:02Z","lastTransitionTime":"2026-01-28T18:35:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:35:02 crc kubenswrapper[4721]: I0128 18:35:02.318003 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:02 crc kubenswrapper[4721]: I0128 18:35:02.318041 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:02 crc kubenswrapper[4721]: I0128 18:35:02.318055 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:02 crc kubenswrapper[4721]: I0128 18:35:02.318073 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:02 crc kubenswrapper[4721]: I0128 18:35:02.318084 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:02Z","lastTransitionTime":"2026-01-28T18:35:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:35:02 crc kubenswrapper[4721]: I0128 18:35:02.420854 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:02 crc kubenswrapper[4721]: I0128 18:35:02.420900 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:02 crc kubenswrapper[4721]: I0128 18:35:02.420912 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:02 crc kubenswrapper[4721]: I0128 18:35:02.420930 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:02 crc kubenswrapper[4721]: I0128 18:35:02.420943 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:02Z","lastTransitionTime":"2026-01-28T18:35:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:35:02 crc kubenswrapper[4721]: I0128 18:35:02.505900 4721 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-29 17:33:47.606593056 +0000 UTC Jan 28 18:35:02 crc kubenswrapper[4721]: I0128 18:35:02.523812 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:02 crc kubenswrapper[4721]: I0128 18:35:02.523856 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:02 crc kubenswrapper[4721]: I0128 18:35:02.523865 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:02 crc kubenswrapper[4721]: I0128 18:35:02.523880 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:02 crc kubenswrapper[4721]: I0128 18:35:02.523891 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:02Z","lastTransitionTime":"2026-01-28T18:35:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:35:02 crc kubenswrapper[4721]: I0128 18:35:02.527998 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jqvck" Jan 28 18:35:02 crc kubenswrapper[4721]: I0128 18:35:02.528032 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 18:35:02 crc kubenswrapper[4721]: I0128 18:35:02.528079 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 28 18:35:02 crc kubenswrapper[4721]: E0128 18:35:02.528200 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jqvck" podUID="f3440038-c980-4fb4-be99-235515ec221c"
Jan 28 18:35:02 crc kubenswrapper[4721]: I0128 18:35:02.528255 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 28 18:35:02 crc kubenswrapper[4721]: E0128 18:35:02.528296 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 28 18:35:02 crc kubenswrapper[4721]: E0128 18:35:02.528367 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 28 18:35:02 crc kubenswrapper[4721]: E0128 18:35:02.528453 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 28 18:35:02 crc kubenswrapper[4721]: I0128 18:35:02.627180 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 18:35:02 crc kubenswrapper[4721]: I0128 18:35:02.627214 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 18:35:02 crc kubenswrapper[4721]: I0128 18:35:02.627222 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 18:35:02 crc kubenswrapper[4721]: I0128 18:35:02.627236 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 18:35:02 crc kubenswrapper[4721]: I0128 18:35:02.627246 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:02Z","lastTransitionTime":"2026-01-28T18:35:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
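
[Editor's note] The util.go:30 and pod_workers.go:1301 pairs above show the gate behind the repeated sync failures: while the runtime network is not ready, pods that need pod networking get no new sandbox and their sync is skipped on each retry. Host-network pods are not subject to this gate, which is why only these four pods keep appearing. A simplified sketch; the Pod class and host_network flag are illustrative stand-ins, not kubelet's real types:

```python
# Hypothetical stand-in for the pod data kubelet consults during sync.
class Pod:
    def __init__(self, name: str, host_network: bool = False):
        self.name = name
        self.host_network = host_network

def sync_pod(pod: Pod, network_ready: bool) -> None:
    # Assumed gate: only pods needing pod networking wait on NetworkReady.
    if not network_ready and not pod.host_network:
        raise RuntimeError(f"network is not ready; skipping {pod.name}")
    print(f"starting a new sandbox for {pod.name}")

try:
    sync_pod(Pod("network-check-target-xd92c"), network_ready=False)
except RuntimeError as err:
    print(f"Error syncing pod, skipping: {err}")
```
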
Jan 28 18:35:02 crc kubenswrapper[4721]: I0128 18:35:02.728997 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 18:35:02 crc kubenswrapper[4721]: I0128 18:35:02.729034 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 18:35:02 crc kubenswrapper[4721]: I0128 18:35:02.729044 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 18:35:02 crc kubenswrapper[4721]: I0128 18:35:02.729065 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 18:35:02 crc kubenswrapper[4721]: I0128 18:35:02.729093 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:02Z","lastTransitionTime":"2026-01-28T18:35:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 18:35:02 crc kubenswrapper[4721]: I0128 18:35:02.830826 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 18:35:02 crc kubenswrapper[4721]: I0128 18:35:02.830857 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 18:35:02 crc kubenswrapper[4721]: I0128 18:35:02.830866 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 18:35:02 crc kubenswrapper[4721]: I0128 18:35:02.830880 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 18:35:02 crc kubenswrapper[4721]: I0128 18:35:02.830891 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:02Z","lastTransitionTime":"2026-01-28T18:35:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 18:35:02 crc kubenswrapper[4721]: I0128 18:35:02.933350 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 18:35:02 crc kubenswrapper[4721]: I0128 18:35:02.933376 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 18:35:02 crc kubenswrapper[4721]: I0128 18:35:02.933385 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 18:35:02 crc kubenswrapper[4721]: I0128 18:35:02.933398 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 18:35:02 crc kubenswrapper[4721]: I0128 18:35:02.933407 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:02Z","lastTransitionTime":"2026-01-28T18:35:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:35:03 crc kubenswrapper[4721]: I0128 18:35:03.035547 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:03 crc kubenswrapper[4721]: I0128 18:35:03.035583 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:03 crc kubenswrapper[4721]: I0128 18:35:03.035592 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:03 crc kubenswrapper[4721]: I0128 18:35:03.035609 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:03 crc kubenswrapper[4721]: I0128 18:35:03.035619 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:03Z","lastTransitionTime":"2026-01-28T18:35:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:35:03 crc kubenswrapper[4721]: I0128 18:35:03.138026 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:03 crc kubenswrapper[4721]: I0128 18:35:03.138071 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:03 crc kubenswrapper[4721]: I0128 18:35:03.138081 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:03 crc kubenswrapper[4721]: I0128 18:35:03.138100 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:03 crc kubenswrapper[4721]: I0128 18:35:03.138113 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:03Z","lastTransitionTime":"2026-01-28T18:35:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:35:03 crc kubenswrapper[4721]: I0128 18:35:03.240501 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:03 crc kubenswrapper[4721]: I0128 18:35:03.240541 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:03 crc kubenswrapper[4721]: I0128 18:35:03.240550 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:03 crc kubenswrapper[4721]: I0128 18:35:03.240565 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:03 crc kubenswrapper[4721]: I0128 18:35:03.240576 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:03Z","lastTransitionTime":"2026-01-28T18:35:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:35:03 crc kubenswrapper[4721]: I0128 18:35:03.343336 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:03 crc kubenswrapper[4721]: I0128 18:35:03.343376 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:03 crc kubenswrapper[4721]: I0128 18:35:03.343391 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:03 crc kubenswrapper[4721]: I0128 18:35:03.343408 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:03 crc kubenswrapper[4721]: I0128 18:35:03.343419 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:03Z","lastTransitionTime":"2026-01-28T18:35:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:35:03 crc kubenswrapper[4721]: I0128 18:35:03.446204 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:03 crc kubenswrapper[4721]: I0128 18:35:03.446253 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:03 crc kubenswrapper[4721]: I0128 18:35:03.446265 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:03 crc kubenswrapper[4721]: I0128 18:35:03.446284 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:03 crc kubenswrapper[4721]: I0128 18:35:03.446297 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:03Z","lastTransitionTime":"2026-01-28T18:35:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:35:03 crc kubenswrapper[4721]: I0128 18:35:03.506633 4721 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-16 10:55:09.603087238 +0000 UTC Jan 28 18:35:03 crc kubenswrapper[4721]: I0128 18:35:03.548815 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:03 crc kubenswrapper[4721]: I0128 18:35:03.548860 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:03 crc kubenswrapper[4721]: I0128 18:35:03.548870 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:03 crc kubenswrapper[4721]: I0128 18:35:03.548888 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:03 crc kubenswrapper[4721]: I0128 18:35:03.548901 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:03Z","lastTransitionTime":"2026-01-28T18:35:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:35:03 crc kubenswrapper[4721]: I0128 18:35:03.651577 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:03 crc kubenswrapper[4721]: I0128 18:35:03.651623 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:03 crc kubenswrapper[4721]: I0128 18:35:03.651633 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:03 crc kubenswrapper[4721]: I0128 18:35:03.651648 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:03 crc kubenswrapper[4721]: I0128 18:35:03.651658 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:03Z","lastTransitionTime":"2026-01-28T18:35:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:35:03 crc kubenswrapper[4721]: I0128 18:35:03.753679 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:03 crc kubenswrapper[4721]: I0128 18:35:03.753719 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:03 crc kubenswrapper[4721]: I0128 18:35:03.753730 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:03 crc kubenswrapper[4721]: I0128 18:35:03.753748 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:03 crc kubenswrapper[4721]: I0128 18:35:03.753760 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:03Z","lastTransitionTime":"2026-01-28T18:35:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:35:03 crc kubenswrapper[4721]: I0128 18:35:03.856267 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:03 crc kubenswrapper[4721]: I0128 18:35:03.856323 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:03 crc kubenswrapper[4721]: I0128 18:35:03.856333 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:03 crc kubenswrapper[4721]: I0128 18:35:03.856349 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:03 crc kubenswrapper[4721]: I0128 18:35:03.856360 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:03Z","lastTransitionTime":"2026-01-28T18:35:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:35:03 crc kubenswrapper[4721]: I0128 18:35:03.958425 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:03 crc kubenswrapper[4721]: I0128 18:35:03.958455 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:03 crc kubenswrapper[4721]: I0128 18:35:03.958468 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:03 crc kubenswrapper[4721]: I0128 18:35:03.958483 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:03 crc kubenswrapper[4721]: I0128 18:35:03.958494 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:03Z","lastTransitionTime":"2026-01-28T18:35:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:35:04 crc kubenswrapper[4721]: I0128 18:35:04.060281 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:04 crc kubenswrapper[4721]: I0128 18:35:04.060327 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:04 crc kubenswrapper[4721]: I0128 18:35:04.060356 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:04 crc kubenswrapper[4721]: I0128 18:35:04.060381 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:04 crc kubenswrapper[4721]: I0128 18:35:04.060395 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:04Z","lastTransitionTime":"2026-01-28T18:35:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:35:04 crc kubenswrapper[4721]: I0128 18:35:04.163451 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:04 crc kubenswrapper[4721]: I0128 18:35:04.163515 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:04 crc kubenswrapper[4721]: I0128 18:35:04.163527 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:04 crc kubenswrapper[4721]: I0128 18:35:04.163543 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:04 crc kubenswrapper[4721]: I0128 18:35:04.163558 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:04Z","lastTransitionTime":"2026-01-28T18:35:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:35:04 crc kubenswrapper[4721]: I0128 18:35:04.267013 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:04 crc kubenswrapper[4721]: I0128 18:35:04.267054 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:04 crc kubenswrapper[4721]: I0128 18:35:04.267066 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:04 crc kubenswrapper[4721]: I0128 18:35:04.267083 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:04 crc kubenswrapper[4721]: I0128 18:35:04.267095 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:04Z","lastTransitionTime":"2026-01-28T18:35:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:35:04 crc kubenswrapper[4721]: I0128 18:35:04.369629 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:04 crc kubenswrapper[4721]: I0128 18:35:04.369709 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:04 crc kubenswrapper[4721]: I0128 18:35:04.369722 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:04 crc kubenswrapper[4721]: I0128 18:35:04.369741 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:04 crc kubenswrapper[4721]: I0128 18:35:04.369754 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:04Z","lastTransitionTime":"2026-01-28T18:35:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:35:04 crc kubenswrapper[4721]: I0128 18:35:04.473807 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:04 crc kubenswrapper[4721]: I0128 18:35:04.473875 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:04 crc kubenswrapper[4721]: I0128 18:35:04.473886 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:04 crc kubenswrapper[4721]: I0128 18:35:04.473910 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:04 crc kubenswrapper[4721]: I0128 18:35:04.473927 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:04Z","lastTransitionTime":"2026-01-28T18:35:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:35:04 crc kubenswrapper[4721]: I0128 18:35:04.507232 4721 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-17 08:13:21.308224241 +0000 UTC Jan 28 18:35:04 crc kubenswrapper[4721]: I0128 18:35:04.527656 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 18:35:04 crc kubenswrapper[4721]: I0128 18:35:04.527688 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jqvck" Jan 28 18:35:04 crc kubenswrapper[4721]: I0128 18:35:04.527731 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 18:35:04 crc kubenswrapper[4721]: I0128 18:35:04.527703 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 18:35:04 crc kubenswrapper[4721]: E0128 18:35:04.527854 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 18:35:04 crc kubenswrapper[4721]: E0128 18:35:04.528089 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jqvck" podUID="f3440038-c980-4fb4-be99-235515ec221c" Jan 28 18:35:04 crc kubenswrapper[4721]: E0128 18:35:04.528196 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 18:35:04 crc kubenswrapper[4721]: E0128 18:35:04.528274 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 18:35:04 crc kubenswrapper[4721]: I0128 18:35:04.576328 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:04 crc kubenswrapper[4721]: I0128 18:35:04.576369 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:04 crc kubenswrapper[4721]: I0128 18:35:04.576379 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:04 crc kubenswrapper[4721]: I0128 18:35:04.576393 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:04 crc kubenswrapper[4721]: I0128 18:35:04.576403 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:04Z","lastTransitionTime":"2026-01-28T18:35:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:35:04 crc kubenswrapper[4721]: I0128 18:35:04.681021 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:04 crc kubenswrapper[4721]: I0128 18:35:04.681055 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:04 crc kubenswrapper[4721]: I0128 18:35:04.681068 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:04 crc kubenswrapper[4721]: I0128 18:35:04.681086 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:04 crc kubenswrapper[4721]: I0128 18:35:04.681099 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:04Z","lastTransitionTime":"2026-01-28T18:35:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:35:04 crc kubenswrapper[4721]: I0128 18:35:04.783510 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:04 crc kubenswrapper[4721]: I0128 18:35:04.783559 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:04 crc kubenswrapper[4721]: I0128 18:35:04.783571 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:04 crc kubenswrapper[4721]: I0128 18:35:04.783590 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:04 crc kubenswrapper[4721]: I0128 18:35:04.783602 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:04Z","lastTransitionTime":"2026-01-28T18:35:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:35:04 crc kubenswrapper[4721]: I0128 18:35:04.886463 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:04 crc kubenswrapper[4721]: I0128 18:35:04.886504 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:04 crc kubenswrapper[4721]: I0128 18:35:04.886515 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:04 crc kubenswrapper[4721]: I0128 18:35:04.886529 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:04 crc kubenswrapper[4721]: I0128 18:35:04.886539 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:04Z","lastTransitionTime":"2026-01-28T18:35:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:35:04 crc kubenswrapper[4721]: I0128 18:35:04.988884 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:04 crc kubenswrapper[4721]: I0128 18:35:04.988940 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:04 crc kubenswrapper[4721]: I0128 18:35:04.988952 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:04 crc kubenswrapper[4721]: I0128 18:35:04.988973 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:04 crc kubenswrapper[4721]: I0128 18:35:04.988987 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:04Z","lastTransitionTime":"2026-01-28T18:35:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:35:05 crc kubenswrapper[4721]: I0128 18:35:05.091379 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:05 crc kubenswrapper[4721]: I0128 18:35:05.091428 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:05 crc kubenswrapper[4721]: I0128 18:35:05.091438 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:05 crc kubenswrapper[4721]: I0128 18:35:05.091453 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:05 crc kubenswrapper[4721]: I0128 18:35:05.091463 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:05Z","lastTransitionTime":"2026-01-28T18:35:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:35:05 crc kubenswrapper[4721]: I0128 18:35:05.193539 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:05 crc kubenswrapper[4721]: I0128 18:35:05.193571 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:05 crc kubenswrapper[4721]: I0128 18:35:05.193581 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:05 crc kubenswrapper[4721]: I0128 18:35:05.193596 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:05 crc kubenswrapper[4721]: I0128 18:35:05.193605 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:05Z","lastTransitionTime":"2026-01-28T18:35:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:35:05 crc kubenswrapper[4721]: I0128 18:35:05.295830 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:05 crc kubenswrapper[4721]: I0128 18:35:05.295874 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:05 crc kubenswrapper[4721]: I0128 18:35:05.295886 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:05 crc kubenswrapper[4721]: I0128 18:35:05.295922 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:05 crc kubenswrapper[4721]: I0128 18:35:05.295935 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:05Z","lastTransitionTime":"2026-01-28T18:35:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:35:05 crc kubenswrapper[4721]: I0128 18:35:05.398245 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:05 crc kubenswrapper[4721]: I0128 18:35:05.398297 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:05 crc kubenswrapper[4721]: I0128 18:35:05.398962 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:05 crc kubenswrapper[4721]: I0128 18:35:05.398981 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:05 crc kubenswrapper[4721]: I0128 18:35:05.398991 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:05Z","lastTransitionTime":"2026-01-28T18:35:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:35:05 crc kubenswrapper[4721]: I0128 18:35:05.501629 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:05 crc kubenswrapper[4721]: I0128 18:35:05.501692 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:05 crc kubenswrapper[4721]: I0128 18:35:05.501703 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:05 crc kubenswrapper[4721]: I0128 18:35:05.501721 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:05 crc kubenswrapper[4721]: I0128 18:35:05.501776 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:05Z","lastTransitionTime":"2026-01-28T18:35:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:35:05 crc kubenswrapper[4721]: I0128 18:35:05.508062 4721 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-14 10:35:14.820889866 +0000 UTC Jan 28 18:35:05 crc kubenswrapper[4721]: I0128 18:35:05.540029 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-rk2l2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bdd30376-1599-4efc-bb55-7585e8702b60\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa0ff1b9df47dfd65597208cdd31f5cb8ce7f4c9c170d298db28d6f04da7b7a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wknj5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:34:32Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-rk2l2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:35:05Z is after 2025-08-24T17:21:41Z" Jan 28 18:35:05 crc kubenswrapper[4721]: I0128 18:35:05.551441 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-x8hw8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8ac7e75e-c5bb-4b57-b2ba-9ebe8b8fbd88\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cba8dd293cf5ae7b0c987c6ee3b24da02d2687ee54292da92e28ca627ed3eaca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqp9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a3c1211b73ca96ac22854f0cb677a0088a679ad56b104ea6b8e0871884a3a71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqp9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:34:43Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-x8hw8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:35:05Z is after 2025-08-24T17:21:41Z" Jan 28 
18:35:05 crc kubenswrapper[4721]: I0128 18:35:05.562100 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-jqvck" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f3440038-c980-4fb4-be99-235515ec221c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96np9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96np9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:34:44Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-jqvck\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:35:05Z is after 2025-08-24T17:21:41Z" Jan 28 18:35:05 crc kubenswrapper[4721]: I0128 18:35:05.583892 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"50459cb7-bb46-4f2f-a119-03a6102ad146\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:33:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97a290731225e8b998c044c9547db36b5af73889929db5fbc37b6b4170f5dbb7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9efdfabfba8f11d2bdeafad77520260137fcfff3ca51ae77afd4ee9b141f602\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3c1b4d37e11f8a4ee9539e5ea3cce77b7d382133b9e59ccd9c3e8aa2b308754\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52a1199a0611555765afe8fcf01774278848806
4c958d0c75c46d51ddc2ff9d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://316506ed1e9f91314d427e1b6c3a44ee58d0c4a3a8a93b97bdf098703a6a2f34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f1a6a669a0341baa2d7fb17bcebfe702f96d1192ca35c3f2d00e463778271d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f1a6a669a0341baa2d7fb17bcebfe702f96d1192ca35c3f2d00e463778271d6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:33:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d3a248e15466c89ce3b237a22ce54365004ea86dc1541e1ac44e028f91bf19f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d3a248e15466c89ce3b237a22ce54365004ea86dc1541e1ac44e028f91bf19f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:34:02Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f2e1f3a63445bad45029602f52863bfdf2fab495711e3318d68fee042530acb1\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f2e1f3a63445bad45029602f52863bfdf2fab495711e3318d68fee042530acb1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:34:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:33:56Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:35:05Z is after 2025-08-24T17:21:41Z" Jan 28 18:35:05 crc kubenswrapper[4721]: I0128 18:35:05.601947 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"94f18835-9a2d-4427-bc71-e4cd48b94c19\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:33:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a345c3bb49064600db61f16588229b6877293710e6c21c15d91746c2f1d50b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://93950542dfa7bda5686ef6d8ff927d14
baafd149893c732658e9aa3916be64ae\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://486a6284a1e3930aac13368062b8796b4c271214dd08cb60c7ae2ce7b4a45a05\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0339ccd2bd809dc715d0e334b596c5abdebca1af6ac5f7afefa81aa8baa470ed\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ca3243d75b5ce4aff178d82c45c7d1f765013233b5736600151adb4801aa462b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T18:34:21Z\\\",\\\"message\\\":\\\"le observer\\\\nW0128 18:34:20.724978 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0128 18:34:20.725313 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 18:34:20.726636 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2540251349/tls.crt::/tmp/serving-cert-2540251349/tls.key\\\\\\\"\\\\nI0128 18:34:21.177830 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0128 18:34:21.196258 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0128 18:34:21.196291 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0128 18:34:21.196317 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0128 18:34:21.196323 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0128 18:34:21.230729 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0128 18:34:21.230760 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0128 18:34:21.230771 1 
secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 18:34:21.230781 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 18:34:21.230789 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0128 18:34:21.230795 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0128 18:34:21.230800 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0128 18:34:21.230804 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0128 18:34:21.233981 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T18:34:20Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://90f1279d8262e042527207dbe56a1d08ba554650510bebfc53bd24cd84532026\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:06Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1499c7b625ec721799e4cf54e3de39ab75a752bbb0450a8eeffcdb6259f8a585\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1499c7b625ec721799e4cf54e3de39ab75a752bbb0450a8eeffcdb6259f8a585\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:33:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:33:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:35:05Z is after 2025-08-24T17:21:41Z" Jan 28 18:35:05 crc kubenswrapper[4721]: I0128 18:35:05.603894 4721 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:05 crc kubenswrapper[4721]: I0128 18:35:05.603955 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:05 crc kubenswrapper[4721]: I0128 18:35:05.603968 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:05 crc kubenswrapper[4721]: I0128 18:35:05.604006 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:05 crc kubenswrapper[4721]: I0128 18:35:05.604020 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:05Z","lastTransitionTime":"2026-01-28T18:35:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:35:05 crc kubenswrapper[4721]: I0128 18:35:05.617494 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"86fc797f-6769-4801-8cb0-b9f25c9ec29b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:33:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:33:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e8c8cd370cc146f15325497afef171437a3c7c3d4e82cda99a321bf847399bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f712f75bf0603b358c696f1a62f2363254ab926615ab377291c7083308e3f2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"s
tate\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:33:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6143e461a697375ca79df96494f8cf4a575e622428db241f50a52ae1c67acbc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e37a3261383445df2e4050fb0b3f92c0afc6379c71eb061475783a72cd37459f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:33:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:35:05Z is after 2025-08-24T17:21:41Z" Jan 28 18:35:05 crc kubenswrapper[4721]: I0128 18:35:05.631792 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:35:05Z is after 2025-08-24T17:21:41Z" Jan 28 18:35:05 crc kubenswrapper[4721]: I0128 18:35:05.644832 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56147b9675ae31e5e8b4473346c69d90a1013ab084214a7fc34295f810b229a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5dc3277d9793a59f2e2a2b4d015782f383df20b594be435f894d24dbe5b837cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:35:05Z is after 2025-08-24T17:21:41Z" Jan 28 18:35:05 crc kubenswrapper[4721]: I0128 18:35:05.664392 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wr282" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"70686e42-b434-4ff9-9753-cfc870beef82\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ac8a8ea8e677c233fbcb1ac69446f0251b2e390e84a39cd77c7ccf1a2041908\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7lkbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://989948e16979a2cd8568ed93334056da9feb5bd7226f4124428dcffc09f13d41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7lkbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2cc9ef8160c40e72ae736136ef051b06552caea2539fad7969c5f923eba0c2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7lkbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7df50ebf5999c8b004f27f801fc166377523c7b42171fec957ab82c88b529794\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7lkbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11610a0739792fa0417c894ad5b1c0d46d7188b585f8f4c1a1e7e66cd6d8bd46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7lkbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://44279a74a34d26b6eee05ac26d51a21492dddab4288ca5e09665c191cbacd90c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7lkbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://932e160b9acb81fd545498d2b471f3ae2cec8716bfa875350287f72b78516dd6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://932e160b9acb81fd545498d2b471f3ae2cec8716bfa875350287f72b78516dd6\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T18:34:55Z\\\",\\\"message\\\":\\\"n.org/owner:openshift-dns/dns-default]} name:Service_openshift-dns/dns-default_TCP_node_router+switch_crc options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.10:53: 10.217.4.10:9154:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {be9dcc9e-c16a-4962-a6d2-4adeb0b929c4}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-dns/dns-default]} name:Service_openshift-dns/dns-default_UDP_node_router+switch_crc options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[udp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.10:53:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {4c1be812-05d3-4f45-91b5-a853a5c8de71}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0128 18:34:55.473455 6405 obj_retry.go:386] Retry successful for *v1.Pod openshift-multus/multus-additional-cni-plugins-7vsph after 0 failed attempt(s)\\\\nI0128 18:34:55.474632 6405 default_network_controller.go:776] Recording success event on pod openshift-multus/m\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T18:34:54Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-wr282_openshift-ovn-kubernetes(70686e42-b434-4ff9-9753-cfc870beef82)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7lkbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://373ab38644697d3dbfe782ed66982ac1d5510d0a3bc39072c8113f71cc9e97f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7lkbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://733c81890010402d252a1945284a85a3278b603ede433530865f680a1be02477\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://733c81890010402d252a1945284a85a3278b603ede433530865f680a1be02477\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7lkbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:34:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-wr282\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:35:05Z is after 2025-08-24T17:21:41Z" Jan 28 18:35:05 crc kubenswrapper[4721]: I0128 18:35:05.679447 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rgqdt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c0a22020-3f34-4895-beec-2ed5d829ea79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d26691be7d95ffd613cc84f59222c67d29ab459d1c42ca07d20fe928d7fbc4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-
cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l86pm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:34:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rgqdt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:35:05Z is after 2025-08-24T17:21:41Z" Jan 28 18:35:05 crc kubenswrapper[4721]: I0128 18:35:05.695401 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef61ab09457440960cf27586f65751f928ccc999db19fd16364262ce2449cd97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:35:05Z is after 2025-08-24T17:21:41Z" Jan 28 18:35:05 crc kubenswrapper[4721]: I0128 18:35:05.706296 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:05 crc kubenswrapper[4721]: I0128 18:35:05.706346 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:05 crc kubenswrapper[4721]: I0128 18:35:05.706360 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:05 crc kubenswrapper[4721]: I0128 18:35:05.706379 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:05 crc kubenswrapper[4721]: I0128 18:35:05.706392 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:05Z","lastTransitionTime":"2026-01-28T18:35:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:35:05 crc kubenswrapper[4721]: I0128 18:35:05.707825 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-lf92l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"20d04cbd-fcf1-4d48-9cca-1dd29b13c938\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://709d01fbeba51f6f21b60d9fd60aa9b44ba9e4f2504c19b7fd8d279f8c13c1e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mvqr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:34:30Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-lf92l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:35:05Z is after 2025-08-24T17:21:41Z" Jan 28 18:35:05 crc kubenswrapper[4721]: I0128 18:35:05.724252 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7vsph" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"942c0bcf-8f75-42e8-a5c0-af4c640eb13c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ce38a3c3aa72d0816ff32ec77af100dc6f2a8761362992f01b88f836c44f65f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s84mk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5222e9a5ac55f20acbe1105e84f22c96889f410246b7ef423236f5d3e216f43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d5222e9a5ac55f20acbe1105e84f22c96889f410246b7ef423236f5d3e216f43\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s84mk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e702f148f20abb9e5e9453eb1b9ae0cc094138beb1fa048753ff3b497517e942\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e702f148f20abb9e5e9453eb1b9ae0cc094138beb1fa048753ff3b497517e942\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s84mk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a55ef87212072ac9f6d05d27d54f7a3f13add617b1a07a2fd1e0a652fe15cb3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a55ef87212072ac9f6d05d27d54f7a3f13add617b1a07a2fd1e0a652fe15cb3a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:34:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s84mk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea7d40aa704ad5ddaaa471ab8606dcece38b229fd8affcce9bb1959e5c19f092\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ea7d40aa704ad5ddaaa471ab8606dcece38b229fd8affcce9bb1959e5c19f092\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:34:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s84mk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1829c2eed51af896c5ccfa25c1f23ea971ea9d54e9db53415095c5ea43ac4062\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1829c2eed51af896c5ccfa25c1f23ea971ea9d54e9db53415095c5ea43ac4062\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:34:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s84mk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b576c2dcc0ad83742dbc75f744d08eb8b881f9be4f9d16f4d4d3004ce174f24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b576c2dcc0ad83742dbc75f744d08eb8b881f9be4f9d16f4d4d3004ce174f24\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:34:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s84mk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:34:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7vsph\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:35:05Z is after 2025-08-24T17:21:41Z" Jan 28 18:35:05 crc kubenswrapper[4721]: I0128 18:35:05.736457 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e670fafb-703c-4cc9-b670-d25ae62d87a0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:33:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f97092be25e90c4a15af043397b1cbcefdb3a3511a80a046496bef807abc8f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8351f80de5ab5d11c5a87270e69a8ebd20b3a804671e20b991f1fc77ba27bae8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8465df12048ab0feaba16e1935fa17feb4fe967ab3e4ef37981bed51ff77911\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47389981052e7ab8530932e01c95d9b178f2c55b21e03b460d3f02dddbcf4830\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://47389981052e7ab8530932e01c95d9b178f2c55b21e03b460d3f02dddbcf4830\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:33:58Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:33:56Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:35:05Z is after 2025-08-24T17:21:41Z" Jan 28 18:35:05 crc kubenswrapper[4721]: I0128 18:35:05.751635 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:24Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:35:05Z is after 2025-08-24T17:21:41Z" Jan 28 18:35:05 crc kubenswrapper[4721]: I0128 18:35:05.764657 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:24Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:35:05Z is after 2025-08-24T17:21:41Z" Jan 28 18:35:05 crc kubenswrapper[4721]: I0128 18:35:05.775974 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa3af2d00ecbcf4d30c4c81191306dd45625f55032058b314ce2b91f6b2033e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:35:05Z is after 2025-08-24T17:21:41Z" Jan 28 18:35:05 crc kubenswrapper[4721]: I0128 18:35:05.789550 4721 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-76rx2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e3427a4-9a03-4a08-bf7f-7a5e96290ad6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9843dea4333a77dd5b0005984b9f8e7c7c993b28f89f5d6432477bfde3383339\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4cj5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bbd26672afb8ed608228f3f2101f430b1eaa5ccea0e8074fad597dec347c4522\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4cj5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:34:30Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-76rx2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:35:05Z is after 2025-08-24T17:21:41Z" Jan 28 
18:35:05 crc kubenswrapper[4721]: I0128 18:35:05.808200 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:05 crc kubenswrapper[4721]: I0128 18:35:05.808247 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:05 crc kubenswrapper[4721]: I0128 18:35:05.808258 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:05 crc kubenswrapper[4721]: I0128 18:35:05.808274 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:05 crc kubenswrapper[4721]: I0128 18:35:05.808284 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:05Z","lastTransitionTime":"2026-01-28T18:35:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:35:05 crc kubenswrapper[4721]: I0128 18:35:05.911729 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:05 crc kubenswrapper[4721]: I0128 18:35:05.911769 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:05 crc kubenswrapper[4721]: I0128 18:35:05.911780 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:05 crc kubenswrapper[4721]: I0128 18:35:05.911796 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:05 crc kubenswrapper[4721]: I0128 18:35:05.911807 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:05Z","lastTransitionTime":"2026-01-28T18:35:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:35:06 crc kubenswrapper[4721]: I0128 18:35:06.013948 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:06 crc kubenswrapper[4721]: I0128 18:35:06.013996 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:06 crc kubenswrapper[4721]: I0128 18:35:06.014007 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:06 crc kubenswrapper[4721]: I0128 18:35:06.014024 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:06 crc kubenswrapper[4721]: I0128 18:35:06.014038 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:06Z","lastTransitionTime":"2026-01-28T18:35:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:35:06 crc kubenswrapper[4721]: I0128 18:35:06.116320 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:06 crc kubenswrapper[4721]: I0128 18:35:06.116362 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:06 crc kubenswrapper[4721]: I0128 18:35:06.116371 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:06 crc kubenswrapper[4721]: I0128 18:35:06.116387 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:06 crc kubenswrapper[4721]: I0128 18:35:06.116400 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:06Z","lastTransitionTime":"2026-01-28T18:35:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:35:06 crc kubenswrapper[4721]: I0128 18:35:06.218498 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:06 crc kubenswrapper[4721]: I0128 18:35:06.218557 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:06 crc kubenswrapper[4721]: I0128 18:35:06.218569 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:06 crc kubenswrapper[4721]: I0128 18:35:06.218588 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:06 crc kubenswrapper[4721]: I0128 18:35:06.218599 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:06Z","lastTransitionTime":"2026-01-28T18:35:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:35:06 crc kubenswrapper[4721]: I0128 18:35:06.321576 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:06 crc kubenswrapper[4721]: I0128 18:35:06.321616 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:06 crc kubenswrapper[4721]: I0128 18:35:06.321626 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:06 crc kubenswrapper[4721]: I0128 18:35:06.321641 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:06 crc kubenswrapper[4721]: I0128 18:35:06.321651 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:06Z","lastTransitionTime":"2026-01-28T18:35:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:35:06 crc kubenswrapper[4721]: I0128 18:35:06.423683 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:06 crc kubenswrapper[4721]: I0128 18:35:06.423727 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:06 crc kubenswrapper[4721]: I0128 18:35:06.423736 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:06 crc kubenswrapper[4721]: I0128 18:35:06.423751 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:06 crc kubenswrapper[4721]: I0128 18:35:06.423762 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:06Z","lastTransitionTime":"2026-01-28T18:35:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:35:06 crc kubenswrapper[4721]: I0128 18:35:06.492128 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:06 crc kubenswrapper[4721]: I0128 18:35:06.492163 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:06 crc kubenswrapper[4721]: I0128 18:35:06.492189 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:06 crc kubenswrapper[4721]: I0128 18:35:06.492207 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:06 crc kubenswrapper[4721]: I0128 18:35:06.492217 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:06Z","lastTransitionTime":"2026-01-28T18:35:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:35:06 crc kubenswrapper[4721]: E0128 18:35:06.503611 4721 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:35:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:35:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:35:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:35:06Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:35:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:35:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:35:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:35:06Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7e2b6ea9-0dbd-4f62-9b1e-c7fac2eb6b3f\\\",\\\"systemUUID\\\":\\\"09e691cb-0cac-419d-a3e2-104cada8c62f\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:35:06Z is after 2025-08-24T17:21:41Z" Jan 28 18:35:06 crc kubenswrapper[4721]: I0128 18:35:06.507460 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:06 crc kubenswrapper[4721]: I0128 18:35:06.507506 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 28 18:35:06 crc kubenswrapper[4721]: I0128 18:35:06.507519 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:06 crc kubenswrapper[4721]: I0128 18:35:06.507540 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:06 crc kubenswrapper[4721]: I0128 18:35:06.507553 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:06Z","lastTransitionTime":"2026-01-28T18:35:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:35:06 crc kubenswrapper[4721]: I0128 18:35:06.508514 4721 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-24 09:05:38.83556082 +0000 UTC Jan 28 18:35:06 crc kubenswrapper[4721]: E0128 18:35:06.521314 4721 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:35:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:35:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:35:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:35:06Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:35:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:35:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:35:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:35:06Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7e2b6ea9-0dbd-4f62-9b1e-c7fac2eb6b3f\\\",\\\"systemUUID\\\":\\\"09e691cb-0cac-419d-a3e2-104cada8c62f\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:35:06Z is after 2025-08-24T17:21:41Z" Jan 28 18:35:06 crc kubenswrapper[4721]: I0128 18:35:06.524376 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:06 crc kubenswrapper[4721]: I0128 18:35:06.524404 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 28 18:35:06 crc kubenswrapper[4721]: I0128 18:35:06.524412 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:06 crc kubenswrapper[4721]: I0128 18:35:06.524426 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:06 crc kubenswrapper[4721]: I0128 18:35:06.524435 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:06Z","lastTransitionTime":"2026-01-28T18:35:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:35:06 crc kubenswrapper[4721]: I0128 18:35:06.527781 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 18:35:06 crc kubenswrapper[4721]: I0128 18:35:06.527808 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jqvck" Jan 28 18:35:06 crc kubenswrapper[4721]: I0128 18:35:06.527781 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 18:35:06 crc kubenswrapper[4721]: I0128 18:35:06.527866 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 18:35:06 crc kubenswrapper[4721]: E0128 18:35:06.527960 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 18:35:06 crc kubenswrapper[4721]: E0128 18:35:06.528028 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 18:35:06 crc kubenswrapper[4721]: E0128 18:35:06.528070 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jqvck" podUID="f3440038-c980-4fb4-be99-235515ec221c" Jan 28 18:35:06 crc kubenswrapper[4721]: E0128 18:35:06.528127 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 18:35:06 crc kubenswrapper[4721]: E0128 18:35:06.535432 4721 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:35:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:35:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:35:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:35:06Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:35:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:35:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:35:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:35:06Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7e2b6ea9-0dbd-4f62-9b1e-c7fac2eb6b3f\\\",\\\"systemUUID\\\":\\\"09e691cb-0cac-419d-a3e2-104cada8c62f\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:35:06Z is after 2025-08-24T17:21:41Z" Jan 28 18:35:06 crc kubenswrapper[4721]: I0128 18:35:06.538603 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:06 crc kubenswrapper[4721]: I0128 18:35:06.538626 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 28 18:35:06 crc kubenswrapper[4721]: I0128 18:35:06.538635 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:06 crc kubenswrapper[4721]: I0128 18:35:06.538648 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:06 crc kubenswrapper[4721]: I0128 18:35:06.538659 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:06Z","lastTransitionTime":"2026-01-28T18:35:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:35:06 crc kubenswrapper[4721]: E0128 18:35:06.551976 4721 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:35:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:35:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:35:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:35:06Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:35:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:35:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:35:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:35:06Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7e2b6ea9-0dbd-4f62-9b1e-c7fac2eb6b3f\\\",\\\"systemUUID\\\":\\\"09e691cb-0cac-419d-a3e2-104cada8c62f\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:35:06Z is after 2025-08-24T17:21:41Z" Jan 28 18:35:06 crc kubenswrapper[4721]: I0128 18:35:06.556555 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:06 crc kubenswrapper[4721]: I0128 18:35:06.556624 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 28 18:35:06 crc kubenswrapper[4721]: I0128 18:35:06.556638 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:06 crc kubenswrapper[4721]: I0128 18:35:06.556654 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:06 crc kubenswrapper[4721]: I0128 18:35:06.556663 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:06Z","lastTransitionTime":"2026-01-28T18:35:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:35:06 crc kubenswrapper[4721]: E0128 18:35:06.570157 4721 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:35:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:35:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:35:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:35:06Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:35:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:35:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:35:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:35:06Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7e2b6ea9-0dbd-4f62-9b1e-c7fac2eb6b3f\\\",\\\"systemUUID\\\":\\\"09e691cb-0cac-419d-a3e2-104cada8c62f\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:35:06Z is after 2025-08-24T17:21:41Z" Jan 28 18:35:06 crc kubenswrapper[4721]: E0128 18:35:06.570358 4721 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 28 18:35:06 crc kubenswrapper[4721]: I0128 18:35:06.571978 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 28 18:35:06 crc kubenswrapper[4721]: I0128 18:35:06.572011 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:06 crc kubenswrapper[4721]: I0128 18:35:06.572023 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:06 crc kubenswrapper[4721]: I0128 18:35:06.572044 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:06 crc kubenswrapper[4721]: I0128 18:35:06.572056 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:06Z","lastTransitionTime":"2026-01-28T18:35:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:35:06 crc kubenswrapper[4721]: I0128 18:35:06.674399 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:06 crc kubenswrapper[4721]: I0128 18:35:06.674451 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:06 crc kubenswrapper[4721]: I0128 18:35:06.674462 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:06 crc kubenswrapper[4721]: I0128 18:35:06.674486 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:06 crc kubenswrapper[4721]: I0128 18:35:06.674498 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:06Z","lastTransitionTime":"2026-01-28T18:35:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:35:06 crc kubenswrapper[4721]: I0128 18:35:06.776758 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:06 crc kubenswrapper[4721]: I0128 18:35:06.776798 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:06 crc kubenswrapper[4721]: I0128 18:35:06.776811 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:06 crc kubenswrapper[4721]: I0128 18:35:06.776829 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:06 crc kubenswrapper[4721]: I0128 18:35:06.776871 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:06Z","lastTransitionTime":"2026-01-28T18:35:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Jan 28 18:35:06 crc kubenswrapper[4721]: I0128 18:35:06.879243 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 18:35:06 crc kubenswrapper[4721]: I0128 18:35:06.879286 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 18:35:06 crc kubenswrapper[4721]: I0128 18:35:06.879298 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 18:35:06 crc kubenswrapper[4721]: I0128 18:35:06.879315 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 18:35:06 crc kubenswrapper[4721]: I0128 18:35:06.879326 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:06Z","lastTransitionTime":"2026-01-28T18:35:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 18:35:06 crc kubenswrapper[4721]: I0128 18:35:06.981725 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 18:35:06 crc kubenswrapper[4721]: I0128 18:35:06.981773 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 18:35:06 crc kubenswrapper[4721]: I0128 18:35:06.981781 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 18:35:06 crc kubenswrapper[4721]: I0128 18:35:06.981794 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 18:35:06 crc kubenswrapper[4721]: I0128 18:35:06.981804 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:06Z","lastTransitionTime":"2026-01-28T18:35:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 18:35:07 crc kubenswrapper[4721]: I0128 18:35:07.084446 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 18:35:07 crc kubenswrapper[4721]: I0128 18:35:07.084483 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 18:35:07 crc kubenswrapper[4721]: I0128 18:35:07.084492 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 18:35:07 crc kubenswrapper[4721]: I0128 18:35:07.084508 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 18:35:07 crc kubenswrapper[4721]: I0128 18:35:07.084518 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:07Z","lastTransitionTime":"2026-01-28T18:35:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"} Jan 28 18:35:07 crc kubenswrapper[4721]: I0128 18:35:07.187025 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:07 crc kubenswrapper[4721]: I0128 18:35:07.187063 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:07 crc kubenswrapper[4721]: I0128 18:35:07.187080 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:07 crc kubenswrapper[4721]: I0128 18:35:07.187096 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:07 crc kubenswrapper[4721]: I0128 18:35:07.187105 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:07Z","lastTransitionTime":"2026-01-28T18:35:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:35:07 crc kubenswrapper[4721]: I0128 18:35:07.289796 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:07 crc kubenswrapper[4721]: I0128 18:35:07.289839 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:07 crc kubenswrapper[4721]: I0128 18:35:07.289848 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:07 crc kubenswrapper[4721]: I0128 18:35:07.289863 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:07 crc kubenswrapper[4721]: I0128 18:35:07.289872 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:07Z","lastTransitionTime":"2026-01-28T18:35:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:35:07 crc kubenswrapper[4721]: I0128 18:35:07.392249 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:07 crc kubenswrapper[4721]: I0128 18:35:07.392276 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:07 crc kubenswrapper[4721]: I0128 18:35:07.392285 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:07 crc kubenswrapper[4721]: I0128 18:35:07.392298 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:07 crc kubenswrapper[4721]: I0128 18:35:07.392306 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:07Z","lastTransitionTime":"2026-01-28T18:35:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 28 18:35:07 crc kubenswrapper[4721]: I0128 18:35:07.494799 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 18:35:07 crc kubenswrapper[4721]: I0128 18:35:07.494834 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 18:35:07 crc kubenswrapper[4721]: I0128 18:35:07.494845 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 18:35:07 crc kubenswrapper[4721]: I0128 18:35:07.494862 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 18:35:07 crc kubenswrapper[4721]: I0128 18:35:07.494871 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:07Z","lastTransitionTime":"2026-01-28T18:35:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 18:35:07 crc kubenswrapper[4721]: I0128 18:35:07.509148 4721 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-23 14:43:43.165628352 +0000 UTC
Jan 28 18:35:07 crc kubenswrapper[4721]: I0128 18:35:07.597686 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 18:35:07 crc kubenswrapper[4721]: I0128 18:35:07.597726 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 18:35:07 crc kubenswrapper[4721]: I0128 18:35:07.597736 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 18:35:07 crc kubenswrapper[4721]: I0128 18:35:07.597754 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 18:35:07 crc kubenswrapper[4721]: I0128 18:35:07.597765 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:07Z","lastTransitionTime":"2026-01-28T18:35:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
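
The webhook failure recorded at 18:35:06 and the certificate_manager lines here are two sides of the same check: a certificate is trusted only while the current time falls inside its NotBefore/NotAfter window, and the kubelet picks a (jittered) rotation deadline before NotAfter so a replacement is requested in time; the varying deadlines logged each second show that jitter. A minimal Go sketch of the validity check itself, with the PEM path as a placeholder rather than a file named in this log:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	// Placeholder path; the log does not name the webhook's certificate file.
	data, err := os.ReadFile("/path/to/webhook-serving-cert.pem")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		fmt.Fprintln(os.Stderr, "no PEM block found")
		os.Exit(1)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	now := time.Now()
	// The same comparison a TLS handshake performs before trusting the cert.
	if now.Before(cert.NotBefore) || now.After(cert.NotAfter) {
		fmt.Printf("certificate invalid: current time %s is outside [%s, %s]\n",
			now.UTC().Format(time.RFC3339),
			cert.NotBefore.UTC().Format(time.RFC3339),
			cert.NotAfter.UTC().Format(time.RFC3339))
		return
	}
	fmt.Println("certificate currently valid until", cert.NotAfter.UTC().Format(time.RFC3339))
}
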
Jan 28 18:35:07 crc kubenswrapper[4721]: I0128 18:35:07.699696 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 18:35:07 crc kubenswrapper[4721]: I0128 18:35:07.699735 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 18:35:07 crc kubenswrapper[4721]: I0128 18:35:07.699746 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 18:35:07 crc kubenswrapper[4721]: I0128 18:35:07.699763 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 18:35:07 crc kubenswrapper[4721]: I0128 18:35:07.699772 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:07Z","lastTransitionTime":"2026-01-28T18:35:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 18:35:07 crc kubenswrapper[4721]: I0128 18:35:07.802579 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 18:35:07 crc kubenswrapper[4721]: I0128 18:35:07.802621 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 18:35:07 crc kubenswrapper[4721]: I0128 18:35:07.802632 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 18:35:07 crc kubenswrapper[4721]: I0128 18:35:07.802652 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 18:35:07 crc kubenswrapper[4721]: I0128 18:35:07.802664 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:07Z","lastTransitionTime":"2026-01-28T18:35:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 18:35:07 crc kubenswrapper[4721]: I0128 18:35:07.905340 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 18:35:07 crc kubenswrapper[4721]: I0128 18:35:07.905383 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 18:35:07 crc kubenswrapper[4721]: I0128 18:35:07.905394 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 18:35:07 crc kubenswrapper[4721]: I0128 18:35:07.905409 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 18:35:07 crc kubenswrapper[4721]: I0128 18:35:07.905420 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:07Z","lastTransitionTime":"2026-01-28T18:35:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"} Jan 28 18:35:08 crc kubenswrapper[4721]: I0128 18:35:08.007525 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:08 crc kubenswrapper[4721]: I0128 18:35:08.007571 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:08 crc kubenswrapper[4721]: I0128 18:35:08.007583 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:08 crc kubenswrapper[4721]: I0128 18:35:08.007601 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:08 crc kubenswrapper[4721]: I0128 18:35:08.007634 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:08Z","lastTransitionTime":"2026-01-28T18:35:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:35:08 crc kubenswrapper[4721]: I0128 18:35:08.109696 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:08 crc kubenswrapper[4721]: I0128 18:35:08.109748 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:08 crc kubenswrapper[4721]: I0128 18:35:08.109761 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:08 crc kubenswrapper[4721]: I0128 18:35:08.109781 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:08 crc kubenswrapper[4721]: I0128 18:35:08.109793 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:08Z","lastTransitionTime":"2026-01-28T18:35:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:35:08 crc kubenswrapper[4721]: I0128 18:35:08.213610 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:08 crc kubenswrapper[4721]: I0128 18:35:08.213708 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:08 crc kubenswrapper[4721]: I0128 18:35:08.213723 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:08 crc kubenswrapper[4721]: I0128 18:35:08.213756 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:08 crc kubenswrapper[4721]: I0128 18:35:08.213776 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:08Z","lastTransitionTime":"2026-01-28T18:35:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:35:08 crc kubenswrapper[4721]: I0128 18:35:08.315817 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:08 crc kubenswrapper[4721]: I0128 18:35:08.315871 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:08 crc kubenswrapper[4721]: I0128 18:35:08.315884 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:08 crc kubenswrapper[4721]: I0128 18:35:08.315904 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:08 crc kubenswrapper[4721]: I0128 18:35:08.315920 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:08Z","lastTransitionTime":"2026-01-28T18:35:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:35:08 crc kubenswrapper[4721]: I0128 18:35:08.418261 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:08 crc kubenswrapper[4721]: I0128 18:35:08.418313 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:08 crc kubenswrapper[4721]: I0128 18:35:08.418324 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:08 crc kubenswrapper[4721]: I0128 18:35:08.418339 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:08 crc kubenswrapper[4721]: I0128 18:35:08.418351 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:08Z","lastTransitionTime":"2026-01-28T18:35:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:35:08 crc kubenswrapper[4721]: I0128 18:35:08.510013 4721 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-20 15:59:57.779163394 +0000 UTC Jan 28 18:35:08 crc kubenswrapper[4721]: I0128 18:35:08.520980 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:08 crc kubenswrapper[4721]: I0128 18:35:08.521020 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:08 crc kubenswrapper[4721]: I0128 18:35:08.521033 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:08 crc kubenswrapper[4721]: I0128 18:35:08.521051 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:08 crc kubenswrapper[4721]: I0128 18:35:08.521062 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:08Z","lastTransitionTime":"2026-01-28T18:35:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:35:08 crc kubenswrapper[4721]: I0128 18:35:08.528267 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 18:35:08 crc kubenswrapper[4721]: I0128 18:35:08.528286 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 18:35:08 crc kubenswrapper[4721]: I0128 18:35:08.528294 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jqvck" Jan 28 18:35:08 crc kubenswrapper[4721]: E0128 18:35:08.528374 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 18:35:08 crc kubenswrapper[4721]: I0128 18:35:08.528268 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 18:35:08 crc kubenswrapper[4721]: E0128 18:35:08.528441 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 18:35:08 crc kubenswrapper[4721]: E0128 18:35:08.528511 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 18:35:08 crc kubenswrapper[4721]: E0128 18:35:08.528583 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jqvck" podUID="f3440038-c980-4fb4-be99-235515ec221c" Jan 28 18:35:08 crc kubenswrapper[4721]: I0128 18:35:08.622929 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:08 crc kubenswrapper[4721]: I0128 18:35:08.622968 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:08 crc kubenswrapper[4721]: I0128 18:35:08.622976 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:08 crc kubenswrapper[4721]: I0128 18:35:08.622991 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:08 crc kubenswrapper[4721]: I0128 18:35:08.623000 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:08Z","lastTransitionTime":"2026-01-28T18:35:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:35:08 crc kubenswrapper[4721]: I0128 18:35:08.725686 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:08 crc kubenswrapper[4721]: I0128 18:35:08.725724 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:08 crc kubenswrapper[4721]: I0128 18:35:08.725732 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:08 crc kubenswrapper[4721]: I0128 18:35:08.725745 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:08 crc kubenswrapper[4721]: I0128 18:35:08.725754 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:08Z","lastTransitionTime":"2026-01-28T18:35:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Jan 28 18:35:08 crc kubenswrapper[4721]: I0128 18:35:08.827948 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 18:35:08 crc kubenswrapper[4721]: I0128 18:35:08.827987 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 18:35:08 crc kubenswrapper[4721]: I0128 18:35:08.827996 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 18:35:08 crc kubenswrapper[4721]: I0128 18:35:08.828011 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 18:35:08 crc kubenswrapper[4721]: I0128 18:35:08.828024 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:08Z","lastTransitionTime":"2026-01-28T18:35:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 18:35:08 crc kubenswrapper[4721]: I0128 18:35:08.930451 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 18:35:08 crc kubenswrapper[4721]: I0128 18:35:08.930499 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 18:35:08 crc kubenswrapper[4721]: I0128 18:35:08.930508 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 18:35:08 crc kubenswrapper[4721]: I0128 18:35:08.930522 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 18:35:08 crc kubenswrapper[4721]: I0128 18:35:08.930532 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:08Z","lastTransitionTime":"2026-01-28T18:35:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 18:35:09 crc kubenswrapper[4721]: I0128 18:35:09.032776 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 18:35:09 crc kubenswrapper[4721]: I0128 18:35:09.032830 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 18:35:09 crc kubenswrapper[4721]: I0128 18:35:09.032842 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 18:35:09 crc kubenswrapper[4721]: I0128 18:35:09.032860 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 18:35:09 crc kubenswrapper[4721]: I0128 18:35:09.032872 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:09Z","lastTransitionTime":"2026-01-28T18:35:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"}
Jan 28 18:35:09 crc kubenswrapper[4721]: I0128 18:35:09.134861 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 18:35:09 crc kubenswrapper[4721]: I0128 18:35:09.134897 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 18:35:09 crc kubenswrapper[4721]: I0128 18:35:09.134909 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 18:35:09 crc kubenswrapper[4721]: I0128 18:35:09.134924 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 18:35:09 crc kubenswrapper[4721]: I0128 18:35:09.134933 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:09Z","lastTransitionTime":"2026-01-28T18:35:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 18:35:09 crc kubenswrapper[4721]: I0128 18:35:09.237651 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 18:35:09 crc kubenswrapper[4721]: I0128 18:35:09.237701 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 18:35:09 crc kubenswrapper[4721]: I0128 18:35:09.237713 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 18:35:09 crc kubenswrapper[4721]: I0128 18:35:09.237732 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 18:35:09 crc kubenswrapper[4721]: I0128 18:35:09.237744 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:09Z","lastTransitionTime":"2026-01-28T18:35:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 18:35:09 crc kubenswrapper[4721]: I0128 18:35:09.339926 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 18:35:09 crc kubenswrapper[4721]: I0128 18:35:09.339983 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 18:35:09 crc kubenswrapper[4721]: I0128 18:35:09.339995 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 18:35:09 crc kubenswrapper[4721]: I0128 18:35:09.340013 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 18:35:09 crc kubenswrapper[4721]: I0128 18:35:09.340025 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:09Z","lastTransitionTime":"2026-01-28T18:35:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"}
Jan 28 18:35:09 crc kubenswrapper[4721]: I0128 18:35:09.441907 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 18:35:09 crc kubenswrapper[4721]: I0128 18:35:09.441943 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 18:35:09 crc kubenswrapper[4721]: I0128 18:35:09.441954 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 18:35:09 crc kubenswrapper[4721]: I0128 18:35:09.441970 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 18:35:09 crc kubenswrapper[4721]: I0128 18:35:09.441980 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:09Z","lastTransitionTime":"2026-01-28T18:35:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 18:35:09 crc kubenswrapper[4721]: I0128 18:35:09.510703 4721 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-15 09:25:18.740767815 +0000 UTC
Jan 28 18:35:09 crc kubenswrapper[4721]: I0128 18:35:09.528729 4721 scope.go:117] "RemoveContainer" containerID="932e160b9acb81fd545498d2b471f3ae2cec8716bfa875350287f72b78516dd6"
Jan 28 18:35:09 crc kubenswrapper[4721]: E0128 18:35:09.528945 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-wr282_openshift-ovn-kubernetes(70686e42-b434-4ff9-9753-cfc870beef82)\"" pod="openshift-ovn-kubernetes/ovnkube-node-wr282" podUID="70686e42-b434-4ff9-9753-cfc870beef82"
Jan 28 18:35:09 crc kubenswrapper[4721]: I0128 18:35:09.544360 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 18:35:09 crc kubenswrapper[4721]: I0128 18:35:09.544402 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 18:35:09 crc kubenswrapper[4721]: I0128 18:35:09.544416 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 18:35:09 crc kubenswrapper[4721]: I0128 18:35:09.544476 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 18:35:09 crc kubenswrapper[4721]: I0128 18:35:09.544491 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:09Z","lastTransitionTime":"2026-01-28T18:35:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
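
The "back-off 20s restarting failed container" message above is the kubelet's crash-loop backoff at its second step: the restart delay starts at a base and doubles per failed restart up to a cap (10s base and 5m cap are the upstream kubelet defaults, assumed here rather than stated in this log). A minimal Go sketch of that schedule:

package main

import (
	"fmt"
	"time"
)

func main() {
	// Doubling backoff with a cap, as the kubelet applies between restarts
	// of a crash-looping container. Base and cap are assumed upstream
	// defaults, not values read from this log.
	base, maxDelay := 10*time.Second, 5*time.Minute
	delay := base
	for attempt := 1; attempt <= 8; attempt++ {
		fmt.Printf("restart attempt %d: back-off %s\n", attempt, delay)
		delay *= 2
		if delay > maxDelay {
			delay = maxDelay
		}
	}
}

Running it prints back-off 10s, 20s, 40s, ... capped at 5m0s; the 20s in the log corresponds to ovnkube-controller's second failed restart.
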
Jan 28 18:35:09 crc kubenswrapper[4721]: I0128 18:35:09.646340 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 18:35:09 crc kubenswrapper[4721]: I0128 18:35:09.646386 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 18:35:09 crc kubenswrapper[4721]: I0128 18:35:09.646397 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 18:35:09 crc kubenswrapper[4721]: I0128 18:35:09.646414 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 18:35:09 crc kubenswrapper[4721]: I0128 18:35:09.646425 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:09Z","lastTransitionTime":"2026-01-28T18:35:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 18:35:09 crc kubenswrapper[4721]: I0128 18:35:09.750043 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 18:35:09 crc kubenswrapper[4721]: I0128 18:35:09.750094 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 18:35:09 crc kubenswrapper[4721]: I0128 18:35:09.750105 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 18:35:09 crc kubenswrapper[4721]: I0128 18:35:09.750124 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 18:35:09 crc kubenswrapper[4721]: I0128 18:35:09.750135 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:09Z","lastTransitionTime":"2026-01-28T18:35:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 18:35:09 crc kubenswrapper[4721]: I0128 18:35:09.852979 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 18:35:09 crc kubenswrapper[4721]: I0128 18:35:09.853056 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 18:35:09 crc kubenswrapper[4721]: I0128 18:35:09.853078 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 18:35:09 crc kubenswrapper[4721]: I0128 18:35:09.853106 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 18:35:09 crc kubenswrapper[4721]: I0128 18:35:09.853120 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:09Z","lastTransitionTime":"2026-01-28T18:35:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"} Jan 28 18:35:09 crc kubenswrapper[4721]: I0128 18:35:09.955308 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:09 crc kubenswrapper[4721]: I0128 18:35:09.955346 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:09 crc kubenswrapper[4721]: I0128 18:35:09.955358 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:09 crc kubenswrapper[4721]: I0128 18:35:09.955375 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:09 crc kubenswrapper[4721]: I0128 18:35:09.955386 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:09Z","lastTransitionTime":"2026-01-28T18:35:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:35:10 crc kubenswrapper[4721]: I0128 18:35:10.057667 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:10 crc kubenswrapper[4721]: I0128 18:35:10.057726 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:10 crc kubenswrapper[4721]: I0128 18:35:10.057742 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:10 crc kubenswrapper[4721]: I0128 18:35:10.057759 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:10 crc kubenswrapper[4721]: I0128 18:35:10.057770 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:10Z","lastTransitionTime":"2026-01-28T18:35:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:35:10 crc kubenswrapper[4721]: I0128 18:35:10.160330 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:10 crc kubenswrapper[4721]: I0128 18:35:10.160378 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:10 crc kubenswrapper[4721]: I0128 18:35:10.160388 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:10 crc kubenswrapper[4721]: I0128 18:35:10.160402 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:10 crc kubenswrapper[4721]: I0128 18:35:10.160412 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:10Z","lastTransitionTime":"2026-01-28T18:35:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:35:10 crc kubenswrapper[4721]: I0128 18:35:10.264476 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:10 crc kubenswrapper[4721]: I0128 18:35:10.264537 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:10 crc kubenswrapper[4721]: I0128 18:35:10.264548 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:10 crc kubenswrapper[4721]: I0128 18:35:10.264569 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:10 crc kubenswrapper[4721]: I0128 18:35:10.264585 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:10Z","lastTransitionTime":"2026-01-28T18:35:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:35:10 crc kubenswrapper[4721]: I0128 18:35:10.367379 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:10 crc kubenswrapper[4721]: I0128 18:35:10.367422 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:10 crc kubenswrapper[4721]: I0128 18:35:10.367431 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:10 crc kubenswrapper[4721]: I0128 18:35:10.367448 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:10 crc kubenswrapper[4721]: I0128 18:35:10.367457 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:10Z","lastTransitionTime":"2026-01-28T18:35:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:35:10 crc kubenswrapper[4721]: I0128 18:35:10.470471 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:10 crc kubenswrapper[4721]: I0128 18:35:10.470508 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:10 crc kubenswrapper[4721]: I0128 18:35:10.470517 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:10 crc kubenswrapper[4721]: I0128 18:35:10.470533 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:10 crc kubenswrapper[4721]: I0128 18:35:10.470543 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:10Z","lastTransitionTime":"2026-01-28T18:35:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:35:10 crc kubenswrapper[4721]: I0128 18:35:10.510960 4721 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-29 06:42:00.829967957 +0000 UTC Jan 28 18:35:10 crc kubenswrapper[4721]: I0128 18:35:10.528261 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 18:35:10 crc kubenswrapper[4721]: I0128 18:35:10.528298 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 18:35:10 crc kubenswrapper[4721]: I0128 18:35:10.528275 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jqvck" Jan 28 18:35:10 crc kubenswrapper[4721]: I0128 18:35:10.528259 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 18:35:10 crc kubenswrapper[4721]: E0128 18:35:10.528394 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 18:35:10 crc kubenswrapper[4721]: E0128 18:35:10.528558 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jqvck" podUID="f3440038-c980-4fb4-be99-235515ec221c" Jan 28 18:35:10 crc kubenswrapper[4721]: E0128 18:35:10.528596 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 18:35:10 crc kubenswrapper[4721]: E0128 18:35:10.528631 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 18:35:10 crc kubenswrapper[4721]: I0128 18:35:10.572479 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:10 crc kubenswrapper[4721]: I0128 18:35:10.572510 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:10 crc kubenswrapper[4721]: I0128 18:35:10.572519 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:10 crc kubenswrapper[4721]: I0128 18:35:10.572531 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:10 crc kubenswrapper[4721]: I0128 18:35:10.572540 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:10Z","lastTransitionTime":"2026-01-28T18:35:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:35:10 crc kubenswrapper[4721]: I0128 18:35:10.674709 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:10 crc kubenswrapper[4721]: I0128 18:35:10.674743 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:10 crc kubenswrapper[4721]: I0128 18:35:10.674756 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:10 crc kubenswrapper[4721]: I0128 18:35:10.674772 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:10 crc kubenswrapper[4721]: I0128 18:35:10.674782 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:10Z","lastTransitionTime":"2026-01-28T18:35:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:35:10 crc kubenswrapper[4721]: I0128 18:35:10.776664 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:10 crc kubenswrapper[4721]: I0128 18:35:10.776719 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:10 crc kubenswrapper[4721]: I0128 18:35:10.776733 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:10 crc kubenswrapper[4721]: I0128 18:35:10.776750 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:10 crc kubenswrapper[4721]: I0128 18:35:10.776763 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:10Z","lastTransitionTime":"2026-01-28T18:35:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:35:10 crc kubenswrapper[4721]: I0128 18:35:10.878702 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:10 crc kubenswrapper[4721]: I0128 18:35:10.878749 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:10 crc kubenswrapper[4721]: I0128 18:35:10.878759 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:10 crc kubenswrapper[4721]: I0128 18:35:10.878775 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:10 crc kubenswrapper[4721]: I0128 18:35:10.878786 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:10Z","lastTransitionTime":"2026-01-28T18:35:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:35:10 crc kubenswrapper[4721]: I0128 18:35:10.980987 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:10 crc kubenswrapper[4721]: I0128 18:35:10.981018 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:10 crc kubenswrapper[4721]: I0128 18:35:10.981026 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:10 crc kubenswrapper[4721]: I0128 18:35:10.981041 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:10 crc kubenswrapper[4721]: I0128 18:35:10.981052 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:10Z","lastTransitionTime":"2026-01-28T18:35:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:35:11 crc kubenswrapper[4721]: I0128 18:35:11.083794 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:11 crc kubenswrapper[4721]: I0128 18:35:11.083838 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:11 crc kubenswrapper[4721]: I0128 18:35:11.083849 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:11 crc kubenswrapper[4721]: I0128 18:35:11.083866 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:11 crc kubenswrapper[4721]: I0128 18:35:11.083879 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:11Z","lastTransitionTime":"2026-01-28T18:35:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:35:11 crc kubenswrapper[4721]: I0128 18:35:11.185936 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:11 crc kubenswrapper[4721]: I0128 18:35:11.186005 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:11 crc kubenswrapper[4721]: I0128 18:35:11.186013 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:11 crc kubenswrapper[4721]: I0128 18:35:11.186030 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:11 crc kubenswrapper[4721]: I0128 18:35:11.186040 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:11Z","lastTransitionTime":"2026-01-28T18:35:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:35:11 crc kubenswrapper[4721]: I0128 18:35:11.289549 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:11 crc kubenswrapper[4721]: I0128 18:35:11.289617 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:11 crc kubenswrapper[4721]: I0128 18:35:11.289635 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:11 crc kubenswrapper[4721]: I0128 18:35:11.289662 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:11 crc kubenswrapper[4721]: I0128 18:35:11.289676 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:11Z","lastTransitionTime":"2026-01-28T18:35:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:35:11 crc kubenswrapper[4721]: I0128 18:35:11.392345 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:11 crc kubenswrapper[4721]: I0128 18:35:11.392416 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:11 crc kubenswrapper[4721]: I0128 18:35:11.392427 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:11 crc kubenswrapper[4721]: I0128 18:35:11.392446 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:11 crc kubenswrapper[4721]: I0128 18:35:11.392460 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:11Z","lastTransitionTime":"2026-01-28T18:35:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:35:11 crc kubenswrapper[4721]: I0128 18:35:11.496364 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:11 crc kubenswrapper[4721]: I0128 18:35:11.496412 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:11 crc kubenswrapper[4721]: I0128 18:35:11.496422 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:11 crc kubenswrapper[4721]: I0128 18:35:11.496453 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:11 crc kubenswrapper[4721]: I0128 18:35:11.496465 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:11Z","lastTransitionTime":"2026-01-28T18:35:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:35:11 crc kubenswrapper[4721]: I0128 18:35:11.511818 4721 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-26 04:26:36.658209175 +0000 UTC Jan 28 18:35:11 crc kubenswrapper[4721]: I0128 18:35:11.598989 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:11 crc kubenswrapper[4721]: I0128 18:35:11.599027 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:11 crc kubenswrapper[4721]: I0128 18:35:11.599035 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:11 crc kubenswrapper[4721]: I0128 18:35:11.599048 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:11 crc kubenswrapper[4721]: I0128 18:35:11.599072 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:11Z","lastTransitionTime":"2026-01-28T18:35:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:35:11 crc kubenswrapper[4721]: I0128 18:35:11.701789 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:11 crc kubenswrapper[4721]: I0128 18:35:11.701828 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:11 crc kubenswrapper[4721]: I0128 18:35:11.701839 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:11 crc kubenswrapper[4721]: I0128 18:35:11.701854 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:11 crc kubenswrapper[4721]: I0128 18:35:11.701867 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:11Z","lastTransitionTime":"2026-01-28T18:35:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:35:11 crc kubenswrapper[4721]: I0128 18:35:11.803998 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:11 crc kubenswrapper[4721]: I0128 18:35:11.804071 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:11 crc kubenswrapper[4721]: I0128 18:35:11.804086 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:11 crc kubenswrapper[4721]: I0128 18:35:11.804102 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:11 crc kubenswrapper[4721]: I0128 18:35:11.804112 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:11Z","lastTransitionTime":"2026-01-28T18:35:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:35:11 crc kubenswrapper[4721]: I0128 18:35:11.906753 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:11 crc kubenswrapper[4721]: I0128 18:35:11.906791 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:11 crc kubenswrapper[4721]: I0128 18:35:11.906800 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:11 crc kubenswrapper[4721]: I0128 18:35:11.906815 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:11 crc kubenswrapper[4721]: I0128 18:35:11.906826 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:11Z","lastTransitionTime":"2026-01-28T18:35:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:35:12 crc kubenswrapper[4721]: I0128 18:35:12.009096 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:12 crc kubenswrapper[4721]: I0128 18:35:12.009133 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:12 crc kubenswrapper[4721]: I0128 18:35:12.009145 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:12 crc kubenswrapper[4721]: I0128 18:35:12.009161 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:12 crc kubenswrapper[4721]: I0128 18:35:12.009186 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:12Z","lastTransitionTime":"2026-01-28T18:35:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:35:12 crc kubenswrapper[4721]: I0128 18:35:12.111272 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:12 crc kubenswrapper[4721]: I0128 18:35:12.111313 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:12 crc kubenswrapper[4721]: I0128 18:35:12.111325 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:12 crc kubenswrapper[4721]: I0128 18:35:12.111341 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:12 crc kubenswrapper[4721]: I0128 18:35:12.111352 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:12Z","lastTransitionTime":"2026-01-28T18:35:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:35:12 crc kubenswrapper[4721]: I0128 18:35:12.213842 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:12 crc kubenswrapper[4721]: I0128 18:35:12.213880 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:12 crc kubenswrapper[4721]: I0128 18:35:12.213891 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:12 crc kubenswrapper[4721]: I0128 18:35:12.213916 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:12 crc kubenswrapper[4721]: I0128 18:35:12.213928 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:12Z","lastTransitionTime":"2026-01-28T18:35:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:35:12 crc kubenswrapper[4721]: I0128 18:35:12.317401 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:12 crc kubenswrapper[4721]: I0128 18:35:12.317459 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:12 crc kubenswrapper[4721]: I0128 18:35:12.317470 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:12 crc kubenswrapper[4721]: I0128 18:35:12.317490 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:12 crc kubenswrapper[4721]: I0128 18:35:12.317505 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:12Z","lastTransitionTime":"2026-01-28T18:35:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:35:12 crc kubenswrapper[4721]: I0128 18:35:12.422430 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:12 crc kubenswrapper[4721]: I0128 18:35:12.422491 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:12 crc kubenswrapper[4721]: I0128 18:35:12.422513 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:12 crc kubenswrapper[4721]: I0128 18:35:12.422541 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:12 crc kubenswrapper[4721]: I0128 18:35:12.422561 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:12Z","lastTransitionTime":"2026-01-28T18:35:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:35:12 crc kubenswrapper[4721]: I0128 18:35:12.512267 4721 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-21 06:23:44.099538369 +0000 UTC Jan 28 18:35:12 crc kubenswrapper[4721]: I0128 18:35:12.525550 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:12 crc kubenswrapper[4721]: I0128 18:35:12.525603 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:12 crc kubenswrapper[4721]: I0128 18:35:12.525611 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:12 crc kubenswrapper[4721]: I0128 18:35:12.525626 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:12 crc kubenswrapper[4721]: I0128 18:35:12.525637 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:12Z","lastTransitionTime":"2026-01-28T18:35:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:35:12 crc kubenswrapper[4721]: I0128 18:35:12.527782 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 18:35:12 crc kubenswrapper[4721]: I0128 18:35:12.527845 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 18:35:12 crc kubenswrapper[4721]: I0128 18:35:12.527910 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 18:35:12 crc kubenswrapper[4721]: E0128 18:35:12.527909 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 18:35:12 crc kubenswrapper[4721]: I0128 18:35:12.527978 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jqvck" Jan 28 18:35:12 crc kubenswrapper[4721]: E0128 18:35:12.528020 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 18:35:12 crc kubenswrapper[4721]: E0128 18:35:12.527980 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 18:35:12 crc kubenswrapper[4721]: E0128 18:35:12.528097 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jqvck" podUID="f3440038-c980-4fb4-be99-235515ec221c" Jan 28 18:35:12 crc kubenswrapper[4721]: I0128 18:35:12.628164 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:12 crc kubenswrapper[4721]: I0128 18:35:12.628239 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:12 crc kubenswrapper[4721]: I0128 18:35:12.628250 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:12 crc kubenswrapper[4721]: I0128 18:35:12.628267 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:12 crc kubenswrapper[4721]: I0128 18:35:12.628280 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:12Z","lastTransitionTime":"2026-01-28T18:35:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:35:12 crc kubenswrapper[4721]: I0128 18:35:12.730494 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:12 crc kubenswrapper[4721]: I0128 18:35:12.730530 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:12 crc kubenswrapper[4721]: I0128 18:35:12.730542 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:12 crc kubenswrapper[4721]: I0128 18:35:12.730558 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:12 crc kubenswrapper[4721]: I0128 18:35:12.730569 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:12Z","lastTransitionTime":"2026-01-28T18:35:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:35:12 crc kubenswrapper[4721]: I0128 18:35:12.832854 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:12 crc kubenswrapper[4721]: I0128 18:35:12.832909 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:12 crc kubenswrapper[4721]: I0128 18:35:12.832922 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:12 crc kubenswrapper[4721]: I0128 18:35:12.832944 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:12 crc kubenswrapper[4721]: I0128 18:35:12.832957 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:12Z","lastTransitionTime":"2026-01-28T18:35:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:35:12 crc kubenswrapper[4721]: I0128 18:35:12.935672 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:12 crc kubenswrapper[4721]: I0128 18:35:12.935716 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:12 crc kubenswrapper[4721]: I0128 18:35:12.935727 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:12 crc kubenswrapper[4721]: I0128 18:35:12.935748 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:12 crc kubenswrapper[4721]: I0128 18:35:12.935761 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:12Z","lastTransitionTime":"2026-01-28T18:35:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:35:13 crc kubenswrapper[4721]: I0128 18:35:13.038066 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:13 crc kubenswrapper[4721]: I0128 18:35:13.038121 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:13 crc kubenswrapper[4721]: I0128 18:35:13.038130 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:13 crc kubenswrapper[4721]: I0128 18:35:13.038157 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:13 crc kubenswrapper[4721]: I0128 18:35:13.038196 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:13Z","lastTransitionTime":"2026-01-28T18:35:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:35:13 crc kubenswrapper[4721]: I0128 18:35:13.140414 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:13 crc kubenswrapper[4721]: I0128 18:35:13.140456 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:13 crc kubenswrapper[4721]: I0128 18:35:13.140488 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:13 crc kubenswrapper[4721]: I0128 18:35:13.140506 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:13 crc kubenswrapper[4721]: I0128 18:35:13.140517 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:13Z","lastTransitionTime":"2026-01-28T18:35:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:35:13 crc kubenswrapper[4721]: I0128 18:35:13.242447 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:13 crc kubenswrapper[4721]: I0128 18:35:13.242514 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:13 crc kubenswrapper[4721]: I0128 18:35:13.242524 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:13 crc kubenswrapper[4721]: I0128 18:35:13.242540 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:13 crc kubenswrapper[4721]: I0128 18:35:13.242552 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:13Z","lastTransitionTime":"2026-01-28T18:35:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:35:13 crc kubenswrapper[4721]: I0128 18:35:13.344726 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:13 crc kubenswrapper[4721]: I0128 18:35:13.344759 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:13 crc kubenswrapper[4721]: I0128 18:35:13.344767 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:13 crc kubenswrapper[4721]: I0128 18:35:13.344813 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:13 crc kubenswrapper[4721]: I0128 18:35:13.344823 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:13Z","lastTransitionTime":"2026-01-28T18:35:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:35:13 crc kubenswrapper[4721]: I0128 18:35:13.447159 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:13 crc kubenswrapper[4721]: I0128 18:35:13.447214 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:13 crc kubenswrapper[4721]: I0128 18:35:13.447222 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:13 crc kubenswrapper[4721]: I0128 18:35:13.447235 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:13 crc kubenswrapper[4721]: I0128 18:35:13.447244 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:13Z","lastTransitionTime":"2026-01-28T18:35:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:35:13 crc kubenswrapper[4721]: I0128 18:35:13.513233 4721 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-19 02:29:50.632129925 +0000 UTC Jan 28 18:35:13 crc kubenswrapper[4721]: I0128 18:35:13.549819 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:13 crc kubenswrapper[4721]: I0128 18:35:13.549872 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:13 crc kubenswrapper[4721]: I0128 18:35:13.549881 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:13 crc kubenswrapper[4721]: I0128 18:35:13.549894 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:13 crc kubenswrapper[4721]: I0128 18:35:13.549904 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:13Z","lastTransitionTime":"2026-01-28T18:35:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:35:13 crc kubenswrapper[4721]: I0128 18:35:13.652094 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:13 crc kubenswrapper[4721]: I0128 18:35:13.652136 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:13 crc kubenswrapper[4721]: I0128 18:35:13.652150 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:13 crc kubenswrapper[4721]: I0128 18:35:13.652182 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:13 crc kubenswrapper[4721]: I0128 18:35:13.652192 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:13Z","lastTransitionTime":"2026-01-28T18:35:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:35:13 crc kubenswrapper[4721]: I0128 18:35:13.756073 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:13 crc kubenswrapper[4721]: I0128 18:35:13.756133 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:13 crc kubenswrapper[4721]: I0128 18:35:13.756146 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:13 crc kubenswrapper[4721]: I0128 18:35:13.756202 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:13 crc kubenswrapper[4721]: I0128 18:35:13.756218 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:13Z","lastTransitionTime":"2026-01-28T18:35:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:35:13 crc kubenswrapper[4721]: I0128 18:35:13.858283 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:13 crc kubenswrapper[4721]: I0128 18:35:13.858364 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:13 crc kubenswrapper[4721]: I0128 18:35:13.858383 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:13 crc kubenswrapper[4721]: I0128 18:35:13.858413 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:13 crc kubenswrapper[4721]: I0128 18:35:13.858433 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:13Z","lastTransitionTime":"2026-01-28T18:35:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:35:13 crc kubenswrapper[4721]: I0128 18:35:13.961765 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:13 crc kubenswrapper[4721]: I0128 18:35:13.961848 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:13 crc kubenswrapper[4721]: I0128 18:35:13.961870 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:13 crc kubenswrapper[4721]: I0128 18:35:13.961911 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:13 crc kubenswrapper[4721]: I0128 18:35:13.961933 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:13Z","lastTransitionTime":"2026-01-28T18:35:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:35:14 crc kubenswrapper[4721]: I0128 18:35:14.066265 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:14 crc kubenswrapper[4721]: I0128 18:35:14.066334 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:14 crc kubenswrapper[4721]: I0128 18:35:14.066354 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:14 crc kubenswrapper[4721]: I0128 18:35:14.066383 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:14 crc kubenswrapper[4721]: I0128 18:35:14.066402 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:14Z","lastTransitionTime":"2026-01-28T18:35:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:35:14 crc kubenswrapper[4721]: I0128 18:35:14.169325 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:14 crc kubenswrapper[4721]: I0128 18:35:14.169360 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:14 crc kubenswrapper[4721]: I0128 18:35:14.169371 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:14 crc kubenswrapper[4721]: I0128 18:35:14.169384 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:14 crc kubenswrapper[4721]: I0128 18:35:14.169392 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:14Z","lastTransitionTime":"2026-01-28T18:35:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:35:14 crc kubenswrapper[4721]: I0128 18:35:14.271764 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:14 crc kubenswrapper[4721]: I0128 18:35:14.271803 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:14 crc kubenswrapper[4721]: I0128 18:35:14.271814 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:14 crc kubenswrapper[4721]: I0128 18:35:14.271831 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:14 crc kubenswrapper[4721]: I0128 18:35:14.271842 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:14Z","lastTransitionTime":"2026-01-28T18:35:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:35:14 crc kubenswrapper[4721]: I0128 18:35:14.374037 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:14 crc kubenswrapper[4721]: I0128 18:35:14.374075 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:14 crc kubenswrapper[4721]: I0128 18:35:14.374087 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:14 crc kubenswrapper[4721]: I0128 18:35:14.374103 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:14 crc kubenswrapper[4721]: I0128 18:35:14.374115 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:14Z","lastTransitionTime":"2026-01-28T18:35:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:35:14 crc kubenswrapper[4721]: I0128 18:35:14.477082 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:14 crc kubenswrapper[4721]: I0128 18:35:14.477137 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:14 crc kubenswrapper[4721]: I0128 18:35:14.477152 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:14 crc kubenswrapper[4721]: I0128 18:35:14.477187 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:14 crc kubenswrapper[4721]: I0128 18:35:14.477199 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:14Z","lastTransitionTime":"2026-01-28T18:35:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:35:14 crc kubenswrapper[4721]: I0128 18:35:14.513385 4721 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-27 16:07:26.711911339 +0000 UTC Jan 28 18:35:14 crc kubenswrapper[4721]: I0128 18:35:14.528725 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 18:35:14 crc kubenswrapper[4721]: I0128 18:35:14.528791 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 18:35:14 crc kubenswrapper[4721]: I0128 18:35:14.528773 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 18:35:14 crc kubenswrapper[4721]: I0128 18:35:14.528805 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-jqvck" Jan 28 18:35:14 crc kubenswrapper[4721]: E0128 18:35:14.528968 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 18:35:14 crc kubenswrapper[4721]: E0128 18:35:14.529219 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 18:35:14 crc kubenswrapper[4721]: E0128 18:35:14.529388 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jqvck" podUID="f3440038-c980-4fb4-be99-235515ec221c" Jan 28 18:35:14 crc kubenswrapper[4721]: E0128 18:35:14.529859 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 18:35:14 crc kubenswrapper[4721]: I0128 18:35:14.539542 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Jan 28 18:35:14 crc kubenswrapper[4721]: I0128 18:35:14.579334 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:14 crc kubenswrapper[4721]: I0128 18:35:14.579366 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:14 crc kubenswrapper[4721]: I0128 18:35:14.579374 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:14 crc kubenswrapper[4721]: I0128 18:35:14.579386 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:14 crc kubenswrapper[4721]: I0128 18:35:14.579396 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:14Z","lastTransitionTime":"2026-01-28T18:35:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:35:14 crc kubenswrapper[4721]: I0128 18:35:14.681872 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:14 crc kubenswrapper[4721]: I0128 18:35:14.681921 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:14 crc kubenswrapper[4721]: I0128 18:35:14.681935 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:14 crc kubenswrapper[4721]: I0128 18:35:14.681955 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:14 crc kubenswrapper[4721]: I0128 18:35:14.681972 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:14Z","lastTransitionTime":"2026-01-28T18:35:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:35:14 crc kubenswrapper[4721]: I0128 18:35:14.784565 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:14 crc kubenswrapper[4721]: I0128 18:35:14.784626 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:14 crc kubenswrapper[4721]: I0128 18:35:14.784638 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:14 crc kubenswrapper[4721]: I0128 18:35:14.784661 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:14 crc kubenswrapper[4721]: I0128 18:35:14.784674 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:14Z","lastTransitionTime":"2026-01-28T18:35:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:35:14 crc kubenswrapper[4721]: I0128 18:35:14.886992 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:14 crc kubenswrapper[4721]: I0128 18:35:14.887048 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:14 crc kubenswrapper[4721]: I0128 18:35:14.887062 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:14 crc kubenswrapper[4721]: I0128 18:35:14.887082 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:14 crc kubenswrapper[4721]: I0128 18:35:14.887094 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:14Z","lastTransitionTime":"2026-01-28T18:35:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:35:14 crc kubenswrapper[4721]: I0128 18:35:14.989942 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:14 crc kubenswrapper[4721]: I0128 18:35:14.989980 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:14 crc kubenswrapper[4721]: I0128 18:35:14.989991 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:14 crc kubenswrapper[4721]: I0128 18:35:14.990005 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:14 crc kubenswrapper[4721]: I0128 18:35:14.990015 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:14Z","lastTransitionTime":"2026-01-28T18:35:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:35:15 crc kubenswrapper[4721]: I0128 18:35:15.092658 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:15 crc kubenswrapper[4721]: I0128 18:35:15.092737 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:15 crc kubenswrapper[4721]: I0128 18:35:15.092750 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:15 crc kubenswrapper[4721]: I0128 18:35:15.092769 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:15 crc kubenswrapper[4721]: I0128 18:35:15.092781 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:15Z","lastTransitionTime":"2026-01-28T18:35:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:35:15 crc kubenswrapper[4721]: I0128 18:35:15.195941 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:15 crc kubenswrapper[4721]: I0128 18:35:15.195984 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:15 crc kubenswrapper[4721]: I0128 18:35:15.195993 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:15 crc kubenswrapper[4721]: I0128 18:35:15.196008 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:15 crc kubenswrapper[4721]: I0128 18:35:15.196017 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:15Z","lastTransitionTime":"2026-01-28T18:35:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:35:15 crc kubenswrapper[4721]: I0128 18:35:15.298209 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:15 crc kubenswrapper[4721]: I0128 18:35:15.298243 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:15 crc kubenswrapper[4721]: I0128 18:35:15.298251 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:15 crc kubenswrapper[4721]: I0128 18:35:15.298266 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:15 crc kubenswrapper[4721]: I0128 18:35:15.298275 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:15Z","lastTransitionTime":"2026-01-28T18:35:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:35:15 crc kubenswrapper[4721]: I0128 18:35:15.401204 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:15 crc kubenswrapper[4721]: I0128 18:35:15.401242 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:15 crc kubenswrapper[4721]: I0128 18:35:15.401254 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:15 crc kubenswrapper[4721]: I0128 18:35:15.401272 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:15 crc kubenswrapper[4721]: I0128 18:35:15.401282 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:15Z","lastTransitionTime":"2026-01-28T18:35:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:35:15 crc kubenswrapper[4721]: I0128 18:35:15.503839 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:15 crc kubenswrapper[4721]: I0128 18:35:15.503885 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:15 crc kubenswrapper[4721]: I0128 18:35:15.503896 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:15 crc kubenswrapper[4721]: I0128 18:35:15.503913 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:15 crc kubenswrapper[4721]: I0128 18:35:15.503925 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:15Z","lastTransitionTime":"2026-01-28T18:35:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:35:15 crc kubenswrapper[4721]: I0128 18:35:15.514226 4721 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-02 04:48:11.994986044 +0000 UTC Jan 28 18:35:15 crc kubenswrapper[4721]: I0128 18:35:15.543049 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef61ab09457440960cf27586f65751f928ccc999db19fd16364262ce2449cd97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:35:15Z is after 2025-08-24T17:21:41Z" Jan 28 18:35:15 crc kubenswrapper[4721]: I0128 18:35:15.555787 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-lf92l" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"20d04cbd-fcf1-4d48-9cca-1dd29b13c938\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://709d01fbeba51f6f21b60d9fd60aa9b44ba9e4f2504c19b7fd8d279f8c13c1e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mvqr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:34:30Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-lf92l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:35:15Z is after 2025-08-24T17:21:41Z" Jan 28 18:35:15 crc kubenswrapper[4721]: I0128 18:35:15.569799 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7vsph" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"942c0bcf-8f75-42e8-a5c0-af4c640eb13c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ce38a3c3aa72d0816ff32ec77af100dc6f2a8761362992f01b88f836c44f65f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s84mk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5222e9a5ac55f20acbe1105e84f22c96889f410246b7ef423236f5d3e216f43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d5222e9a5ac55f20acbe1105e84f22c96889f410246b7ef423236f5d3e216f43\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s84mk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e702f148f20abb9e5e9453eb1b9ae0cc094138beb1fa048753ff3b497517e942\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e702f148f20abb9e5e9453eb1b9ae0cc094138beb1fa048753ff3b497517e942\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s84mk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a55ef87212072ac9f6d05d27d54f7a3f13add617b1a07a2fd1e0a652fe15cb3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a55ef87212072ac9f6d05d27d54f7a3f13add617b1a07a2fd1e0a652fe15cb3a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:34:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s84mk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea7d40aa704ad5ddaaa471ab8606dcece38b229fd8affcce9bb1959e5c19f092\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ea7d40aa704ad5ddaaa471ab8606dcece38b229fd8affcce9bb1959e5c19f092\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:34:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s84mk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1829c2eed51af896c5ccfa25c1f23ea971ea9d54e9db53415095c5ea43ac4062\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1829c2eed51af896c5ccfa25c1f23ea971ea9d54e9db53415095c5ea43ac4062\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:34:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s84mk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b576c2dcc0ad83742dbc75f744d08eb8b881f9be4f9d16f4d4d3004ce174f24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b576c2dcc0ad83742dbc75f744d08eb8b881f9be4f9d16f4d4d3004ce174f24\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:34:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s84mk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:34:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7vsph\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:35:15Z is after 2025-08-24T17:21:41Z" Jan 28 18:35:15 crc kubenswrapper[4721]: I0128 18:35:15.580894 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa3af2d00ecbcf4d30c4c81191306dd45625f55032058b314ce2b91f6b2033e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:35:15Z is after 2025-08-24T17:21:41Z" Jan 28 18:35:15 crc kubenswrapper[4721]: I0128 18:35:15.591906 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-76rx2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e3427a4-9a03-4a08-bf7f-7a5e96290ad6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9843dea4333a77dd5b0005984b9f8e7c7c993b28f89f5d6432477bfde3383339\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4cj5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bbd26672afb8ed608228f3f2101f430b1eaa5ccea0e8074fad597dec347c4522\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4cj5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:34:30Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-76rx2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:35:15Z is after 2025-08-24T17:21:41Z" Jan 28 18:35:15 crc kubenswrapper[4721]: I0128 18:35:15.602844 4721 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e670fafb-703c-4cc9-b670-d25ae62d87a0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:33:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f97092be25e90c4a15af043397b1cbcefdb3a3511a80a046496bef807abc8f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8351f80de5ab5d11c5a87270e69a8ebd20b3a804671e20b991f1fc77ba27bae8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8465df12048ab0feaba16e1935fa17feb4fe967ab3e4ef37981bed51ff77911\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":
[{\\\"containerID\\\":\\\"cri-o://47389981052e7ab8530932e01c95d9b178f2c55b21e03b460d3f02dddbcf4830\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://47389981052e7ab8530932e01c95d9b178f2c55b21e03b460d3f02dddbcf4830\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:33:58Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:33:56Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:35:15Z is after 2025-08-24T17:21:41Z" Jan 28 18:35:15 crc kubenswrapper[4721]: I0128 18:35:15.607620 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:15 crc kubenswrapper[4721]: I0128 18:35:15.607653 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:15 crc kubenswrapper[4721]: I0128 18:35:15.607662 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:15 crc kubenswrapper[4721]: I0128 18:35:15.607679 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:15 crc kubenswrapper[4721]: I0128 18:35:15.607695 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:15Z","lastTransitionTime":"2026-01-28T18:35:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:35:15 crc kubenswrapper[4721]: I0128 18:35:15.618786 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:24Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:35:15Z is after 2025-08-24T17:21:41Z" Jan 28 18:35:15 crc kubenswrapper[4721]: I0128 18:35:15.633102 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:35:15Z is after 2025-08-24T17:21:41Z" Jan 28 18:35:15 crc kubenswrapper[4721]: I0128 18:35:15.643527 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-jqvck" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f3440038-c980-4fb4-be99-235515ec221c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96np9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96np9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:34:44Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-jqvck\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:35:15Z is after 2025-08-24T17:21:41Z" Jan 28 18:35:15 crc kubenswrapper[4721]: I0128 18:35:15.654800 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cdf9d0d3-f468-468d-a84a-376800ac08e1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:33:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ff405f7132b5a1a0e2abc66c8e4c0abbd732bdf90cb2b4b2867dd10b8e62921\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f0406ac4c224de266eab94d3b9c12d110ee36b7ccb34381fc284ade389ba042\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f0406ac4c224de266eab94d3b9c12d110ee36b7ccb34381fc284ade389ba042\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:33:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:33:56Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:35:15Z is after 2025-08-24T17:21:41Z" Jan 28 18:35:15 crc kubenswrapper[4721]: I0128 18:35:15.665363 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-rk2l2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bdd30376-1599-4efc-bb55-7585e8702b60\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa0ff1b9df47dfd65597208cdd31f5cb8ce7f4c9c170d298db28d6f04da7b7a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wknj5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:34:32Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-rk2l2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:35:15Z is after 2025-08-24T17:21:41Z" Jan 28 18:35:15 crc kubenswrapper[4721]: I0128 18:35:15.675645 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-x8hw8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8ac7e75e-c5bb-4b57-b2ba-9ebe8b8fbd88\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cba8dd293cf5ae7b0c987c6ee3b24da02d2687ee54292da92e28ca627ed3eaca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqp9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a3c1211b73ca96ac22854f0cb677a0088a679ad56b104ea6b8e0871884a3a71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqp9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:34:43Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-x8hw8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:35:15Z is after 2025-08-24T17:21:41Z" Jan 28 
18:35:15 crc kubenswrapper[4721]: I0128 18:35:15.687016 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:35:15Z is after 2025-08-24T17:21:41Z" Jan 28 18:35:15 crc kubenswrapper[4721]: I0128 18:35:15.698678 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56147b9675ae31e5e8b4473346c69d90a1013ab084214a7fc34295f810b229a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5dc3277d9793a59f2e2a2b4d015782f383df20b594be435f894d24dbe5b837cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:35:15Z is after 2025-08-24T17:21:41Z" Jan 28 18:35:15 crc kubenswrapper[4721]: I0128 18:35:15.712324 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:15 crc kubenswrapper[4721]: I0128 18:35:15.712370 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:15 crc kubenswrapper[4721]: I0128 18:35:15.712382 4721 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 28 18:35:15 crc kubenswrapper[4721]: I0128 18:35:15.712404 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:15 crc kubenswrapper[4721]: I0128 18:35:15.712418 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:15Z","lastTransitionTime":"2026-01-28T18:35:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:35:15 crc kubenswrapper[4721]: I0128 18:35:15.720192 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wr282" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"70686e42-b434-4ff9-9753-cfc870beef82\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ac8a8ea8e677c233fbcb1ac69446f0251b2e390e84a39cd77c7ccf1a2041908\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7lkbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://989948e16979a2cd8568ed93334056da9feb5bd7226f4124428dcffc09f13d41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7lkbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2cc9ef8160c40e72ae736136ef051b06552caea2539fad7969c5f923eba0c2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7lkbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7df50ebf5999c8b004f27f801fc166377523c7b42171fec957ab82c88b529794\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7lkbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11610a0739792fa0417c894ad5b1c0d46d7188b585f8f4c1a1e7e66cd6d8bd46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7lkbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://44279a74a34d26b6eee05ac26d51a21492dddab4288ca5e09665c191cbacd90c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7lkbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://932e160b9acb81fd545498d2b471f3ae2cec8716
bfa875350287f72b78516dd6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://932e160b9acb81fd545498d2b471f3ae2cec8716bfa875350287f72b78516dd6\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T18:34:55Z\\\",\\\"message\\\":\\\"n.org/owner:openshift-dns/dns-default]} name:Service_openshift-dns/dns-default_TCP_node_router+switch_crc options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.10:53: 10.217.4.10:9154:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {be9dcc9e-c16a-4962-a6d2-4adeb0b929c4}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-dns/dns-default]} name:Service_openshift-dns/dns-default_UDP_node_router+switch_crc options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[udp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.10:53:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {4c1be812-05d3-4f45-91b5-a853a5c8de71}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0128 18:34:55.473455 6405 obj_retry.go:386] Retry successful for *v1.Pod openshift-multus/multus-additional-cni-plugins-7vsph after 0 failed attempt(s)\\\\nI0128 18:34:55.474632 6405 default_network_controller.go:776] Recording success event on pod openshift-multus/m\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T18:34:54Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-wr282_openshift-ovn-kubernetes(70686e42-b434-4ff9-9753-cfc870beef82)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7lkbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://373ab38644697d3dbfe782ed66982ac1d5510d0a3bc39072c8113f71cc9e97f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7lkbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://733c81890010402d252a1945284a85a3278b603ede433530865f680a1be02477\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://733c81890010402d252a1945284a85a3278b603ede433530865f680a1be02477\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7lkbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:34:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-wr282\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:35:15Z is after 2025-08-24T17:21:41Z" Jan 28 18:35:15 crc kubenswrapper[4721]: I0128 18:35:15.735026 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rgqdt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c0a22020-3f34-4895-beec-2ed5d829ea79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d26691be7d95ffd613cc84f59222c67d29ab459d1c42ca07d20fe928d7fbc4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-
cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l86pm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:34:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rgqdt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:35:15Z is after 2025-08-24T17:21:41Z" Jan 28 18:35:15 crc kubenswrapper[4721]: I0128 18:35:15.755311 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"50459cb7-bb46-4f2f-a119-03a6102ad146\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:33:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97a290731225e8b998c044c9547db36b5af73889929db5fbc37b6b4170f5dbb7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9efdfabfba8f11d2bdeafad77520260137fcfff3ca51ae77afd4ee9b141f602\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3c1b4d37e11f8a4ee9539e5ea3cce77b7d382133b9e59ccd9c3e8aa2b308754\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52a1199a0611555765afe8fcf01774278848806
4c958d0c75c46d51ddc2ff9d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://316506ed1e9f91314d427e1b6c3a44ee58d0c4a3a8a93b97bdf098703a6a2f34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f1a6a669a0341baa2d7fb17bcebfe702f96d1192ca35c3f2d00e463778271d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f1a6a669a0341baa2d7fb17bcebfe702f96d1192ca35c3f2d00e463778271d6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:33:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d3a248e15466c89ce3b237a22ce54365004ea86dc1541e1ac44e028f91bf19f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d3a248e15466c89ce3b237a22ce54365004ea86dc1541e1ac44e028f91bf19f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:34:02Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f2e1f3a63445bad45029602f52863bfdf2fab495711e3318d68fee042530acb1\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f2e1f3a63445bad45029602f52863bfdf2fab495711e3318d68fee042530acb1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:34:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:33:56Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:35:15Z is after 2025-08-24T17:21:41Z" Jan 28 18:35:15 crc kubenswrapper[4721]: I0128 18:35:15.773317 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"94f18835-9a2d-4427-bc71-e4cd48b94c19\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:33:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a345c3bb49064600db61f16588229b6877293710e6c21c15d91746c2f1d50b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://93950542dfa7bda5686ef6d8ff927d14
baafd149893c732658e9aa3916be64ae\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://486a6284a1e3930aac13368062b8796b4c271214dd08cb60c7ae2ce7b4a45a05\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0339ccd2bd809dc715d0e334b596c5abdebca1af6ac5f7afefa81aa8baa470ed\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ca3243d75b5ce4aff178d82c45c7d1f765013233b5736600151adb4801aa462b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T18:34:21Z\\\",\\\"message\\\":\\\"le observer\\\\nW0128 18:34:20.724978 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0128 18:34:20.725313 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 18:34:20.726636 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2540251349/tls.crt::/tmp/serving-cert-2540251349/tls.key\\\\\\\"\\\\nI0128 18:34:21.177830 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0128 18:34:21.196258 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0128 18:34:21.196291 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0128 18:34:21.196317 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0128 18:34:21.196323 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0128 18:34:21.230729 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0128 18:34:21.230760 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0128 18:34:21.230771 1 
secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 18:34:21.230781 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 18:34:21.230789 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0128 18:34:21.230795 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0128 18:34:21.230800 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0128 18:34:21.230804 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0128 18:34:21.233981 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T18:34:20Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://90f1279d8262e042527207dbe56a1d08ba554650510bebfc53bd24cd84532026\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:06Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1499c7b625ec721799e4cf54e3de39ab75a752bbb0450a8eeffcdb6259f8a585\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1499c7b625ec721799e4cf54e3de39ab75a752bbb0450a8eeffcdb6259f8a585\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:33:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:33:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:35:15Z is after 2025-08-24T17:21:41Z" Jan 28 18:35:15 crc kubenswrapper[4721]: I0128 18:35:15.787208 4721 status_manager.go:875] 
"Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"86fc797f-6769-4801-8cb0-b9f25c9ec29b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:33:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:33:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e8c8cd370cc146f15325497afef171437a3c7c3d4e82cda99a321bf847399bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f712f75bf0603b358c696f1a62f2363254ab926615ab377291c7083308e3f2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:33:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6143e461a697375ca79df96494f8cf4a575e622428db241f50a52ae1c67acbc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e37a3261383445df
2e4050fb0b3f92c0afc6379c71eb061475783a72cd37459f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:33:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:35:15Z is after 2025-08-24T17:21:41Z" Jan 28 18:35:15 crc kubenswrapper[4721]: I0128 18:35:15.816052 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:15 crc kubenswrapper[4721]: I0128 18:35:15.816114 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:15 crc kubenswrapper[4721]: I0128 18:35:15.816124 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:15 crc kubenswrapper[4721]: I0128 18:35:15.816146 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:15 crc kubenswrapper[4721]: I0128 18:35:15.816161 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:15Z","lastTransitionTime":"2026-01-28T18:35:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:35:15 crc kubenswrapper[4721]: I0128 18:35:15.918776 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:15 crc kubenswrapper[4721]: I0128 18:35:15.918822 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:15 crc kubenswrapper[4721]: I0128 18:35:15.918833 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:15 crc kubenswrapper[4721]: I0128 18:35:15.918849 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:15 crc kubenswrapper[4721]: I0128 18:35:15.918861 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:15Z","lastTransitionTime":"2026-01-28T18:35:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:35:16 crc kubenswrapper[4721]: I0128 18:35:16.021517 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:16 crc kubenswrapper[4721]: I0128 18:35:16.021560 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:16 crc kubenswrapper[4721]: I0128 18:35:16.021570 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:16 crc kubenswrapper[4721]: I0128 18:35:16.021590 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:16 crc kubenswrapper[4721]: I0128 18:35:16.021601 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:16Z","lastTransitionTime":"2026-01-28T18:35:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:35:16 crc kubenswrapper[4721]: I0128 18:35:16.124059 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:16 crc kubenswrapper[4721]: I0128 18:35:16.124532 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:16 crc kubenswrapper[4721]: I0128 18:35:16.124650 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:16 crc kubenswrapper[4721]: I0128 18:35:16.124764 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:16 crc kubenswrapper[4721]: I0128 18:35:16.124854 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:16Z","lastTransitionTime":"2026-01-28T18:35:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:35:16 crc kubenswrapper[4721]: I0128 18:35:16.228627 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:16 crc kubenswrapper[4721]: I0128 18:35:16.228707 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:16 crc kubenswrapper[4721]: I0128 18:35:16.228723 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:16 crc kubenswrapper[4721]: I0128 18:35:16.228773 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:16 crc kubenswrapper[4721]: I0128 18:35:16.228789 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:16Z","lastTransitionTime":"2026-01-28T18:35:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:35:16 crc kubenswrapper[4721]: I0128 18:35:16.332436 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:16 crc kubenswrapper[4721]: I0128 18:35:16.332482 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:16 crc kubenswrapper[4721]: I0128 18:35:16.332532 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:16 crc kubenswrapper[4721]: I0128 18:35:16.332550 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:16 crc kubenswrapper[4721]: I0128 18:35:16.332559 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:16Z","lastTransitionTime":"2026-01-28T18:35:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:35:16 crc kubenswrapper[4721]: I0128 18:35:16.396286 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f3440038-c980-4fb4-be99-235515ec221c-metrics-certs\") pod \"network-metrics-daemon-jqvck\" (UID: \"f3440038-c980-4fb4-be99-235515ec221c\") " pod="openshift-multus/network-metrics-daemon-jqvck" Jan 28 18:35:16 crc kubenswrapper[4721]: E0128 18:35:16.396503 4721 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 28 18:35:16 crc kubenswrapper[4721]: E0128 18:35:16.396685 4721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f3440038-c980-4fb4-be99-235515ec221c-metrics-certs podName:f3440038-c980-4fb4-be99-235515ec221c nodeName:}" failed. No retries permitted until 2026-01-28 18:35:48.396664849 +0000 UTC m=+114.121970409 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/f3440038-c980-4fb4-be99-235515ec221c-metrics-certs") pod "network-metrics-daemon-jqvck" (UID: "f3440038-c980-4fb4-be99-235515ec221c") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 28 18:35:16 crc kubenswrapper[4721]: I0128 18:35:16.434367 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:16 crc kubenswrapper[4721]: I0128 18:35:16.434401 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:16 crc kubenswrapper[4721]: I0128 18:35:16.434410 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:16 crc kubenswrapper[4721]: I0128 18:35:16.434423 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:16 crc kubenswrapper[4721]: I0128 18:35:16.434432 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:16Z","lastTransitionTime":"2026-01-28T18:35:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:35:16 crc kubenswrapper[4721]: I0128 18:35:16.514480 4721 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-16 17:37:20.227907156 +0000 UTC Jan 28 18:35:16 crc kubenswrapper[4721]: I0128 18:35:16.527892 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 18:35:16 crc kubenswrapper[4721]: E0128 18:35:16.528061 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 18:35:16 crc kubenswrapper[4721]: I0128 18:35:16.528345 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jqvck" Jan 28 18:35:16 crc kubenswrapper[4721]: E0128 18:35:16.528429 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jqvck" podUID="f3440038-c980-4fb4-be99-235515ec221c" Jan 28 18:35:16 crc kubenswrapper[4721]: I0128 18:35:16.528352 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 18:35:16 crc kubenswrapper[4721]: I0128 18:35:16.528345 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 18:35:16 crc kubenswrapper[4721]: E0128 18:35:16.528487 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 18:35:16 crc kubenswrapper[4721]: E0128 18:35:16.528553 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 18:35:16 crc kubenswrapper[4721]: I0128 18:35:16.536111 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:16 crc kubenswrapper[4721]: I0128 18:35:16.536158 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:16 crc kubenswrapper[4721]: I0128 18:35:16.536187 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:16 crc kubenswrapper[4721]: I0128 18:35:16.536206 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:16 crc kubenswrapper[4721]: I0128 18:35:16.536219 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:16Z","lastTransitionTime":"2026-01-28T18:35:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:35:16 crc kubenswrapper[4721]: I0128 18:35:16.638886 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:16 crc kubenswrapper[4721]: I0128 18:35:16.638919 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:16 crc kubenswrapper[4721]: I0128 18:35:16.638929 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:16 crc kubenswrapper[4721]: I0128 18:35:16.638944 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:16 crc kubenswrapper[4721]: I0128 18:35:16.638952 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:16Z","lastTransitionTime":"2026-01-28T18:35:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:35:16 crc kubenswrapper[4721]: I0128 18:35:16.668620 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:16 crc kubenswrapper[4721]: I0128 18:35:16.668671 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:16 crc kubenswrapper[4721]: I0128 18:35:16.668684 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:16 crc kubenswrapper[4721]: I0128 18:35:16.668705 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:16 crc kubenswrapper[4721]: I0128 18:35:16.668718 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:16Z","lastTransitionTime":"2026-01-28T18:35:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:35:16 crc kubenswrapper[4721]: E0128 18:35:16.680038 4721 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:35:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:35:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:35:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:35:16Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:35:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:35:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:35:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:35:16Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7e2b6ea9-0dbd-4f62-9b1e-c7fac2eb6b3f\\\",\\\"systemUUID\\\":\\\"09e691cb-0cac-419d-a3e2-104cada8c62f\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:35:16Z is after 2025-08-24T17:21:41Z" Jan 28 18:35:16 crc kubenswrapper[4721]: I0128 18:35:16.683699 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:16 crc kubenswrapper[4721]: I0128 18:35:16.683741 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 28 18:35:16 crc kubenswrapper[4721]: I0128 18:35:16.683753 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:16 crc kubenswrapper[4721]: I0128 18:35:16.683771 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:16 crc kubenswrapper[4721]: I0128 18:35:16.683782 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:16Z","lastTransitionTime":"2026-01-28T18:35:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:35:16 crc kubenswrapper[4721]: E0128 18:35:16.698267 4721 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:35:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:35:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:35:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:35:16Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:35:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:35:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:35:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:35:16Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7e2b6ea9-0dbd-4f62-9b1e-c7fac2eb6b3f\\\",\\\"systemUUID\\\":\\\"09e691cb-0cac-419d-a3e2-104cada8c62f\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:35:16Z is after 2025-08-24T17:21:41Z" Jan 28 18:35:16 crc kubenswrapper[4721]: I0128 18:35:16.702329 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:16 crc kubenswrapper[4721]: I0128 18:35:16.702385 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
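The "Error updating node status, will retry" records in this stretch all fail the same way, and the failure reason is spelled out at the end of each one: the serving certificate of the network-node-identity webhook at https://127.0.0.1:9743 expired on 2025-08-24T17:21:41Z while the node clock reads 2026-01-28, so the apiserver rejects the kubelet's status patches before they are ever applied. A minimal diagnostic sketch to confirm this from the node, assuming Python 3 with the third-party cryptography package (version 42 or later) is available; the host, port, and expiry date come from the log itself, everything else is illustrative:

    import socket, ssl
    from datetime import datetime, timezone
    from cryptography import x509

    # Endpoint taken from the webhook URL in the log records above.
    HOST, PORT = "127.0.0.1", 9743

    # Disable verification so the handshake succeeds even with an expired
    # certificate; we want to inspect the cert, not reject it the way the
    # kubelet's verifying client does.
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
    ctx.check_hostname = False
    ctx.verify_mode = ssl.CERT_NONE

    with socket.create_connection((HOST, PORT), timeout=5) as sock:
        with ctx.wrap_socket(sock, server_hostname=HOST) as tls:
            # Under CERT_NONE the parsed peer-cert dict is empty, so take
            # the raw DER bytes instead.
            der = tls.getpeercert(binary_form=True)

    cert = x509.load_der_x509_certificate(der)
    now = datetime.now(timezone.utc)
    print("subject:   ", cert.subject.rfc4514_string())
    print("not before:", cert.not_valid_before_utc)
    print("not after: ", cert.not_valid_after_utc)
    # True here corresponds to the kubelet's "x509: certificate has
    # expired or is not yet valid" failure in the records above.
    print("expired:   ", now > cert.not_valid_after_utc)

If the printed not-after matches the 2025-08-24T17:21:41Z timestamp from the error text, the problem is certificate rotation rather than networking configuration; the same expired certificate also explains the earlier pod-status patch failure against the pod.network-node-identity.openshift.io webhook, and the repeated NetworkReady=false condition is consistent with the same networking stack being unhealthy.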
event="NodeHasNoDiskPressure" Jan 28 18:35:16 crc kubenswrapper[4721]: I0128 18:35:16.702398 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:16 crc kubenswrapper[4721]: I0128 18:35:16.702421 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:16 crc kubenswrapper[4721]: I0128 18:35:16.702433 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:16Z","lastTransitionTime":"2026-01-28T18:35:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:35:16 crc kubenswrapper[4721]: E0128 18:35:16.716902 4721 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:35:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:35:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:35:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:35:16Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:35:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:35:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:35:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:35:16Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7e2b6ea9-0dbd-4f62-9b1e-c7fac2eb6b3f\\\",\\\"systemUUID\\\":\\\"09e691cb-0cac-419d-a3e2-104cada8c62f\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:35:16Z is after 2025-08-24T17:21:41Z" Jan 28 18:35:16 crc kubenswrapper[4721]: I0128 18:35:16.721583 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:16 crc kubenswrapper[4721]: I0128 18:35:16.721824 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 28 18:35:16 crc kubenswrapper[4721]: I0128 18:35:16.721927 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:16 crc kubenswrapper[4721]: I0128 18:35:16.721997 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:16 crc kubenswrapper[4721]: I0128 18:35:16.722073 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:16Z","lastTransitionTime":"2026-01-28T18:35:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:35:16 crc kubenswrapper[4721]: E0128 18:35:16.733666 4721 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:35:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:35:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:35:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:35:16Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:35:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:35:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:35:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:35:16Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7e2b6ea9-0dbd-4f62-9b1e-c7fac2eb6b3f\\\",\\\"systemUUID\\\":\\\"09e691cb-0cac-419d-a3e2-104cada8c62f\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:35:16Z is after 2025-08-24T17:21:41Z" Jan 28 18:35:16 crc kubenswrapper[4721]: I0128 18:35:16.737944 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:16 crc kubenswrapper[4721]: I0128 18:35:16.737980 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 28 18:35:16 crc kubenswrapper[4721]: I0128 18:35:16.737993 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:16 crc kubenswrapper[4721]: I0128 18:35:16.738010 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:16 crc kubenswrapper[4721]: I0128 18:35:16.738022 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:16Z","lastTransitionTime":"2026-01-28T18:35:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:35:16 crc kubenswrapper[4721]: E0128 18:35:16.755692 4721 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:35:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:35:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:35:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:35:16Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:35:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:35:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:35:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:35:16Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7e2b6ea9-0dbd-4f62-9b1e-c7fac2eb6b3f\\\",\\\"systemUUID\\\":\\\"09e691cb-0cac-419d-a3e2-104cada8c62f\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:35:16Z is after 2025-08-24T17:21:41Z" Jan 28 18:35:16 crc kubenswrapper[4721]: E0128 18:35:16.756372 4721 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 28 18:35:16 crc kubenswrapper[4721]: I0128 18:35:16.758419 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 28 18:35:16 crc kubenswrapper[4721]: I0128 18:35:16.758509 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:16 crc kubenswrapper[4721]: I0128 18:35:16.758579 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:16 crc kubenswrapper[4721]: I0128 18:35:16.758604 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:16 crc kubenswrapper[4721]: I0128 18:35:16.758618 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:16Z","lastTransitionTime":"2026-01-28T18:35:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:35:16 crc kubenswrapper[4721]: I0128 18:35:16.861804 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:16 crc kubenswrapper[4721]: I0128 18:35:16.861855 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:16 crc kubenswrapper[4721]: I0128 18:35:16.861867 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:16 crc kubenswrapper[4721]: I0128 18:35:16.861893 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:16 crc kubenswrapper[4721]: I0128 18:35:16.861906 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:16Z","lastTransitionTime":"2026-01-28T18:35:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:35:16 crc kubenswrapper[4721]: I0128 18:35:16.965199 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:16 crc kubenswrapper[4721]: I0128 18:35:16.965472 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:16 crc kubenswrapper[4721]: I0128 18:35:16.965571 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:16 crc kubenswrapper[4721]: I0128 18:35:16.965680 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:16 crc kubenswrapper[4721]: I0128 18:35:16.965772 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:16Z","lastTransitionTime":"2026-01-28T18:35:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:35:17 crc kubenswrapper[4721]: I0128 18:35:17.068116 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:17 crc kubenswrapper[4721]: I0128 18:35:17.068195 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:17 crc kubenswrapper[4721]: I0128 18:35:17.068211 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:17 crc kubenswrapper[4721]: I0128 18:35:17.068236 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:17 crc kubenswrapper[4721]: I0128 18:35:17.068253 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:17Z","lastTransitionTime":"2026-01-28T18:35:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:35:17 crc kubenswrapper[4721]: I0128 18:35:17.170482 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:17 crc kubenswrapper[4721]: I0128 18:35:17.170516 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:17 crc kubenswrapper[4721]: I0128 18:35:17.170526 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:17 crc kubenswrapper[4721]: I0128 18:35:17.170543 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:17 crc kubenswrapper[4721]: I0128 18:35:17.170559 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:17Z","lastTransitionTime":"2026-01-28T18:35:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:35:17 crc kubenswrapper[4721]: I0128 18:35:17.273959 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:17 crc kubenswrapper[4721]: I0128 18:35:17.274013 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:17 crc kubenswrapper[4721]: I0128 18:35:17.274027 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:17 crc kubenswrapper[4721]: I0128 18:35:17.274047 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:17 crc kubenswrapper[4721]: I0128 18:35:17.274059 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:17Z","lastTransitionTime":"2026-01-28T18:35:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:35:17 crc kubenswrapper[4721]: I0128 18:35:17.376747 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:17 crc kubenswrapper[4721]: I0128 18:35:17.376782 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:17 crc kubenswrapper[4721]: I0128 18:35:17.376792 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:17 crc kubenswrapper[4721]: I0128 18:35:17.376811 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:17 crc kubenswrapper[4721]: I0128 18:35:17.376824 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:17Z","lastTransitionTime":"2026-01-28T18:35:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:35:17 crc kubenswrapper[4721]: I0128 18:35:17.479802 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:17 crc kubenswrapper[4721]: I0128 18:35:17.479841 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:17 crc kubenswrapper[4721]: I0128 18:35:17.479852 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:17 crc kubenswrapper[4721]: I0128 18:35:17.479870 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:17 crc kubenswrapper[4721]: I0128 18:35:17.479881 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:17Z","lastTransitionTime":"2026-01-28T18:35:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:35:17 crc kubenswrapper[4721]: I0128 18:35:17.515836 4721 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-19 16:10:43.515657491 +0000 UTC Jan 28 18:35:17 crc kubenswrapper[4721]: I0128 18:35:17.582673 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:17 crc kubenswrapper[4721]: I0128 18:35:17.582721 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:17 crc kubenswrapper[4721]: I0128 18:35:17.582732 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:17 crc kubenswrapper[4721]: I0128 18:35:17.582750 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:17 crc kubenswrapper[4721]: I0128 18:35:17.582764 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:17Z","lastTransitionTime":"2026-01-28T18:35:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:35:17 crc kubenswrapper[4721]: I0128 18:35:17.685191 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:17 crc kubenswrapper[4721]: I0128 18:35:17.685231 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:17 crc kubenswrapper[4721]: I0128 18:35:17.685240 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:17 crc kubenswrapper[4721]: I0128 18:35:17.685258 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:17 crc kubenswrapper[4721]: I0128 18:35:17.685269 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:17Z","lastTransitionTime":"2026-01-28T18:35:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:35:17 crc kubenswrapper[4721]: I0128 18:35:17.788642 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:17 crc kubenswrapper[4721]: I0128 18:35:17.788679 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:17 crc kubenswrapper[4721]: I0128 18:35:17.788689 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:17 crc kubenswrapper[4721]: I0128 18:35:17.788705 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:17 crc kubenswrapper[4721]: I0128 18:35:17.788716 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:17Z","lastTransitionTime":"2026-01-28T18:35:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:35:17 crc kubenswrapper[4721]: I0128 18:35:17.891531 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:17 crc kubenswrapper[4721]: I0128 18:35:17.891600 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:17 crc kubenswrapper[4721]: I0128 18:35:17.891619 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:17 crc kubenswrapper[4721]: I0128 18:35:17.891654 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:17 crc kubenswrapper[4721]: I0128 18:35:17.891674 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:17Z","lastTransitionTime":"2026-01-28T18:35:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:35:17 crc kubenswrapper[4721]: I0128 18:35:17.994991 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:17 crc kubenswrapper[4721]: I0128 18:35:17.995072 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:17 crc kubenswrapper[4721]: I0128 18:35:17.995086 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:17 crc kubenswrapper[4721]: I0128 18:35:17.995112 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:17 crc kubenswrapper[4721]: I0128 18:35:17.995132 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:17Z","lastTransitionTime":"2026-01-28T18:35:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:35:18 crc kubenswrapper[4721]: I0128 18:35:18.097873 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:18 crc kubenswrapper[4721]: I0128 18:35:18.097921 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:18 crc kubenswrapper[4721]: I0128 18:35:18.097933 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:18 crc kubenswrapper[4721]: I0128 18:35:18.097950 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:18 crc kubenswrapper[4721]: I0128 18:35:18.097960 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:18Z","lastTransitionTime":"2026-01-28T18:35:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:35:18 crc kubenswrapper[4721]: I0128 18:35:18.201576 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:18 crc kubenswrapper[4721]: I0128 18:35:18.201653 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:18 crc kubenswrapper[4721]: I0128 18:35:18.201672 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:18 crc kubenswrapper[4721]: I0128 18:35:18.201697 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:18 crc kubenswrapper[4721]: I0128 18:35:18.201712 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:18Z","lastTransitionTime":"2026-01-28T18:35:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:35:18 crc kubenswrapper[4721]: I0128 18:35:18.304412 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:18 crc kubenswrapper[4721]: I0128 18:35:18.304461 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:18 crc kubenswrapper[4721]: I0128 18:35:18.304472 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:18 crc kubenswrapper[4721]: I0128 18:35:18.304492 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:18 crc kubenswrapper[4721]: I0128 18:35:18.304504 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:18Z","lastTransitionTime":"2026-01-28T18:35:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:35:18 crc kubenswrapper[4721]: I0128 18:35:18.406880 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:18 crc kubenswrapper[4721]: I0128 18:35:18.406934 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:18 crc kubenswrapper[4721]: I0128 18:35:18.406945 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:18 crc kubenswrapper[4721]: I0128 18:35:18.406967 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:18 crc kubenswrapper[4721]: I0128 18:35:18.406989 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:18Z","lastTransitionTime":"2026-01-28T18:35:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:35:18 crc kubenswrapper[4721]: I0128 18:35:18.510326 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:18 crc kubenswrapper[4721]: I0128 18:35:18.510384 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:18 crc kubenswrapper[4721]: I0128 18:35:18.510397 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:18 crc kubenswrapper[4721]: I0128 18:35:18.510419 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:18 crc kubenswrapper[4721]: I0128 18:35:18.510431 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:18Z","lastTransitionTime":"2026-01-28T18:35:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:35:18 crc kubenswrapper[4721]: I0128 18:35:18.515986 4721 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-14 01:11:35.774652866 +0000 UTC Jan 28 18:35:18 crc kubenswrapper[4721]: I0128 18:35:18.528699 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jqvck" Jan 28 18:35:18 crc kubenswrapper[4721]: I0128 18:35:18.528788 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 18:35:18 crc kubenswrapper[4721]: I0128 18:35:18.528807 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 18:35:18 crc kubenswrapper[4721]: E0128 18:35:18.529515 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 18:35:18 crc kubenswrapper[4721]: E0128 18:35:18.529479 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jqvck" podUID="f3440038-c980-4fb4-be99-235515ec221c" Jan 28 18:35:18 crc kubenswrapper[4721]: I0128 18:35:18.528965 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 18:35:18 crc kubenswrapper[4721]: E0128 18:35:18.529756 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 18:35:18 crc kubenswrapper[4721]: E0128 18:35:18.529898 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 18:35:18 crc kubenswrapper[4721]: I0128 18:35:18.613213 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:18 crc kubenswrapper[4721]: I0128 18:35:18.613273 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:18 crc kubenswrapper[4721]: I0128 18:35:18.613284 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:18 crc kubenswrapper[4721]: I0128 18:35:18.613306 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:18 crc kubenswrapper[4721]: I0128 18:35:18.613321 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:18Z","lastTransitionTime":"2026-01-28T18:35:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:35:18 crc kubenswrapper[4721]: I0128 18:35:18.716600 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:18 crc kubenswrapper[4721]: I0128 18:35:18.716643 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:18 crc kubenswrapper[4721]: I0128 18:35:18.716655 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:18 crc kubenswrapper[4721]: I0128 18:35:18.716673 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:18 crc kubenswrapper[4721]: I0128 18:35:18.716683 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:18Z","lastTransitionTime":"2026-01-28T18:35:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:35:18 crc kubenswrapper[4721]: I0128 18:35:18.819784 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:18 crc kubenswrapper[4721]: I0128 18:35:18.819842 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:18 crc kubenswrapper[4721]: I0128 18:35:18.819850 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:18 crc kubenswrapper[4721]: I0128 18:35:18.819866 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:18 crc kubenswrapper[4721]: I0128 18:35:18.819897 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:18Z","lastTransitionTime":"2026-01-28T18:35:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:35:18 crc kubenswrapper[4721]: I0128 18:35:18.923646 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:18 crc kubenswrapper[4721]: I0128 18:35:18.923744 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:18 crc kubenswrapper[4721]: I0128 18:35:18.923776 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:18 crc kubenswrapper[4721]: I0128 18:35:18.923815 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:18 crc kubenswrapper[4721]: I0128 18:35:18.923846 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:18Z","lastTransitionTime":"2026-01-28T18:35:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:35:18 crc kubenswrapper[4721]: I0128 18:35:18.968837 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-rgqdt_c0a22020-3f34-4895-beec-2ed5d829ea79/kube-multus/0.log" Jan 28 18:35:18 crc kubenswrapper[4721]: I0128 18:35:18.968939 4721 generic.go:334] "Generic (PLEG): container finished" podID="c0a22020-3f34-4895-beec-2ed5d829ea79" containerID="9d26691be7d95ffd613cc84f59222c67d29ab459d1c42ca07d20fe928d7fbc4a" exitCode=1 Jan 28 18:35:18 crc kubenswrapper[4721]: I0128 18:35:18.969012 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-rgqdt" event={"ID":"c0a22020-3f34-4895-beec-2ed5d829ea79","Type":"ContainerDied","Data":"9d26691be7d95ffd613cc84f59222c67d29ab459d1c42ca07d20fe928d7fbc4a"} Jan 28 18:35:18 crc kubenswrapper[4721]: I0128 18:35:18.969900 4721 scope.go:117] "RemoveContainer" containerID="9d26691be7d95ffd613cc84f59222c67d29ab459d1c42ca07d20fe928d7fbc4a" Jan 28 18:35:18 crc kubenswrapper[4721]: I0128 18:35:18.984663 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-rk2l2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bdd30376-1599-4efc-bb55-7585e8702b60\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa0ff1b9df47dfd65597208cdd31f5cb8ce7f4c9c170d298db28d6f04da7b7a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wknj5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:34:32Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-rk2l2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:35:18Z is after 2025-08-24T17:21:41Z" Jan 28 18:35:19 crc kubenswrapper[4721]: I0128 18:35:19.001743 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-x8hw8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8ac7e75e-c5bb-4b57-b2ba-9ebe8b8fbd88\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cba8dd293cf5ae7b0c987c6ee3b24da02d2687ee54292da92e28ca627ed3eaca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqp9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a3c1211b73ca96ac22854f0cb677a0088a679ad56b104ea6b8e0871884a3a71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqp9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:34:
43Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-x8hw8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:35:18Z is after 2025-08-24T17:21:41Z" Jan 28 18:35:19 crc kubenswrapper[4721]: I0128 18:35:19.018185 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-jqvck" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f3440038-c980-4fb4-be99-235515ec221c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96np9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96np9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:34:44Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-jqvck\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:35:19Z is after 2025-08-24T17:21:41Z" Jan 28 18:35:19 crc kubenswrapper[4721]: I0128 18:35:19.026391 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:19 crc kubenswrapper[4721]: I0128 18:35:19.026441 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:19 crc kubenswrapper[4721]: I0128 18:35:19.026459 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:19 crc kubenswrapper[4721]: I0128 18:35:19.026484 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:19 crc kubenswrapper[4721]: I0128 18:35:19.026498 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:19Z","lastTransitionTime":"2026-01-28T18:35:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:35:19 crc kubenswrapper[4721]: I0128 18:35:19.033472 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cdf9d0d3-f468-468d-a84a-376800ac08e1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:33:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ff405f7132b5a1a0e2abc66c8e4c0abbd732bdf90cb2b4b2867dd10b8e62921\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f0406ac4c224de266eab94d3b9c12d110ee36b7ccb34381fc284ade389ba042\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\
",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f0406ac4c224de266eab94d3b9c12d110ee36b7ccb34381fc284ade389ba042\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:33:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:33:56Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:35:19Z is after 2025-08-24T17:21:41Z" Jan 28 18:35:19 crc kubenswrapper[4721]: I0128 18:35:19.050089 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"94f18835-9a2d-4427-bc71-e4cd48b94c19\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:33:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a345c3bb49064600db61f16588229b6877293710e6c21c15d91746c2f1d50b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://93950542dfa7bda5686ef6d8ff927d14baafd149893c732658e9aa3916be64ae\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945
c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://486a6284a1e3930aac13368062b8796b4c271214dd08cb60c7ae2ce7b4a45a05\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0339ccd2bd809dc715d0e334b596c5abdebca1af6ac5f7afefa81aa8baa470ed\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ca3243d75b5ce4aff178d82c45c7d1f765013233b5736600151adb4801aa462b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T18:34:21Z\\\",\\\"message\\\":\\\"le observer\\\\nW0128 18:34:20.724978 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0128 18:34:20.725313 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 18:34:20.726636 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2540251349/tls.crt::/tmp/serving-cert-2540251349/tls.key\\\\\\\"\\\\nI0128 18:34:21.177830 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0128 18:34:21.196258 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0128 18:34:21.196291 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0128 18:34:21.196317 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0128 18:34:21.196323 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0128 18:34:21.230729 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0128 18:34:21.230760 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0128 18:34:21.230771 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 18:34:21.230781 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 18:34:21.230789 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' 
detected.\\\\nW0128 18:34:21.230795 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0128 18:34:21.230800 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0128 18:34:21.230804 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0128 18:34:21.233981 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T18:34:20Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://90f1279d8262e042527207dbe56a1d08ba554650510bebfc53bd24cd84532026\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:06Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1499c7b625ec721799e4cf54e3de39ab75a752bbb0450a8eeffcdb6259f8a585\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1499c7b625ec721799e4cf54e3de39ab75a752bbb0450a8eeffcdb6259f8a585\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:33:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:33:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:35:19Z is after 2025-08-24T17:21:41Z" Jan 28 18:35:19 crc kubenswrapper[4721]: I0128 18:35:19.063288 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"86fc797f-6769-4801-8cb0-b9f25c9ec29b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:33:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:33:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e8c8cd370cc146f15325497afef171437a3c7c3d4e82cda99a321bf847399bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f712f75bf0603b358c696f1a62f2363254ab926615ab377291c7083308e3f2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:33:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6143e461a697375ca79df96494f8cf4a575e622428db241f50a52ae1c67acbc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e37a3261383445df2e4050fb0b3f92c0afc6379c71eb061475783a72cd37459f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:33:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:35:19Z is after 2025-08-24T17:21:41Z" Jan 28 18:35:19 crc kubenswrapper[4721]: I0128 18:35:19.076990 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:35:19Z is after 2025-08-24T17:21:41Z" Jan 28 18:35:19 crc kubenswrapper[4721]: I0128 18:35:19.093312 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56147b9675ae31e5e8b4473346c69d90a1013ab084214a7fc34295f810b229a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5dc3277d9793a59f2e2a2b4d015782f383df20b594be435f894d24dbe5b837cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:35:19Z is after 2025-08-24T17:21:41Z" Jan 28 18:35:19 crc kubenswrapper[4721]: I0128 18:35:19.116684 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wr282" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"70686e42-b434-4ff9-9753-cfc870beef82\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ac8a8ea8e677c233fbcb1ac69446f0251b2e390e84a39cd77c7ccf1a2041908\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7lkbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://989948e16979a2cd8568ed93334056da9feb5bd7226f4124428dcffc09f13d41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7lkbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2cc9ef8160c40e72ae736136ef051b06552caea2539fad7969c5f923eba0c2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7lkbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7df50ebf5999c8b004f27f801fc166377523c7b42171fec957ab82c88b529794\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7lkbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11610a0739792fa0417c894ad5b1c0d46d7188b585f8f4c1a1e7e66cd6d8bd46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7lkbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://44279a74a34d26b6eee05ac26d51a21492dddab4288ca5e09665c191cbacd90c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7lkbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://932e160b9acb81fd545498d2b471f3ae2cec8716
bfa875350287f72b78516dd6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://932e160b9acb81fd545498d2b471f3ae2cec8716bfa875350287f72b78516dd6\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T18:34:55Z\\\",\\\"message\\\":\\\"n.org/owner:openshift-dns/dns-default]} name:Service_openshift-dns/dns-default_TCP_node_router+switch_crc options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.10:53: 10.217.4.10:9154:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {be9dcc9e-c16a-4962-a6d2-4adeb0b929c4}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-dns/dns-default]} name:Service_openshift-dns/dns-default_UDP_node_router+switch_crc options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[udp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.10:53:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {4c1be812-05d3-4f45-91b5-a853a5c8de71}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0128 18:34:55.473455 6405 obj_retry.go:386] Retry successful for *v1.Pod openshift-multus/multus-additional-cni-plugins-7vsph after 0 failed attempt(s)\\\\nI0128 18:34:55.474632 6405 default_network_controller.go:776] Recording success event on pod openshift-multus/m\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T18:34:54Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-wr282_openshift-ovn-kubernetes(70686e42-b434-4ff9-9753-cfc870beef82)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7lkbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://373ab38644697d3dbfe782ed66982ac1d5510d0a3bc39072c8113f71cc9e97f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7lkbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://733c81890010402d252a1945284a85a3278b603ede433530865f680a1be02477\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://733c81890010402d252a1945284a85a3278b603ede433530865f680a1be02477\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7lkbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:34:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-wr282\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:35:19Z is after 2025-08-24T17:21:41Z" Jan 28 18:35:19 crc kubenswrapper[4721]: I0128 18:35:19.129559 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:19 crc kubenswrapper[4721]: I0128 18:35:19.129612 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:19 crc kubenswrapper[4721]: I0128 18:35:19.129625 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:19 crc kubenswrapper[4721]: I0128 18:35:19.129651 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:19 crc kubenswrapper[4721]: I0128 18:35:19.129665 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:19Z","lastTransitionTime":"2026-01-28T18:35:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:35:19 crc kubenswrapper[4721]: I0128 18:35:19.132851 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rgqdt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c0a22020-3f34-4895-beec-2ed5d829ea79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:35:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:35:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d26691be7d95ffd613cc84f59222c67d29ab459d1c42ca07d20fe928d7fbc4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9d26691be7d95ffd613cc84f59222c67d29ab459d1c42ca07d20fe928d7fbc4a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T18:35:18Z\\\",\\\"message\\\":\\\"2026-01-28T18:34:32+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_77ae07de-fe49-4308-874b-bb02ed4b202b\\\\n2026-01-28T18:34:32+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_77ae07de-fe49-4308-874b-bb02ed4b202b to /host/opt/cni/bin/\\\\n2026-01-28T18:34:33Z [verbose] multus-daemon started\\\\n2026-01-28T18:34:33Z [verbose] Readiness Indicator file check\\\\n2026-01-28T18:35:18Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T18:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l86pm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:34:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rgqdt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:35:19Z is after 2025-08-24T17:21:41Z" Jan 28 18:35:19 crc kubenswrapper[4721]: I0128 18:35:19.154748 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"50459cb7-bb46-4f2f-a119-03a6102ad146\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:33:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97a290731225e8b998c044c9547db36b5af73889929db5fbc37b6b4170f5dbb7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9efdfabfba8f11d2bdeafad77520260137fcfff3ca51ae77afd4ee9b141f602\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3c1b4d37e11f8a4ee9539e5ea3cce77b7d382133b9e59ccd9c3e8aa2b308754\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52a1199a0611555765afe8fcf01774278848806
4c958d0c75c46d51ddc2ff9d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://316506ed1e9f91314d427e1b6c3a44ee58d0c4a3a8a93b97bdf098703a6a2f34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f1a6a669a0341baa2d7fb17bcebfe702f96d1192ca35c3f2d00e463778271d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f1a6a669a0341baa2d7fb17bcebfe702f96d1192ca35c3f2d00e463778271d6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:33:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d3a248e15466c89ce3b237a22ce54365004ea86dc1541e1ac44e028f91bf19f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d3a248e15466c89ce3b237a22ce54365004ea86dc1541e1ac44e028f91bf19f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:34:02Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f2e1f3a63445bad45029602f52863bfdf2fab495711e3318d68fee042530acb1\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f2e1f3a63445bad45029602f52863bfdf2fab495711e3318d68fee042530acb1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:34:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:33:56Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:35:19Z is after 2025-08-24T17:21:41Z" Jan 28 18:35:19 crc kubenswrapper[4721]: I0128 18:35:19.168810 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-lf92l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"20d04cbd-fcf1-4d48-9cca-1dd29b13c938\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://709d01fbeba51f6f21b60d9fd60aa9b44ba9e4f2504c19b7fd8d279f8c13c1e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mvqr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\
\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:34:30Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-lf92l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:35:19Z is after 2025-08-24T17:21:41Z" Jan 28 18:35:19 crc kubenswrapper[4721]: I0128 18:35:19.191744 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7vsph" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"942c0bcf-8f75-42e8-a5c0-af4c640eb13c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ce38a3c3aa72d0816ff32ec77af100dc6f2a8761362992f01b88f836c44f65f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s84mk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5222e9a5ac55f20acbe1105e84f22c96889f410246b7ef423236f5d3e216f43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d5222e9a5ac55f20acbe1105e84f22c96889f410246b7ef423236f5d3e216f43\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:34:31Z\\\"}},\\\"volumeMounts\\\"
:[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s84mk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e702f148f20abb9e5e9453eb1b9ae0cc094138beb1fa048753ff3b497517e942\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e702f148f20abb9e5e9453eb1b9ae0cc094138beb1fa048753ff3b497517e942\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s84mk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a55ef87212072ac9f6d05d27d54f7a3f13add617b1a07a2fd1e0a652fe15cb3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a55ef87212072ac9f6d05d27d54f7a3f13add617b1a07a2fd1e0a652fe15cb3a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:34:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s84mk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea7d40aa704ad5ddaaa471ab8606dcece38b229fd8affcce9bb1959e5c19f092\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f
567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ea7d40aa704ad5ddaaa471ab8606dcece38b229fd8affcce9bb1959e5c19f092\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:34:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s84mk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1829c2eed51af896c5ccfa25c1f23ea971ea9d54e9db53415095c5ea43ac4062\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1829c2eed51af896c5ccfa25c1f23ea971ea9d54e9db53415095c5ea43ac4062\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:34:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s84mk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b576c2dcc0ad83742dbc75f744d08eb8b881f9be4f9d16f4d4d3004ce174f24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b576c2dcc0ad83742dbc75f744d08eb8b881f9be4f9d16f4d4d3004ce174f24\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:34:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s84mk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11
\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:34:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7vsph\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:35:19Z is after 2025-08-24T17:21:41Z" Jan 28 18:35:19 crc kubenswrapper[4721]: I0128 18:35:19.209142 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef61ab09457440960cf27586f65751f928ccc999db19fd16364262ce2449cd97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:35:19Z is after 2025-08-24T17:21:41Z" Jan 28 18:35:19 crc kubenswrapper[4721]: I0128 18:35:19.228205 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:24Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:35:19Z is after 2025-08-24T17:21:41Z" Jan 28 18:35:19 crc kubenswrapper[4721]: I0128 18:35:19.232440 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:19 crc kubenswrapper[4721]: I0128 18:35:19.232474 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:19 crc kubenswrapper[4721]: I0128 18:35:19.232482 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:19 crc kubenswrapper[4721]: I0128 18:35:19.232505 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:19 crc kubenswrapper[4721]: I0128 18:35:19.232517 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:19Z","lastTransitionTime":"2026-01-28T18:35:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:35:19 crc kubenswrapper[4721]: I0128 18:35:19.245960 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:24Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:35:19Z is after 2025-08-24T17:21:41Z" Jan 28 18:35:19 crc kubenswrapper[4721]: I0128 18:35:19.259929 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa3af2d00ecbcf4d30c4c81191306dd45625f55032058b314ce2b91f6b2033e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:35:19Z is after 2025-08-24T17:21:41Z" Jan 28 18:35:19 crc kubenswrapper[4721]: I0128 18:35:19.274256 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-76rx2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e3427a4-9a03-4a08-bf7f-7a5e96290ad6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9843dea4333a77dd5b0005984b9f8e7c7c993b28f89f5d6432477bfde3383339\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4cj5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bbd26672afb8ed608228f3f2101f430b1eaa5ccea0e8074fad597dec347c4522\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4cj5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:34:30Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-76rx2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:35:19Z is after 2025-08-24T17:21:41Z" Jan 28 18:35:19 crc kubenswrapper[4721]: I0128 18:35:19.290018 4721 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e670fafb-703c-4cc9-b670-d25ae62d87a0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:33:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f97092be25e90c4a15af043397b1cbcefdb3a3511a80a046496bef807abc8f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8351f80de5ab5d11c5a87270e69a8ebd20b3a804671e20b991f1fc77ba27bae8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8465df12048ab0feaba16e1935fa17feb4fe967ab3e4ef37981bed51ff77911\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":
[{\\\"containerID\\\":\\\"cri-o://47389981052e7ab8530932e01c95d9b178f2c55b21e03b460d3f02dddbcf4830\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://47389981052e7ab8530932e01c95d9b178f2c55b21e03b460d3f02dddbcf4830\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:33:58Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:33:56Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:35:19Z is after 2025-08-24T17:21:41Z" Jan 28 18:35:19 crc kubenswrapper[4721]: I0128 18:35:19.335271 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:19 crc kubenswrapper[4721]: I0128 18:35:19.335308 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:19 crc kubenswrapper[4721]: I0128 18:35:19.335319 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:19 crc kubenswrapper[4721]: I0128 18:35:19.335335 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:19 crc kubenswrapper[4721]: I0128 18:35:19.335347 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:19Z","lastTransitionTime":"2026-01-28T18:35:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:35:19 crc kubenswrapper[4721]: I0128 18:35:19.437318 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:19 crc kubenswrapper[4721]: I0128 18:35:19.437349 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:19 crc kubenswrapper[4721]: I0128 18:35:19.437357 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:19 crc kubenswrapper[4721]: I0128 18:35:19.437370 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:19 crc kubenswrapper[4721]: I0128 18:35:19.437379 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:19Z","lastTransitionTime":"2026-01-28T18:35:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:35:19 crc kubenswrapper[4721]: I0128 18:35:19.517396 4721 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-17 23:35:43.169580635 +0000 UTC Jan 28 18:35:19 crc kubenswrapper[4721]: I0128 18:35:19.539851 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:19 crc kubenswrapper[4721]: I0128 18:35:19.539911 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:19 crc kubenswrapper[4721]: I0128 18:35:19.539932 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:19 crc kubenswrapper[4721]: I0128 18:35:19.539956 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:19 crc kubenswrapper[4721]: I0128 18:35:19.539968 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:19Z","lastTransitionTime":"2026-01-28T18:35:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:35:19 crc kubenswrapper[4721]: I0128 18:35:19.642052 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:19 crc kubenswrapper[4721]: I0128 18:35:19.642098 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:19 crc kubenswrapper[4721]: I0128 18:35:19.642110 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:19 crc kubenswrapper[4721]: I0128 18:35:19.642127 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:19 crc kubenswrapper[4721]: I0128 18:35:19.642139 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:19Z","lastTransitionTime":"2026-01-28T18:35:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:35:19 crc kubenswrapper[4721]: I0128 18:35:19.744688 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:19 crc kubenswrapper[4721]: I0128 18:35:19.744738 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:19 crc kubenswrapper[4721]: I0128 18:35:19.744750 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:19 crc kubenswrapper[4721]: I0128 18:35:19.744768 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:19 crc kubenswrapper[4721]: I0128 18:35:19.744779 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:19Z","lastTransitionTime":"2026-01-28T18:35:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:35:19 crc kubenswrapper[4721]: I0128 18:35:19.847558 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:19 crc kubenswrapper[4721]: I0128 18:35:19.847622 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:19 crc kubenswrapper[4721]: I0128 18:35:19.847637 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:19 crc kubenswrapper[4721]: I0128 18:35:19.847661 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:19 crc kubenswrapper[4721]: I0128 18:35:19.847674 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:19Z","lastTransitionTime":"2026-01-28T18:35:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:35:19 crc kubenswrapper[4721]: I0128 18:35:19.950431 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:19 crc kubenswrapper[4721]: I0128 18:35:19.950473 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:19 crc kubenswrapper[4721]: I0128 18:35:19.950483 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:19 crc kubenswrapper[4721]: I0128 18:35:19.950499 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:19 crc kubenswrapper[4721]: I0128 18:35:19.950508 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:19Z","lastTransitionTime":"2026-01-28T18:35:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:35:19 crc kubenswrapper[4721]: I0128 18:35:19.975389 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-rgqdt_c0a22020-3f34-4895-beec-2ed5d829ea79/kube-multus/0.log" Jan 28 18:35:19 crc kubenswrapper[4721]: I0128 18:35:19.975451 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-rgqdt" event={"ID":"c0a22020-3f34-4895-beec-2ed5d829ea79","Type":"ContainerStarted","Data":"2588c3d36133bd9b96114f5d12622916ac785bea9be47d12a3d76d8585c3e0ab"} Jan 28 18:35:19 crc kubenswrapper[4721]: I0128 18:35:19.992486 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef61ab09457440960cf27586f65751f928ccc999db19fd16364262ce2449cd97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:35:19Z is after 2025-08-24T17:21:41Z" Jan 28 18:35:20 crc kubenswrapper[4721]: I0128 18:35:20.007321 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-lf92l" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"20d04cbd-fcf1-4d48-9cca-1dd29b13c938\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://709d01fbeba51f6f21b60d9fd60aa9b44ba9e4f2504c19b7fd8d279f8c13c1e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mvqr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:34:30Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-lf92l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:35:20Z is after 2025-08-24T17:21:41Z" Jan 28 18:35:20 crc kubenswrapper[4721]: I0128 18:35:20.027016 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7vsph" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"942c0bcf-8f75-42e8-a5c0-af4c640eb13c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ce38a3c3aa72d0816ff32ec77af100dc6f2a8761362992f01b88f836c44f65f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s84mk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5222e9a5ac55f20acbe1105e84f22c96889f410246b7ef423236f5d3e216f43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d5222e9a5ac55f20acbe1105e84f22c96889f410246b7ef423236f5d3e216f43\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s84mk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e702f148f20abb9e5e9453eb1b9ae0cc094138beb1fa048753ff3b497517e942\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e702f148f20abb9e5e9453eb1b9ae0cc094138beb1fa048753ff3b497517e942\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s84mk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a55ef87212072ac9f6d05d27d54f7a3f13add617b1a07a2fd1e0a652fe15cb3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a55ef87212072ac9f6d05d27d54f7a3f13add617b1a07a2fd1e0a652fe15cb3a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:34:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s84mk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea7d40aa704ad5ddaaa471ab8606dcece38b229fd8affcce9bb1959e5c19f092\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ea7d40aa704ad5ddaaa471ab8606dcece38b229fd8affcce9bb1959e5c19f092\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:34:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s84mk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1829c2eed51af896c5ccfa25c1f23ea971ea9d54e9db53415095c5ea43ac4062\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1829c2eed51af896c5ccfa25c1f23ea971ea9d54e9db53415095c5ea43ac4062\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:34:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s84mk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b576c2dcc0ad83742dbc75f744d08eb8b881f9be4f9d16f4d4d3004ce174f24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b576c2dcc0ad83742dbc75f744d08eb8b881f9be4f9d16f4d4d3004ce174f24\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:34:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s84mk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:34:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7vsph\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:35:20Z is after 2025-08-24T17:21:41Z" Jan 28 18:35:20 crc kubenswrapper[4721]: I0128 18:35:20.041953 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e670fafb-703c-4cc9-b670-d25ae62d87a0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:33:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f97092be25e90c4a15af043397b1cbcefdb3a3511a80a046496bef807abc8f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8351f80de5ab5d11c5a87270e69a8ebd20b3a804671e20b991f1fc77ba27bae8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8465df12048ab0feaba16e1935fa17feb4fe967ab3e4ef37981bed51ff77911\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47389981052e7ab8530932e01c95d9b178f2c55b21e03b460d3f02dddbcf4830\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://47389981052e7ab8530932e01c95d9b178f2c55b21e03b460d3f02dddbcf4830\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:33:58Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:33:56Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:35:20Z is after 2025-08-24T17:21:41Z" Jan 28 18:35:20 crc kubenswrapper[4721]: I0128 18:35:20.053857 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:20 crc kubenswrapper[4721]: I0128 18:35:20.053901 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:20 crc kubenswrapper[4721]: I0128 18:35:20.053911 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:20 crc kubenswrapper[4721]: I0128 18:35:20.053930 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:20 crc kubenswrapper[4721]: I0128 18:35:20.053940 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:20Z","lastTransitionTime":"2026-01-28T18:35:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:35:20 crc kubenswrapper[4721]: I0128 18:35:20.057423 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:24Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:35:20Z is after 2025-08-24T17:21:41Z" Jan 28 18:35:20 crc kubenswrapper[4721]: I0128 18:35:20.072075 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:35:20Z is after 2025-08-24T17:21:41Z" Jan 28 18:35:20 crc kubenswrapper[4721]: I0128 18:35:20.087209 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa3af2d00ecbcf4d30c4c81191306dd45625f55032058b314ce2b91f6b2033e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:35:20Z is after 2025-08-24T17:21:41Z" Jan 28 18:35:20 crc kubenswrapper[4721]: I0128 18:35:20.102473 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-76rx2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e3427a4-9a03-4a08-bf7f-7a5e96290ad6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9843dea4333a77dd5b0005984b9f8e7c7c993b28f89f5d6432477bfde3383339\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4cj5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bbd26672afb8ed608228f3f2101f430b1eaa5ccea0e8074fad597dec347c4522\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4cj5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\
\\":\\\"2026-01-28T18:34:30Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-76rx2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:35:20Z is after 2025-08-24T17:21:41Z" Jan 28 18:35:20 crc kubenswrapper[4721]: I0128 18:35:20.117077 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cdf9d0d3-f468-468d-a84a-376800ac08e1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:33:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ff405f7132b5a1a0e2abc66c8e4c0abbd732bdf90cb2b4b2867dd10b8e62921\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f0406ac4c224de266eab94d3b9c12d110ee36b7ccb34381fc284ade389ba042\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f0406ac4c224de266eab94d3b9c12d110ee36b7ccb34381fc284ade389ba042\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:33:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:33:56Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:35:20Z is after 2025-08-24T17:21:41Z" Jan 28 18:35:20 crc kubenswrapper[4721]: I0128 18:35:20.153685 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-rk2l2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bdd30376-1599-4efc-bb55-7585e8702b60\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa0ff1b9df47dfd65597208cdd31f5cb8ce7f4c9c170d298db28d6f04da7b7a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wknj5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:34:32Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-rk2l2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:35:20Z is after 2025-08-24T17:21:41Z" Jan 28 18:35:20 crc kubenswrapper[4721]: I0128 18:35:20.156864 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:20 crc kubenswrapper[4721]: I0128 18:35:20.157260 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:20 crc kubenswrapper[4721]: I0128 18:35:20.157357 4721 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:20 crc kubenswrapper[4721]: I0128 18:35:20.157466 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:20 crc kubenswrapper[4721]: I0128 18:35:20.157547 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:20Z","lastTransitionTime":"2026-01-28T18:35:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:35:20 crc kubenswrapper[4721]: I0128 18:35:20.168587 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-x8hw8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8ac7e75e-c5bb-4b57-b2ba-9ebe8b8fbd88\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cba8dd293cf5ae7b0c987c6ee3b24da02d2687ee54292da92e28ca627ed3eaca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqp9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a3c1211b73ca96ac22854f0cb677a0088a679ad56b104ea6b8e0871884a3a71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath
\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqp9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:34:43Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-x8hw8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:35:20Z is after 2025-08-24T17:21:41Z" Jan 28 18:35:20 crc kubenswrapper[4721]: I0128 18:35:20.178938 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-jqvck" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f3440038-c980-4fb4-be99-235515ec221c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96np9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96np9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:34:44Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-jqvck\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:35:20Z is after 2025-08-24T17:21:41Z" Jan 28 18:35:20 crc kubenswrapper[4721]: I0128 18:35:20.197604 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wr282" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"70686e42-b434-4ff9-9753-cfc870beef82\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ac8a8ea8e677c233fbcb1ac69446f0251b2e390e84a39cd77c7ccf1a2041908\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7lkbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://989948e16979a2cd8568ed93334056da9feb5bd7226f4124428dcffc09f13d41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7lkbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2cc9ef8160c40e72ae736136ef051b06552caea2539fad7969c5f923eba0c2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7lkbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7df50ebf5999c8b004f27f801fc166377523c7b42171fec957ab82c88b529794\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7lkbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11610a0739792fa0417c894ad5b1c0d46d7188b585f8f4c1a1e7e66cd6d8bd46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7lkbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://44279a74a34d26b6eee05ac26d51a21492dddab4288ca5e09665c191cbacd90c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7lkbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://932e160b9acb81fd545498d2b471f3ae2cec8716
bfa875350287f72b78516dd6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://932e160b9acb81fd545498d2b471f3ae2cec8716bfa875350287f72b78516dd6\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T18:34:55Z\\\",\\\"message\\\":\\\"n.org/owner:openshift-dns/dns-default]} name:Service_openshift-dns/dns-default_TCP_node_router+switch_crc options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.10:53: 10.217.4.10:9154:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {be9dcc9e-c16a-4962-a6d2-4adeb0b929c4}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-dns/dns-default]} name:Service_openshift-dns/dns-default_UDP_node_router+switch_crc options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[udp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.10:53:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {4c1be812-05d3-4f45-91b5-a853a5c8de71}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0128 18:34:55.473455 6405 obj_retry.go:386] Retry successful for *v1.Pod openshift-multus/multus-additional-cni-plugins-7vsph after 0 failed attempt(s)\\\\nI0128 18:34:55.474632 6405 default_network_controller.go:776] Recording success event on pod openshift-multus/m\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T18:34:54Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-wr282_openshift-ovn-kubernetes(70686e42-b434-4ff9-9753-cfc870beef82)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7lkbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://373ab38644697d3dbfe782ed66982ac1d5510d0a3bc39072c8113f71cc9e97f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7lkbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://733c81890010402d252a1945284a85a3278b603ede433530865f680a1be02477\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://733c81890010402d252a1945284a85a3278b603ede433530865f680a1be02477\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7lkbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:34:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-wr282\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:35:20Z is after 2025-08-24T17:21:41Z" Jan 28 18:35:20 crc kubenswrapper[4721]: I0128 18:35:20.212041 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rgqdt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c0a22020-3f34-4895-beec-2ed5d829ea79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:35:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:35:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2588c3d36133bd9b96114f5d12622916ac785bea9be47d12a3d76d8585c3e0ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9d26691be7d95ffd613cc84f59222c67d29ab459d1c42ca07d20fe928d7fbc4a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T18:35:18Z\\\",\\\"message\\\":\\\"2026-01-28T18:34:32+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_77ae07de-fe49-4308-874b-bb02ed4b202b\\\\n2026-01-28T18:34:32+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_77ae07de-fe49-4308-874b-bb02ed4b202b to 
/host/opt/cni/bin/\\\\n2026-01-28T18:34:33Z [verbose] multus-daemon started\\\\n2026-01-28T18:34:33Z [verbose] Readiness Indicator file check\\\\n2026-01-28T18:35:18Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T18:34:31Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:35:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l86pm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:34:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rgqdt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:35:20Z is after 2025-08-24T17:21:41Z" Jan 28 18:35:20 crc kubenswrapper[4721]: I0128 18:35:20.232358 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"50459cb7-bb46-4f2f-a119-03a6102ad146\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:33:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97a290731225e8b998c044c9547db36b5af73889929db5fbc37b6b4170f5dbb7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9efdfabfba8f11d2bdeafad77520260137fcfff3ca51ae77afd4ee9b141f602\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3c1b4d37e11f8a4ee9539e5ea3cce77b7d382133b9e59ccd9c3e8aa2b308754\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52a1199a0611555765afe8fcf01774278848806
4c958d0c75c46d51ddc2ff9d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://316506ed1e9f91314d427e1b6c3a44ee58d0c4a3a8a93b97bdf098703a6a2f34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f1a6a669a0341baa2d7fb17bcebfe702f96d1192ca35c3f2d00e463778271d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f1a6a669a0341baa2d7fb17bcebfe702f96d1192ca35c3f2d00e463778271d6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:33:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d3a248e15466c89ce3b237a22ce54365004ea86dc1541e1ac44e028f91bf19f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d3a248e15466c89ce3b237a22ce54365004ea86dc1541e1ac44e028f91bf19f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:34:02Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f2e1f3a63445bad45029602f52863bfdf2fab495711e3318d68fee042530acb1\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f2e1f3a63445bad45029602f52863bfdf2fab495711e3318d68fee042530acb1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:34:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:33:56Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:35:20Z is after 2025-08-24T17:21:41Z" Jan 28 18:35:20 crc kubenswrapper[4721]: I0128 18:35:20.249361 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"94f18835-9a2d-4427-bc71-e4cd48b94c19\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:33:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a345c3bb49064600db61f16588229b6877293710e6c21c15d91746c2f1d50b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://93950542dfa7bda5686ef6d8ff927d14
baafd149893c732658e9aa3916be64ae\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://486a6284a1e3930aac13368062b8796b4c271214dd08cb60c7ae2ce7b4a45a05\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0339ccd2bd809dc715d0e334b596c5abdebca1af6ac5f7afefa81aa8baa470ed\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ca3243d75b5ce4aff178d82c45c7d1f765013233b5736600151adb4801aa462b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T18:34:21Z\\\",\\\"message\\\":\\\"le observer\\\\nW0128 18:34:20.724978 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0128 18:34:20.725313 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 18:34:20.726636 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2540251349/tls.crt::/tmp/serving-cert-2540251349/tls.key\\\\\\\"\\\\nI0128 18:34:21.177830 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0128 18:34:21.196258 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0128 18:34:21.196291 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0128 18:34:21.196317 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0128 18:34:21.196323 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0128 18:34:21.230729 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0128 18:34:21.230760 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0128 18:34:21.230771 1 
secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 18:34:21.230781 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 18:34:21.230789 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0128 18:34:21.230795 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0128 18:34:21.230800 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0128 18:34:21.230804 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0128 18:34:21.233981 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T18:34:20Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://90f1279d8262e042527207dbe56a1d08ba554650510bebfc53bd24cd84532026\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:06Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1499c7b625ec721799e4cf54e3de39ab75a752bbb0450a8eeffcdb6259f8a585\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1499c7b625ec721799e4cf54e3de39ab75a752bbb0450a8eeffcdb6259f8a585\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:33:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:33:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:35:20Z is after 2025-08-24T17:21:41Z" Jan 28 18:35:20 crc kubenswrapper[4721]: I0128 18:35:20.260087 4721 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:20 crc kubenswrapper[4721]: I0128 18:35:20.260131 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:20 crc kubenswrapper[4721]: I0128 18:35:20.260144 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:20 crc kubenswrapper[4721]: I0128 18:35:20.260162 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:20 crc kubenswrapper[4721]: I0128 18:35:20.260192 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:20Z","lastTransitionTime":"2026-01-28T18:35:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:35:20 crc kubenswrapper[4721]: I0128 18:35:20.263422 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"86fc797f-6769-4801-8cb0-b9f25c9ec29b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:33:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:33:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e8c8cd370cc146f15325497afef171437a3c7c3d4e82cda99a321bf847399bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f712f75bf0603b358c696f1a62f2363254ab926615ab377291c7083308e3f2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"s
tate\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:33:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6143e461a697375ca79df96494f8cf4a575e622428db241f50a52ae1c67acbc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e37a3261383445df2e4050fb0b3f92c0afc6379c71eb061475783a72cd37459f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:33:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:35:20Z is after 2025-08-24T17:21:41Z" Jan 28 18:35:20 crc kubenswrapper[4721]: I0128 18:35:20.279313 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:35:20Z is after 2025-08-24T17:21:41Z" Jan 28 18:35:20 crc kubenswrapper[4721]: I0128 18:35:20.295387 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56147b9675ae31e5e8b4473346c69d90a1013ab084214a7fc34295f810b229a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5dc3277d9793a59f2e2a2b4d015782f383df20b594be435f894d24dbe5b837cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:35:20Z is after 2025-08-24T17:21:41Z" Jan 28 18:35:20 crc kubenswrapper[4721]: I0128 18:35:20.363143 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:20 crc kubenswrapper[4721]: I0128 18:35:20.363699 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:20 crc kubenswrapper[4721]: I0128 18:35:20.363837 4721 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 28 18:35:20 crc kubenswrapper[4721]: I0128 18:35:20.363950 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:20 crc kubenswrapper[4721]: I0128 18:35:20.364041 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:20Z","lastTransitionTime":"2026-01-28T18:35:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:35:20 crc kubenswrapper[4721]: I0128 18:35:20.467314 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:20 crc kubenswrapper[4721]: I0128 18:35:20.467355 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:20 crc kubenswrapper[4721]: I0128 18:35:20.467367 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:20 crc kubenswrapper[4721]: I0128 18:35:20.467385 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:20 crc kubenswrapper[4721]: I0128 18:35:20.467397 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:20Z","lastTransitionTime":"2026-01-28T18:35:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:35:20 crc kubenswrapper[4721]: I0128 18:35:20.518502 4721 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-23 21:32:51.311668761 +0000 UTC Jan 28 18:35:20 crc kubenswrapper[4721]: I0128 18:35:20.528031 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 18:35:20 crc kubenswrapper[4721]: I0128 18:35:20.528083 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 18:35:20 crc kubenswrapper[4721]: I0128 18:35:20.528110 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 18:35:20 crc kubenswrapper[4721]: E0128 18:35:20.528235 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 18:35:20 crc kubenswrapper[4721]: I0128 18:35:20.528352 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-jqvck" Jan 28 18:35:20 crc kubenswrapper[4721]: E0128 18:35:20.528450 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 18:35:20 crc kubenswrapper[4721]: E0128 18:35:20.528688 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jqvck" podUID="f3440038-c980-4fb4-be99-235515ec221c" Jan 28 18:35:20 crc kubenswrapper[4721]: E0128 18:35:20.528891 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 18:35:20 crc kubenswrapper[4721]: I0128 18:35:20.529917 4721 scope.go:117] "RemoveContainer" containerID="932e160b9acb81fd545498d2b471f3ae2cec8716bfa875350287f72b78516dd6" Jan 28 18:35:20 crc kubenswrapper[4721]: I0128 18:35:20.571376 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:20 crc kubenswrapper[4721]: I0128 18:35:20.571925 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:20 crc kubenswrapper[4721]: I0128 18:35:20.571940 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:20 crc kubenswrapper[4721]: I0128 18:35:20.571964 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:20 crc kubenswrapper[4721]: I0128 18:35:20.571976 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:20Z","lastTransitionTime":"2026-01-28T18:35:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:35:20 crc kubenswrapper[4721]: I0128 18:35:20.675711 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:20 crc kubenswrapper[4721]: I0128 18:35:20.675756 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:20 crc kubenswrapper[4721]: I0128 18:35:20.675765 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:20 crc kubenswrapper[4721]: I0128 18:35:20.675782 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:20 crc kubenswrapper[4721]: I0128 18:35:20.675795 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:20Z","lastTransitionTime":"2026-01-28T18:35:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:35:20 crc kubenswrapper[4721]: I0128 18:35:20.778509 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:20 crc kubenswrapper[4721]: I0128 18:35:20.778564 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:20 crc kubenswrapper[4721]: I0128 18:35:20.778578 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:20 crc kubenswrapper[4721]: I0128 18:35:20.778599 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:20 crc kubenswrapper[4721]: I0128 18:35:20.778613 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:20Z","lastTransitionTime":"2026-01-28T18:35:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:35:20 crc kubenswrapper[4721]: I0128 18:35:20.882026 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:20 crc kubenswrapper[4721]: I0128 18:35:20.882089 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:20 crc kubenswrapper[4721]: I0128 18:35:20.882108 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:20 crc kubenswrapper[4721]: I0128 18:35:20.882130 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:20 crc kubenswrapper[4721]: I0128 18:35:20.882147 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:20Z","lastTransitionTime":"2026-01-28T18:35:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:35:20 crc kubenswrapper[4721]: I0128 18:35:20.982506 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-wr282_70686e42-b434-4ff9-9753-cfc870beef82/ovnkube-controller/2.log" Jan 28 18:35:20 crc kubenswrapper[4721]: I0128 18:35:20.983978 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:20 crc kubenswrapper[4721]: I0128 18:35:20.984012 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:20 crc kubenswrapper[4721]: I0128 18:35:20.984021 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:20 crc kubenswrapper[4721]: I0128 18:35:20.984038 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:20 crc kubenswrapper[4721]: I0128 18:35:20.984047 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:20Z","lastTransitionTime":"2026-01-28T18:35:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:35:20 crc kubenswrapper[4721]: I0128 18:35:20.986629 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wr282" event={"ID":"70686e42-b434-4ff9-9753-cfc870beef82","Type":"ContainerStarted","Data":"693b094ac66f7858f7020708df804e9e12fa5a8c510841171e65ac69cb6a0e11"} Jan 28 18:35:20 crc kubenswrapper[4721]: I0128 18:35:20.987036 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-wr282" Jan 28 18:35:21 crc kubenswrapper[4721]: I0128 18:35:21.005354 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef61ab09457440960cf27586f65751f928ccc999db19fd16364262ce2449cd97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:35:21Z is after 2025-08-24T17:21:41Z" Jan 28 18:35:21 crc kubenswrapper[4721]: I0128 18:35:21.015503 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-lf92l" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"20d04cbd-fcf1-4d48-9cca-1dd29b13c938\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://709d01fbeba51f6f21b60d9fd60aa9b44ba9e4f2504c19b7fd8d279f8c13c1e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mvqr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:34:30Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-lf92l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:35:21Z is after 2025-08-24T17:21:41Z" Jan 28 18:35:21 crc kubenswrapper[4721]: I0128 18:35:21.030655 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7vsph" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"942c0bcf-8f75-42e8-a5c0-af4c640eb13c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ce38a3c3aa72d0816ff32ec77af100dc6f2a8761362992f01b88f836c44f65f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s84mk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5222e9a5ac55f20acbe1105e84f22c96889f410246b7ef423236f5d3e216f43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d5222e9a5ac55f20acbe1105e84f22c96889f410246b7ef423236f5d3e216f43\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s84mk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e702f148f20abb9e5e9453eb1b9ae0cc094138beb1fa048753ff3b497517e942\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e702f148f20abb9e5e9453eb1b9ae0cc094138beb1fa048753ff3b497517e942\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s84mk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a55ef87212072ac9f6d05d27d54f7a3f13add617b1a07a2fd1e0a652fe15cb3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a55ef87212072ac9f6d05d27d54f7a3f13add617b1a07a2fd1e0a652fe15cb3a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:34:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s84mk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea7d40aa704ad5ddaaa471ab8606dcece38b229fd8affcce9bb1959e5c19f092\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ea7d40aa704ad5ddaaa471ab8606dcece38b229fd8affcce9bb1959e5c19f092\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:34:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s84mk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1829c2eed51af896c5ccfa25c1f23ea971ea9d54e9db53415095c5ea43ac4062\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1829c2eed51af896c5ccfa25c1f23ea971ea9d54e9db53415095c5ea43ac4062\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:34:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s84mk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b576c2dcc0ad83742dbc75f744d08eb8b881f9be4f9d16f4d4d3004ce174f24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b576c2dcc0ad83742dbc75f744d08eb8b881f9be4f9d16f4d4d3004ce174f24\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:34:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s84mk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:34:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7vsph\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:35:21Z is after 2025-08-24T17:21:41Z" Jan 28 18:35:21 crc kubenswrapper[4721]: I0128 18:35:21.044942 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e670fafb-703c-4cc9-b670-d25ae62d87a0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:33:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f97092be25e90c4a15af043397b1cbcefdb3a3511a80a046496bef807abc8f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8351f80de5ab5d11c5a87270e69a8ebd20b3a804671e20b991f1fc77ba27bae8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8465df12048ab0feaba16e1935fa17feb4fe967ab3e4ef37981bed51ff77911\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47389981052e7ab8530932e01c95d9b178f2c55b21e03b460d3f02dddbcf4830\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://47389981052e7ab8530932e01c95d9b178f2c55b21e03b460d3f02dddbcf4830\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:33:58Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:33:56Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:35:21Z is after 2025-08-24T17:21:41Z" Jan 28 18:35:21 crc kubenswrapper[4721]: I0128 18:35:21.059644 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:24Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:35:21Z is after 2025-08-24T17:21:41Z" Jan 28 18:35:21 crc kubenswrapper[4721]: I0128 18:35:21.075345 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:24Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:35:21Z is after 2025-08-24T17:21:41Z" Jan 28 18:35:21 crc kubenswrapper[4721]: I0128 18:35:21.086967 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:21 crc kubenswrapper[4721]: I0128 18:35:21.087024 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:21 crc kubenswrapper[4721]: I0128 18:35:21.087040 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:21 crc kubenswrapper[4721]: I0128 18:35:21.087068 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:21 crc kubenswrapper[4721]: I0128 18:35:21.087086 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:21Z","lastTransitionTime":"2026-01-28T18:35:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:35:21 crc kubenswrapper[4721]: I0128 18:35:21.092009 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa3af2d00ecbcf4d30c4c81191306dd45625f55032058b314ce2b91f6b2033e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:35:21Z is after 2025-08-24T17:21:41Z" Jan 28 18:35:21 crc kubenswrapper[4721]: I0128 18:35:21.108721 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-76rx2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e3427a4-9a03-4a08-bf7f-7a5e96290ad6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9843dea4333a77dd5b0005984b9f8e7c7c993b28f89f5d6432477bfde3383339\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4cj5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bbd26672afb8ed608228f3f2101f430b1eaa5ccea0e8074fad597dec347c4522\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4cj5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:34:30Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-76rx2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:35:21Z is after 2025-08-24T17:21:41Z" Jan 28 18:35:21 crc kubenswrapper[4721]: I0128 18:35:21.121877 4721 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cdf9d0d3-f468-468d-a84a-376800ac08e1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:33:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ff405f7132b5a1a0e2abc66c8e4c0abbd732bdf90cb2b4b2867dd10b8e62921\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f0406ac4c224de266eab94d3b9c12d110ee36b7ccb34381fc284ade389ba042\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f0406ac4c224de266eab94d3b9c12d110ee36b7ccb34381fc284ade389ba042\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:33:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:33:56Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:35:21Z is after 2025-08-24T17:21:41Z" Jan 28 18:35:21 crc kubenswrapper[4721]: I0128 18:35:21.137001 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-rk2l2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bdd30376-1599-4efc-bb55-7585e8702b60\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa0ff1b9df47dfd65597208cdd31f5cb8ce7f4c9c170d298db28d6f04da7b7a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wknj5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:34:32Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-rk2l2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:35:21Z is after 2025-08-24T17:21:41Z" Jan 28 18:35:21 crc kubenswrapper[4721]: I0128 18:35:21.151037 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-x8hw8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8ac7e75e-c5bb-4b57-b2ba-9ebe8b8fbd88\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cba8dd293cf5ae7b0c987c6ee3b24da02d2687ee54292da92e28ca627ed3eaca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqp9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a3c1211b73ca96ac22854f0cb677a0088a679ad56b104ea6b8e0871884a3a71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqp9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:34:43Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-x8hw8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:35:21Z is after 2025-08-24T17:21:41Z" Jan 28 
18:35:21 crc kubenswrapper[4721]: I0128 18:35:21.168073 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-jqvck" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f3440038-c980-4fb4-be99-235515ec221c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96np9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96np9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:34:44Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-jqvck\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:35:21Z is after 2025-08-24T17:21:41Z" Jan 28 18:35:21 crc kubenswrapper[4721]: I0128 18:35:21.189959 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"50459cb7-bb46-4f2f-a119-03a6102ad146\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:33:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97a290731225e8b998c044c9547db36b5af73889929db5fbc37b6b4170f5dbb7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9efdfabfba8f11d2bdeafad77520260137fcfff3ca51ae77afd4ee9b141f602\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3c1b4d37e11f8a4ee9539e5ea3cce77b7d382133b9e59ccd9c3e8aa2b308754\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52a1199a0611555765afe8fcf01774278848806
4c958d0c75c46d51ddc2ff9d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://316506ed1e9f91314d427e1b6c3a44ee58d0c4a3a8a93b97bdf098703a6a2f34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f1a6a669a0341baa2d7fb17bcebfe702f96d1192ca35c3f2d00e463778271d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f1a6a669a0341baa2d7fb17bcebfe702f96d1192ca35c3f2d00e463778271d6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:33:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d3a248e15466c89ce3b237a22ce54365004ea86dc1541e1ac44e028f91bf19f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d3a248e15466c89ce3b237a22ce54365004ea86dc1541e1ac44e028f91bf19f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:34:02Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f2e1f3a63445bad45029602f52863bfdf2fab495711e3318d68fee042530acb1\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f2e1f3a63445bad45029602f52863bfdf2fab495711e3318d68fee042530acb1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:34:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:33:56Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:35:21Z is after 2025-08-24T17:21:41Z" Jan 28 18:35:21 crc kubenswrapper[4721]: I0128 18:35:21.190078 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:21 crc kubenswrapper[4721]: I0128 18:35:21.190108 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:21 crc kubenswrapper[4721]: I0128 18:35:21.190119 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:21 crc kubenswrapper[4721]: I0128 18:35:21.190132 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:21 crc kubenswrapper[4721]: I0128 18:35:21.190142 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:21Z","lastTransitionTime":"2026-01-28T18:35:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:35:21 crc kubenswrapper[4721]: I0128 18:35:21.206642 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"94f18835-9a2d-4427-bc71-e4cd48b94c19\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:33:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a345c3bb49064600db61f16588229b6877293710e6c21c15d91746c2f1d50b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://93950542dfa7bda5686ef6d8ff927d14baafd149893c732658e9aa3916be64ae\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://486a6284a1e3930aac13368062b8796b4c271214dd08cb60c7ae2ce7b4a45a05\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/ku
bernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0339ccd2bd809dc715d0e334b596c5abdebca1af6ac5f7afefa81aa8baa470ed\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ca3243d75b5ce4aff178d82c45c7d1f765013233b5736600151adb4801aa462b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T18:34:21Z\\\",\\\"message\\\":\\\"le observer\\\\nW0128 18:34:20.724978 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0128 18:34:20.725313 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 18:34:20.726636 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2540251349/tls.crt::/tmp/serving-cert-2540251349/tls.key\\\\\\\"\\\\nI0128 18:34:21.177830 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0128 18:34:21.196258 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0128 18:34:21.196291 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0128 18:34:21.196317 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0128 18:34:21.196323 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0128 18:34:21.230729 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0128 18:34:21.230760 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0128 18:34:21.230771 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 18:34:21.230781 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 18:34:21.230789 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0128 18:34:21.230795 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0128 18:34:21.230800 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0128 18:34:21.230804 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0128 18:34:21.233981 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T18:34:20Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://90f1279d8262e042527207dbe56a1d08ba554650510bebfc53bd24cd84532026\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:06Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1499c7b625ec721799e4cf54e3de39ab75a752bbb0450a8eeffcdb6259f8a585\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1499c7b625ec721799e4cf54e3de39ab75a752bbb0450a8eeffcdb6259f8a585\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:33:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:33:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:35:21Z is after 2025-08-24T17:21:41Z" Jan 28 18:35:21 crc kubenswrapper[4721]: I0128 18:35:21.222993 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"86fc797f-6769-4801-8cb0-b9f25c9ec29b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:33:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:33:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e8c8cd370cc146f15325497afef171437a3c7c3d4e82cda99a321bf847399bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f712f75bf0603b358c696f1a62f2363254ab926615ab377291c7083308e3f2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:33:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6143e461a697375ca79df96494f8cf4a575e622428db241f50a52ae1c67acbc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e37a3261383445df2e4050fb0b3f92c0afc6379c71eb061475783a72cd37459f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:33:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:35:21Z is after 2025-08-24T17:21:41Z" Jan 28 18:35:21 crc kubenswrapper[4721]: I0128 18:35:21.240372 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:35:21Z is after 2025-08-24T17:21:41Z" Jan 28 18:35:21 crc kubenswrapper[4721]: I0128 18:35:21.256410 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56147b9675ae31e5e8b4473346c69d90a1013ab084214a7fc34295f810b229a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5dc3277d9793a59f2e2a2b4d015782f383df20b594be435f894d24dbe5b837cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:35:21Z is after 2025-08-24T17:21:41Z" Jan 28 18:35:21 crc kubenswrapper[4721]: I0128 18:35:21.278253 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wr282" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"70686e42-b434-4ff9-9753-cfc870beef82\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ac8a8ea8e677c233fbcb1ac69446f0251b2e390e84a39cd77c7ccf1a2041908\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7lkbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://989948e16979a2cd8568ed93334056da9feb5bd7226f4124428dcffc09f13d41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7lkbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2cc9ef8160c40e72ae736136ef051b06552caea2539fad7969c5f923eba0c2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7lkbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7df50ebf5999c8b004f27f801fc166377523c7b42171fec957ab82c88b529794\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7lkbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11610a0739792fa0417c894ad5b1c0d46d7188b585f8f4c1a1e7e66cd6d8bd46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7lkbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://44279a74a34d26b6eee05ac26d51a21492dddab4288ca5e09665c191cbacd90c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7lkbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://693b094ac66f7858f7020708df804e9e12fa5a8c
510841171e65ac69cb6a0e11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://932e160b9acb81fd545498d2b471f3ae2cec8716bfa875350287f72b78516dd6\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T18:34:55Z\\\",\\\"message\\\":\\\"n.org/owner:openshift-dns/dns-default]} name:Service_openshift-dns/dns-default_TCP_node_router+switch_crc options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.10:53: 10.217.4.10:9154:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {be9dcc9e-c16a-4962-a6d2-4adeb0b929c4}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-dns/dns-default]} name:Service_openshift-dns/dns-default_UDP_node_router+switch_crc options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[udp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.10:53:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {4c1be812-05d3-4f45-91b5-a853a5c8de71}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0128 18:34:55.473455 6405 obj_retry.go:386] Retry successful for *v1.Pod openshift-multus/multus-additional-cni-plugins-7vsph after 0 failed attempt(s)\\\\nI0128 18:34:55.474632 6405 default_network_controller.go:776] Recording success event on pod 
openshift-multus/m\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T18:34:54Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:35:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7lkbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://373ab38644697d3dbfe782ed66982ac1d5510d0a3bc39072c8113f71cc9e97f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7lkbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatus
es\\\":[{\\\"containerID\\\":\\\"cri-o://733c81890010402d252a1945284a85a3278b603ede433530865f680a1be02477\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://733c81890010402d252a1945284a85a3278b603ede433530865f680a1be02477\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7lkbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:34:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-wr282\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:35:21Z is after 2025-08-24T17:21:41Z" Jan 28 18:35:21 crc kubenswrapper[4721]: I0128 18:35:21.292106 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:21 crc kubenswrapper[4721]: I0128 18:35:21.292153 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:21 crc kubenswrapper[4721]: I0128 18:35:21.292162 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:21 crc kubenswrapper[4721]: I0128 18:35:21.292200 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:21 crc kubenswrapper[4721]: I0128 18:35:21.292212 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:21Z","lastTransitionTime":"2026-01-28T18:35:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:35:21 crc kubenswrapper[4721]: I0128 18:35:21.295735 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rgqdt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c0a22020-3f34-4895-beec-2ed5d829ea79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:35:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:35:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2588c3d36133bd9b96114f5d12622916ac785bea9be47d12a3d76d8585c3e0ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9d26691be7d95ffd613cc84f59222c67d29ab459d1c42ca07d20fe928d7fbc4a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T18:35:18Z\\\",\\\"message\\\":\\\"2026-01-28T18:34:32+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_77ae07de-fe49-4308-874b-bb02ed4b202b\\\\n2026-01-28T18:34:32+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_77ae07de-fe49-4308-874b-bb02ed4b202b to /host/opt/cni/bin/\\\\n2026-01-28T18:34:33Z [verbose] multus-daemon started\\\\n2026-01-28T18:34:33Z [verbose] Readiness Indicator file check\\\\n2026-01-28T18:35:18Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T18:34:31Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:35:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l86pm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:34:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rgqdt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:35:21Z is after 2025-08-24T17:21:41Z" Jan 28 18:35:21 crc kubenswrapper[4721]: I0128 18:35:21.395144 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:21 crc kubenswrapper[4721]: I0128 18:35:21.395257 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:21 crc kubenswrapper[4721]: I0128 18:35:21.395273 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:21 crc kubenswrapper[4721]: I0128 18:35:21.395295 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:21 crc kubenswrapper[4721]: I0128 18:35:21.395306 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:21Z","lastTransitionTime":"2026-01-28T18:35:21Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 18:35:21 crc kubenswrapper[4721]: I0128 18:35:21.497307 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 18:35:21 crc kubenswrapper[4721]: I0128 18:35:21.497350 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 18:35:21 crc kubenswrapper[4721]: I0128 18:35:21.497361 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 18:35:21 crc kubenswrapper[4721]: I0128 18:35:21.497378 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 18:35:21 crc kubenswrapper[4721]: I0128 18:35:21.497391 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:21Z","lastTransitionTime":"2026-01-28T18:35:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 18:35:21 crc kubenswrapper[4721]: I0128 18:35:21.520339 4721 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-17 15:18:55.790417401 +0000 UTC
Jan 28 18:35:21 crc kubenswrapper[4721]: I0128 18:35:21.600643 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 18:35:21 crc kubenswrapper[4721]: I0128 18:35:21.600739 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 18:35:21 crc kubenswrapper[4721]: I0128 18:35:21.600759 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 18:35:21 crc kubenswrapper[4721]: I0128 18:35:21.600792 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 18:35:21 crc kubenswrapper[4721]: I0128 18:35:21.600808 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:21Z","lastTransitionTime":"2026-01-28T18:35:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 18:35:21 crc kubenswrapper[4721]: I0128 18:35:21.702870 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 18:35:21 crc kubenswrapper[4721]: I0128 18:35:21.702919 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 18:35:21 crc kubenswrapper[4721]: I0128 18:35:21.702933 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 18:35:21 crc kubenswrapper[4721]: I0128 18:35:21.702951 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 18:35:21 crc kubenswrapper[4721]: I0128 18:35:21.702963 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:21Z","lastTransitionTime":"2026-01-28T18:35:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 18:35:21 crc kubenswrapper[4721]: I0128 18:35:21.806533 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 18:35:21 crc kubenswrapper[4721]: I0128 18:35:21.806602 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 18:35:21 crc kubenswrapper[4721]: I0128 18:35:21.806613 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 18:35:21 crc kubenswrapper[4721]: I0128 18:35:21.806671 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 18:35:21 crc kubenswrapper[4721]: I0128 18:35:21.806685 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:21Z","lastTransitionTime":"2026-01-28T18:35:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 18:35:21 crc kubenswrapper[4721]: I0128 18:35:21.908996 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 18:35:21 crc kubenswrapper[4721]: I0128 18:35:21.909042 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 18:35:21 crc kubenswrapper[4721]: I0128 18:35:21.909055 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 18:35:21 crc kubenswrapper[4721]: I0128 18:35:21.909072 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 18:35:21 crc kubenswrapper[4721]: I0128 18:35:21.909085 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:21Z","lastTransitionTime":"2026-01-28T18:35:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 18:35:21 crc kubenswrapper[4721]: I0128 18:35:21.992470 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-wr282_70686e42-b434-4ff9-9753-cfc870beef82/ovnkube-controller/3.log"
Jan 28 18:35:21 crc kubenswrapper[4721]: I0128 18:35:21.993131 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-wr282_70686e42-b434-4ff9-9753-cfc870beef82/ovnkube-controller/2.log"
Jan 28 18:35:21 crc kubenswrapper[4721]: I0128 18:35:21.996312 4721 generic.go:334] "Generic (PLEG): container finished" podID="70686e42-b434-4ff9-9753-cfc870beef82" containerID="693b094ac66f7858f7020708df804e9e12fa5a8c510841171e65ac69cb6a0e11" exitCode=1
Jan 28 18:35:21 crc kubenswrapper[4721]: I0128 18:35:21.996358 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wr282" event={"ID":"70686e42-b434-4ff9-9753-cfc870beef82","Type":"ContainerDied","Data":"693b094ac66f7858f7020708df804e9e12fa5a8c510841171e65ac69cb6a0e11"}
Jan 28 18:35:21 crc kubenswrapper[4721]: I0128 18:35:21.996395 4721 scope.go:117] "RemoveContainer" containerID="932e160b9acb81fd545498d2b471f3ae2cec8716bfa875350287f72b78516dd6"
Jan 28 18:35:21 crc kubenswrapper[4721]: I0128 18:35:21.997212 4721 scope.go:117] "RemoveContainer" containerID="693b094ac66f7858f7020708df804e9e12fa5a8c510841171e65ac69cb6a0e11"
Jan 28 18:35:21 crc kubenswrapper[4721]: E0128 18:35:21.997425 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-wr282_openshift-ovn-kubernetes(70686e42-b434-4ff9-9753-cfc870beef82)\"" pod="openshift-ovn-kubernetes/ovnkube-node-wr282" podUID="70686e42-b434-4ff9-9753-cfc870beef82"
Jan 28 18:35:22 crc kubenswrapper[4721]: I0128 18:35:22.012452 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 18:35:22 crc kubenswrapper[4721]: I0128 18:35:22.012488 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 18:35:22 crc kubenswrapper[4721]: I0128 18:35:22.012497 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 18:35:22 crc kubenswrapper[4721]: I0128 18:35:22.012511 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 18:35:22 crc kubenswrapper[4721]: I0128 18:35:22.012521 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:22Z","lastTransitionTime":"2026-01-28T18:35:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:35:22 crc kubenswrapper[4721]: I0128 18:35:22.013855 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e670fafb-703c-4cc9-b670-d25ae62d87a0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:33:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f97092be25e90c4a15af043397b1cbcefdb3a3511a80a046496bef807abc8f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8351f80de5ab5d11c5a87270e69a8ebd20b3a804671e20b991f1fc77ba27bae8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8465df12048ab0feaba16e1935fa17feb4fe967ab3e4ef37981bed51ff77911\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"
cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47389981052e7ab8530932e01c95d9b178f2c55b21e03b460d3f02dddbcf4830\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://47389981052e7ab8530932e01c95d9b178f2c55b21e03b460d3f02dddbcf4830\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:33:58Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:33:56Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:35:22Z is after 2025-08-24T17:21:41Z" Jan 28 18:35:22 crc kubenswrapper[4721]: I0128 18:35:22.030962 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:24Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:35:22Z is after 2025-08-24T17:21:41Z" Jan 28 18:35:22 crc kubenswrapper[4721]: I0128 18:35:22.046030 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:24Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:35:22Z is after 2025-08-24T17:21:41Z" Jan 28 18:35:22 crc kubenswrapper[4721]: I0128 18:35:22.058287 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa3af2d00ecbcf4d30c4c81191306dd45625f55032058b314ce2b91f6b2033e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:35:22Z is after 2025-08-24T17:21:41Z" Jan 28 18:35:22 crc kubenswrapper[4721]: I0128 18:35:22.071692 4721 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-76rx2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e3427a4-9a03-4a08-bf7f-7a5e96290ad6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9843dea4333a77dd5b0005984b9f8e7c7c993b28f89f5d6432477bfde3383339\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4cj5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bbd26672afb8ed608228f3f2101f430b1eaa5ccea0e8074fad597dec347c4522\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4cj5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:34:30Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-76rx2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:35:22Z is after 2025-08-24T17:21:41Z" Jan 28 
18:35:22 crc kubenswrapper[4721]: I0128 18:35:22.084874 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cdf9d0d3-f468-468d-a84a-376800ac08e1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:33:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ff405f7132b5a1a0e2abc66c8e4c0abbd732bdf90cb2b4b2867dd10b8e62921\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f0406ac4c224de266eab94d3b9c12d110ee36b7ccb34381fc284ade389ba042\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f0406ac4c224de266eab94d3b9c12d110ee36b7ccb34381fc284ade389ba042\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:33:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:33:56Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:35:22Z is after 2025-08-24T17:21:41Z" Jan 28 18:35:22 crc kubenswrapper[4721]: I0128 18:35:22.098055 4721 status_manager.go:875] "Failed to update status for 
pod" pod="openshift-image-registry/node-ca-rk2l2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bdd30376-1599-4efc-bb55-7585e8702b60\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa0ff1b9df47dfd65597208cdd31f5cb8ce7f4c9c170d298db28d6f04da7b7a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wknj5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:34:32Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-rk2l2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:35:22Z is after 2025-08-24T17:21:41Z" Jan 28 18:35:22 crc kubenswrapper[4721]: I0128 18:35:22.114116 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-x8hw8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8ac7e75e-c5bb-4b57-b2ba-9ebe8b8fbd88\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cba8dd293cf5ae7b0c987c6ee3b24da02d2687ee54292da92e28ca627ed3eaca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqp9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a3c1211b73ca96ac22854f0cb677a0088a679ad56b104ea6b8e0871884a3a71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqp9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:34:43Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-x8hw8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:35:22Z is after 2025-08-24T17:21:41Z" Jan 28 
18:35:22 crc kubenswrapper[4721]: I0128 18:35:22.115860 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:22 crc kubenswrapper[4721]: I0128 18:35:22.115915 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:22 crc kubenswrapper[4721]: I0128 18:35:22.115927 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:22 crc kubenswrapper[4721]: I0128 18:35:22.115949 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:22 crc kubenswrapper[4721]: I0128 18:35:22.115963 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:22Z","lastTransitionTime":"2026-01-28T18:35:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:35:22 crc kubenswrapper[4721]: I0128 18:35:22.129313 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-jqvck" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f3440038-c980-4fb4-be99-235515ec221c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96np9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96np9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:34:44Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-jqvck\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:35:22Z is after 2025-08-24T17:21:41Z" Jan 28 18:35:22 crc kubenswrapper[4721]: I0128 18:35:22.156676 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"50459cb7-bb46-4f2f-a119-03a6102ad146\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:33:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97a290731225e8b998c044c9547db36b5af73889929db5fbc37b6b4170f5dbb7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9efdfabfba8f11d2bdeafad77520260137fcfff3ca51ae77afd4ee9b141f602\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3c1b4d37e11f8a4ee9539e5ea3cce77b7d382133b9e59ccd9c3e8aa2b308754\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52a1199a0611555765afe8fcf01774278848806
4c958d0c75c46d51ddc2ff9d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://316506ed1e9f91314d427e1b6c3a44ee58d0c4a3a8a93b97bdf098703a6a2f34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f1a6a669a0341baa2d7fb17bcebfe702f96d1192ca35c3f2d00e463778271d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f1a6a669a0341baa2d7fb17bcebfe702f96d1192ca35c3f2d00e463778271d6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:33:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d3a248e15466c89ce3b237a22ce54365004ea86dc1541e1ac44e028f91bf19f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d3a248e15466c89ce3b237a22ce54365004ea86dc1541e1ac44e028f91bf19f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:34:02Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f2e1f3a63445bad45029602f52863bfdf2fab495711e3318d68fee042530acb1\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f2e1f3a63445bad45029602f52863bfdf2fab495711e3318d68fee042530acb1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:34:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:33:56Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:35:22Z is after 2025-08-24T17:21:41Z" Jan 28 18:35:22 crc kubenswrapper[4721]: I0128 18:35:22.189536 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"94f18835-9a2d-4427-bc71-e4cd48b94c19\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:33:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a345c3bb49064600db61f16588229b6877293710e6c21c15d91746c2f1d50b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://93950542dfa7bda5686ef6d8ff927d14
baafd149893c732658e9aa3916be64ae\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://486a6284a1e3930aac13368062b8796b4c271214dd08cb60c7ae2ce7b4a45a05\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0339ccd2bd809dc715d0e334b596c5abdebca1af6ac5f7afefa81aa8baa470ed\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ca3243d75b5ce4aff178d82c45c7d1f765013233b5736600151adb4801aa462b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T18:34:21Z\\\",\\\"message\\\":\\\"le observer\\\\nW0128 18:34:20.724978 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0128 18:34:20.725313 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 18:34:20.726636 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2540251349/tls.crt::/tmp/serving-cert-2540251349/tls.key\\\\\\\"\\\\nI0128 18:34:21.177830 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0128 18:34:21.196258 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0128 18:34:21.196291 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0128 18:34:21.196317 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0128 18:34:21.196323 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0128 18:34:21.230729 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0128 18:34:21.230760 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0128 18:34:21.230771 1 
secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 18:34:21.230781 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 18:34:21.230789 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0128 18:34:21.230795 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0128 18:34:21.230800 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0128 18:34:21.230804 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0128 18:34:21.233981 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T18:34:20Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://90f1279d8262e042527207dbe56a1d08ba554650510bebfc53bd24cd84532026\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:06Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1499c7b625ec721799e4cf54e3de39ab75a752bbb0450a8eeffcdb6259f8a585\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1499c7b625ec721799e4cf54e3de39ab75a752bbb0450a8eeffcdb6259f8a585\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:33:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:33:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:35:22Z is after 2025-08-24T17:21:41Z" Jan 28 18:35:22 crc kubenswrapper[4721]: I0128 18:35:22.214589 4721 status_manager.go:875] 
"Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"86fc797f-6769-4801-8cb0-b9f25c9ec29b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:33:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:33:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e8c8cd370cc146f15325497afef171437a3c7c3d4e82cda99a321bf847399bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f712f75bf0603b358c696f1a62f2363254ab926615ab377291c7083308e3f2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:33:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6143e461a697375ca79df96494f8cf4a575e622428db241f50a52ae1c67acbc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e37a3261383445df
2e4050fb0b3f92c0afc6379c71eb061475783a72cd37459f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:33:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:35:22Z is after 2025-08-24T17:21:41Z" Jan 28 18:35:22 crc kubenswrapper[4721]: I0128 18:35:22.218697 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:22 crc kubenswrapper[4721]: I0128 18:35:22.218753 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:22 crc kubenswrapper[4721]: I0128 18:35:22.218764 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:22 crc kubenswrapper[4721]: I0128 18:35:22.218786 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:22 crc kubenswrapper[4721]: I0128 18:35:22.218799 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:22Z","lastTransitionTime":"2026-01-28T18:35:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:35:22 crc kubenswrapper[4721]: I0128 18:35:22.233764 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:35:22Z is after 2025-08-24T17:21:41Z" Jan 28 18:35:22 crc kubenswrapper[4721]: I0128 18:35:22.250154 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56147b9675ae31e5e8b4473346c69d90a1013ab084214a7fc34295f810b229a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5dc3277d9793a59f2e2a2b4d015782f383df20b594be435f894d24dbe5b837cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:35:22Z is after 2025-08-24T17:21:41Z" Jan 28 18:35:22 crc kubenswrapper[4721]: I0128 18:35:22.269966 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wr282" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"70686e42-b434-4ff9-9753-cfc870beef82\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ac8a8ea8e677c233fbcb1ac69446f0251b2e390e84a39cd77c7ccf1a2041908\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7lkbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://989948e16979a2cd8568ed93334056da9feb5bd7226f4124428dcffc09f13d41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7lkbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2cc9ef8160c40e72ae736136ef051b06552caea2539fad7969c5f923eba0c2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7lkbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7df50ebf5999c8b004f27f801fc166377523c7b42171fec957ab82c88b529794\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7lkbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11610a0739792fa0417c894ad5b1c0d46d7188b585f8f4c1a1e7e66cd6d8bd46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7lkbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://44279a74a34d26b6eee05ac26d51a21492dddab4288ca5e09665c191cbacd90c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7lkbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://693b094ac66f7858f7020708df804e9e12fa5a8c510841171e65ac69cb6a0e11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://932e160b9acb81fd545498d2b471f3ae2cec8716bfa875350287f72b78516dd6\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T18:34:55Z\\\",\\\"message\\\":\\\"n.org/owner:openshift-dns/dns-default]} name:Service_openshift-dns/dns-default_TCP_node_router+switch_crc options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.10:53: 10.217.4.10:9154:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {be9dcc9e-c16a-4962-a6d2-4adeb0b929c4}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-dns/dns-default]} name:Service_openshift-dns/dns-default_UDP_node_router+switch_crc options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[udp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.10:53:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {4c1be812-05d3-4f45-91b5-a853a5c8de71}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0128 18:34:55.473455 6405 obj_retry.go:386] Retry successful for *v1.Pod openshift-multus/multus-additional-cni-plugins-7vsph after 0 failed attempt(s)\\\\nI0128 18:34:55.474632 6405 default_network_controller.go:776] Recording success event on pod openshift-multus/m\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T18:34:54Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://693b094ac66f7858f7020708df804e9e12fa5a8c510841171e65ac69cb6a0e11\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T18:35:21Z\\\",\\\"message\\\":\\\"nt handler 7\\\\nI0128 18:35:21.396048 6815 handler.go:190] 
Sending *v1.EgressIP event handler 8 for removal\\\\nI0128 18:35:21.396110 6815 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0128 18:35:21.396136 6815 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0128 18:35:21.396134 6815 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0128 18:35:21.396466 6815 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0128 18:35:21.396513 6815 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0128 18:35:21.396589 6815 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0128 18:35:21.396637 6815 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0128 18:35:21.396647 6815 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0128 18:35:21.396716 6815 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0128 18:35:21.396730 6815 factory.go:656] Stopping watch factory\\\\nI0128 18:35:21.396750 6815 ovnkube.go:599] Stopped ovnkube\\\\nI0128 18:35:21.396718 6815 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0128 18:35:21.396785 6815 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0128 18:35:21.396796 6815 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0128 18:35:21.397012 6815 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T18:35:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7lkbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://373ab38644697d3dbfe782ed66982ac1d5510d0a3bc39072c8113f71cc9e97f2\\\",\\\"image\\\":\\\"quay.io/o
penshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7lkbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://733c81890010402d252a1945284a85a3278b603ede433530865f680a1be02477\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://733c81890010402d252a1945284a85a3278b603ede433530865f680a1be02477\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7lkbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:34:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-wr282\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:35:22Z is after 2025-08-24T17:21:41Z" Jan 28 18:35:22 crc kubenswrapper[4721]: I0128 18:35:22.287396 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rgqdt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c0a22020-3f34-4895-beec-2ed5d829ea79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:35:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:35:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2588c3d36133bd9b96114f5d12622916ac785bea9be47d12a3d76d8585c3e0ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9d26691be7d95ffd613cc84f59222c67d29ab459d1c42ca07d20fe928d7fbc4a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T18:35:18Z\\\",\\\"message\\\":\\\"2026-01-28T18:34:32+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_77ae07de-fe49-4308-874b-bb02ed4b202b\\\\n2026-01-28T18:34:32+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_77ae07de-fe49-4308-874b-bb02ed4b202b to /host/opt/cni/bin/\\\\n2026-01-28T18:34:33Z [verbose] multus-daemon started\\\\n2026-01-28T18:34:33Z [verbose] Readiness Indicator file check\\\\n2026-01-28T18:35:18Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T18:34:31Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:35:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l86pm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:34:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rgqdt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:35:22Z is after 2025-08-24T17:21:41Z" Jan 28 18:35:22 crc kubenswrapper[4721]: I0128 18:35:22.302373 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef61ab09457440960cf27586f65751f928ccc999db19fd16364262ce2449cd97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:35:22Z is after 2025-08-24T17:21:41Z" Jan 28 18:35:22 crc kubenswrapper[4721]: I0128 18:35:22.314704 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-lf92l" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"20d04cbd-fcf1-4d48-9cca-1dd29b13c938\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://709d01fbeba51f6f21b60d9fd60aa9b44ba9e4f2504c19b7fd8d279f8c13c1e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mvqr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:34:30Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-lf92l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:35:22Z is after 2025-08-24T17:21:41Z" Jan 28 18:35:22 crc kubenswrapper[4721]: I0128 18:35:22.322467 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:22 crc kubenswrapper[4721]: I0128 18:35:22.322660 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:22 crc kubenswrapper[4721]: I0128 18:35:22.322725 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:22 crc kubenswrapper[4721]: I0128 18:35:22.322806 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:22 crc kubenswrapper[4721]: I0128 18:35:22.322876 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:22Z","lastTransitionTime":"2026-01-28T18:35:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:35:22 crc kubenswrapper[4721]: I0128 18:35:22.331666 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7vsph" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"942c0bcf-8f75-42e8-a5c0-af4c640eb13c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ce38a3c3aa72d0816ff32ec77af100dc6f2a8761362992f01b88f836c44f65f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s84mk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5222e9a5ac55f20acbe1105e84f22c96889f410246b7ef423236f5d3e216f43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d5222e9a5ac55f20acbe1105e84f22c96889f410246b7ef423236f5d3e216f43\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s84mk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e702f148
f20abb9e5e9453eb1b9ae0cc094138beb1fa048753ff3b497517e942\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e702f148f20abb9e5e9453eb1b9ae0cc094138beb1fa048753ff3b497517e942\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s84mk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a55ef87212072ac9f6d05d27d54f7a3f13add617b1a07a2fd1e0a652fe15cb3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a55ef87212072ac9f6d05d27d54f7a3f13add617b1a07a2fd1e0a652fe15cb3a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:34:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s84mk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea7d40aa704ad5ddaaa471ab8606dcece38b229fd8affcce9bb1959e5c19f092\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ea7d40aa704ad5ddaaa471ab8606dcece38b229fd8affcce9bb1959e5c19f092\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:34:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/e
ntrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s84mk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1829c2eed51af896c5ccfa25c1f23ea971ea9d54e9db53415095c5ea43ac4062\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1829c2eed51af896c5ccfa25c1f23ea971ea9d54e9db53415095c5ea43ac4062\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:34:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s84mk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b576c2dcc0ad83742dbc75f744d08eb8b881f9be4f9d16f4d4d3004ce174f24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b576c2dcc0ad83742dbc75f744d08eb8b881f9be4f9d16f4d4d3004ce174f24\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:34:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s84mk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:34:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7vsph\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:35:22Z is after 2025-08-24T17:21:41Z" Jan 28 18:35:22 crc kubenswrapper[4721]: I0128 18:35:22.426691 4721 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:22 crc kubenswrapper[4721]: I0128 18:35:22.427097 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:22 crc kubenswrapper[4721]: I0128 18:35:22.427221 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:22 crc kubenswrapper[4721]: I0128 18:35:22.427397 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:22 crc kubenswrapper[4721]: I0128 18:35:22.427527 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:22Z","lastTransitionTime":"2026-01-28T18:35:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:35:22 crc kubenswrapper[4721]: I0128 18:35:22.520784 4721 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-05 17:07:44.772032808 +0000 UTC Jan 28 18:35:22 crc kubenswrapper[4721]: I0128 18:35:22.528097 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 18:35:22 crc kubenswrapper[4721]: I0128 18:35:22.528291 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 18:35:22 crc kubenswrapper[4721]: I0128 18:35:22.528356 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 18:35:22 crc kubenswrapper[4721]: I0128 18:35:22.528557 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jqvck" Jan 28 18:35:22 crc kubenswrapper[4721]: E0128 18:35:22.528564 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 18:35:22 crc kubenswrapper[4721]: E0128 18:35:22.528805 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 18:35:22 crc kubenswrapper[4721]: E0128 18:35:22.528878 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-jqvck" podUID="f3440038-c980-4fb4-be99-235515ec221c" Jan 28 18:35:22 crc kubenswrapper[4721]: E0128 18:35:22.528950 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 18:35:22 crc kubenswrapper[4721]: I0128 18:35:22.530021 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:22 crc kubenswrapper[4721]: I0128 18:35:22.530068 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:22 crc kubenswrapper[4721]: I0128 18:35:22.530082 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:22 crc kubenswrapper[4721]: I0128 18:35:22.530098 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:22 crc kubenswrapper[4721]: I0128 18:35:22.530113 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:22Z","lastTransitionTime":"2026-01-28T18:35:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:35:22 crc kubenswrapper[4721]: I0128 18:35:22.633230 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:22 crc kubenswrapper[4721]: I0128 18:35:22.633281 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:22 crc kubenswrapper[4721]: I0128 18:35:22.633291 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:22 crc kubenswrapper[4721]: I0128 18:35:22.633311 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:22 crc kubenswrapper[4721]: I0128 18:35:22.633327 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:22Z","lastTransitionTime":"2026-01-28T18:35:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:35:22 crc kubenswrapper[4721]: I0128 18:35:22.736643 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:22 crc kubenswrapper[4721]: I0128 18:35:22.737147 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:22 crc kubenswrapper[4721]: I0128 18:35:22.737275 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:22 crc kubenswrapper[4721]: I0128 18:35:22.737356 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:22 crc kubenswrapper[4721]: I0128 18:35:22.737442 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:22Z","lastTransitionTime":"2026-01-28T18:35:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:35:22 crc kubenswrapper[4721]: I0128 18:35:22.840916 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:22 crc kubenswrapper[4721]: I0128 18:35:22.841540 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:22 crc kubenswrapper[4721]: I0128 18:35:22.841645 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:22 crc kubenswrapper[4721]: I0128 18:35:22.842034 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:22 crc kubenswrapper[4721]: I0128 18:35:22.842117 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:22Z","lastTransitionTime":"2026-01-28T18:35:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:35:22 crc kubenswrapper[4721]: I0128 18:35:22.945309 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:22 crc kubenswrapper[4721]: I0128 18:35:22.945772 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:22 crc kubenswrapper[4721]: I0128 18:35:22.945880 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:22 crc kubenswrapper[4721]: I0128 18:35:22.946021 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:22 crc kubenswrapper[4721]: I0128 18:35:22.946253 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:22Z","lastTransitionTime":"2026-01-28T18:35:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:35:23 crc kubenswrapper[4721]: I0128 18:35:23.004247 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-wr282_70686e42-b434-4ff9-9753-cfc870beef82/ovnkube-controller/3.log" Jan 28 18:35:23 crc kubenswrapper[4721]: I0128 18:35:23.009141 4721 scope.go:117] "RemoveContainer" containerID="693b094ac66f7858f7020708df804e9e12fa5a8c510841171e65ac69cb6a0e11" Jan 28 18:35:23 crc kubenswrapper[4721]: E0128 18:35:23.009356 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-wr282_openshift-ovn-kubernetes(70686e42-b434-4ff9-9753-cfc870beef82)\"" pod="openshift-ovn-kubernetes/ovnkube-node-wr282" podUID="70686e42-b434-4ff9-9753-cfc870beef82" Jan 28 18:35:23 crc kubenswrapper[4721]: I0128 18:35:23.029936 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7vsph" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"942c0bcf-8f75-42e8-a5c0-af4c640eb13c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ce38a3c3aa72d0816ff32ec77af100dc6f2a8761362992f01b88f836c44f65f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s84mk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5222e9a5ac55f20acbe1105e84f22c96889f410246b7ef423236f5d3e216f43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"term
inated\\\":{\\\"containerID\\\":\\\"cri-o://d5222e9a5ac55f20acbe1105e84f22c96889f410246b7ef423236f5d3e216f43\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s84mk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e702f148f20abb9e5e9453eb1b9ae0cc094138beb1fa048753ff3b497517e942\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e702f148f20abb9e5e9453eb1b9ae0cc094138beb1fa048753ff3b497517e942\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s84mk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a55ef87212072ac9f6d05d27d54f7a3f13add617b1a07a2fd1e0a652fe15cb3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a55ef87212072ac9f6d05d27d54f7a3f13add617b1a07a2fd1e0a652fe15cb3a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:34:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s84mk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea7
d40aa704ad5ddaaa471ab8606dcece38b229fd8affcce9bb1959e5c19f092\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ea7d40aa704ad5ddaaa471ab8606dcece38b229fd8affcce9bb1959e5c19f092\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:34:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s84mk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1829c2eed51af896c5ccfa25c1f23ea971ea9d54e9db53415095c5ea43ac4062\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1829c2eed51af896c5ccfa25c1f23ea971ea9d54e9db53415095c5ea43ac4062\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:34:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s84mk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b576c2dcc0ad83742dbc75f744d08eb8b881f9be4f9d16f4d4d3004ce174f24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b576c2dcc0ad83742dbc75f744d08eb8b881f9be4f9d16f4d4d3004ce174f24\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:34:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\
\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s84mk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:34:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7vsph\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:35:23Z is after 2025-08-24T17:21:41Z" Jan 28 18:35:23 crc kubenswrapper[4721]: I0128 18:35:23.047800 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef61ab09457440960cf27586f65751f928ccc999db19fd16364262ce2449cd97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:35:23Z is after 2025-08-24T17:21:41Z" Jan 28 18:35:23 crc kubenswrapper[4721]: I0128 18:35:23.048987 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:23 crc kubenswrapper[4721]: I0128 18:35:23.049146 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:23 crc kubenswrapper[4721]: I0128 18:35:23.049258 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Jan 28 18:35:23 crc kubenswrapper[4721]: I0128 18:35:23.049363 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:23 crc kubenswrapper[4721]: I0128 18:35:23.049468 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:23Z","lastTransitionTime":"2026-01-28T18:35:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:35:23 crc kubenswrapper[4721]: I0128 18:35:23.062951 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-lf92l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"20d04cbd-fcf1-4d48-9cca-1dd29b13c938\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://709d01fbeba51f6f21b60d9fd60aa9b44ba9e4f2504c19b7fd8d279f8c13c1e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mvqr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:34:30Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-lf92l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:35:23Z is after 2025-08-24T17:21:41Z" Jan 28 18:35:23 crc kubenswrapper[4721]: I0128 18:35:23.078189 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:24Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:35:23Z is after 2025-08-24T17:21:41Z" Jan 28 18:35:23 crc kubenswrapper[4721]: I0128 18:35:23.092603 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa3af2d00ecbcf4d30c4c81191306dd45625f55032058b314ce2b91f6b2033e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:35:23Z is after 2025-08-24T17:21:41Z" Jan 28 18:35:23 crc kubenswrapper[4721]: I0128 18:35:23.107871 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-76rx2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e3427a4-9a03-4a08-bf7f-7a5e96290ad6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9843dea4333a77dd5b0005984b9f8e7c7c993b28f89f5d6432477bfde3383339\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4cj5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bbd26672afb8ed608228f3f2101f430b1eaa5ccea0e8074fad597dec347c4522\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4cj5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:34:30Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-76rx2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:35:23Z is after 2025-08-24T17:21:41Z" Jan 28 18:35:23 crc kubenswrapper[4721]: I0128 18:35:23.123869 4721 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e670fafb-703c-4cc9-b670-d25ae62d87a0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:33:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f97092be25e90c4a15af043397b1cbcefdb3a3511a80a046496bef807abc8f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8351f80de5ab5d11c5a87270e69a8ebd20b3a804671e20b991f1fc77ba27bae8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8465df12048ab0feaba16e1935fa17feb4fe967ab3e4ef37981bed51ff77911\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":
[{\\\"containerID\\\":\\\"cri-o://47389981052e7ab8530932e01c95d9b178f2c55b21e03b460d3f02dddbcf4830\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://47389981052e7ab8530932e01c95d9b178f2c55b21e03b460d3f02dddbcf4830\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:33:58Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:33:56Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:35:23Z is after 2025-08-24T17:21:41Z" Jan 28 18:35:23 crc kubenswrapper[4721]: I0128 18:35:23.143090 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:24Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:35:23Z is after 2025-08-24T17:21:41Z" Jan 28 18:35:23 crc kubenswrapper[4721]: I0128 18:35:23.153341 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:23 crc kubenswrapper[4721]: I0128 18:35:23.153381 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:23 crc kubenswrapper[4721]: I0128 18:35:23.153395 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:23 crc kubenswrapper[4721]: I0128 18:35:23.153422 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:23 crc kubenswrapper[4721]: I0128 18:35:23.153437 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:23Z","lastTransitionTime":"2026-01-28T18:35:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:35:23 crc kubenswrapper[4721]: I0128 18:35:23.160056 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-x8hw8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8ac7e75e-c5bb-4b57-b2ba-9ebe8b8fbd88\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cba8dd293cf5ae7b0c987c6ee3b24da02d2687ee54292da92e28ca627ed3eaca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqp9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a3c1211b73ca96ac22854f0cb677a0088a679ad56b104ea6b8e0871884a3a71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqp9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:34:43Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-x8hw8\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:35:23Z is after 2025-08-24T17:21:41Z" Jan 28 18:35:23 crc kubenswrapper[4721]: I0128 18:35:23.173681 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-jqvck" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f3440038-c980-4fb4-be99-235515ec221c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96np9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96np9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:34:44Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-jqvck\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:35:23Z is after 2025-08-24T17:21:41Z" Jan 28 18:35:23 crc 
kubenswrapper[4721]: I0128 18:35:23.188540 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cdf9d0d3-f468-468d-a84a-376800ac08e1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:33:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ff405f7132b5a1a0e2abc66c8e4c0abbd732bdf90cb2b4b2867dd10b8e62921\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f0406ac4c224de266eab94d3b9c12d110ee36b7ccb34381fc284ade389ba042\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f0406ac4c224de266eab94d3b9c12d110ee36b7ccb34381fc284ade389ba042\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:33:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:33:56Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:35:23Z is after 2025-08-24T17:21:41Z" Jan 28 18:35:23 crc kubenswrapper[4721]: I0128 18:35:23.202768 4721 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-image-registry/node-ca-rk2l2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bdd30376-1599-4efc-bb55-7585e8702b60\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa0ff1b9df47dfd65597208cdd31f5cb8ce7f4c9c170d298db28d6f04da7b7a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wknj5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:34:32Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-rk2l2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:35:23Z is after 2025-08-24T17:21:41Z" Jan 28 18:35:23 crc kubenswrapper[4721]: I0128 18:35:23.227077 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"86fc797f-6769-4801-8cb0-b9f25c9ec29b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:33:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:33:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e8c8cd370cc146f15325497afef171437a3c7c3d4e82cda99a321bf847399bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f712f75bf0603b358c696f1a62f2363254ab926615ab377291c7083308e3f2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:33:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6143e461a697375ca79df96494f8cf4a575e622428db241f50a52ae1c67acbc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e37a3261383445df2e4050fb0b3f92c0afc6379c71eb061475783a72cd37459f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:33:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:35:23Z is after 2025-08-24T17:21:41Z" Jan 28 18:35:23 crc kubenswrapper[4721]: I0128 18:35:23.240419 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:35:23Z is after 2025-08-24T17:21:41Z" Jan 28 18:35:23 crc kubenswrapper[4721]: I0128 18:35:23.254286 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56147b9675ae31e5e8b4473346c69d90a1013ab084214a7fc34295f810b229a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5dc3277d9793a59f2e2a2b4d015782f383df20b594be435f894d24dbe5b837cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:35:23Z is after 2025-08-24T17:21:41Z" Jan 28 18:35:23 crc kubenswrapper[4721]: I0128 18:35:23.256288 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:23 crc kubenswrapper[4721]: I0128 18:35:23.256436 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:23 crc kubenswrapper[4721]: I0128 18:35:23.256500 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:23 crc kubenswrapper[4721]: I0128 18:35:23.256578 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:23 crc kubenswrapper[4721]: I0128 18:35:23.256642 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:23Z","lastTransitionTime":"2026-01-28T18:35:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:35:23 crc kubenswrapper[4721]: I0128 18:35:23.277447 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wr282" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"70686e42-b434-4ff9-9753-cfc870beef82\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ac8a8ea8e677c233fbcb1ac69446f0251b2e390e84a39cd77c7ccf1a2041908\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7lkbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://989948e16979a2cd8568ed93334056da9feb5bd7226f4124428dcffc09f13d41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7lkbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://b2cc9ef8160c40e72ae736136ef051b06552caea2539fad7969c5f923eba0c2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7lkbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7df50ebf5999c8b004f27f801fc166377523c7b42171fec957ab82c88b529794\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7lkbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11610a0739792fa0417c894ad5b1c0d46d7188b585f8f4c1a1e7e66cd6d8bd46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7lkbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://44279a74a34d26b6eee05ac26d51a21492dddab4288ca5e09665c191cbacd90c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7lkbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://693b094ac66f7858f7020708df804e9e12fa5a8c510841171e65ac69cb6a0e11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://693b094ac66f7858f7020708df804e9e12fa5a8c510841171e65ac69cb6a0e11\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T18:35:21Z\\\",\\\"message\\\":\\\"nt handler 7\\\\nI0128 18:35:21.396048 6815 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0128 18:35:21.396110 6815 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0128 18:35:21.396136 6815 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0128 18:35:21.396134 6815 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0128 18:35:21.396466 6815 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0128 18:35:21.396513 6815 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0128 18:35:21.396589 6815 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0128 18:35:21.396637 6815 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0128 18:35:21.396647 6815 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0128 18:35:21.396716 6815 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0128 18:35:21.396730 6815 factory.go:656] Stopping watch factory\\\\nI0128 18:35:21.396750 6815 ovnkube.go:599] Stopped ovnkube\\\\nI0128 18:35:21.396718 6815 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0128 18:35:21.396785 6815 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0128 18:35:21.396796 6815 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0128 18:35:21.397012 6815 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T18:35:20Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-wr282_openshift-ovn-kubernetes(70686e42-b434-4ff9-9753-cfc870beef82)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7lkbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://373ab38644697d3dbfe782ed66982ac1d5510d0a3bc39072c8113f71cc9e97f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7lkbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://733c81890010402d252a1945284a85a3278b603ede433530865f680a1be02477\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://733c81890010402d252a1945284a85a3278b603ede433530865f680a1be02477\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7lkbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:34:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-wr282\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:35:23Z is after 2025-08-24T17:21:41Z" Jan 28 18:35:23 crc kubenswrapper[4721]: I0128 18:35:23.296001 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rgqdt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c0a22020-3f34-4895-beec-2ed5d829ea79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:35:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:35:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2588c3d36133bd9b96114f5d12622916ac785bea9be47d12a3d76d8585c3e0ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9d26691be7d95ffd613cc84f59222c67d29ab459d1c42ca07d20fe928d7fbc4a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T18:35:18Z\\\",\\\"message\\\":\\\"2026-01-28T18:34:32+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_77ae07de-fe49-4308-874b-bb02ed4b202b\\\\n2026-01-28T18:34:32+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_77ae07de-fe49-4308-874b-bb02ed4b202b to 
/host/opt/cni/bin/\\\\n2026-01-28T18:34:33Z [verbose] multus-daemon started\\\\n2026-01-28T18:34:33Z [verbose] Readiness Indicator file check\\\\n2026-01-28T18:35:18Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T18:34:31Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:35:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l86pm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:34:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rgqdt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:35:23Z is after 2025-08-24T17:21:41Z" Jan 28 18:35:23 crc kubenswrapper[4721]: I0128 18:35:23.319949 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"50459cb7-bb46-4f2f-a119-03a6102ad146\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:33:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97a290731225e8b998c044c9547db36b5af73889929db5fbc37b6b4170f5dbb7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9efdfabfba8f11d2bdeafad77520260137fcfff3ca51ae77afd4ee9b141f602\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3c1b4d37e11f8a4ee9539e5ea3cce77b7d382133b9e59ccd9c3e8aa2b308754\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52a1199a0611555765afe8fcf01774278848806
4c958d0c75c46d51ddc2ff9d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://316506ed1e9f91314d427e1b6c3a44ee58d0c4a3a8a93b97bdf098703a6a2f34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f1a6a669a0341baa2d7fb17bcebfe702f96d1192ca35c3f2d00e463778271d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f1a6a669a0341baa2d7fb17bcebfe702f96d1192ca35c3f2d00e463778271d6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:33:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d3a248e15466c89ce3b237a22ce54365004ea86dc1541e1ac44e028f91bf19f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d3a248e15466c89ce3b237a22ce54365004ea86dc1541e1ac44e028f91bf19f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:34:02Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f2e1f3a63445bad45029602f52863bfdf2fab495711e3318d68fee042530acb1\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f2e1f3a63445bad45029602f52863bfdf2fab495711e3318d68fee042530acb1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:34:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:33:56Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:35:23Z is after 2025-08-24T17:21:41Z" Jan 28 18:35:23 crc kubenswrapper[4721]: I0128 18:35:23.343478 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"94f18835-9a2d-4427-bc71-e4cd48b94c19\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:33:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a345c3bb49064600db61f16588229b6877293710e6c21c15d91746c2f1d50b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://93950542dfa7bda5686ef6d8ff927d14
baafd149893c732658e9aa3916be64ae\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://486a6284a1e3930aac13368062b8796b4c271214dd08cb60c7ae2ce7b4a45a05\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0339ccd2bd809dc715d0e334b596c5abdebca1af6ac5f7afefa81aa8baa470ed\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ca3243d75b5ce4aff178d82c45c7d1f765013233b5736600151adb4801aa462b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T18:34:21Z\\\",\\\"message\\\":\\\"le observer\\\\nW0128 18:34:20.724978 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0128 18:34:20.725313 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 18:34:20.726636 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2540251349/tls.crt::/tmp/serving-cert-2540251349/tls.key\\\\\\\"\\\\nI0128 18:34:21.177830 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0128 18:34:21.196258 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0128 18:34:21.196291 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0128 18:34:21.196317 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0128 18:34:21.196323 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0128 18:34:21.230729 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0128 18:34:21.230760 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0128 18:34:21.230771 1 
secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 18:34:21.230781 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 18:34:21.230789 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0128 18:34:21.230795 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0128 18:34:21.230800 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0128 18:34:21.230804 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0128 18:34:21.233981 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T18:34:20Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://90f1279d8262e042527207dbe56a1d08ba554650510bebfc53bd24cd84532026\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:06Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1499c7b625ec721799e4cf54e3de39ab75a752bbb0450a8eeffcdb6259f8a585\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1499c7b625ec721799e4cf54e3de39ab75a752bbb0450a8eeffcdb6259f8a585\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:33:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:33:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:35:23Z is after 2025-08-24T17:21:41Z" Jan 28 18:35:23 crc kubenswrapper[4721]: I0128 18:35:23.359916 4721 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:23 crc kubenswrapper[4721]: I0128 18:35:23.359971 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:23 crc kubenswrapper[4721]: I0128 18:35:23.359983 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:23 crc kubenswrapper[4721]: I0128 18:35:23.360006 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:23 crc kubenswrapper[4721]: I0128 18:35:23.360022 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:23Z","lastTransitionTime":"2026-01-28T18:35:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:35:23 crc kubenswrapper[4721]: I0128 18:35:23.463209 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:23 crc kubenswrapper[4721]: I0128 18:35:23.463267 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:23 crc kubenswrapper[4721]: I0128 18:35:23.463282 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:23 crc kubenswrapper[4721]: I0128 18:35:23.463310 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:23 crc kubenswrapper[4721]: I0128 18:35:23.463326 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:23Z","lastTransitionTime":"2026-01-28T18:35:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:35:23 crc kubenswrapper[4721]: I0128 18:35:23.521546 4721 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-17 05:01:01.570480062 +0000 UTC Jan 28 18:35:23 crc kubenswrapper[4721]: I0128 18:35:23.566814 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:23 crc kubenswrapper[4721]: I0128 18:35:23.566881 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:23 crc kubenswrapper[4721]: I0128 18:35:23.566894 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:23 crc kubenswrapper[4721]: I0128 18:35:23.566913 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:23 crc kubenswrapper[4721]: I0128 18:35:23.566926 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:23Z","lastTransitionTime":"2026-01-28T18:35:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:35:23 crc kubenswrapper[4721]: I0128 18:35:23.670512 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:23 crc kubenswrapper[4721]: I0128 18:35:23.670585 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:23 crc kubenswrapper[4721]: I0128 18:35:23.670604 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:23 crc kubenswrapper[4721]: I0128 18:35:23.670632 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:23 crc kubenswrapper[4721]: I0128 18:35:23.670651 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:23Z","lastTransitionTime":"2026-01-28T18:35:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:35:23 crc kubenswrapper[4721]: I0128 18:35:23.774009 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:23 crc kubenswrapper[4721]: I0128 18:35:23.774061 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:23 crc kubenswrapper[4721]: I0128 18:35:23.774075 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:23 crc kubenswrapper[4721]: I0128 18:35:23.774091 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:23 crc kubenswrapper[4721]: I0128 18:35:23.774102 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:23Z","lastTransitionTime":"2026-01-28T18:35:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:35:23 crc kubenswrapper[4721]: I0128 18:35:23.877095 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:23 crc kubenswrapper[4721]: I0128 18:35:23.877185 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:23 crc kubenswrapper[4721]: I0128 18:35:23.877199 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:23 crc kubenswrapper[4721]: I0128 18:35:23.877221 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:23 crc kubenswrapper[4721]: I0128 18:35:23.877238 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:23Z","lastTransitionTime":"2026-01-28T18:35:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:35:23 crc kubenswrapper[4721]: I0128 18:35:23.979663 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:23 crc kubenswrapper[4721]: I0128 18:35:23.979714 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:23 crc kubenswrapper[4721]: I0128 18:35:23.979725 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:23 crc kubenswrapper[4721]: I0128 18:35:23.979742 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:23 crc kubenswrapper[4721]: I0128 18:35:23.979754 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:23Z","lastTransitionTime":"2026-01-28T18:35:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:35:24 crc kubenswrapper[4721]: I0128 18:35:24.081621 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:24 crc kubenswrapper[4721]: I0128 18:35:24.081659 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:24 crc kubenswrapper[4721]: I0128 18:35:24.081669 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:24 crc kubenswrapper[4721]: I0128 18:35:24.081684 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:24 crc kubenswrapper[4721]: I0128 18:35:24.081693 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:24Z","lastTransitionTime":"2026-01-28T18:35:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:35:24 crc kubenswrapper[4721]: I0128 18:35:24.184252 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:24 crc kubenswrapper[4721]: I0128 18:35:24.184320 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:24 crc kubenswrapper[4721]: I0128 18:35:24.184333 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:24 crc kubenswrapper[4721]: I0128 18:35:24.184350 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:24 crc kubenswrapper[4721]: I0128 18:35:24.184362 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:24Z","lastTransitionTime":"2026-01-28T18:35:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:35:24 crc kubenswrapper[4721]: I0128 18:35:24.286619 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:24 crc kubenswrapper[4721]: I0128 18:35:24.286663 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:24 crc kubenswrapper[4721]: I0128 18:35:24.286673 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:24 crc kubenswrapper[4721]: I0128 18:35:24.286687 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:24 crc kubenswrapper[4721]: I0128 18:35:24.286697 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:24Z","lastTransitionTime":"2026-01-28T18:35:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:35:24 crc kubenswrapper[4721]: I0128 18:35:24.390006 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:24 crc kubenswrapper[4721]: I0128 18:35:24.390059 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:24 crc kubenswrapper[4721]: I0128 18:35:24.390068 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:24 crc kubenswrapper[4721]: I0128 18:35:24.390083 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:24 crc kubenswrapper[4721]: I0128 18:35:24.390094 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:24Z","lastTransitionTime":"2026-01-28T18:35:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:35:24 crc kubenswrapper[4721]: I0128 18:35:24.492535 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:24 crc kubenswrapper[4721]: I0128 18:35:24.492574 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:24 crc kubenswrapper[4721]: I0128 18:35:24.492582 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:24 crc kubenswrapper[4721]: I0128 18:35:24.492600 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:24 crc kubenswrapper[4721]: I0128 18:35:24.492610 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:24Z","lastTransitionTime":"2026-01-28T18:35:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:35:24 crc kubenswrapper[4721]: I0128 18:35:24.522615 4721 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-13 20:59:13.434179134 +0000 UTC Jan 28 18:35:24 crc kubenswrapper[4721]: I0128 18:35:24.528190 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 18:35:24 crc kubenswrapper[4721]: I0128 18:35:24.528199 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 18:35:24 crc kubenswrapper[4721]: I0128 18:35:24.528247 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jqvck" Jan 28 18:35:24 crc kubenswrapper[4721]: I0128 18:35:24.528327 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 18:35:24 crc kubenswrapper[4721]: E0128 18:35:24.528508 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 18:35:24 crc kubenswrapper[4721]: E0128 18:35:24.528718 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jqvck" podUID="f3440038-c980-4fb4-be99-235515ec221c" Jan 28 18:35:24 crc kubenswrapper[4721]: E0128 18:35:24.528814 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 18:35:24 crc kubenswrapper[4721]: E0128 18:35:24.528917 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 18:35:24 crc kubenswrapper[4721]: I0128 18:35:24.595123 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:24 crc kubenswrapper[4721]: I0128 18:35:24.595154 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:24 crc kubenswrapper[4721]: I0128 18:35:24.595162 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:24 crc kubenswrapper[4721]: I0128 18:35:24.595191 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:24 crc kubenswrapper[4721]: I0128 18:35:24.595204 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:24Z","lastTransitionTime":"2026-01-28T18:35:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:35:24 crc kubenswrapper[4721]: I0128 18:35:24.698074 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:24 crc kubenswrapper[4721]: I0128 18:35:24.698136 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:24 crc kubenswrapper[4721]: I0128 18:35:24.698154 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:24 crc kubenswrapper[4721]: I0128 18:35:24.698204 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:24 crc kubenswrapper[4721]: I0128 18:35:24.698222 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:24Z","lastTransitionTime":"2026-01-28T18:35:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:35:24 crc kubenswrapper[4721]: I0128 18:35:24.800885 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:24 crc kubenswrapper[4721]: I0128 18:35:24.800936 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:24 crc kubenswrapper[4721]: I0128 18:35:24.800947 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:24 crc kubenswrapper[4721]: I0128 18:35:24.800965 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:24 crc kubenswrapper[4721]: I0128 18:35:24.800975 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:24Z","lastTransitionTime":"2026-01-28T18:35:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:35:24 crc kubenswrapper[4721]: I0128 18:35:24.903320 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:24 crc kubenswrapper[4721]: I0128 18:35:24.903367 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:24 crc kubenswrapper[4721]: I0128 18:35:24.903381 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:24 crc kubenswrapper[4721]: I0128 18:35:24.903399 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:24 crc kubenswrapper[4721]: I0128 18:35:24.903411 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:24Z","lastTransitionTime":"2026-01-28T18:35:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:35:25 crc kubenswrapper[4721]: I0128 18:35:25.005462 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:25 crc kubenswrapper[4721]: I0128 18:35:25.005529 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:25 crc kubenswrapper[4721]: I0128 18:35:25.005538 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:25 crc kubenswrapper[4721]: I0128 18:35:25.005552 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:25 crc kubenswrapper[4721]: I0128 18:35:25.005570 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:25Z","lastTransitionTime":"2026-01-28T18:35:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:35:25 crc kubenswrapper[4721]: I0128 18:35:25.108645 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:25 crc kubenswrapper[4721]: I0128 18:35:25.108682 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:25 crc kubenswrapper[4721]: I0128 18:35:25.108691 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:25 crc kubenswrapper[4721]: I0128 18:35:25.108707 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:25 crc kubenswrapper[4721]: I0128 18:35:25.108716 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:25Z","lastTransitionTime":"2026-01-28T18:35:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:35:25 crc kubenswrapper[4721]: I0128 18:35:25.211483 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:25 crc kubenswrapper[4721]: I0128 18:35:25.211546 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:25 crc kubenswrapper[4721]: I0128 18:35:25.211567 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:25 crc kubenswrapper[4721]: I0128 18:35:25.211593 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:25 crc kubenswrapper[4721]: I0128 18:35:25.211607 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:25Z","lastTransitionTime":"2026-01-28T18:35:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:35:25 crc kubenswrapper[4721]: I0128 18:35:25.314122 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:25 crc kubenswrapper[4721]: I0128 18:35:25.314156 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:25 crc kubenswrapper[4721]: I0128 18:35:25.314181 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:25 crc kubenswrapper[4721]: I0128 18:35:25.314211 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:25 crc kubenswrapper[4721]: I0128 18:35:25.314222 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:25Z","lastTransitionTime":"2026-01-28T18:35:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:35:25 crc kubenswrapper[4721]: I0128 18:35:25.418346 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:25 crc kubenswrapper[4721]: I0128 18:35:25.418424 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:25 crc kubenswrapper[4721]: I0128 18:35:25.418459 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:25 crc kubenswrapper[4721]: I0128 18:35:25.418500 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:25 crc kubenswrapper[4721]: I0128 18:35:25.418524 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:25Z","lastTransitionTime":"2026-01-28T18:35:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:35:25 crc kubenswrapper[4721]: I0128 18:35:25.521357 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:25 crc kubenswrapper[4721]: I0128 18:35:25.522033 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:25 crc kubenswrapper[4721]: I0128 18:35:25.522382 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:25 crc kubenswrapper[4721]: I0128 18:35:25.522463 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:25 crc kubenswrapper[4721]: I0128 18:35:25.522522 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:25Z","lastTransitionTime":"2026-01-28T18:35:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:35:25 crc kubenswrapper[4721]: I0128 18:35:25.523510 4721 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-30 18:58:31.73566132 +0000 UTC Jan 28 18:35:25 crc kubenswrapper[4721]: I0128 18:35:25.542160 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e670fafb-703c-4cc9-b670-d25ae62d87a0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:33:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f97092be25e90c4a15af043397b1cbcefdb3a3511a80a046496bef807abc8f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8351f80de5ab5d11c5a87270e69a8ebd20b3a804671e20b991f1fc77ba27bae8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8465df12048ab0feaba16e1935fa17feb4fe967ab3e4ef37981bed51ff77911\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\
"startedAt\\\":\\\"2026-01-28T18:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47389981052e7ab8530932e01c95d9b178f2c55b21e03b460d3f02dddbcf4830\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://47389981052e7ab8530932e01c95d9b178f2c55b21e03b460d3f02dddbcf4830\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:33:58Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:33:56Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:35:25Z is after 2025-08-24T17:21:41Z" Jan 28 18:35:25 crc kubenswrapper[4721]: I0128 18:35:25.556546 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:24Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:35:25Z is after 2025-08-24T17:21:41Z" Jan 28 18:35:25 crc kubenswrapper[4721]: I0128 18:35:25.570720 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:24Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:35:25Z is after 2025-08-24T17:21:41Z" Jan 28 18:35:25 crc kubenswrapper[4721]: I0128 18:35:25.587290 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa3af2d00ecbcf4d30c4c81191306dd45625f55032058b314ce2b91f6b2033e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:35:25Z is after 2025-08-24T17:21:41Z" Jan 28 18:35:25 crc kubenswrapper[4721]: I0128 18:35:25.599476 4721 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-76rx2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e3427a4-9a03-4a08-bf7f-7a5e96290ad6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9843dea4333a77dd5b0005984b9f8e7c7c993b28f89f5d6432477bfde3383339\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4cj5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bbd26672afb8ed608228f3f2101f430b1eaa5ccea0e8074fad597dec347c4522\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4cj5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:34:30Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-76rx2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:35:25Z is after 2025-08-24T17:21:41Z" Jan 28 
18:35:25 crc kubenswrapper[4721]: I0128 18:35:25.609816 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cdf9d0d3-f468-468d-a84a-376800ac08e1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:33:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ff405f7132b5a1a0e2abc66c8e4c0abbd732bdf90cb2b4b2867dd10b8e62921\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f0406ac4c224de266eab94d3b9c12d110ee36b7ccb34381fc284ade389ba042\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f0406ac4c224de266eab94d3b9c12d110ee36b7ccb34381fc284ade389ba042\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:33:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:33:56Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:35:25Z is after 2025-08-24T17:21:41Z" Jan 28 18:35:25 crc kubenswrapper[4721]: I0128 18:35:25.620539 4721 status_manager.go:875] "Failed to update status for 
pod" pod="openshift-image-registry/node-ca-rk2l2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bdd30376-1599-4efc-bb55-7585e8702b60\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa0ff1b9df47dfd65597208cdd31f5cb8ce7f4c9c170d298db28d6f04da7b7a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wknj5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:34:32Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-rk2l2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:35:25Z is after 2025-08-24T17:21:41Z" Jan 28 18:35:25 crc kubenswrapper[4721]: I0128 18:35:25.628721 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:25 crc kubenswrapper[4721]: I0128 18:35:25.628782 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:25 crc kubenswrapper[4721]: I0128 18:35:25.628797 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:25 crc kubenswrapper[4721]: I0128 18:35:25.628818 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:25 crc kubenswrapper[4721]: I0128 18:35:25.628831 4721 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:25Z","lastTransitionTime":"2026-01-28T18:35:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:35:25 crc kubenswrapper[4721]: I0128 18:35:25.635685 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-x8hw8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8ac7e75e-c5bb-4b57-b2ba-9ebe8b8fbd88\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cba8dd293cf5ae7b0c987c6ee3b24da02d2687ee54292da92e28ca627ed3eaca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqp9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a3c1211b73ca96ac22854f0cb677a0088a679ad56b104ea6b8e0871884a3a71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqp9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"i
p\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:34:43Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-x8hw8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:35:25Z is after 2025-08-24T17:21:41Z" Jan 28 18:35:25 crc kubenswrapper[4721]: I0128 18:35:25.647614 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-jqvck" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f3440038-c980-4fb4-be99-235515ec221c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96np9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96np9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:34:44Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-jqvck\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:35:25Z is after 2025-08-24T17:21:41Z" Jan 28 18:35:25 crc kubenswrapper[4721]: I0128 18:35:25.665854 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wr282" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"70686e42-b434-4ff9-9753-cfc870beef82\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ac8a8ea8e677c233fbcb1ac69446f0251b2e390e84a39cd77c7ccf1a2041908\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7lkbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://989948e16979a2cd8568ed93334056da9feb5bd7226f4124428dcffc09f13d41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics
-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7lkbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2cc9ef8160c40e72ae736136ef051b06552caea2539fad7969c5f923eba0c2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7lkbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7df50ebf5999c8b004f27f801fc166377523c7b42171fec957ab82c88b529794\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7lkbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11610a0739792fa0417c894ad5b1c0d46d7188b585f8f4c1a1e7e66cd6d8bd46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7lkbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabl
ed\\\"}]},{\\\"containerID\\\":\\\"cri-o://44279a74a34d26b6eee05ac26d51a21492dddab4288ca5e09665c191cbacd90c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7lkbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://693b094ac66f7858f7020708df804e9e12fa5a8c510841171e65ac69cb6a0e11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://693b094ac66f7858f7020708df804e9e12fa5a8c510841171e65ac69cb6a0e11\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T18:35:21Z\\\",\\\"message\\\":\\\"nt handler 7\\\\nI0128 18:35:21.396048 6815 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0128 18:35:21.396110 6815 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0128 18:35:21.396136 6815 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0128 18:35:21.396134 6815 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0128 18:35:21.396466 6815 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0128 18:35:21.396513 6815 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0128 18:35:21.396589 6815 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0128 18:35:21.396637 6815 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0128 18:35:21.396647 6815 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0128 18:35:21.396716 6815 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0128 18:35:21.396730 6815 factory.go:656] Stopping watch factory\\\\nI0128 18:35:21.396750 6815 ovnkube.go:599] Stopped ovnkube\\\\nI0128 18:35:21.396718 6815 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0128 18:35:21.396785 6815 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0128 18:35:21.396796 6815 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0128 18:35:21.397012 6815 
ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T18:35:20Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-wr282_openshift-ovn-kubernetes(70686e42-b434-4ff9-9753-cfc870beef82)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7lkbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://373ab38644697d3dbfe782ed66982ac1d5510d0a3bc39072c8113f71cc9e97f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7lkbp\\\",\\\"readOnly\\\":true,\\\"r
ecursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://733c81890010402d252a1945284a85a3278b603ede433530865f680a1be02477\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://733c81890010402d252a1945284a85a3278b603ede433530865f680a1be02477\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7lkbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:34:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-wr282\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:35:25Z is after 2025-08-24T17:21:41Z" Jan 28 18:35:25 crc kubenswrapper[4721]: I0128 18:35:25.680301 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rgqdt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c0a22020-3f34-4895-beec-2ed5d829ea79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:35:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:35:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2588c3d36133bd9b96114f5d12622916ac785bea9be47d12a3d76d8585c3e0ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9d26691be7d95ffd613cc84f59222c67d29ab459d1c42ca07d20fe928d7fbc4a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T18:35:18Z\\\",\\\"message\\\":\\\"2026-01-28T18:34:32+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_77ae07de-fe49-4308-874b-bb02ed4b202b\\\\n2026-01-28T18:34:32+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_77ae07de-fe49-4308-874b-bb02ed4b202b to /host/opt/cni/bin/\\\\n2026-01-28T18:34:33Z [verbose] multus-daemon started\\\\n2026-01-28T18:34:33Z [verbose] Readiness Indicator file check\\\\n2026-01-28T18:35:18Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T18:34:31Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:35:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l86pm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:34:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rgqdt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:35:25Z is after 2025-08-24T17:21:41Z" Jan 28 18:35:25 crc kubenswrapper[4721]: I0128 18:35:25.698287 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"50459cb7-bb46-4f2f-a119-03a6102ad146\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:33:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97a290731225e8b998c044c9547db36b5af73889929db5fbc37b6b4170f5dbb7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9efdfabfba8f11d2bdeafad77520260137fcfff3ca51ae77afd4ee9b141f602\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3c1b4d37e11f8a4ee9539e5ea3cce77b7d382133b9e59ccd9c3e8aa2b308754\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52a1199a0611555765afe8fcf01774278848806
4c958d0c75c46d51ddc2ff9d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://316506ed1e9f91314d427e1b6c3a44ee58d0c4a3a8a93b97bdf098703a6a2f34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f1a6a669a0341baa2d7fb17bcebfe702f96d1192ca35c3f2d00e463778271d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f1a6a669a0341baa2d7fb17bcebfe702f96d1192ca35c3f2d00e463778271d6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:33:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d3a248e15466c89ce3b237a22ce54365004ea86dc1541e1ac44e028f91bf19f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d3a248e15466c89ce3b237a22ce54365004ea86dc1541e1ac44e028f91bf19f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:34:02Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f2e1f3a63445bad45029602f52863bfdf2fab495711e3318d68fee042530acb1\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f2e1f3a63445bad45029602f52863bfdf2fab495711e3318d68fee042530acb1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:34:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:33:56Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:35:25Z is after 2025-08-24T17:21:41Z" Jan 28 18:35:25 crc kubenswrapper[4721]: I0128 18:35:25.711613 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"94f18835-9a2d-4427-bc71-e4cd48b94c19\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:33:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a345c3bb49064600db61f16588229b6877293710e6c21c15d91746c2f1d50b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://93950542dfa7bda5686ef6d8ff927d14
baafd149893c732658e9aa3916be64ae\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://486a6284a1e3930aac13368062b8796b4c271214dd08cb60c7ae2ce7b4a45a05\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0339ccd2bd809dc715d0e334b596c5abdebca1af6ac5f7afefa81aa8baa470ed\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ca3243d75b5ce4aff178d82c45c7d1f765013233b5736600151adb4801aa462b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T18:34:21Z\\\",\\\"message\\\":\\\"le observer\\\\nW0128 18:34:20.724978 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0128 18:34:20.725313 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 18:34:20.726636 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2540251349/tls.crt::/tmp/serving-cert-2540251349/tls.key\\\\\\\"\\\\nI0128 18:34:21.177830 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0128 18:34:21.196258 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0128 18:34:21.196291 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0128 18:34:21.196317 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0128 18:34:21.196323 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0128 18:34:21.230729 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0128 18:34:21.230760 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0128 18:34:21.230771 1 
secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 18:34:21.230781 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 18:34:21.230789 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0128 18:34:21.230795 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0128 18:34:21.230800 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0128 18:34:21.230804 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0128 18:34:21.233981 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T18:34:20Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://90f1279d8262e042527207dbe56a1d08ba554650510bebfc53bd24cd84532026\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:06Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1499c7b625ec721799e4cf54e3de39ab75a752bbb0450a8eeffcdb6259f8a585\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1499c7b625ec721799e4cf54e3de39ab75a752bbb0450a8eeffcdb6259f8a585\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:33:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:33:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:35:25Z is after 2025-08-24T17:21:41Z" Jan 28 18:35:25 crc kubenswrapper[4721]: I0128 18:35:25.726985 4721 status_manager.go:875] 
"Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"86fc797f-6769-4801-8cb0-b9f25c9ec29b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:33:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:33:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e8c8cd370cc146f15325497afef171437a3c7c3d4e82cda99a321bf847399bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f712f75bf0603b358c696f1a62f2363254ab926615ab377291c7083308e3f2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:33:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6143e461a697375ca79df96494f8cf4a575e622428db241f50a52ae1c67acbc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e37a3261383445df
2e4050fb0b3f92c0afc6379c71eb061475783a72cd37459f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:33:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:35:25Z is after 2025-08-24T17:21:41Z" Jan 28 18:35:25 crc kubenswrapper[4721]: I0128 18:35:25.730955 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:25 crc kubenswrapper[4721]: I0128 18:35:25.730993 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:25 crc kubenswrapper[4721]: I0128 18:35:25.731004 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:25 crc kubenswrapper[4721]: I0128 18:35:25.731017 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:25 crc kubenswrapper[4721]: I0128 18:35:25.731026 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:25Z","lastTransitionTime":"2026-01-28T18:35:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:35:25 crc kubenswrapper[4721]: I0128 18:35:25.739534 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:35:25Z is after 2025-08-24T17:21:41Z" Jan 28 18:35:25 crc kubenswrapper[4721]: I0128 18:35:25.751282 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56147b9675ae31e5e8b4473346c69d90a1013ab084214a7fc34295f810b229a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5dc3277d9793a59f2e2a2b4d015782f383df20b594be435f894d24dbe5b837cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:35:25Z is after 2025-08-24T17:21:41Z" Jan 28 18:35:25 crc kubenswrapper[4721]: I0128 18:35:25.763896 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef61ab09457440960cf27586f65751f928ccc999db19fd16364262ce2449cd97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:35:25Z is after 2025-08-24T17:21:41Z" Jan 28 18:35:25 crc kubenswrapper[4721]: I0128 18:35:25.775820 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-lf92l" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"20d04cbd-fcf1-4d48-9cca-1dd29b13c938\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://709d01fbeba51f6f21b60d9fd60aa9b44ba9e4f2504c19b7fd8d279f8c13c1e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mvqr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:34:30Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-lf92l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:35:25Z is after 2025-08-24T17:21:41Z" Jan 28 18:35:25 crc kubenswrapper[4721]: I0128 18:35:25.790508 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7vsph" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"942c0bcf-8f75-42e8-a5c0-af4c640eb13c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ce38a3c3aa72d0816ff32ec77af100dc6f2a8761362992f01b88f836c44f65f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s84mk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5222e9a5ac55f20acbe1105e84f22c96889f410246b7ef423236f5d3e216f43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d5222e9a5ac55f20acbe1105e84f22c96889f410246b7ef423236f5d3e216f43\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s84mk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e702f148f20abb9e5e9453eb1b9ae0cc094138beb1fa048753ff3b497517e942\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e702f148f20abb9e5e9453eb1b9ae0cc094138beb1fa048753ff3b497517e942\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s84mk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a55ef87212072ac9f6d05d27d54f7a3f13add617b1a07a2fd1e0a652fe15cb3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a55ef87212072ac9f6d05d27d54f7a3f13add617b1a07a2fd1e0a652fe15cb3a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:34:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s84mk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea7d40aa704ad5ddaaa471ab8606dcece38b229fd8affcce9bb1959e5c19f092\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ea7d40aa704ad5ddaaa471ab8606dcece38b229fd8affcce9bb1959e5c19f092\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:34:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s84mk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1829c2eed51af896c5ccfa25c1f23ea971ea9d54e9db53415095c5ea43ac4062\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1829c2eed51af896c5ccfa25c1f23ea971ea9d54e9db53415095c5ea43ac4062\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:34:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s84mk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b576c2dcc0ad83742dbc75f744d08eb8b881f9be4f9d16f4d4d3004ce174f24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b576c2dcc0ad83742dbc75f744d08eb8b881f9be4f9d16f4d4d3004ce174f24\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:34:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s84mk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:34:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7vsph\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:35:25Z is after 2025-08-24T17:21:41Z" Jan 28 18:35:25 crc kubenswrapper[4721]: I0128 18:35:25.833730 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:25 crc kubenswrapper[4721]: I0128 18:35:25.834009 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:25 crc 
kubenswrapper[4721]: I0128 18:35:25.834081 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 18:35:25 crc kubenswrapper[4721]: I0128 18:35:25.834216 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 18:35:25 crc kubenswrapper[4721]: I0128 18:35:25.834289 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:25Z","lastTransitionTime":"2026-01-28T18:35:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 18:35:25 crc kubenswrapper[4721]: I0128 18:35:25.937524 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 18:35:25 crc kubenswrapper[4721]: I0128 18:35:25.937588 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 18:35:25 crc kubenswrapper[4721]: I0128 18:35:25.937599 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 18:35:25 crc kubenswrapper[4721]: I0128 18:35:25.937618 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 18:35:25 crc kubenswrapper[4721]: I0128 18:35:25.937630 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:25Z","lastTransitionTime":"2026-01-28T18:35:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 18:35:26 crc kubenswrapper[4721]: I0128 18:35:26.040204 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 18:35:26 crc kubenswrapper[4721]: I0128 18:35:26.040240 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 18:35:26 crc kubenswrapper[4721]: I0128 18:35:26.040250 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 18:35:26 crc kubenswrapper[4721]: I0128 18:35:26.040266 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 18:35:26 crc kubenswrapper[4721]: I0128 18:35:26.040275 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:26Z","lastTransitionTime":"2026-01-28T18:35:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 18:35:26 crc kubenswrapper[4721]: I0128 18:35:26.142859 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 18:35:26 crc kubenswrapper[4721]: I0128 18:35:26.142893 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 18:35:26 crc kubenswrapper[4721]: I0128 18:35:26.142902 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 18:35:26 crc kubenswrapper[4721]: I0128 18:35:26.142919 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 18:35:26 crc kubenswrapper[4721]: I0128 18:35:26.142928 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:26Z","lastTransitionTime":"2026-01-28T18:35:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 18:35:26 crc kubenswrapper[4721]: I0128 18:35:26.246047 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 18:35:26 crc kubenswrapper[4721]: I0128 18:35:26.246137 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 18:35:26 crc kubenswrapper[4721]: I0128 18:35:26.246160 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 18:35:26 crc kubenswrapper[4721]: I0128 18:35:26.246244 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 18:35:26 crc kubenswrapper[4721]: I0128 18:35:26.246265 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:26Z","lastTransitionTime":"2026-01-28T18:35:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 18:35:26 crc kubenswrapper[4721]: I0128 18:35:26.348860 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 18:35:26 crc kubenswrapper[4721]: I0128 18:35:26.349784 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 18:35:26 crc kubenswrapper[4721]: I0128 18:35:26.351764 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 18:35:26 crc kubenswrapper[4721]: I0128 18:35:26.351885 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 18:35:26 crc kubenswrapper[4721]: I0128 18:35:26.351970 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:26Z","lastTransitionTime":"2026-01-28T18:35:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 18:35:26 crc kubenswrapper[4721]: I0128 18:35:26.454526 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 18:35:26 crc kubenswrapper[4721]: I0128 18:35:26.454557 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 18:35:26 crc kubenswrapper[4721]: I0128 18:35:26.454566 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 18:35:26 crc kubenswrapper[4721]: I0128 18:35:26.454580 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 18:35:26 crc kubenswrapper[4721]: I0128 18:35:26.454588 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:26Z","lastTransitionTime":"2026-01-28T18:35:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 18:35:26 crc kubenswrapper[4721]: I0128 18:35:26.524010 4721 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-23 14:58:35.506105347 +0000 UTC
Jan 28 18:35:26 crc kubenswrapper[4721]: I0128 18:35:26.528271 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jqvck"
Jan 28 18:35:26 crc kubenswrapper[4721]: I0128 18:35:26.528271 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 28 18:35:26 crc kubenswrapper[4721]: I0128 18:35:26.528317 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 28 18:35:26 crc kubenswrapper[4721]: I0128 18:35:26.528601 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 28 18:35:26 crc kubenswrapper[4721]: E0128 18:35:26.528852 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jqvck" podUID="f3440038-c980-4fb4-be99-235515ec221c"
Jan 28 18:35:26 crc kubenswrapper[4721]: E0128 18:35:26.529039 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 28 18:35:26 crc kubenswrapper[4721]: E0128 18:35:26.529091 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 28 18:35:26 crc kubenswrapper[4721]: E0128 18:35:26.529189 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 28 18:35:26 crc kubenswrapper[4721]: I0128 18:35:26.557294 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 18:35:26 crc kubenswrapper[4721]: I0128 18:35:26.557334 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 18:35:26 crc kubenswrapper[4721]: I0128 18:35:26.557346 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 18:35:26 crc kubenswrapper[4721]: I0128 18:35:26.557362 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 18:35:26 crc kubenswrapper[4721]: I0128 18:35:26.557371 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:26Z","lastTransitionTime":"2026-01-28T18:35:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 18:35:26 crc kubenswrapper[4721]: I0128 18:35:26.659309 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 18:35:26 crc kubenswrapper[4721]: I0128 18:35:26.659358 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 18:35:26 crc kubenswrapper[4721]: I0128 18:35:26.659370 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 18:35:26 crc kubenswrapper[4721]: I0128 18:35:26.659391 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 18:35:26 crc kubenswrapper[4721]: I0128 18:35:26.659402 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:26Z","lastTransitionTime":"2026-01-28T18:35:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 18:35:26 crc kubenswrapper[4721]: I0128 18:35:26.762296 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 18:35:26 crc kubenswrapper[4721]: I0128 18:35:26.762356 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 18:35:26 crc kubenswrapper[4721]: I0128 18:35:26.762368 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 18:35:26 crc kubenswrapper[4721]: I0128 18:35:26.762387 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 18:35:26 crc kubenswrapper[4721]: I0128 18:35:26.762398 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:26Z","lastTransitionTime":"2026-01-28T18:35:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 18:35:26 crc kubenswrapper[4721]: I0128 18:35:26.864452 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 18:35:26 crc kubenswrapper[4721]: I0128 18:35:26.864487 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 18:35:26 crc kubenswrapper[4721]: I0128 18:35:26.864499 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 18:35:26 crc kubenswrapper[4721]: I0128 18:35:26.864515 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 18:35:26 crc kubenswrapper[4721]: I0128 18:35:26.864526 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:26Z","lastTransitionTime":"2026-01-28T18:35:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 18:35:26 crc kubenswrapper[4721]: I0128 18:35:26.907757 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 18:35:26 crc kubenswrapper[4721]: I0128 18:35:26.907802 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 18:35:26 crc kubenswrapper[4721]: I0128 18:35:26.907816 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 18:35:26 crc kubenswrapper[4721]: I0128 18:35:26.907831 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 18:35:26 crc kubenswrapper[4721]: I0128 18:35:26.907840 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:26Z","lastTransitionTime":"2026-01-28T18:35:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:35:26 crc kubenswrapper[4721]: E0128 18:35:26.923103 4721 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:35:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:35:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:35:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:35:26Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:35:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:35:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:35:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:35:26Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7e2b6ea9-0dbd-4f62-9b1e-c7fac2eb6b3f\\\",\\\"systemUUID\\\":\\\"09e691cb-0cac-419d-a3e2-104cada8c62f\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:35:26Z is after 2025-08-24T17:21:41Z" Jan 28 18:35:26 crc kubenswrapper[4721]: I0128 18:35:26.927993 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:26 crc kubenswrapper[4721]: I0128 18:35:26.928080 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 28 18:35:26 crc kubenswrapper[4721]: I0128 18:35:26.928095 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:26 crc kubenswrapper[4721]: I0128 18:35:26.928117 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:26 crc kubenswrapper[4721]: I0128 18:35:26.928131 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:26Z","lastTransitionTime":"2026-01-28T18:35:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:35:26 crc kubenswrapper[4721]: E0128 18:35:26.940582 4721 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:35:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:35:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:35:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:35:26Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:35:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:35:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:35:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:35:26Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7e2b6ea9-0dbd-4f62-9b1e-c7fac2eb6b3f\\\",\\\"systemUUID\\\":\\\"09e691cb-0cac-419d-a3e2-104cada8c62f\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:35:26Z is after 2025-08-24T17:21:41Z" Jan 28 18:35:26 crc kubenswrapper[4721]: I0128 18:35:26.945898 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:26 crc kubenswrapper[4721]: I0128 18:35:26.945941 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 28 18:35:26 crc kubenswrapper[4721]: I0128 18:35:26.945950 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:26 crc kubenswrapper[4721]: I0128 18:35:26.945970 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:26 crc kubenswrapper[4721]: I0128 18:35:26.945984 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:26Z","lastTransitionTime":"2026-01-28T18:35:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:35:26 crc kubenswrapper[4721]: E0128 18:35:26.959852 4721 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:35:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:35:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:35:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:35:26Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:35:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:35:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:35:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:35:26Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7e2b6ea9-0dbd-4f62-9b1e-c7fac2eb6b3f\\\",\\\"systemUUID\\\":\\\"09e691cb-0cac-419d-a3e2-104cada8c62f\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:35:26Z is after 2025-08-24T17:21:41Z" Jan 28 18:35:26 crc kubenswrapper[4721]: I0128 18:35:26.963769 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:26 crc kubenswrapper[4721]: I0128 18:35:26.963811 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 28 18:35:26 crc kubenswrapper[4721]: I0128 18:35:26.963821 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:26 crc kubenswrapper[4721]: I0128 18:35:26.963838 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:26 crc kubenswrapper[4721]: I0128 18:35:26.963851 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:26Z","lastTransitionTime":"2026-01-28T18:35:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:35:26 crc kubenswrapper[4721]: E0128 18:35:26.978059 4721 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:35:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:35:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:35:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:35:26Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:35:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:35:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:35:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:35:26Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
Jan 28 18:35:26 crc kubenswrapper[4721]: I0128 18:35:26.982454 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:26 crc kubenswrapper[4721]: I0128 18:35:26.982513 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc"
event="NodeHasNoDiskPressure" Jan 28 18:35:26 crc kubenswrapper[4721]: I0128 18:35:26.982522 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:26 crc kubenswrapper[4721]: I0128 18:35:26.982536 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:26 crc kubenswrapper[4721]: I0128 18:35:26.982545 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:26Z","lastTransitionTime":"2026-01-28T18:35:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:35:27 crc kubenswrapper[4721]: E0128 18:35:27.001438 4721 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:35:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:35:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:35:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:35:26Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:35:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:35:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:35:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:35:26Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7e2b6ea9-0dbd-4f62-9b1e-c7fac2eb6b3f\\\",\\\"systemUUID\\\":\\\"09e691cb-0cac-419d-a3e2-104cada8c62f\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:35:26Z is after 2025-08-24T17:21:41Z" Jan 28 18:35:27 crc kubenswrapper[4721]: E0128 18:35:27.001628 4721 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 28 18:35:27 crc kubenswrapper[4721]: I0128 18:35:27.004288 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 28 18:35:27 crc kubenswrapper[4721]: I0128 18:35:27.004345 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:27 crc kubenswrapper[4721]: I0128 18:35:27.004359 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:27 crc kubenswrapper[4721]: I0128 18:35:27.004388 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:27 crc kubenswrapper[4721]: I0128 18:35:27.004405 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:27Z","lastTransitionTime":"2026-01-28T18:35:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:35:27 crc kubenswrapper[4721]: I0128 18:35:27.107473 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:27 crc kubenswrapper[4721]: I0128 18:35:27.107539 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:27 crc kubenswrapper[4721]: I0128 18:35:27.107553 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:27 crc kubenswrapper[4721]: I0128 18:35:27.107578 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:27 crc kubenswrapper[4721]: I0128 18:35:27.107596 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:27Z","lastTransitionTime":"2026-01-28T18:35:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:35:27 crc kubenswrapper[4721]: I0128 18:35:27.210970 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:27 crc kubenswrapper[4721]: I0128 18:35:27.211018 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:27 crc kubenswrapper[4721]: I0128 18:35:27.211027 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:27 crc kubenswrapper[4721]: I0128 18:35:27.211048 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:27 crc kubenswrapper[4721]: I0128 18:35:27.211071 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:27Z","lastTransitionTime":"2026-01-28T18:35:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:35:27 crc kubenswrapper[4721]: I0128 18:35:27.314536 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:27 crc kubenswrapper[4721]: I0128 18:35:27.314612 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:27 crc kubenswrapper[4721]: I0128 18:35:27.314630 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:27 crc kubenswrapper[4721]: I0128 18:35:27.314659 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:27 crc kubenswrapper[4721]: I0128 18:35:27.314678 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:27Z","lastTransitionTime":"2026-01-28T18:35:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:35:27 crc kubenswrapper[4721]: I0128 18:35:27.418694 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:27 crc kubenswrapper[4721]: I0128 18:35:27.419355 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:27 crc kubenswrapper[4721]: I0128 18:35:27.419568 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:27 crc kubenswrapper[4721]: I0128 18:35:27.419785 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:27 crc kubenswrapper[4721]: I0128 18:35:27.419990 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:27Z","lastTransitionTime":"2026-01-28T18:35:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:35:27 crc kubenswrapper[4721]: I0128 18:35:27.523643 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:27 crc kubenswrapper[4721]: I0128 18:35:27.523722 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:27 crc kubenswrapper[4721]: I0128 18:35:27.523745 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:27 crc kubenswrapper[4721]: I0128 18:35:27.523779 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:27 crc kubenswrapper[4721]: I0128 18:35:27.523803 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:27Z","lastTransitionTime":"2026-01-28T18:35:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:35:27 crc kubenswrapper[4721]: I0128 18:35:27.524272 4721 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-10 16:32:04.404530484 +0000 UTC Jan 28 18:35:27 crc kubenswrapper[4721]: I0128 18:35:27.626458 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:27 crc kubenswrapper[4721]: I0128 18:35:27.626511 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:27 crc kubenswrapper[4721]: I0128 18:35:27.626524 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:27 crc kubenswrapper[4721]: I0128 18:35:27.626546 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:27 crc kubenswrapper[4721]: I0128 18:35:27.626559 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:27Z","lastTransitionTime":"2026-01-28T18:35:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:35:27 crc kubenswrapper[4721]: I0128 18:35:27.729648 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:27 crc kubenswrapper[4721]: I0128 18:35:27.729723 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:27 crc kubenswrapper[4721]: I0128 18:35:27.729735 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:27 crc kubenswrapper[4721]: I0128 18:35:27.729754 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:27 crc kubenswrapper[4721]: I0128 18:35:27.729772 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:27Z","lastTransitionTime":"2026-01-28T18:35:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:35:27 crc kubenswrapper[4721]: I0128 18:35:27.832786 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:27 crc kubenswrapper[4721]: I0128 18:35:27.832830 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:27 crc kubenswrapper[4721]: I0128 18:35:27.832840 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:27 crc kubenswrapper[4721]: I0128 18:35:27.832858 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:27 crc kubenswrapper[4721]: I0128 18:35:27.832868 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:27Z","lastTransitionTime":"2026-01-28T18:35:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:35:27 crc kubenswrapper[4721]: I0128 18:35:27.935807 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:27 crc kubenswrapper[4721]: I0128 18:35:27.935844 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:27 crc kubenswrapper[4721]: I0128 18:35:27.935854 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:27 crc kubenswrapper[4721]: I0128 18:35:27.935872 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:27 crc kubenswrapper[4721]: I0128 18:35:27.935882 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:27Z","lastTransitionTime":"2026-01-28T18:35:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:35:28 crc kubenswrapper[4721]: I0128 18:35:28.038327 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:28 crc kubenswrapper[4721]: I0128 18:35:28.038374 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:28 crc kubenswrapper[4721]: I0128 18:35:28.038386 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:28 crc kubenswrapper[4721]: I0128 18:35:28.038406 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:28 crc kubenswrapper[4721]: I0128 18:35:28.038421 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:28Z","lastTransitionTime":"2026-01-28T18:35:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:35:28 crc kubenswrapper[4721]: I0128 18:35:28.141489 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:28 crc kubenswrapper[4721]: I0128 18:35:28.141524 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:28 crc kubenswrapper[4721]: I0128 18:35:28.141535 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:28 crc kubenswrapper[4721]: I0128 18:35:28.141551 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:28 crc kubenswrapper[4721]: I0128 18:35:28.141633 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:28Z","lastTransitionTime":"2026-01-28T18:35:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:35:28 crc kubenswrapper[4721]: I0128 18:35:28.243492 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:28 crc kubenswrapper[4721]: I0128 18:35:28.243527 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:28 crc kubenswrapper[4721]: I0128 18:35:28.243536 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:28 crc kubenswrapper[4721]: I0128 18:35:28.243549 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:28 crc kubenswrapper[4721]: I0128 18:35:28.243560 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:28Z","lastTransitionTime":"2026-01-28T18:35:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:35:28 crc kubenswrapper[4721]: I0128 18:35:28.346866 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:28 crc kubenswrapper[4721]: I0128 18:35:28.346911 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:28 crc kubenswrapper[4721]: I0128 18:35:28.346927 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:28 crc kubenswrapper[4721]: I0128 18:35:28.346944 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:28 crc kubenswrapper[4721]: I0128 18:35:28.346955 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:28Z","lastTransitionTime":"2026-01-28T18:35:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:35:28 crc kubenswrapper[4721]: I0128 18:35:28.377568 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 18:35:28 crc kubenswrapper[4721]: I0128 18:35:28.377803 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 18:35:28 crc kubenswrapper[4721]: E0128 18:35:28.377849 4721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 18:36:32.377811492 +0000 UTC m=+158.103117202 (durationBeforeRetry 1m4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 18:35:28 crc kubenswrapper[4721]: I0128 18:35:28.377955 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 18:35:28 crc kubenswrapper[4721]: I0128 18:35:28.377993 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 18:35:28 crc kubenswrapper[4721]: I0128 18:35:28.378042 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 18:35:28 crc kubenswrapper[4721]: E0128 18:35:28.378042 4721 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 28 18:35:28 crc kubenswrapper[4721]: E0128 18:35:28.378087 4721 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 28 18:35:28 crc kubenswrapper[4721]: E0128 
Jan 28 18:35:28 crc kubenswrapper[4721]: I0128 18:35:28.377955 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 18:35:28 crc kubenswrapper[4721]: I0128 18:35:28.377993 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 18:35:28 crc kubenswrapper[4721]: I0128 18:35:28.378042 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 18:35:28 crc kubenswrapper[4721]: E0128 18:35:28.378042 4721 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 28 18:35:28 crc kubenswrapper[4721]: E0128 18:35:28.378087 4721 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 28 18:35:28 crc kubenswrapper[4721]: E0128 18:35:28.378109 4721 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 28 18:35:28 crc kubenswrapper[4721]: E0128 18:35:28.378156 4721 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 28 18:35:28 crc kubenswrapper[4721]: E0128 18:35:28.378206 4721 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 28 18:35:28 crc kubenswrapper[4721]: E0128 18:35:28.378220 4721 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 28 18:35:28 crc kubenswrapper[4721]: E0128 18:35:28.378234 4721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-28 18:36:32.378206034 +0000 UTC m=+158.103511634 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 28 18:35:28 crc kubenswrapper[4721]: E0128 18:35:28.378241 4721 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 28 18:35:28 crc kubenswrapper[4721]: E0128 18:35:28.378284 4721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-28 18:36:32.378263955 +0000 UTC m=+158.103569515 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 28 18:35:28 crc kubenswrapper[4721]: E0128 18:35:28.378322 4721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-28 18:36:32.378305457 +0000 UTC m=+158.103611217 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered
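The retry gate in these operations is plain arithmetic on the quoted values: each failed mount is re-queued durationBeforeRetry after the failure, 1m4s at this point in the backoff sequence. Checking one of the lines above in Python:

    from datetime import datetime, timedelta, timezone

    # "No retries permitted until 2026-01-28 18:36:32.378206034 +0000 UTC ...
    #  (durationBeforeRetry 1m4s)" -- values quoted from the record above.
    retry_at = datetime(2026, 1, 28, 18, 36, 32, 378206, tzinfo=timezone.utc)
    backoff = timedelta(minutes=1, seconds=4)
    # Subtracting the backoff recovers the failure time, matching the record's
    # own 18:35:28.378xxx timestamp.
    print(retry_at - backoff)  # 2026-01-28 18:35:28.378206+00:00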
Jan 28 18:35:28 crc kubenswrapper[4721]: E0128 18:35:28.378375 4721 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 28 18:35:28 crc kubenswrapper[4721]: E0128 18:35:28.378537 4721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-28 18:36:32.378501563 +0000 UTC m=+158.103807153 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 28 18:35:28 crc kubenswrapper[4721]: I0128 18:35:28.449938 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:28 crc kubenswrapper[4721]: I0128 18:35:28.449988 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:28 crc kubenswrapper[4721]: I0128 18:35:28.449999 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:28 crc kubenswrapper[4721]: I0128 18:35:28.450016 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:28 crc kubenswrapper[4721]: I0128 18:35:28.450029 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:28Z","lastTransitionTime":"2026-01-28T18:35:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:35:28 crc kubenswrapper[4721]: I0128 18:35:28.525137 4721 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-03 08:52:50.521376378 +0000 UTC Jan 28 18:35:28 crc kubenswrapper[4721]: I0128 18:35:28.528617 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 18:35:28 crc kubenswrapper[4721]: I0128 18:35:28.528739 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 18:35:28 crc kubenswrapper[4721]: E0128 18:35:28.528769 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 18:35:28 crc kubenswrapper[4721]: I0128 18:35:28.528824 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 18:35:28 crc kubenswrapper[4721]: I0128 18:35:28.528970 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jqvck" Jan 28 18:35:28 crc kubenswrapper[4721]: E0128 18:35:28.529082 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 18:35:28 crc kubenswrapper[4721]: E0128 18:35:28.529367 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jqvck" podUID="f3440038-c980-4fb4-be99-235515ec221c" Jan 28 18:35:28 crc kubenswrapper[4721]: E0128 18:35:28.529551 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 18:35:28 crc kubenswrapper[4721]: I0128 18:35:28.553672 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:28 crc kubenswrapper[4721]: I0128 18:35:28.553720 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:28 crc kubenswrapper[4721]: I0128 18:35:28.553730 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:28 crc kubenswrapper[4721]: I0128 18:35:28.553750 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:28 crc kubenswrapper[4721]: I0128 18:35:28.553762 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:28Z","lastTransitionTime":"2026-01-28T18:35:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:35:28 crc kubenswrapper[4721]: I0128 18:35:28.656795 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:28 crc kubenswrapper[4721]: I0128 18:35:28.656917 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:28 crc kubenswrapper[4721]: I0128 18:35:28.656939 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:28 crc kubenswrapper[4721]: I0128 18:35:28.657435 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:28 crc kubenswrapper[4721]: I0128 18:35:28.657475 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:28Z","lastTransitionTime":"2026-01-28T18:35:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:35:28 crc kubenswrapper[4721]: I0128 18:35:28.760285 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:28 crc kubenswrapper[4721]: I0128 18:35:28.760367 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:28 crc kubenswrapper[4721]: I0128 18:35:28.760387 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:28 crc kubenswrapper[4721]: I0128 18:35:28.760418 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:28 crc kubenswrapper[4721]: I0128 18:35:28.760434 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:28Z","lastTransitionTime":"2026-01-28T18:35:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:35:28 crc kubenswrapper[4721]: I0128 18:35:28.863922 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:28 crc kubenswrapper[4721]: I0128 18:35:28.863978 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:28 crc kubenswrapper[4721]: I0128 18:35:28.863991 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:28 crc kubenswrapper[4721]: I0128 18:35:28.864011 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:28 crc kubenswrapper[4721]: I0128 18:35:28.864026 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:28Z","lastTransitionTime":"2026-01-28T18:35:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:35:28 crc kubenswrapper[4721]: I0128 18:35:28.966987 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:28 crc kubenswrapper[4721]: I0128 18:35:28.967046 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:28 crc kubenswrapper[4721]: I0128 18:35:28.967061 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:28 crc kubenswrapper[4721]: I0128 18:35:28.967083 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:28 crc kubenswrapper[4721]: I0128 18:35:28.967099 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:28Z","lastTransitionTime":"2026-01-28T18:35:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:35:29 crc kubenswrapper[4721]: I0128 18:35:29.069636 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:29 crc kubenswrapper[4721]: I0128 18:35:29.069706 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:29 crc kubenswrapper[4721]: I0128 18:35:29.069718 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:29 crc kubenswrapper[4721]: I0128 18:35:29.069740 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:29 crc kubenswrapper[4721]: I0128 18:35:29.069758 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:29Z","lastTransitionTime":"2026-01-28T18:35:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:35:29 crc kubenswrapper[4721]: I0128 18:35:29.173130 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:29 crc kubenswrapper[4721]: I0128 18:35:29.173203 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:29 crc kubenswrapper[4721]: I0128 18:35:29.173219 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:29 crc kubenswrapper[4721]: I0128 18:35:29.173237 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:29 crc kubenswrapper[4721]: I0128 18:35:29.173251 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:29Z","lastTransitionTime":"2026-01-28T18:35:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:35:29 crc kubenswrapper[4721]: I0128 18:35:29.276767 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:29 crc kubenswrapper[4721]: I0128 18:35:29.276823 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:29 crc kubenswrapper[4721]: I0128 18:35:29.276834 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:29 crc kubenswrapper[4721]: I0128 18:35:29.276854 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:29 crc kubenswrapper[4721]: I0128 18:35:29.276868 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:29Z","lastTransitionTime":"2026-01-28T18:35:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:35:29 crc kubenswrapper[4721]: I0128 18:35:29.379819 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:29 crc kubenswrapper[4721]: I0128 18:35:29.379867 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:29 crc kubenswrapper[4721]: I0128 18:35:29.379877 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:29 crc kubenswrapper[4721]: I0128 18:35:29.379896 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:29 crc kubenswrapper[4721]: I0128 18:35:29.379908 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:29Z","lastTransitionTime":"2026-01-28T18:35:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:35:29 crc kubenswrapper[4721]: I0128 18:35:29.481758 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:29 crc kubenswrapper[4721]: I0128 18:35:29.481869 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:29 crc kubenswrapper[4721]: I0128 18:35:29.481887 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:29 crc kubenswrapper[4721]: I0128 18:35:29.481916 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:29 crc kubenswrapper[4721]: I0128 18:35:29.481930 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:29Z","lastTransitionTime":"2026-01-28T18:35:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:35:29 crc kubenswrapper[4721]: I0128 18:35:29.526134 4721 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-23 22:47:58.010028293 +0000 UTC Jan 28 18:35:29 crc kubenswrapper[4721]: I0128 18:35:29.585210 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:29 crc kubenswrapper[4721]: I0128 18:35:29.585251 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:29 crc kubenswrapper[4721]: I0128 18:35:29.585262 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:29 crc kubenswrapper[4721]: I0128 18:35:29.585278 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:29 crc kubenswrapper[4721]: I0128 18:35:29.585292 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:29Z","lastTransitionTime":"2026-01-28T18:35:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:35:29 crc kubenswrapper[4721]: I0128 18:35:29.687573 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:29 crc kubenswrapper[4721]: I0128 18:35:29.687953 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:29 crc kubenswrapper[4721]: I0128 18:35:29.688132 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:29 crc kubenswrapper[4721]: I0128 18:35:29.688292 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:29 crc kubenswrapper[4721]: I0128 18:35:29.688416 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:29Z","lastTransitionTime":"2026-01-28T18:35:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:35:29 crc kubenswrapper[4721]: I0128 18:35:29.791820 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:29 crc kubenswrapper[4721]: I0128 18:35:29.792062 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:29 crc kubenswrapper[4721]: I0128 18:35:29.792129 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:29 crc kubenswrapper[4721]: I0128 18:35:29.792216 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:29 crc kubenswrapper[4721]: I0128 18:35:29.792288 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:29Z","lastTransitionTime":"2026-01-28T18:35:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:35:29 crc kubenswrapper[4721]: I0128 18:35:29.894679 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:29 crc kubenswrapper[4721]: I0128 18:35:29.894711 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:29 crc kubenswrapper[4721]: I0128 18:35:29.894720 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:29 crc kubenswrapper[4721]: I0128 18:35:29.894735 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:29 crc kubenswrapper[4721]: I0128 18:35:29.894745 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:29Z","lastTransitionTime":"2026-01-28T18:35:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:35:29 crc kubenswrapper[4721]: I0128 18:35:29.996927 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:29 crc kubenswrapper[4721]: I0128 18:35:29.997231 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:29 crc kubenswrapper[4721]: I0128 18:35:29.997313 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:29 crc kubenswrapper[4721]: I0128 18:35:29.997392 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:29 crc kubenswrapper[4721]: I0128 18:35:29.997586 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:29Z","lastTransitionTime":"2026-01-28T18:35:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:35:30 crc kubenswrapper[4721]: I0128 18:35:30.100547 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:30 crc kubenswrapper[4721]: I0128 18:35:30.100606 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:30 crc kubenswrapper[4721]: I0128 18:35:30.100621 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:30 crc kubenswrapper[4721]: I0128 18:35:30.100680 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:30 crc kubenswrapper[4721]: I0128 18:35:30.100692 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:30Z","lastTransitionTime":"2026-01-28T18:35:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:35:30 crc kubenswrapper[4721]: I0128 18:35:30.203165 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:30 crc kubenswrapper[4721]: I0128 18:35:30.203289 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:30 crc kubenswrapper[4721]: I0128 18:35:30.203315 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:30 crc kubenswrapper[4721]: I0128 18:35:30.203354 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:30 crc kubenswrapper[4721]: I0128 18:35:30.203378 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:30Z","lastTransitionTime":"2026-01-28T18:35:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:35:30 crc kubenswrapper[4721]: I0128 18:35:30.306615 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:30 crc kubenswrapper[4721]: I0128 18:35:30.306961 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:30 crc kubenswrapper[4721]: I0128 18:35:30.307050 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:30 crc kubenswrapper[4721]: I0128 18:35:30.307210 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:30 crc kubenswrapper[4721]: I0128 18:35:30.307348 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:30Z","lastTransitionTime":"2026-01-28T18:35:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 28 18:35:30 crc kubenswrapper[4721]: I0128 18:35:30.410599 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 18:35:30 crc kubenswrapper[4721]: I0128 18:35:30.410668 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 18:35:30 crc kubenswrapper[4721]: I0128 18:35:30.410685 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 18:35:30 crc kubenswrapper[4721]: I0128 18:35:30.410716 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 18:35:30 crc kubenswrapper[4721]: I0128 18:35:30.410734 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:30Z","lastTransitionTime":"2026-01-28T18:35:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 18:35:30 crc kubenswrapper[4721]: I0128 18:35:30.513424 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 18:35:30 crc kubenswrapper[4721]: I0128 18:35:30.513712 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 18:35:30 crc kubenswrapper[4721]: I0128 18:35:30.513786 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 18:35:30 crc kubenswrapper[4721]: I0128 18:35:30.513855 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 18:35:30 crc kubenswrapper[4721]: I0128 18:35:30.513920 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:30Z","lastTransitionTime":"2026-01-28T18:35:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 18:35:30 crc kubenswrapper[4721]: I0128 18:35:30.526406 4721 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 04:40:19.756605813 +0000 UTC
Jan 28 18:35:30 crc kubenswrapper[4721]: I0128 18:35:30.528732 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 28 18:35:30 crc kubenswrapper[4721]: I0128 18:35:30.528840 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 28 18:35:30 crc kubenswrapper[4721]: E0128 18:35:30.528905 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 28 18:35:30 crc kubenswrapper[4721]: I0128 18:35:30.528747 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jqvck"
Jan 28 18:35:30 crc kubenswrapper[4721]: E0128 18:35:30.528992 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 28 18:35:30 crc kubenswrapper[4721]: I0128 18:35:30.529137 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 28 18:35:30 crc kubenswrapper[4721]: E0128 18:35:30.529142 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jqvck" podUID="f3440038-c980-4fb4-be99-235515ec221c"
Jan 28 18:35:30 crc kubenswrapper[4721]: E0128 18:35:30.529462 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 28 18:35:30 crc kubenswrapper[4721]: I0128 18:35:30.623342 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 18:35:30 crc kubenswrapper[4721]: I0128 18:35:30.623384 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 18:35:30 crc kubenswrapper[4721]: I0128 18:35:30.623395 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 18:35:30 crc kubenswrapper[4721]: I0128 18:35:30.623411 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 18:35:30 crc kubenswrapper[4721]: I0128 18:35:30.623424 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:30Z","lastTransitionTime":"2026-01-28T18:35:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"} Jan 28 18:35:30 crc kubenswrapper[4721]: I0128 18:35:30.725482 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:30 crc kubenswrapper[4721]: I0128 18:35:30.725523 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:30 crc kubenswrapper[4721]: I0128 18:35:30.725531 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:30 crc kubenswrapper[4721]: I0128 18:35:30.725545 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:30 crc kubenswrapper[4721]: I0128 18:35:30.725553 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:30Z","lastTransitionTime":"2026-01-28T18:35:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:35:30 crc kubenswrapper[4721]: I0128 18:35:30.827939 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:30 crc kubenswrapper[4721]: I0128 18:35:30.828146 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:30 crc kubenswrapper[4721]: I0128 18:35:30.828247 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:30 crc kubenswrapper[4721]: I0128 18:35:30.828314 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:30 crc kubenswrapper[4721]: I0128 18:35:30.828381 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:30Z","lastTransitionTime":"2026-01-28T18:35:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:35:30 crc kubenswrapper[4721]: I0128 18:35:30.930470 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:30 crc kubenswrapper[4721]: I0128 18:35:30.930500 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:30 crc kubenswrapper[4721]: I0128 18:35:30.930507 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:30 crc kubenswrapper[4721]: I0128 18:35:30.930521 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:30 crc kubenswrapper[4721]: I0128 18:35:30.930529 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:30Z","lastTransitionTime":"2026-01-28T18:35:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:35:31 crc kubenswrapper[4721]: I0128 18:35:31.032934 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:31 crc kubenswrapper[4721]: I0128 18:35:31.032983 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:31 crc kubenswrapper[4721]: I0128 18:35:31.032996 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:31 crc kubenswrapper[4721]: I0128 18:35:31.033021 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:31 crc kubenswrapper[4721]: I0128 18:35:31.033034 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:31Z","lastTransitionTime":"2026-01-28T18:35:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:35:31 crc kubenswrapper[4721]: I0128 18:35:31.135485 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:31 crc kubenswrapper[4721]: I0128 18:35:31.135530 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:31 crc kubenswrapper[4721]: I0128 18:35:31.135543 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:31 crc kubenswrapper[4721]: I0128 18:35:31.135559 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:31 crc kubenswrapper[4721]: I0128 18:35:31.135572 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:31Z","lastTransitionTime":"2026-01-28T18:35:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:35:31 crc kubenswrapper[4721]: I0128 18:35:31.237502 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:31 crc kubenswrapper[4721]: I0128 18:35:31.237535 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:31 crc kubenswrapper[4721]: I0128 18:35:31.237542 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:31 crc kubenswrapper[4721]: I0128 18:35:31.237555 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:31 crc kubenswrapper[4721]: I0128 18:35:31.237564 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:31Z","lastTransitionTime":"2026-01-28T18:35:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:35:31 crc kubenswrapper[4721]: I0128 18:35:31.340365 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:31 crc kubenswrapper[4721]: I0128 18:35:31.340439 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:31 crc kubenswrapper[4721]: I0128 18:35:31.340449 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:31 crc kubenswrapper[4721]: I0128 18:35:31.340470 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:31 crc kubenswrapper[4721]: I0128 18:35:31.340482 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:31Z","lastTransitionTime":"2026-01-28T18:35:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:35:31 crc kubenswrapper[4721]: I0128 18:35:31.443455 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:31 crc kubenswrapper[4721]: I0128 18:35:31.443511 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:31 crc kubenswrapper[4721]: I0128 18:35:31.443521 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:31 crc kubenswrapper[4721]: I0128 18:35:31.443538 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:31 crc kubenswrapper[4721]: I0128 18:35:31.443547 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:31Z","lastTransitionTime":"2026-01-28T18:35:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:35:31 crc kubenswrapper[4721]: I0128 18:35:31.526880 4721 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-14 13:20:01.435563765 +0000 UTC Jan 28 18:35:31 crc kubenswrapper[4721]: I0128 18:35:31.545517 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:31 crc kubenswrapper[4721]: I0128 18:35:31.545553 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:31 crc kubenswrapper[4721]: I0128 18:35:31.545563 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:31 crc kubenswrapper[4721]: I0128 18:35:31.545578 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:31 crc kubenswrapper[4721]: I0128 18:35:31.545589 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:31Z","lastTransitionTime":"2026-01-28T18:35:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:35:31 crc kubenswrapper[4721]: I0128 18:35:31.647787 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:31 crc kubenswrapper[4721]: I0128 18:35:31.647823 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:31 crc kubenswrapper[4721]: I0128 18:35:31.647831 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:31 crc kubenswrapper[4721]: I0128 18:35:31.647844 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:31 crc kubenswrapper[4721]: I0128 18:35:31.647875 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:31Z","lastTransitionTime":"2026-01-28T18:35:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:35:31 crc kubenswrapper[4721]: I0128 18:35:31.749955 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:31 crc kubenswrapper[4721]: I0128 18:35:31.749996 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:31 crc kubenswrapper[4721]: I0128 18:35:31.750006 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:31 crc kubenswrapper[4721]: I0128 18:35:31.750019 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:31 crc kubenswrapper[4721]: I0128 18:35:31.750028 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:31Z","lastTransitionTime":"2026-01-28T18:35:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:35:31 crc kubenswrapper[4721]: I0128 18:35:31.852502 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:31 crc kubenswrapper[4721]: I0128 18:35:31.852553 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:31 crc kubenswrapper[4721]: I0128 18:35:31.852565 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:31 crc kubenswrapper[4721]: I0128 18:35:31.852583 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:31 crc kubenswrapper[4721]: I0128 18:35:31.852596 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:31Z","lastTransitionTime":"2026-01-28T18:35:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:35:31 crc kubenswrapper[4721]: I0128 18:35:31.954585 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:31 crc kubenswrapper[4721]: I0128 18:35:31.954664 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:31 crc kubenswrapper[4721]: I0128 18:35:31.954678 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:31 crc kubenswrapper[4721]: I0128 18:35:31.954692 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:31 crc kubenswrapper[4721]: I0128 18:35:31.954702 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:31Z","lastTransitionTime":"2026-01-28T18:35:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:35:32 crc kubenswrapper[4721]: I0128 18:35:32.057016 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:32 crc kubenswrapper[4721]: I0128 18:35:32.057056 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:32 crc kubenswrapper[4721]: I0128 18:35:32.057067 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:32 crc kubenswrapper[4721]: I0128 18:35:32.057083 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:32 crc kubenswrapper[4721]: I0128 18:35:32.057094 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:32Z","lastTransitionTime":"2026-01-28T18:35:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:35:32 crc kubenswrapper[4721]: I0128 18:35:32.159516 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:32 crc kubenswrapper[4721]: I0128 18:35:32.159554 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:32 crc kubenswrapper[4721]: I0128 18:35:32.159563 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:32 crc kubenswrapper[4721]: I0128 18:35:32.159577 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:32 crc kubenswrapper[4721]: I0128 18:35:32.159586 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:32Z","lastTransitionTime":"2026-01-28T18:35:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:35:32 crc kubenswrapper[4721]: I0128 18:35:32.261478 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:32 crc kubenswrapper[4721]: I0128 18:35:32.261513 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:32 crc kubenswrapper[4721]: I0128 18:35:32.261522 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:32 crc kubenswrapper[4721]: I0128 18:35:32.261536 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:32 crc kubenswrapper[4721]: I0128 18:35:32.261546 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:32Z","lastTransitionTime":"2026-01-28T18:35:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 28 18:35:32 crc kubenswrapper[4721]: I0128 18:35:32.363936 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 18:35:32 crc kubenswrapper[4721]: I0128 18:35:32.363987 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 18:35:32 crc kubenswrapper[4721]: I0128 18:35:32.364000 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 18:35:32 crc kubenswrapper[4721]: I0128 18:35:32.364023 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 18:35:32 crc kubenswrapper[4721]: I0128 18:35:32.364035 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:32Z","lastTransitionTime":"2026-01-28T18:35:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 18:35:32 crc kubenswrapper[4721]: I0128 18:35:32.466494 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 18:35:32 crc kubenswrapper[4721]: I0128 18:35:32.466543 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 18:35:32 crc kubenswrapper[4721]: I0128 18:35:32.466554 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 18:35:32 crc kubenswrapper[4721]: I0128 18:35:32.466572 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 18:35:32 crc kubenswrapper[4721]: I0128 18:35:32.466587 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:32Z","lastTransitionTime":"2026-01-28T18:35:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 18:35:32 crc kubenswrapper[4721]: I0128 18:35:32.527905 4721 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-19 23:58:49.396431105 +0000 UTC
Jan 28 18:35:32 crc kubenswrapper[4721]: I0128 18:35:32.528057 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jqvck"
Jan 28 18:35:32 crc kubenswrapper[4721]: I0128 18:35:32.528086 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 28 18:35:32 crc kubenswrapper[4721]: E0128 18:35:32.528152 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jqvck" podUID="f3440038-c980-4fb4-be99-235515ec221c"
Jan 28 18:35:32 crc kubenswrapper[4721]: E0128 18:35:32.528351 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 28 18:35:32 crc kubenswrapper[4721]: I0128 18:35:32.528409 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 28 18:35:32 crc kubenswrapper[4721]: E0128 18:35:32.528491 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 28 18:35:32 crc kubenswrapper[4721]: I0128 18:35:32.528375 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 28 18:35:32 crc kubenswrapper[4721]: E0128 18:35:32.528576 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 28 18:35:32 crc kubenswrapper[4721]: I0128 18:35:32.569591 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 18:35:32 crc kubenswrapper[4721]: I0128 18:35:32.569636 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 18:35:32 crc kubenswrapper[4721]: I0128 18:35:32.569646 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 18:35:32 crc kubenswrapper[4721]: I0128 18:35:32.569664 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 18:35:32 crc kubenswrapper[4721]: I0128 18:35:32.569676 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:32Z","lastTransitionTime":"2026-01-28T18:35:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"}
Jan 28 18:35:33 crc kubenswrapper[4721]: I0128 18:35:33.528763 4721 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-05 14:56:31.362854834 +0000 UTC
Jan 28 18:35:34 crc kubenswrapper[4721]: I0128 18:35:34.528053 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 28 18:35:34 crc kubenswrapper[4721]: I0128 18:35:34.528127 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 28 18:35:34 crc kubenswrapper[4721]: E0128 18:35:34.528146 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 28 18:35:34 crc kubenswrapper[4721]: I0128 18:35:34.528163 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 28 18:35:34 crc kubenswrapper[4721]: I0128 18:35:34.528225 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jqvck"
Jan 28 18:35:34 crc kubenswrapper[4721]: E0128 18:35:34.528333 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 28 18:35:34 crc kubenswrapper[4721]: E0128 18:35:34.528404 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 28 18:35:34 crc kubenswrapper[4721]: E0128 18:35:34.528493 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jqvck" podUID="f3440038-c980-4fb4-be99-235515ec221c"
Jan 28 18:35:34 crc kubenswrapper[4721]: I0128 18:35:34.529254 4721 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-07 01:37:35.989699654 +0000 UTC
Jan 28 18:35:35 crc kubenswrapper[4721]: I0128 18:35:35.447237 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 18:35:35 crc kubenswrapper[4721]: I0128 18:35:35.447283 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 18:35:35 crc kubenswrapper[4721]: I0128 18:35:35.447293 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 18:35:35 crc kubenswrapper[4721]: I0128 18:35:35.447309 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 18:35:35 crc kubenswrapper[4721]: I0128 18:35:35.447320 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:35Z","lastTransitionTime":"2026-01-28T18:35:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:35:35 crc kubenswrapper[4721]: I0128 18:35:35.529444 4721 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 15:43:45.621765459 +0000 UTC Jan 28 18:35:35 crc kubenswrapper[4721]: I0128 18:35:35.541213 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cdf9d0d3-f468-468d-a84a-376800ac08e1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:33:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ff405f7132b5a1a0e2abc66c8e4c0abbd732bdf90cb2b4b2867dd10b8e62921\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f0406ac4c224de266eab94d3b9c12d110ee36b7ccb34381fc284ade389ba042\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f0406ac4c224de266eab94d3b9c12d110ee36b7ccb34381fc284ade389ba042\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:33:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:33:56Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:35:35Z is after 2025-08-24T17:21:41Z" Jan 28 18:35:35 crc kubenswrapper[4721]: I0128 18:35:35.550899 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:35 crc kubenswrapper[4721]: I0128 18:35:35.550969 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:35 crc kubenswrapper[4721]: I0128 18:35:35.550984 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:35 crc kubenswrapper[4721]: I0128 18:35:35.551004 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:35 crc kubenswrapper[4721]: I0128 18:35:35.551016 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:35Z","lastTransitionTime":"2026-01-28T18:35:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:35:35 crc kubenswrapper[4721]: I0128 18:35:35.551963 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-rk2l2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bdd30376-1599-4efc-bb55-7585e8702b60\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa0ff1b9df47dfd65597208cdd31f5cb8ce7f4c9c170d298db28d6f04da7b7a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wknj5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168
.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:34:32Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-rk2l2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:35:35Z is after 2025-08-24T17:21:41Z" Jan 28 18:35:35 crc kubenswrapper[4721]: I0128 18:35:35.563904 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-x8hw8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8ac7e75e-c5bb-4b57-b2ba-9ebe8b8fbd88\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cba8dd293cf5ae7b0c987c6ee3b24da02d2687ee54292da92e28ca627ed3eaca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqp9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a3c1211b73ca96ac22854f0cb677a0088a679ad56b104ea6b8e0871884a3a71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\
\\"name\\\":\\\"kube-api-access-lqp9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:34:43Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-x8hw8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:35:35Z is after 2025-08-24T17:21:41Z" Jan 28 18:35:35 crc kubenswrapper[4721]: I0128 18:35:35.578543 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-jqvck" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f3440038-c980-4fb4-be99-235515ec221c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96np9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96np9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:34:44Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-jqvck\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:35:35Z is after 2025-08-24T17:21:41Z" Jan 28 18:35:35 crc kubenswrapper[4721]: I0128 18:35:35.604741 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"50459cb7-bb46-4f2f-a119-03a6102ad146\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:33:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97a290731225e8b998c044c9547db36b5af73889929db5fbc37b6b4170f5dbb7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9efdfabfba8f11d2bdeafad77520260137fcfff3ca51ae77afd4ee9b141f602\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3c1b4d37e11f8a4ee9539e5ea3cce77b7d382133b9e59ccd9c3e8aa2b308754\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52a1199a0611555765afe8fcf01774278848806
4c958d0c75c46d51ddc2ff9d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://316506ed1e9f91314d427e1b6c3a44ee58d0c4a3a8a93b97bdf098703a6a2f34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f1a6a669a0341baa2d7fb17bcebfe702f96d1192ca35c3f2d00e463778271d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f1a6a669a0341baa2d7fb17bcebfe702f96d1192ca35c3f2d00e463778271d6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:33:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d3a248e15466c89ce3b237a22ce54365004ea86dc1541e1ac44e028f91bf19f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d3a248e15466c89ce3b237a22ce54365004ea86dc1541e1ac44e028f91bf19f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:34:02Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f2e1f3a63445bad45029602f52863bfdf2fab495711e3318d68fee042530acb1\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f2e1f3a63445bad45029602f52863bfdf2fab495711e3318d68fee042530acb1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:34:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:33:56Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:35:35Z is after 2025-08-24T17:21:41Z" Jan 28 18:35:35 crc kubenswrapper[4721]: I0128 18:35:35.619420 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"94f18835-9a2d-4427-bc71-e4cd48b94c19\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:33:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a345c3bb49064600db61f16588229b6877293710e6c21c15d91746c2f1d50b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://93950542dfa7bda5686ef6d8ff927d14
baafd149893c732658e9aa3916be64ae\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://486a6284a1e3930aac13368062b8796b4c271214dd08cb60c7ae2ce7b4a45a05\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0339ccd2bd809dc715d0e334b596c5abdebca1af6ac5f7afefa81aa8baa470ed\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ca3243d75b5ce4aff178d82c45c7d1f765013233b5736600151adb4801aa462b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T18:34:21Z\\\",\\\"message\\\":\\\"le observer\\\\nW0128 18:34:20.724978 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0128 18:34:20.725313 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 18:34:20.726636 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2540251349/tls.crt::/tmp/serving-cert-2540251349/tls.key\\\\\\\"\\\\nI0128 18:34:21.177830 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0128 18:34:21.196258 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0128 18:34:21.196291 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0128 18:34:21.196317 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0128 18:34:21.196323 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0128 18:34:21.230729 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0128 18:34:21.230760 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0128 18:34:21.230771 1 
secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 18:34:21.230781 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 18:34:21.230789 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0128 18:34:21.230795 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0128 18:34:21.230800 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0128 18:34:21.230804 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0128 18:34:21.233981 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T18:34:20Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://90f1279d8262e042527207dbe56a1d08ba554650510bebfc53bd24cd84532026\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:06Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1499c7b625ec721799e4cf54e3de39ab75a752bbb0450a8eeffcdb6259f8a585\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1499c7b625ec721799e4cf54e3de39ab75a752bbb0450a8eeffcdb6259f8a585\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:33:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:33:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:35:35Z is after 2025-08-24T17:21:41Z" Jan 28 18:35:35 crc kubenswrapper[4721]: I0128 18:35:35.630695 4721 status_manager.go:875] 
"Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"86fc797f-6769-4801-8cb0-b9f25c9ec29b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:33:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:33:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e8c8cd370cc146f15325497afef171437a3c7c3d4e82cda99a321bf847399bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f712f75bf0603b358c696f1a62f2363254ab926615ab377291c7083308e3f2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:33:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6143e461a697375ca79df96494f8cf4a575e622428db241f50a52ae1c67acbc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e37a3261383445df
2e4050fb0b3f92c0afc6379c71eb061475783a72cd37459f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:33:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:35:35Z is after 2025-08-24T17:21:41Z" Jan 28 18:35:35 crc kubenswrapper[4721]: I0128 18:35:35.645686 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:35:35Z is after 2025-08-24T17:21:41Z" Jan 28 18:35:35 crc kubenswrapper[4721]: I0128 18:35:35.653538 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:35 crc kubenswrapper[4721]: I0128 18:35:35.653598 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:35 crc kubenswrapper[4721]: I0128 18:35:35.653610 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:35 crc kubenswrapper[4721]: I0128 18:35:35.653632 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:35 crc kubenswrapper[4721]: I0128 18:35:35.653650 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:35Z","lastTransitionTime":"2026-01-28T18:35:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:35:35 crc kubenswrapper[4721]: I0128 18:35:35.664011 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56147b9675ae31e5e8b4473346c69d90a1013ab084214a7fc34295f810b229a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5dc3277d9793a59f2e2a2b4d015782f383df20b594be435f894d24dbe5b837cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:35:35Z is after 2025-08-24T17:21:41Z" Jan 28 18:35:35 crc kubenswrapper[4721]: I0128 18:35:35.687658 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wr282" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"70686e42-b434-4ff9-9753-cfc870beef82\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ac8a8ea8e677c233fbcb1ac69446f0251b2e390e84a39cd77c7ccf1a2041908\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7lkbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://989948e16979a2cd8568ed93334056da9feb5bd7226f4124428dcffc09f13d41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7lkbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2cc9ef8160c40e72ae736136ef051b06552caea2539fad7969c5f923eba0c2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7lkbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7df50ebf5999c8b004f27f801fc166377523c7b42171fec957ab82c88b529794\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7lkbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11610a0739792fa0417c894ad5b1c0d46d7188b585f8f4c1a1e7e66cd6d8bd46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7lkbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://44279a74a34d26b6eee05ac26d51a21492dddab4288ca5e09665c191cbacd90c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7lkbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://693b094ac66f7858f7020708df804e9e12fa5a8c510841171e65ac69cb6a0e11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://693b094ac66f7858f7020708df804e9e12fa5a8c510841171e65ac69cb6a0e11\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T18:35:21Z\\\",\\\"message\\\":\\\"nt handler 7\\\\nI0128 18:35:21.396048 6815 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0128 18:35:21.396110 6815 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0128 18:35:21.396136 6815 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0128 18:35:21.396134 6815 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0128 18:35:21.396466 6815 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0128 18:35:21.396513 6815 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0128 18:35:21.396589 6815 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0128 18:35:21.396637 6815 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0128 18:35:21.396647 6815 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0128 18:35:21.396716 6815 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0128 18:35:21.396730 6815 factory.go:656] Stopping watch factory\\\\nI0128 18:35:21.396750 6815 ovnkube.go:599] Stopped ovnkube\\\\nI0128 18:35:21.396718 6815 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0128 18:35:21.396785 6815 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0128 18:35:21.396796 6815 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0128 18:35:21.397012 6815 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T18:35:20Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-wr282_openshift-ovn-kubernetes(70686e42-b434-4ff9-9753-cfc870beef82)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7lkbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://373ab38644697d3dbfe782ed66982ac1d5510d0a3bc39072c8113f71cc9e97f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7lkbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://733c81890010402d252a1945284a85a3278b603ede433530865f680a1be02477\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://733c81890010402d252a1945284a85a3278b603ede433530865f680a1be02477\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7lkbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:34:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-wr282\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:35:35Z is after 2025-08-24T17:21:41Z" Jan 28 18:35:35 crc kubenswrapper[4721]: I0128 18:35:35.700214 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rgqdt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c0a22020-3f34-4895-beec-2ed5d829ea79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:35:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:35:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2588c3d36133bd9b96114f5d12622916ac785bea9be47d12a3d76d8585c3e0ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9d26691be7d95ffd613cc84f59222c67d29ab459d1c42ca07d20fe928d7fbc4a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T18:35:18Z\\\",\\\"message\\\":\\\"2026-01-28T18:34:32+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_77ae07de-fe49-4308-874b-bb02ed4b202b\\\\n2026-01-28T18:34:32+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_77ae07de-fe49-4308-874b-bb02ed4b202b to 
/host/opt/cni/bin/\\\\n2026-01-28T18:34:33Z [verbose] multus-daemon started\\\\n2026-01-28T18:34:33Z [verbose] Readiness Indicator file check\\\\n2026-01-28T18:35:18Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T18:34:31Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:35:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l86pm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:34:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rgqdt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:35:35Z is after 2025-08-24T17:21:41Z" Jan 28 18:35:35 crc kubenswrapper[4721]: I0128 18:35:35.713197 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef61ab09457440960cf27586f65751f928ccc999db19fd16364262ce2449cd97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:35:35Z is after 2025-08-24T17:21:41Z" Jan 28 18:35:35 crc kubenswrapper[4721]: I0128 18:35:35.725298 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-lf92l" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"20d04cbd-fcf1-4d48-9cca-1dd29b13c938\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://709d01fbeba51f6f21b60d9fd60aa9b44ba9e4f2504c19b7fd8d279f8c13c1e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mvqr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:34:30Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-lf92l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:35:35Z is after 2025-08-24T17:21:41Z" Jan 28 18:35:35 crc kubenswrapper[4721]: I0128 18:35:35.739234 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7vsph" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"942c0bcf-8f75-42e8-a5c0-af4c640eb13c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ce38a3c3aa72d0816ff32ec77af100dc6f2a8761362992f01b88f836c44f65f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s84mk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5222e9a5ac55f20acbe1105e84f22c96889f410246b7ef423236f5d3e216f43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d5222e9a5ac55f20acbe1105e84f22c96889f410246b7ef423236f5d3e216f43\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s84mk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e702f148f20abb9e5e9453eb1b9ae0cc094138beb1fa048753ff3b497517e942\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e702f148f20abb9e5e9453eb1b9ae0cc094138beb1fa048753ff3b497517e942\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s84mk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a55ef87212072ac9f6d05d27d54f7a3f13add617b1a07a2fd1e0a652fe15cb3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a55ef87212072ac9f6d05d27d54f7a3f13add617b1a07a2fd1e0a652fe15cb3a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:34:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s84mk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea7d40aa704ad5ddaaa471ab8606dcece38b229fd8affcce9bb1959e5c19f092\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ea7d40aa704ad5ddaaa471ab8606dcece38b229fd8affcce9bb1959e5c19f092\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:34:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s84mk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1829c2eed51af896c5ccfa25c1f23ea971ea9d54e9db53415095c5ea43ac4062\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1829c2eed51af896c5ccfa25c1f23ea971ea9d54e9db53415095c5ea43ac4062\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:34:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s84mk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b576c2dcc0ad83742dbc75f744d08eb8b881f9be4f9d16f4d4d3004ce174f24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b576c2dcc0ad83742dbc75f744d08eb8b881f9be4f9d16f4d4d3004ce174f24\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:34:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s84mk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:34:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7vsph\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:35:35Z is after 2025-08-24T17:21:41Z" Jan 28 18:35:35 crc kubenswrapper[4721]: I0128 18:35:35.751032 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e670fafb-703c-4cc9-b670-d25ae62d87a0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:33:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f97092be25e90c4a15af043397b1cbcefdb3a3511a80a046496bef807abc8f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8351f80de5ab5d11c5a87270e69a8ebd20b3a804671e20b991f1fc77ba27bae8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8465df12048ab0feaba16e1935fa17feb4fe967ab3e4ef37981bed51ff77911\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47389981052e7ab8530932e01c95d9b178f2c55b21e03b460d3f02dddbcf4830\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://47389981052e7ab8530932e01c95d9b178f2c55b21e03b460d3f02dddbcf4830\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:33:58Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:33:56Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:35:35Z is after 2025-08-24T17:21:41Z" Jan 28 18:35:35 crc kubenswrapper[4721]: I0128 18:35:35.755338 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:35 crc kubenswrapper[4721]: I0128 18:35:35.755391 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:35 crc kubenswrapper[4721]: I0128 18:35:35.755403 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:35 crc kubenswrapper[4721]: I0128 18:35:35.755422 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:35 crc kubenswrapper[4721]: I0128 18:35:35.755433 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:35Z","lastTransitionTime":"2026-01-28T18:35:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:35:35 crc kubenswrapper[4721]: I0128 18:35:35.764685 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:24Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:35:35Z is after 2025-08-24T17:21:41Z" Jan 28 18:35:35 crc kubenswrapper[4721]: I0128 18:35:35.780736 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:35:35Z is after 2025-08-24T17:21:41Z" Jan 28 18:35:35 crc kubenswrapper[4721]: I0128 18:35:35.792343 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa3af2d00ecbcf4d30c4c81191306dd45625f55032058b314ce2b91f6b2033e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:35:35Z is after 2025-08-24T17:21:41Z" Jan 28 18:35:35 crc kubenswrapper[4721]: I0128 18:35:35.801609 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-76rx2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e3427a4-9a03-4a08-bf7f-7a5e96290ad6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9843dea4333a77dd5b0005984b9f8e7c7c993b28f89f5d6432477bfde3383339\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4cj5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bbd26672afb8ed608228f3f2101f430b1eaa5ccea0e8074fad597dec347c4522\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4cj5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\
\\":\\\"2026-01-28T18:34:30Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-76rx2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:35:35Z is after 2025-08-24T17:21:41Z" Jan 28 18:35:35 crc kubenswrapper[4721]: I0128 18:35:35.858292 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:35 crc kubenswrapper[4721]: I0128 18:35:35.858332 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:35 crc kubenswrapper[4721]: I0128 18:35:35.858341 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:35 crc kubenswrapper[4721]: I0128 18:35:35.858354 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:35 crc kubenswrapper[4721]: I0128 18:35:35.858362 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:35Z","lastTransitionTime":"2026-01-28T18:35:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:35:35 crc kubenswrapper[4721]: I0128 18:35:35.960582 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:35 crc kubenswrapper[4721]: I0128 18:35:35.960638 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:35 crc kubenswrapper[4721]: I0128 18:35:35.960650 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:35 crc kubenswrapper[4721]: I0128 18:35:35.960670 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:35 crc kubenswrapper[4721]: I0128 18:35:35.960683 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:35Z","lastTransitionTime":"2026-01-28T18:35:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:35:36 crc kubenswrapper[4721]: I0128 18:35:36.062205 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:36 crc kubenswrapper[4721]: I0128 18:35:36.062236 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:36 crc kubenswrapper[4721]: I0128 18:35:36.062243 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:36 crc kubenswrapper[4721]: I0128 18:35:36.062255 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:36 crc kubenswrapper[4721]: I0128 18:35:36.062263 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:36Z","lastTransitionTime":"2026-01-28T18:35:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:35:36 crc kubenswrapper[4721]: I0128 18:35:36.164121 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:36 crc kubenswrapper[4721]: I0128 18:35:36.164192 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:36 crc kubenswrapper[4721]: I0128 18:35:36.164203 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:36 crc kubenswrapper[4721]: I0128 18:35:36.164221 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:36 crc kubenswrapper[4721]: I0128 18:35:36.164235 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:36Z","lastTransitionTime":"2026-01-28T18:35:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:35:36 crc kubenswrapper[4721]: I0128 18:35:36.266714 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:36 crc kubenswrapper[4721]: I0128 18:35:36.266754 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:36 crc kubenswrapper[4721]: I0128 18:35:36.266765 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:36 crc kubenswrapper[4721]: I0128 18:35:36.266783 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:36 crc kubenswrapper[4721]: I0128 18:35:36.266794 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:36Z","lastTransitionTime":"2026-01-28T18:35:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:35:36 crc kubenswrapper[4721]: I0128 18:35:36.369130 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:36 crc kubenswrapper[4721]: I0128 18:35:36.369208 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:36 crc kubenswrapper[4721]: I0128 18:35:36.369221 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:36 crc kubenswrapper[4721]: I0128 18:35:36.369242 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:36 crc kubenswrapper[4721]: I0128 18:35:36.369259 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:36Z","lastTransitionTime":"2026-01-28T18:35:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:35:36 crc kubenswrapper[4721]: I0128 18:35:36.471457 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:36 crc kubenswrapper[4721]: I0128 18:35:36.471502 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:36 crc kubenswrapper[4721]: I0128 18:35:36.471514 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:36 crc kubenswrapper[4721]: I0128 18:35:36.471531 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:36 crc kubenswrapper[4721]: I0128 18:35:36.471542 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:36Z","lastTransitionTime":"2026-01-28T18:35:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:35:36 crc kubenswrapper[4721]: I0128 18:35:36.528416 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 18:35:36 crc kubenswrapper[4721]: I0128 18:35:36.528411 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 18:35:36 crc kubenswrapper[4721]: I0128 18:35:36.528430 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jqvck" Jan 28 18:35:36 crc kubenswrapper[4721]: I0128 18:35:36.528435 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 18:35:36 crc kubenswrapper[4721]: E0128 18:35:36.528555 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 18:35:36 crc kubenswrapper[4721]: E0128 18:35:36.528872 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jqvck" podUID="f3440038-c980-4fb4-be99-235515ec221c" Jan 28 18:35:36 crc kubenswrapper[4721]: E0128 18:35:36.528865 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 18:35:36 crc kubenswrapper[4721]: E0128 18:35:36.528927 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 18:35:36 crc kubenswrapper[4721]: I0128 18:35:36.530471 4721 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-06 04:56:51.990475503 +0000 UTC Jan 28 18:35:36 crc kubenswrapper[4721]: I0128 18:35:36.573865 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:36 crc kubenswrapper[4721]: I0128 18:35:36.573912 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:36 crc kubenswrapper[4721]: I0128 18:35:36.573920 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:36 crc kubenswrapper[4721]: I0128 18:35:36.573936 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:36 crc kubenswrapper[4721]: I0128 18:35:36.573946 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:36Z","lastTransitionTime":"2026-01-28T18:35:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:35:36 crc kubenswrapper[4721]: I0128 18:35:36.676252 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:36 crc kubenswrapper[4721]: I0128 18:35:36.676307 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:36 crc kubenswrapper[4721]: I0128 18:35:36.676317 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:36 crc kubenswrapper[4721]: I0128 18:35:36.676332 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:36 crc kubenswrapper[4721]: I0128 18:35:36.676342 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:36Z","lastTransitionTime":"2026-01-28T18:35:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:35:36 crc kubenswrapper[4721]: I0128 18:35:36.778398 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:36 crc kubenswrapper[4721]: I0128 18:35:36.778430 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:36 crc kubenswrapper[4721]: I0128 18:35:36.778442 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:36 crc kubenswrapper[4721]: I0128 18:35:36.778459 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:36 crc kubenswrapper[4721]: I0128 18:35:36.778469 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:36Z","lastTransitionTime":"2026-01-28T18:35:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:35:36 crc kubenswrapper[4721]: I0128 18:35:36.880976 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:36 crc kubenswrapper[4721]: I0128 18:35:36.881032 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:36 crc kubenswrapper[4721]: I0128 18:35:36.881043 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:36 crc kubenswrapper[4721]: I0128 18:35:36.881060 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:36 crc kubenswrapper[4721]: I0128 18:35:36.881072 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:36Z","lastTransitionTime":"2026-01-28T18:35:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:35:36 crc kubenswrapper[4721]: I0128 18:35:36.983664 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:36 crc kubenswrapper[4721]: I0128 18:35:36.983697 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:36 crc kubenswrapper[4721]: I0128 18:35:36.983705 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:36 crc kubenswrapper[4721]: I0128 18:35:36.983719 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:36 crc kubenswrapper[4721]: I0128 18:35:36.983727 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:36Z","lastTransitionTime":"2026-01-28T18:35:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:35:37 crc kubenswrapper[4721]: I0128 18:35:37.085813 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:37 crc kubenswrapper[4721]: I0128 18:35:37.085858 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:37 crc kubenswrapper[4721]: I0128 18:35:37.085886 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:37 crc kubenswrapper[4721]: I0128 18:35:37.085905 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:37 crc kubenswrapper[4721]: I0128 18:35:37.085915 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:37Z","lastTransitionTime":"2026-01-28T18:35:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:35:37 crc kubenswrapper[4721]: I0128 18:35:37.188624 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:37 crc kubenswrapper[4721]: I0128 18:35:37.188670 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:37 crc kubenswrapper[4721]: I0128 18:35:37.188681 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:37 crc kubenswrapper[4721]: I0128 18:35:37.188697 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:37 crc kubenswrapper[4721]: I0128 18:35:37.188709 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:37Z","lastTransitionTime":"2026-01-28T18:35:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:35:37 crc kubenswrapper[4721]: I0128 18:35:37.241000 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:37 crc kubenswrapper[4721]: I0128 18:35:37.241035 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:37 crc kubenswrapper[4721]: I0128 18:35:37.241043 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:37 crc kubenswrapper[4721]: I0128 18:35:37.241058 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:37 crc kubenswrapper[4721]: I0128 18:35:37.241067 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:37Z","lastTransitionTime":"2026-01-28T18:35:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:35:37 crc kubenswrapper[4721]: E0128 18:35:37.262359 4721 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:35:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:35:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:35:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:35:37Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:35:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:35:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:35:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:35:37Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7e2b6ea9-0dbd-4f62-9b1e-c7fac2eb6b3f\\\",\\\"systemUUID\\\":\\\"09e691cb-0cac-419d-a3e2-104cada8c62f\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:35:37Z is after 2025-08-24T17:21:41Z" Jan 28 18:35:37 crc kubenswrapper[4721]: I0128 18:35:37.266069 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:37 crc kubenswrapper[4721]: I0128 18:35:37.266111 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 28 18:35:37 crc kubenswrapper[4721]: I0128 18:35:37.266123 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:37 crc kubenswrapper[4721]: I0128 18:35:37.266141 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:37 crc kubenswrapper[4721]: I0128 18:35:37.266152 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:37Z","lastTransitionTime":"2026-01-28T18:35:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:35:37 crc kubenswrapper[4721]: E0128 18:35:37.279428 4721 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:35:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:35:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:35:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:35:37Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:35:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:35:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:35:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:35:37Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7e2b6ea9-0dbd-4f62-9b1e-c7fac2eb6b3f\\\",\\\"systemUUID\\\":\\\"09e691cb-0cac-419d-a3e2-104cada8c62f\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:35:37Z is after 2025-08-24T17:21:41Z" Jan 28 18:35:37 crc kubenswrapper[4721]: I0128 18:35:37.283652 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:37 crc kubenswrapper[4721]: I0128 18:35:37.283680 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 28 18:35:37 crc kubenswrapper[4721]: I0128 18:35:37.283689 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:37 crc kubenswrapper[4721]: I0128 18:35:37.283702 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:37 crc kubenswrapper[4721]: I0128 18:35:37.283711 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:37Z","lastTransitionTime":"2026-01-28T18:35:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:35:37 crc kubenswrapper[4721]: E0128 18:35:37.304055 4721 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:35:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:35:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:35:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:35:37Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:35:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:35:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:35:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:35:37Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7e2b6ea9-0dbd-4f62-9b1e-c7fac2eb6b3f\\\",\\\"systemUUID\\\":\\\"09e691cb-0cac-419d-a3e2-104cada8c62f\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:35:37Z is after 2025-08-24T17:21:41Z" Jan 28 18:35:37 crc kubenswrapper[4721]: I0128 18:35:37.308178 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:37 crc kubenswrapper[4721]: I0128 18:35:37.308231 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 28 18:35:37 crc kubenswrapper[4721]: I0128 18:35:37.308244 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:37 crc kubenswrapper[4721]: I0128 18:35:37.308261 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:37 crc kubenswrapper[4721]: I0128 18:35:37.308272 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:37Z","lastTransitionTime":"2026-01-28T18:35:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:35:37 crc kubenswrapper[4721]: E0128 18:35:37.319954 4721 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:35:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:35:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:35:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:35:37Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:35:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:35:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:35:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:35:37Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7e2b6ea9-0dbd-4f62-9b1e-c7fac2eb6b3f\\\",\\\"systemUUID\\\":\\\"09e691cb-0cac-419d-a3e2-104cada8c62f\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:35:37Z is after 2025-08-24T17:21:41Z" Jan 28 18:35:37 crc kubenswrapper[4721]: I0128 18:35:37.324100 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:37 crc kubenswrapper[4721]: I0128 18:35:37.324220 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 28 18:35:37 crc kubenswrapper[4721]: I0128 18:35:37.324235 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:37 crc kubenswrapper[4721]: I0128 18:35:37.324259 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:37 crc kubenswrapper[4721]: I0128 18:35:37.324279 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:37Z","lastTransitionTime":"2026-01-28T18:35:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:35:37 crc kubenswrapper[4721]: E0128 18:35:37.336758 4721 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:35:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:35:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:35:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:35:37Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:35:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:35:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:35:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:35:37Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7e2b6ea9-0dbd-4f62-9b1e-c7fac2eb6b3f\\\",\\\"systemUUID\\\":\\\"09e691cb-0cac-419d-a3e2-104cada8c62f\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:35:37Z is after 2025-08-24T17:21:41Z" Jan 28 18:35:37 crc kubenswrapper[4721]: E0128 18:35:37.336887 4721 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 28 18:35:37 crc kubenswrapper[4721]: I0128 18:35:37.338294 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 28 18:35:37 crc kubenswrapper[4721]: I0128 18:35:37.338326 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:37 crc kubenswrapper[4721]: I0128 18:35:37.338337 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:37 crc kubenswrapper[4721]: I0128 18:35:37.338355 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:37 crc kubenswrapper[4721]: I0128 18:35:37.338366 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:37Z","lastTransitionTime":"2026-01-28T18:35:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:35:37 crc kubenswrapper[4721]: I0128 18:35:37.440684 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:37 crc kubenswrapper[4721]: I0128 18:35:37.440716 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:37 crc kubenswrapper[4721]: I0128 18:35:37.440724 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:37 crc kubenswrapper[4721]: I0128 18:35:37.440737 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:37 crc kubenswrapper[4721]: I0128 18:35:37.440747 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:37Z","lastTransitionTime":"2026-01-28T18:35:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:35:37 crc kubenswrapper[4721]: I0128 18:35:37.528795 4721 scope.go:117] "RemoveContainer" containerID="693b094ac66f7858f7020708df804e9e12fa5a8c510841171e65ac69cb6a0e11" Jan 28 18:35:37 crc kubenswrapper[4721]: E0128 18:35:37.528951 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-wr282_openshift-ovn-kubernetes(70686e42-b434-4ff9-9753-cfc870beef82)\"" pod="openshift-ovn-kubernetes/ovnkube-node-wr282" podUID="70686e42-b434-4ff9-9753-cfc870beef82" Jan 28 18:35:37 crc kubenswrapper[4721]: I0128 18:35:37.531438 4721 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-14 18:21:30.882861992 +0000 UTC Jan 28 18:35:37 crc kubenswrapper[4721]: I0128 18:35:37.543138 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:37 crc kubenswrapper[4721]: I0128 18:35:37.543191 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:37 crc kubenswrapper[4721]: I0128 18:35:37.543205 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:37 crc kubenswrapper[4721]: I0128 18:35:37.543220 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:37 crc kubenswrapper[4721]: I0128 18:35:37.543232 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:37Z","lastTransitionTime":"2026-01-28T18:35:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:35:37 crc kubenswrapper[4721]: I0128 18:35:37.646322 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:37 crc kubenswrapper[4721]: I0128 18:35:37.646377 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:37 crc kubenswrapper[4721]: I0128 18:35:37.646389 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:37 crc kubenswrapper[4721]: I0128 18:35:37.646409 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:37 crc kubenswrapper[4721]: I0128 18:35:37.646421 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:37Z","lastTransitionTime":"2026-01-28T18:35:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:35:37 crc kubenswrapper[4721]: I0128 18:35:37.749126 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:37 crc kubenswrapper[4721]: I0128 18:35:37.749154 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:37 crc kubenswrapper[4721]: I0128 18:35:37.749162 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:37 crc kubenswrapper[4721]: I0128 18:35:37.749187 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:37 crc kubenswrapper[4721]: I0128 18:35:37.749200 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:37Z","lastTransitionTime":"2026-01-28T18:35:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:35:37 crc kubenswrapper[4721]: I0128 18:35:37.851756 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:37 crc kubenswrapper[4721]: I0128 18:35:37.851795 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:37 crc kubenswrapper[4721]: I0128 18:35:37.851813 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:37 crc kubenswrapper[4721]: I0128 18:35:37.851831 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:37 crc kubenswrapper[4721]: I0128 18:35:37.851842 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:37Z","lastTransitionTime":"2026-01-28T18:35:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:35:37 crc kubenswrapper[4721]: I0128 18:35:37.954260 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:37 crc kubenswrapper[4721]: I0128 18:35:37.954308 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:37 crc kubenswrapper[4721]: I0128 18:35:37.954321 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:37 crc kubenswrapper[4721]: I0128 18:35:37.954340 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:37 crc kubenswrapper[4721]: I0128 18:35:37.954351 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:37Z","lastTransitionTime":"2026-01-28T18:35:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:35:38 crc kubenswrapper[4721]: I0128 18:35:38.055951 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:38 crc kubenswrapper[4721]: I0128 18:35:38.055997 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:38 crc kubenswrapper[4721]: I0128 18:35:38.056007 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:38 crc kubenswrapper[4721]: I0128 18:35:38.056024 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:38 crc kubenswrapper[4721]: I0128 18:35:38.056035 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:38Z","lastTransitionTime":"2026-01-28T18:35:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:35:38 crc kubenswrapper[4721]: I0128 18:35:38.158948 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:38 crc kubenswrapper[4721]: I0128 18:35:38.159301 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:38 crc kubenswrapper[4721]: I0128 18:35:38.159388 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:38 crc kubenswrapper[4721]: I0128 18:35:38.159469 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:38 crc kubenswrapper[4721]: I0128 18:35:38.159535 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:38Z","lastTransitionTime":"2026-01-28T18:35:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:35:38 crc kubenswrapper[4721]: I0128 18:35:38.261402 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:38 crc kubenswrapper[4721]: I0128 18:35:38.261432 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:38 crc kubenswrapper[4721]: I0128 18:35:38.261440 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:38 crc kubenswrapper[4721]: I0128 18:35:38.261453 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:38 crc kubenswrapper[4721]: I0128 18:35:38.261461 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:38Z","lastTransitionTime":"2026-01-28T18:35:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:35:38 crc kubenswrapper[4721]: I0128 18:35:38.363831 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:38 crc kubenswrapper[4721]: I0128 18:35:38.363866 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:38 crc kubenswrapper[4721]: I0128 18:35:38.363877 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:38 crc kubenswrapper[4721]: I0128 18:35:38.363893 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:38 crc kubenswrapper[4721]: I0128 18:35:38.363905 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:38Z","lastTransitionTime":"2026-01-28T18:35:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:35:38 crc kubenswrapper[4721]: I0128 18:35:38.466202 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:38 crc kubenswrapper[4721]: I0128 18:35:38.466241 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:38 crc kubenswrapper[4721]: I0128 18:35:38.466249 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:38 crc kubenswrapper[4721]: I0128 18:35:38.466264 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:38 crc kubenswrapper[4721]: I0128 18:35:38.466274 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:38Z","lastTransitionTime":"2026-01-28T18:35:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:35:38 crc kubenswrapper[4721]: I0128 18:35:38.528233 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jqvck" Jan 28 18:35:38 crc kubenswrapper[4721]: I0128 18:35:38.528300 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 18:35:38 crc kubenswrapper[4721]: I0128 18:35:38.528359 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 18:35:38 crc kubenswrapper[4721]: E0128 18:35:38.528501 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-jqvck" podUID="f3440038-c980-4fb4-be99-235515ec221c" Jan 28 18:35:38 crc kubenswrapper[4721]: I0128 18:35:38.528567 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 18:35:38 crc kubenswrapper[4721]: E0128 18:35:38.528689 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 18:35:38 crc kubenswrapper[4721]: E0128 18:35:38.528709 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 18:35:38 crc kubenswrapper[4721]: E0128 18:35:38.528747 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 18:35:38 crc kubenswrapper[4721]: I0128 18:35:38.532248 4721 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-03 06:00:01.178367726 +0000 UTC Jan 28 18:35:38 crc kubenswrapper[4721]: I0128 18:35:38.568396 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:38 crc kubenswrapper[4721]: I0128 18:35:38.568473 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:38 crc kubenswrapper[4721]: I0128 18:35:38.568487 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:38 crc kubenswrapper[4721]: I0128 18:35:38.568507 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:38 crc kubenswrapper[4721]: I0128 18:35:38.568519 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:38Z","lastTransitionTime":"2026-01-28T18:35:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:35:38 crc kubenswrapper[4721]: I0128 18:35:38.670592 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:38 crc kubenswrapper[4721]: I0128 18:35:38.670659 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:38 crc kubenswrapper[4721]: I0128 18:35:38.670676 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:38 crc kubenswrapper[4721]: I0128 18:35:38.670695 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:38 crc kubenswrapper[4721]: I0128 18:35:38.670710 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:38Z","lastTransitionTime":"2026-01-28T18:35:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:35:38 crc kubenswrapper[4721]: I0128 18:35:38.773153 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:38 crc kubenswrapper[4721]: I0128 18:35:38.773223 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:38 crc kubenswrapper[4721]: I0128 18:35:38.773233 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:38 crc kubenswrapper[4721]: I0128 18:35:38.773248 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:38 crc kubenswrapper[4721]: I0128 18:35:38.773258 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:38Z","lastTransitionTime":"2026-01-28T18:35:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:35:38 crc kubenswrapper[4721]: I0128 18:35:38.875355 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:38 crc kubenswrapper[4721]: I0128 18:35:38.875401 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:38 crc kubenswrapper[4721]: I0128 18:35:38.875416 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:38 crc kubenswrapper[4721]: I0128 18:35:38.875434 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:38 crc kubenswrapper[4721]: I0128 18:35:38.875445 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:38Z","lastTransitionTime":"2026-01-28T18:35:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:35:38 crc kubenswrapper[4721]: I0128 18:35:38.977651 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:38 crc kubenswrapper[4721]: I0128 18:35:38.977689 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:38 crc kubenswrapper[4721]: I0128 18:35:38.977701 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:38 crc kubenswrapper[4721]: I0128 18:35:38.977719 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:38 crc kubenswrapper[4721]: I0128 18:35:38.977731 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:38Z","lastTransitionTime":"2026-01-28T18:35:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:35:39 crc kubenswrapper[4721]: I0128 18:35:39.080308 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:39 crc kubenswrapper[4721]: I0128 18:35:39.080358 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:39 crc kubenswrapper[4721]: I0128 18:35:39.080368 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:39 crc kubenswrapper[4721]: I0128 18:35:39.080387 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:39 crc kubenswrapper[4721]: I0128 18:35:39.080398 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:39Z","lastTransitionTime":"2026-01-28T18:35:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:35:39 crc kubenswrapper[4721]: I0128 18:35:39.183351 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:39 crc kubenswrapper[4721]: I0128 18:35:39.183386 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:39 crc kubenswrapper[4721]: I0128 18:35:39.183397 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:39 crc kubenswrapper[4721]: I0128 18:35:39.183414 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:39 crc kubenswrapper[4721]: I0128 18:35:39.183425 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:39Z","lastTransitionTime":"2026-01-28T18:35:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:35:39 crc kubenswrapper[4721]: I0128 18:35:39.285434 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:39 crc kubenswrapper[4721]: I0128 18:35:39.285472 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:39 crc kubenswrapper[4721]: I0128 18:35:39.285502 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:39 crc kubenswrapper[4721]: I0128 18:35:39.285518 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:39 crc kubenswrapper[4721]: I0128 18:35:39.285529 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:39Z","lastTransitionTime":"2026-01-28T18:35:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:35:39 crc kubenswrapper[4721]: I0128 18:35:39.387963 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:39 crc kubenswrapper[4721]: I0128 18:35:39.388032 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:39 crc kubenswrapper[4721]: I0128 18:35:39.388047 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:39 crc kubenswrapper[4721]: I0128 18:35:39.388064 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:39 crc kubenswrapper[4721]: I0128 18:35:39.388102 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:39Z","lastTransitionTime":"2026-01-28T18:35:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:35:39 crc kubenswrapper[4721]: I0128 18:35:39.491717 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:39 crc kubenswrapper[4721]: I0128 18:35:39.491777 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:39 crc kubenswrapper[4721]: I0128 18:35:39.491789 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:39 crc kubenswrapper[4721]: I0128 18:35:39.491811 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:39 crc kubenswrapper[4721]: I0128 18:35:39.491824 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:39Z","lastTransitionTime":"2026-01-28T18:35:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:35:39 crc kubenswrapper[4721]: I0128 18:35:39.532543 4721 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-05 08:23:13.91080054 +0000 UTC Jan 28 18:35:39 crc kubenswrapper[4721]: I0128 18:35:39.594861 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:39 crc kubenswrapper[4721]: I0128 18:35:39.594909 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:39 crc kubenswrapper[4721]: I0128 18:35:39.594921 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:39 crc kubenswrapper[4721]: I0128 18:35:39.594939 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:39 crc kubenswrapper[4721]: I0128 18:35:39.594950 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:39Z","lastTransitionTime":"2026-01-28T18:35:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:35:39 crc kubenswrapper[4721]: I0128 18:35:39.697157 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:39 crc kubenswrapper[4721]: I0128 18:35:39.697274 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:39 crc kubenswrapper[4721]: I0128 18:35:39.697287 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:39 crc kubenswrapper[4721]: I0128 18:35:39.697305 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:39 crc kubenswrapper[4721]: I0128 18:35:39.697316 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:39Z","lastTransitionTime":"2026-01-28T18:35:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:35:39 crc kubenswrapper[4721]: I0128 18:35:39.800630 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:39 crc kubenswrapper[4721]: I0128 18:35:39.800693 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:39 crc kubenswrapper[4721]: I0128 18:35:39.800717 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:39 crc kubenswrapper[4721]: I0128 18:35:39.800742 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:39 crc kubenswrapper[4721]: I0128 18:35:39.800760 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:39Z","lastTransitionTime":"2026-01-28T18:35:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:35:39 crc kubenswrapper[4721]: I0128 18:35:39.903568 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:39 crc kubenswrapper[4721]: I0128 18:35:39.903635 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:39 crc kubenswrapper[4721]: I0128 18:35:39.903649 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:39 crc kubenswrapper[4721]: I0128 18:35:39.903674 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:39 crc kubenswrapper[4721]: I0128 18:35:39.903687 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:39Z","lastTransitionTime":"2026-01-28T18:35:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:35:40 crc kubenswrapper[4721]: I0128 18:35:40.005614 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:40 crc kubenswrapper[4721]: I0128 18:35:40.005676 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:40 crc kubenswrapper[4721]: I0128 18:35:40.005694 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:40 crc kubenswrapper[4721]: I0128 18:35:40.005718 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:40 crc kubenswrapper[4721]: I0128 18:35:40.005735 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:40Z","lastTransitionTime":"2026-01-28T18:35:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:35:40 crc kubenswrapper[4721]: I0128 18:35:40.108862 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:40 crc kubenswrapper[4721]: I0128 18:35:40.108914 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:40 crc kubenswrapper[4721]: I0128 18:35:40.108929 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:40 crc kubenswrapper[4721]: I0128 18:35:40.108947 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:40 crc kubenswrapper[4721]: I0128 18:35:40.108957 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:40Z","lastTransitionTime":"2026-01-28T18:35:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:35:40 crc kubenswrapper[4721]: I0128 18:35:40.212128 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:40 crc kubenswrapper[4721]: I0128 18:35:40.212303 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:40 crc kubenswrapper[4721]: I0128 18:35:40.212316 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:40 crc kubenswrapper[4721]: I0128 18:35:40.212334 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:40 crc kubenswrapper[4721]: I0128 18:35:40.212345 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:40Z","lastTransitionTime":"2026-01-28T18:35:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:35:40 crc kubenswrapper[4721]: I0128 18:35:40.315690 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:40 crc kubenswrapper[4721]: I0128 18:35:40.315755 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:40 crc kubenswrapper[4721]: I0128 18:35:40.315769 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:40 crc kubenswrapper[4721]: I0128 18:35:40.315795 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:40 crc kubenswrapper[4721]: I0128 18:35:40.315809 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:40Z","lastTransitionTime":"2026-01-28T18:35:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:35:40 crc kubenswrapper[4721]: I0128 18:35:40.418615 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:40 crc kubenswrapper[4721]: I0128 18:35:40.418651 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:40 crc kubenswrapper[4721]: I0128 18:35:40.418660 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:40 crc kubenswrapper[4721]: I0128 18:35:40.418674 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:40 crc kubenswrapper[4721]: I0128 18:35:40.418737 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:40Z","lastTransitionTime":"2026-01-28T18:35:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:35:40 crc kubenswrapper[4721]: I0128 18:35:40.521337 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:40 crc kubenswrapper[4721]: I0128 18:35:40.521389 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:40 crc kubenswrapper[4721]: I0128 18:35:40.521401 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:40 crc kubenswrapper[4721]: I0128 18:35:40.521420 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:40 crc kubenswrapper[4721]: I0128 18:35:40.521435 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:40Z","lastTransitionTime":"2026-01-28T18:35:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:35:40 crc kubenswrapper[4721]: I0128 18:35:40.528666 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 18:35:40 crc kubenswrapper[4721]: I0128 18:35:40.528743 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 18:35:40 crc kubenswrapper[4721]: E0128 18:35:40.528813 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 18:35:40 crc kubenswrapper[4721]: I0128 18:35:40.528821 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-jqvck" Jan 28 18:35:40 crc kubenswrapper[4721]: I0128 18:35:40.528852 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 18:35:40 crc kubenswrapper[4721]: E0128 18:35:40.528912 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 18:35:40 crc kubenswrapper[4721]: E0128 18:35:40.529005 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jqvck" podUID="f3440038-c980-4fb4-be99-235515ec221c" Jan 28 18:35:40 crc kubenswrapper[4721]: E0128 18:35:40.529162 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 18:35:40 crc kubenswrapper[4721]: I0128 18:35:40.533026 4721 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-01 10:48:27.646067153 +0000 UTC Jan 28 18:35:40 crc kubenswrapper[4721]: I0128 18:35:40.623052 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:40 crc kubenswrapper[4721]: I0128 18:35:40.623082 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:40 crc kubenswrapper[4721]: I0128 18:35:40.623091 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:40 crc kubenswrapper[4721]: I0128 18:35:40.623103 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:40 crc kubenswrapper[4721]: I0128 18:35:40.623112 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:40Z","lastTransitionTime":"2026-01-28T18:35:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:35:40 crc kubenswrapper[4721]: I0128 18:35:40.725471 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:40 crc kubenswrapper[4721]: I0128 18:35:40.725526 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:40 crc kubenswrapper[4721]: I0128 18:35:40.725540 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:40 crc kubenswrapper[4721]: I0128 18:35:40.725559 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:40 crc kubenswrapper[4721]: I0128 18:35:40.725570 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:40Z","lastTransitionTime":"2026-01-28T18:35:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:35:40 crc kubenswrapper[4721]: I0128 18:35:40.827951 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:40 crc kubenswrapper[4721]: I0128 18:35:40.828077 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:40 crc kubenswrapper[4721]: I0128 18:35:40.828092 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:40 crc kubenswrapper[4721]: I0128 18:35:40.828111 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:40 crc kubenswrapper[4721]: I0128 18:35:40.828121 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:40Z","lastTransitionTime":"2026-01-28T18:35:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:35:40 crc kubenswrapper[4721]: I0128 18:35:40.930135 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:40 crc kubenswrapper[4721]: I0128 18:35:40.930208 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:40 crc kubenswrapper[4721]: I0128 18:35:40.930222 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:40 crc kubenswrapper[4721]: I0128 18:35:40.930241 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:40 crc kubenswrapper[4721]: I0128 18:35:40.930253 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:40Z","lastTransitionTime":"2026-01-28T18:35:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:35:41 crc kubenswrapper[4721]: I0128 18:35:41.032931 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:41 crc kubenswrapper[4721]: I0128 18:35:41.032980 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:41 crc kubenswrapper[4721]: I0128 18:35:41.032989 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:41 crc kubenswrapper[4721]: I0128 18:35:41.033002 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:41 crc kubenswrapper[4721]: I0128 18:35:41.033011 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:41Z","lastTransitionTime":"2026-01-28T18:35:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:35:41 crc kubenswrapper[4721]: I0128 18:35:41.135452 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:41 crc kubenswrapper[4721]: I0128 18:35:41.135484 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:41 crc kubenswrapper[4721]: I0128 18:35:41.135493 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:41 crc kubenswrapper[4721]: I0128 18:35:41.135509 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:41 crc kubenswrapper[4721]: I0128 18:35:41.135521 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:41Z","lastTransitionTime":"2026-01-28T18:35:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:35:41 crc kubenswrapper[4721]: I0128 18:35:41.237127 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:41 crc kubenswrapper[4721]: I0128 18:35:41.237195 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:41 crc kubenswrapper[4721]: I0128 18:35:41.237210 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:41 crc kubenswrapper[4721]: I0128 18:35:41.237228 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:41 crc kubenswrapper[4721]: I0128 18:35:41.237240 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:41Z","lastTransitionTime":"2026-01-28T18:35:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:35:41 crc kubenswrapper[4721]: I0128 18:35:41.339478 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:41 crc kubenswrapper[4721]: I0128 18:35:41.339531 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:41 crc kubenswrapper[4721]: I0128 18:35:41.339540 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:41 crc kubenswrapper[4721]: I0128 18:35:41.339557 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:41 crc kubenswrapper[4721]: I0128 18:35:41.339567 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:41Z","lastTransitionTime":"2026-01-28T18:35:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:35:41 crc kubenswrapper[4721]: I0128 18:35:41.441807 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:41 crc kubenswrapper[4721]: I0128 18:35:41.441839 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:41 crc kubenswrapper[4721]: I0128 18:35:41.441848 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:41 crc kubenswrapper[4721]: I0128 18:35:41.441860 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:41 crc kubenswrapper[4721]: I0128 18:35:41.441869 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:41Z","lastTransitionTime":"2026-01-28T18:35:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:35:41 crc kubenswrapper[4721]: I0128 18:35:41.533124 4721 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-29 04:45:45.09502415 +0000 UTC Jan 28 18:35:41 crc kubenswrapper[4721]: I0128 18:35:41.544519 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:41 crc kubenswrapper[4721]: I0128 18:35:41.544563 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:41 crc kubenswrapper[4721]: I0128 18:35:41.544578 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:41 crc kubenswrapper[4721]: I0128 18:35:41.544593 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:41 crc kubenswrapper[4721]: I0128 18:35:41.544604 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:41Z","lastTransitionTime":"2026-01-28T18:35:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:35:41 crc kubenswrapper[4721]: I0128 18:35:41.646984 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:41 crc kubenswrapper[4721]: I0128 18:35:41.647051 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:41 crc kubenswrapper[4721]: I0128 18:35:41.647065 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:41 crc kubenswrapper[4721]: I0128 18:35:41.647083 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:41 crc kubenswrapper[4721]: I0128 18:35:41.647095 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:41Z","lastTransitionTime":"2026-01-28T18:35:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:35:41 crc kubenswrapper[4721]: I0128 18:35:41.749779 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:41 crc kubenswrapper[4721]: I0128 18:35:41.749833 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:41 crc kubenswrapper[4721]: I0128 18:35:41.749847 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:41 crc kubenswrapper[4721]: I0128 18:35:41.749867 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:41 crc kubenswrapper[4721]: I0128 18:35:41.749877 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:41Z","lastTransitionTime":"2026-01-28T18:35:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:35:41 crc kubenswrapper[4721]: I0128 18:35:41.853088 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:41 crc kubenswrapper[4721]: I0128 18:35:41.853127 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:41 crc kubenswrapper[4721]: I0128 18:35:41.853137 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:41 crc kubenswrapper[4721]: I0128 18:35:41.853152 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:41 crc kubenswrapper[4721]: I0128 18:35:41.853165 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:41Z","lastTransitionTime":"2026-01-28T18:35:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:35:41 crc kubenswrapper[4721]: I0128 18:35:41.955898 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:41 crc kubenswrapper[4721]: I0128 18:35:41.955941 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:41 crc kubenswrapper[4721]: I0128 18:35:41.955952 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:41 crc kubenswrapper[4721]: I0128 18:35:41.955969 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:41 crc kubenswrapper[4721]: I0128 18:35:41.955981 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:41Z","lastTransitionTime":"2026-01-28T18:35:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:35:42 crc kubenswrapper[4721]: I0128 18:35:42.058701 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:42 crc kubenswrapper[4721]: I0128 18:35:42.058758 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:42 crc kubenswrapper[4721]: I0128 18:35:42.058770 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:42 crc kubenswrapper[4721]: I0128 18:35:42.058786 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:42 crc kubenswrapper[4721]: I0128 18:35:42.058816 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:42Z","lastTransitionTime":"2026-01-28T18:35:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:35:42 crc kubenswrapper[4721]: I0128 18:35:42.161736 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:42 crc kubenswrapper[4721]: I0128 18:35:42.161807 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:42 crc kubenswrapper[4721]: I0128 18:35:42.161819 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:42 crc kubenswrapper[4721]: I0128 18:35:42.161835 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:42 crc kubenswrapper[4721]: I0128 18:35:42.161846 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:42Z","lastTransitionTime":"2026-01-28T18:35:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:35:42 crc kubenswrapper[4721]: I0128 18:35:42.264690 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:42 crc kubenswrapper[4721]: I0128 18:35:42.264732 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:42 crc kubenswrapper[4721]: I0128 18:35:42.264743 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:42 crc kubenswrapper[4721]: I0128 18:35:42.264759 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:42 crc kubenswrapper[4721]: I0128 18:35:42.264769 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:42Z","lastTransitionTime":"2026-01-28T18:35:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:35:42 crc kubenswrapper[4721]: I0128 18:35:42.368072 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:42 crc kubenswrapper[4721]: I0128 18:35:42.368117 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:42 crc kubenswrapper[4721]: I0128 18:35:42.368126 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:42 crc kubenswrapper[4721]: I0128 18:35:42.368143 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:42 crc kubenswrapper[4721]: I0128 18:35:42.368153 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:42Z","lastTransitionTime":"2026-01-28T18:35:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:35:42 crc kubenswrapper[4721]: I0128 18:35:42.470872 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:42 crc kubenswrapper[4721]: I0128 18:35:42.470909 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:42 crc kubenswrapper[4721]: I0128 18:35:42.470920 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:42 crc kubenswrapper[4721]: I0128 18:35:42.470935 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:42 crc kubenswrapper[4721]: I0128 18:35:42.470946 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:42Z","lastTransitionTime":"2026-01-28T18:35:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:35:42 crc kubenswrapper[4721]: I0128 18:35:42.527750 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 18:35:42 crc kubenswrapper[4721]: I0128 18:35:42.527828 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jqvck" Jan 28 18:35:42 crc kubenswrapper[4721]: E0128 18:35:42.527921 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 18:35:42 crc kubenswrapper[4721]: I0128 18:35:42.527765 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 18:35:42 crc kubenswrapper[4721]: I0128 18:35:42.527959 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 18:35:42 crc kubenswrapper[4721]: E0128 18:35:42.528079 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jqvck" podUID="f3440038-c980-4fb4-be99-235515ec221c" Jan 28 18:35:42 crc kubenswrapper[4721]: E0128 18:35:42.528129 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 18:35:42 crc kubenswrapper[4721]: E0128 18:35:42.528239 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 18:35:42 crc kubenswrapper[4721]: I0128 18:35:42.534071 4721 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-29 07:43:06.300497293 +0000 UTC Jan 28 18:35:42 crc kubenswrapper[4721]: I0128 18:35:42.573035 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:42 crc kubenswrapper[4721]: I0128 18:35:42.573064 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:42 crc kubenswrapper[4721]: I0128 18:35:42.573071 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:42 crc kubenswrapper[4721]: I0128 18:35:42.573084 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:42 crc kubenswrapper[4721]: I0128 18:35:42.573105 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:42Z","lastTransitionTime":"2026-01-28T18:35:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:35:42 crc kubenswrapper[4721]: I0128 18:35:42.674709 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:42 crc kubenswrapper[4721]: I0128 18:35:42.674755 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:42 crc kubenswrapper[4721]: I0128 18:35:42.674766 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:42 crc kubenswrapper[4721]: I0128 18:35:42.674781 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:42 crc kubenswrapper[4721]: I0128 18:35:42.674792 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:42Z","lastTransitionTime":"2026-01-28T18:35:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:35:42 crc kubenswrapper[4721]: I0128 18:35:42.777081 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:42 crc kubenswrapper[4721]: I0128 18:35:42.777124 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:42 crc kubenswrapper[4721]: I0128 18:35:42.777138 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:42 crc kubenswrapper[4721]: I0128 18:35:42.777155 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:42 crc kubenswrapper[4721]: I0128 18:35:42.777165 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:42Z","lastTransitionTime":"2026-01-28T18:35:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:35:42 crc kubenswrapper[4721]: I0128 18:35:42.879795 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:42 crc kubenswrapper[4721]: I0128 18:35:42.879837 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:42 crc kubenswrapper[4721]: I0128 18:35:42.879848 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:42 crc kubenswrapper[4721]: I0128 18:35:42.879866 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:42 crc kubenswrapper[4721]: I0128 18:35:42.879878 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:42Z","lastTransitionTime":"2026-01-28T18:35:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:35:42 crc kubenswrapper[4721]: I0128 18:35:42.982610 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:42 crc kubenswrapper[4721]: I0128 18:35:42.982646 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:42 crc kubenswrapper[4721]: I0128 18:35:42.982654 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:42 crc kubenswrapper[4721]: I0128 18:35:42.982667 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:42 crc kubenswrapper[4721]: I0128 18:35:42.982676 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:42Z","lastTransitionTime":"2026-01-28T18:35:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:35:43 crc kubenswrapper[4721]: I0128 18:35:43.084944 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:43 crc kubenswrapper[4721]: I0128 18:35:43.084981 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:43 crc kubenswrapper[4721]: I0128 18:35:43.084991 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:43 crc kubenswrapper[4721]: I0128 18:35:43.085007 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:43 crc kubenswrapper[4721]: I0128 18:35:43.085019 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:43Z","lastTransitionTime":"2026-01-28T18:35:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:35:43 crc kubenswrapper[4721]: I0128 18:35:43.187699 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:43 crc kubenswrapper[4721]: I0128 18:35:43.187844 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:43 crc kubenswrapper[4721]: I0128 18:35:43.187855 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:43 crc kubenswrapper[4721]: I0128 18:35:43.187869 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:43 crc kubenswrapper[4721]: I0128 18:35:43.187880 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:43Z","lastTransitionTime":"2026-01-28T18:35:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:35:43 crc kubenswrapper[4721]: I0128 18:35:43.290787 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:43 crc kubenswrapper[4721]: I0128 18:35:43.290846 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:43 crc kubenswrapper[4721]: I0128 18:35:43.290858 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:43 crc kubenswrapper[4721]: I0128 18:35:43.290874 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:43 crc kubenswrapper[4721]: I0128 18:35:43.290887 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:43Z","lastTransitionTime":"2026-01-28T18:35:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:35:43 crc kubenswrapper[4721]: I0128 18:35:43.393380 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:43 crc kubenswrapper[4721]: I0128 18:35:43.393434 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:43 crc kubenswrapper[4721]: I0128 18:35:43.393460 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:43 crc kubenswrapper[4721]: I0128 18:35:43.393483 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:43 crc kubenswrapper[4721]: I0128 18:35:43.393497 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:43Z","lastTransitionTime":"2026-01-28T18:35:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:35:43 crc kubenswrapper[4721]: I0128 18:35:43.496040 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:43 crc kubenswrapper[4721]: I0128 18:35:43.496091 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:43 crc kubenswrapper[4721]: I0128 18:35:43.496104 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:43 crc kubenswrapper[4721]: I0128 18:35:43.496123 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:43 crc kubenswrapper[4721]: I0128 18:35:43.496139 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:43Z","lastTransitionTime":"2026-01-28T18:35:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:35:43 crc kubenswrapper[4721]: I0128 18:35:43.535223 4721 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-10 15:11:48.384804738 +0000 UTC Jan 28 18:35:43 crc kubenswrapper[4721]: I0128 18:35:43.599392 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:43 crc kubenswrapper[4721]: I0128 18:35:43.599447 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:43 crc kubenswrapper[4721]: I0128 18:35:43.599463 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:43 crc kubenswrapper[4721]: I0128 18:35:43.599483 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:43 crc kubenswrapper[4721]: I0128 18:35:43.599500 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:43Z","lastTransitionTime":"2026-01-28T18:35:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:35:43 crc kubenswrapper[4721]: I0128 18:35:43.701958 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:43 crc kubenswrapper[4721]: I0128 18:35:43.701992 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:43 crc kubenswrapper[4721]: I0128 18:35:43.702002 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:43 crc kubenswrapper[4721]: I0128 18:35:43.702018 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:43 crc kubenswrapper[4721]: I0128 18:35:43.702029 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:43Z","lastTransitionTime":"2026-01-28T18:35:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:35:43 crc kubenswrapper[4721]: I0128 18:35:43.804792 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:43 crc kubenswrapper[4721]: I0128 18:35:43.804831 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:43 crc kubenswrapper[4721]: I0128 18:35:43.804847 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:43 crc kubenswrapper[4721]: I0128 18:35:43.804863 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:43 crc kubenswrapper[4721]: I0128 18:35:43.804874 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:43Z","lastTransitionTime":"2026-01-28T18:35:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:35:43 crc kubenswrapper[4721]: I0128 18:35:43.907325 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:43 crc kubenswrapper[4721]: I0128 18:35:43.907359 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:43 crc kubenswrapper[4721]: I0128 18:35:43.907371 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:43 crc kubenswrapper[4721]: I0128 18:35:43.907404 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:43 crc kubenswrapper[4721]: I0128 18:35:43.907420 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:43Z","lastTransitionTime":"2026-01-28T18:35:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:35:44 crc kubenswrapper[4721]: I0128 18:35:44.009416 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:44 crc kubenswrapper[4721]: I0128 18:35:44.010247 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:44 crc kubenswrapper[4721]: I0128 18:35:44.010289 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:44 crc kubenswrapper[4721]: I0128 18:35:44.010331 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:44 crc kubenswrapper[4721]: I0128 18:35:44.010357 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:44Z","lastTransitionTime":"2026-01-28T18:35:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:35:44 crc kubenswrapper[4721]: I0128 18:35:44.113226 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:44 crc kubenswrapper[4721]: I0128 18:35:44.113296 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:44 crc kubenswrapper[4721]: I0128 18:35:44.113329 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:44 crc kubenswrapper[4721]: I0128 18:35:44.113352 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:44 crc kubenswrapper[4721]: I0128 18:35:44.113364 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:44Z","lastTransitionTime":"2026-01-28T18:35:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:35:44 crc kubenswrapper[4721]: I0128 18:35:44.216011 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:44 crc kubenswrapper[4721]: I0128 18:35:44.216056 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:44 crc kubenswrapper[4721]: I0128 18:35:44.216066 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:44 crc kubenswrapper[4721]: I0128 18:35:44.216080 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:44 crc kubenswrapper[4721]: I0128 18:35:44.216091 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:44Z","lastTransitionTime":"2026-01-28T18:35:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:35:44 crc kubenswrapper[4721]: I0128 18:35:44.317842 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:44 crc kubenswrapper[4721]: I0128 18:35:44.317886 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:44 crc kubenswrapper[4721]: I0128 18:35:44.317895 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:44 crc kubenswrapper[4721]: I0128 18:35:44.317908 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:44 crc kubenswrapper[4721]: I0128 18:35:44.317918 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:44Z","lastTransitionTime":"2026-01-28T18:35:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:35:44 crc kubenswrapper[4721]: I0128 18:35:44.420064 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:44 crc kubenswrapper[4721]: I0128 18:35:44.420116 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:44 crc kubenswrapper[4721]: I0128 18:35:44.420129 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:44 crc kubenswrapper[4721]: I0128 18:35:44.420148 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:44 crc kubenswrapper[4721]: I0128 18:35:44.420163 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:44Z","lastTransitionTime":"2026-01-28T18:35:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:35:44 crc kubenswrapper[4721]: I0128 18:35:44.522534 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:44 crc kubenswrapper[4721]: I0128 18:35:44.522596 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:44 crc kubenswrapper[4721]: I0128 18:35:44.522611 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:44 crc kubenswrapper[4721]: I0128 18:35:44.522630 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:44 crc kubenswrapper[4721]: I0128 18:35:44.522644 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:44Z","lastTransitionTime":"2026-01-28T18:35:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:35:44 crc kubenswrapper[4721]: I0128 18:35:44.528256 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 18:35:44 crc kubenswrapper[4721]: I0128 18:35:44.528286 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 18:35:44 crc kubenswrapper[4721]: E0128 18:35:44.528488 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 18:35:44 crc kubenswrapper[4721]: I0128 18:35:44.528320 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-jqvck" Jan 28 18:35:44 crc kubenswrapper[4721]: I0128 18:35:44.528321 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 18:35:44 crc kubenswrapper[4721]: E0128 18:35:44.528561 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 18:35:44 crc kubenswrapper[4721]: E0128 18:35:44.528651 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 18:35:44 crc kubenswrapper[4721]: E0128 18:35:44.528877 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jqvck" podUID="f3440038-c980-4fb4-be99-235515ec221c" Jan 28 18:35:44 crc kubenswrapper[4721]: I0128 18:35:44.536302 4721 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-24 06:31:14.842711537 +0000 UTC Jan 28 18:35:44 crc kubenswrapper[4721]: I0128 18:35:44.625963 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:44 crc kubenswrapper[4721]: I0128 18:35:44.626015 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:44 crc kubenswrapper[4721]: I0128 18:35:44.626024 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:44 crc kubenswrapper[4721]: I0128 18:35:44.626040 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:44 crc kubenswrapper[4721]: I0128 18:35:44.626050 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:44Z","lastTransitionTime":"2026-01-28T18:35:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:35:44 crc kubenswrapper[4721]: I0128 18:35:44.729162 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:44 crc kubenswrapper[4721]: I0128 18:35:44.729243 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:44 crc kubenswrapper[4721]: I0128 18:35:44.729260 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:44 crc kubenswrapper[4721]: I0128 18:35:44.729282 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:44 crc kubenswrapper[4721]: I0128 18:35:44.729301 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:44Z","lastTransitionTime":"2026-01-28T18:35:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:35:44 crc kubenswrapper[4721]: I0128 18:35:44.832114 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:44 crc kubenswrapper[4721]: I0128 18:35:44.832196 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:44 crc kubenswrapper[4721]: I0128 18:35:44.832208 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:44 crc kubenswrapper[4721]: I0128 18:35:44.832220 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:44 crc kubenswrapper[4721]: I0128 18:35:44.832228 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:44Z","lastTransitionTime":"2026-01-28T18:35:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:35:44 crc kubenswrapper[4721]: I0128 18:35:44.934134 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:44 crc kubenswrapper[4721]: I0128 18:35:44.934193 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:44 crc kubenswrapper[4721]: I0128 18:35:44.934202 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:44 crc kubenswrapper[4721]: I0128 18:35:44.934218 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:44 crc kubenswrapper[4721]: I0128 18:35:44.934226 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:44Z","lastTransitionTime":"2026-01-28T18:35:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:35:45 crc kubenswrapper[4721]: I0128 18:35:45.036656 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:45 crc kubenswrapper[4721]: I0128 18:35:45.036695 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:45 crc kubenswrapper[4721]: I0128 18:35:45.036704 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:45 crc kubenswrapper[4721]: I0128 18:35:45.036716 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:45 crc kubenswrapper[4721]: I0128 18:35:45.036725 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:45Z","lastTransitionTime":"2026-01-28T18:35:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:35:45 crc kubenswrapper[4721]: I0128 18:35:45.139010 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:45 crc kubenswrapper[4721]: I0128 18:35:45.139099 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:45 crc kubenswrapper[4721]: I0128 18:35:45.139114 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:45 crc kubenswrapper[4721]: I0128 18:35:45.139131 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:45 crc kubenswrapper[4721]: I0128 18:35:45.139142 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:45Z","lastTransitionTime":"2026-01-28T18:35:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:35:45 crc kubenswrapper[4721]: I0128 18:35:45.242415 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:45 crc kubenswrapper[4721]: I0128 18:35:45.242484 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:45 crc kubenswrapper[4721]: I0128 18:35:45.242502 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:45 crc kubenswrapper[4721]: I0128 18:35:45.242525 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:45 crc kubenswrapper[4721]: I0128 18:35:45.242542 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:45Z","lastTransitionTime":"2026-01-28T18:35:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:35:45 crc kubenswrapper[4721]: I0128 18:35:45.345538 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:45 crc kubenswrapper[4721]: I0128 18:35:45.345587 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:45 crc kubenswrapper[4721]: I0128 18:35:45.345597 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:45 crc kubenswrapper[4721]: I0128 18:35:45.345611 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:45 crc kubenswrapper[4721]: I0128 18:35:45.345621 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:45Z","lastTransitionTime":"2026-01-28T18:35:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:35:45 crc kubenswrapper[4721]: I0128 18:35:45.448619 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:45 crc kubenswrapper[4721]: I0128 18:35:45.448670 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:45 crc kubenswrapper[4721]: I0128 18:35:45.448682 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:45 crc kubenswrapper[4721]: I0128 18:35:45.448704 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:45 crc kubenswrapper[4721]: I0128 18:35:45.448719 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:45Z","lastTransitionTime":"2026-01-28T18:35:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:35:45 crc kubenswrapper[4721]: I0128 18:35:45.536850 4721 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-19 22:58:14.523371181 +0000 UTC Jan 28 18:35:45 crc kubenswrapper[4721]: I0128 18:35:45.546759 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa3af2d00ecbcf4d30c4c81191306dd45625f55032058b314ce2b91f6b2033e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:35:45Z is after 2025-08-24T17:21:41Z" Jan 28 18:35:45 crc kubenswrapper[4721]: I0128 18:35:45.551135 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:45 crc kubenswrapper[4721]: I0128 18:35:45.551186 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:45 crc kubenswrapper[4721]: I0128 18:35:45.551195 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:45 crc kubenswrapper[4721]: I0128 18:35:45.551209 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:45 crc kubenswrapper[4721]: I0128 18:35:45.551220 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:45Z","lastTransitionTime":"2026-01-28T18:35:45Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:35:45 crc kubenswrapper[4721]: I0128 18:35:45.561395 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-76rx2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e3427a4-9a03-4a08-bf7f-7a5e96290ad6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9843dea4333a77dd5b0005984b9f8e7c7c993b28f89f5d6432477bfde3383339\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4cj5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bbd26672afb8ed608228f3f2101f430b1eaa5ccea0e8074fad597dec347c4522\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4cj5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:34:30Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-76rx2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:35:45Z is after 2025-08-24T17:21:41Z" Jan 28 18:35:45 crc kubenswrapper[4721]: I0128 18:35:45.572600 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e670fafb-703c-4cc9-b670-d25ae62d87a0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:33:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f97092be25e90c4a15af043397b1cbcefdb3a3511a80a046496bef807abc8f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8351f80de5ab5d11c5a87270e69a8ebd20b3a804671e20b991f1fc77ba27bae8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8465df12048ab0feaba16e1935fa17feb4fe967ab3e4ef37981bed51ff77911\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-control
ler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47389981052e7ab8530932e01c95d9b178f2c55b21e03b460d3f02dddbcf4830\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://47389981052e7ab8530932e01c95d9b178f2c55b21e03b460d3f02dddbcf4830\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:33:58Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:33:56Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:35:45Z is after 2025-08-24T17:21:41Z" Jan 28 18:35:45 crc kubenswrapper[4721]: I0128 18:35:45.585523 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:24Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:35:45Z is after 2025-08-24T17:21:41Z" Jan 28 18:35:45 crc kubenswrapper[4721]: I0128 18:35:45.598050 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:24Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:35:45Z is after 2025-08-24T17:21:41Z" Jan 28 18:35:45 crc kubenswrapper[4721]: I0128 18:35:45.608873 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-jqvck" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f3440038-c980-4fb4-be99-235515ec221c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96np9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96np9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:34:44Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-jqvck\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:35:45Z is after 2025-08-24T17:21:41Z" Jan 28 18:35:45 crc kubenswrapper[4721]: I0128 18:35:45.619395 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cdf9d0d3-f468-468d-a84a-376800ac08e1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:33:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ff405f7132b5a1a0e2abc66c8e4c0abbd732bdf90cb2b4b2867dd10b8e62921\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f0406ac4c224de266eab94d3b9c12d110ee36b7ccb34381fc284ade389ba042\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f0406ac4c224de266eab94d3b9c12d110ee36b7ccb34381fc284ade389ba042\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:33:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:33:56Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:35:45Z is after 2025-08-24T17:21:41Z" Jan 28 18:35:45 crc kubenswrapper[4721]: I0128 18:35:45.631233 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-rk2l2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bdd30376-1599-4efc-bb55-7585e8702b60\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa0ff1b9df47dfd65597208cdd31f5cb8ce7f4c9c170d298db28d6f04da7b7a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wknj5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:34:32Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-rk2l2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:35:45Z is after 2025-08-24T17:21:41Z" Jan 28 18:35:45 crc kubenswrapper[4721]: I0128 18:35:45.644729 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-x8hw8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8ac7e75e-c5bb-4b57-b2ba-9ebe8b8fbd88\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cba8dd293cf5ae7b0c987c6ee3b24da02d2687ee54292da92e28ca627ed3eaca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqp9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a3c1211b73ca96ac22854f0cb677a0088a679ad56b104ea6b8e0871884a3a71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqp9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:34:43Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-x8hw8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:35:45Z is after 2025-08-24T17:21:41Z" Jan 28 
18:35:45 crc kubenswrapper[4721]: I0128 18:35:45.653449 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:45 crc kubenswrapper[4721]: I0128 18:35:45.653495 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:45 crc kubenswrapper[4721]: I0128 18:35:45.653505 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:45 crc kubenswrapper[4721]: I0128 18:35:45.653519 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:45 crc kubenswrapper[4721]: I0128 18:35:45.653531 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:45Z","lastTransitionTime":"2026-01-28T18:35:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:35:45 crc kubenswrapper[4721]: I0128 18:35:45.656813 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:35:45Z is after 2025-08-24T17:21:41Z" Jan 28 18:35:45 crc kubenswrapper[4721]: I0128 18:35:45.667742 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56147b9675ae31e5e8b4473346c69d90a1013ab084214a7fc34295f810b229a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5dc3277d9793a59f2e2a2b4d015782f383df20b594be435f894d24dbe5b837cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:35:45Z is after 2025-08-24T17:21:41Z" Jan 28 18:35:45 crc kubenswrapper[4721]: I0128 18:35:45.685434 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wr282" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"70686e42-b434-4ff9-9753-cfc870beef82\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ac8a8ea8e677c233fbcb1ac69446f0251b2e390e84a39cd77c7ccf1a2041908\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7lkbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://989948e16979a2cd8568ed93334056da9feb5bd7226f4124428dcffc09f13d41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7lkbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2cc9ef8160c40e72ae736136ef051b06552caea2539fad7969c5f923eba0c2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7lkbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7df50ebf5999c8b004f27f801fc166377523c7b42171fec957ab82c88b529794\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7lkbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11610a0739792fa0417c894ad5b1c0d46d7188b585f8f4c1a1e7e66cd6d8bd46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7lkbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://44279a74a34d26b6eee05ac26d51a21492dddab4288ca5e09665c191cbacd90c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7lkbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://693b094ac66f7858f7020708df804e9e12fa5a8c
510841171e65ac69cb6a0e11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://693b094ac66f7858f7020708df804e9e12fa5a8c510841171e65ac69cb6a0e11\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T18:35:21Z\\\",\\\"message\\\":\\\"nt handler 7\\\\nI0128 18:35:21.396048 6815 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0128 18:35:21.396110 6815 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0128 18:35:21.396136 6815 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0128 18:35:21.396134 6815 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0128 18:35:21.396466 6815 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0128 18:35:21.396513 6815 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0128 18:35:21.396589 6815 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0128 18:35:21.396637 6815 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0128 18:35:21.396647 6815 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0128 18:35:21.396716 6815 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0128 18:35:21.396730 6815 factory.go:656] Stopping watch factory\\\\nI0128 18:35:21.396750 6815 ovnkube.go:599] Stopped ovnkube\\\\nI0128 18:35:21.396718 6815 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0128 18:35:21.396785 6815 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0128 18:35:21.396796 6815 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0128 18:35:21.397012 6815 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T18:35:20Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-wr282_openshift-ovn-kubernetes(70686e42-b434-4ff9-9753-cfc870beef82)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7lkbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://373ab38644697d3dbfe782ed66982ac1d5510d0a3bc39072c8113f71cc9e97f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7lkbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://733c81890010402d252a1945284a85a3278b603ede433530865f680a1be02477\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://733c81890010402d252a1945284a85a3278b603ede433530865f680a1be02477\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7lkbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:34:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-wr282\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:35:45Z is after 2025-08-24T17:21:41Z" Jan 28 18:35:45 crc kubenswrapper[4721]: I0128 18:35:45.698150 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rgqdt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c0a22020-3f34-4895-beec-2ed5d829ea79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:35:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:35:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2588c3d36133bd9b96114f5d12622916ac785bea9be47d12a3d76d8585c3e0ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9d26691be7d95ffd613cc84f59222c67d29ab459d1c42ca07d20fe928d7fbc4a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T18:35:18Z\\\",\\\"message\\\":\\\"2026-01-28T18:34:32+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_77ae07de-fe49-4308-874b-bb02ed4b202b\\\\n2026-01-28T18:34:32+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_77ae07de-fe49-4308-874b-bb02ed4b202b to 
/host/opt/cni/bin/\\\\n2026-01-28T18:34:33Z [verbose] multus-daemon started\\\\n2026-01-28T18:34:33Z [verbose] Readiness Indicator file check\\\\n2026-01-28T18:35:18Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T18:34:31Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:35:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l86pm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:34:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rgqdt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:35:45Z is after 2025-08-24T17:21:41Z" Jan 28 18:35:45 crc kubenswrapper[4721]: I0128 18:35:45.716380 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"50459cb7-bb46-4f2f-a119-03a6102ad146\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:33:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97a290731225e8b998c044c9547db36b5af73889929db5fbc37b6b4170f5dbb7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9efdfabfba8f11d2bdeafad77520260137fcfff3ca51ae77afd4ee9b141f602\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3c1b4d37e11f8a4ee9539e5ea3cce77b7d382133b9e59ccd9c3e8aa2b308754\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52a1199a0611555765afe8fcf01774278848806
4c958d0c75c46d51ddc2ff9d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://316506ed1e9f91314d427e1b6c3a44ee58d0c4a3a8a93b97bdf098703a6a2f34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f1a6a669a0341baa2d7fb17bcebfe702f96d1192ca35c3f2d00e463778271d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f1a6a669a0341baa2d7fb17bcebfe702f96d1192ca35c3f2d00e463778271d6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:33:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d3a248e15466c89ce3b237a22ce54365004ea86dc1541e1ac44e028f91bf19f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d3a248e15466c89ce3b237a22ce54365004ea86dc1541e1ac44e028f91bf19f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:34:02Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f2e1f3a63445bad45029602f52863bfdf2fab495711e3318d68fee042530acb1\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f2e1f3a63445bad45029602f52863bfdf2fab495711e3318d68fee042530acb1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:34:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:33:56Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:35:45Z is after 2025-08-24T17:21:41Z" Jan 28 18:35:45 crc kubenswrapper[4721]: I0128 18:35:45.729627 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"94f18835-9a2d-4427-bc71-e4cd48b94c19\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:33:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a345c3bb49064600db61f16588229b6877293710e6c21c15d91746c2f1d50b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://93950542dfa7bda5686ef6d8ff927d14
baafd149893c732658e9aa3916be64ae\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://486a6284a1e3930aac13368062b8796b4c271214dd08cb60c7ae2ce7b4a45a05\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0339ccd2bd809dc715d0e334b596c5abdebca1af6ac5f7afefa81aa8baa470ed\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ca3243d75b5ce4aff178d82c45c7d1f765013233b5736600151adb4801aa462b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T18:34:21Z\\\",\\\"message\\\":\\\"le observer\\\\nW0128 18:34:20.724978 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0128 18:34:20.725313 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 18:34:20.726636 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2540251349/tls.crt::/tmp/serving-cert-2540251349/tls.key\\\\\\\"\\\\nI0128 18:34:21.177830 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0128 18:34:21.196258 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0128 18:34:21.196291 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0128 18:34:21.196317 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0128 18:34:21.196323 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0128 18:34:21.230729 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0128 18:34:21.230760 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0128 18:34:21.230771 1 
secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 18:34:21.230781 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 18:34:21.230789 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0128 18:34:21.230795 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0128 18:34:21.230800 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0128 18:34:21.230804 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0128 18:34:21.233981 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T18:34:20Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://90f1279d8262e042527207dbe56a1d08ba554650510bebfc53bd24cd84532026\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:06Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1499c7b625ec721799e4cf54e3de39ab75a752bbb0450a8eeffcdb6259f8a585\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1499c7b625ec721799e4cf54e3de39ab75a752bbb0450a8eeffcdb6259f8a585\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:33:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:33:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:35:45Z is after 2025-08-24T17:21:41Z" Jan 28 18:35:45 crc kubenswrapper[4721]: I0128 18:35:45.741455 4721 status_manager.go:875] 
"Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"86fc797f-6769-4801-8cb0-b9f25c9ec29b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:33:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:33:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e8c8cd370cc146f15325497afef171437a3c7c3d4e82cda99a321bf847399bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f712f75bf0603b358c696f1a62f2363254ab926615ab377291c7083308e3f2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:33:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6143e461a697375ca79df96494f8cf4a575e622428db241f50a52ae1c67acbc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e37a3261383445df
2e4050fb0b3f92c0afc6379c71eb061475783a72cd37459f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:33:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:35:45Z is after 2025-08-24T17:21:41Z" Jan 28 18:35:45 crc kubenswrapper[4721]: I0128 18:35:45.754374 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef61ab09457440960cf27586f65751f928ccc999db19fd16364262ce2449cd97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:35:45Z is after 2025-08-24T17:21:41Z" Jan 28 18:35:45 crc kubenswrapper[4721]: I0128 18:35:45.755450 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:45 crc kubenswrapper[4721]: I0128 18:35:45.755476 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:45 crc kubenswrapper[4721]: I0128 18:35:45.755486 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:45 crc kubenswrapper[4721]: I0128 18:35:45.755502 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:45 crc kubenswrapper[4721]: I0128 18:35:45.755512 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:45Z","lastTransitionTime":"2026-01-28T18:35:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:35:45 crc kubenswrapper[4721]: I0128 18:35:45.763630 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-lf92l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"20d04cbd-fcf1-4d48-9cca-1dd29b13c938\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://709d01fbeba51f6f21b60d9fd60aa9b44ba9e4f2504c19b7fd8d279f8c13c1e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mvqr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.16
8.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:34:30Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-lf92l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:35:45Z is after 2025-08-24T17:21:41Z" Jan 28 18:35:45 crc kubenswrapper[4721]: I0128 18:35:45.777411 4721 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7vsph" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"942c0bcf-8f75-42e8-a5c0-af4c640eb13c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:34:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ce38a3c3aa72d0816ff32ec77af100dc6f2a8761362992f01b88f836c44f65f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:34:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s84mk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5222e9a5ac55f20acbe1105e84f22c96889f410246b7ef423236f5d3e216f43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d5222e9a5ac55f20acbe1105e84f22c96889f410246b7ef423236f5d3e216f43\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-bin
ary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s84mk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e702f148f20abb9e5e9453eb1b9ae0cc094138beb1fa048753ff3b497517e942\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e702f148f20abb9e5e9453eb1b9ae0cc094138beb1fa048753ff3b497517e942\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s84mk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a55ef87212072ac9f6d05d27d54f7a3f13add617b1a07a2fd1e0a652fe15cb3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a55ef87212072ac9f6d05d27d54f7a3f13add617b1a07a2fd1e0a652fe15cb3a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:34:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s84mk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea7d40aa704ad5ddaaa471ab8606dcece38b229fd8affcce9bb1959e5c19f092\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ea7d40aa704ad5ddaaa471ab8606dcece38b229fd8affcce9bb1959e5c19f092\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:34:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s84mk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1829c2eed51af896c5ccfa25c1f23ea971ea9d54e9db53415095c5ea43ac4062\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1829c2eed51af896c5ccfa25c1f23ea971ea9d54e9db53415095c5ea43ac4062\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:34:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s84mk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b576c2dcc0ad83742dbc75f744d08eb8b881f9be4f9d16f4d4d3004ce174f24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b576c2dcc0ad83742dbc75f744d08eb8b881f9be4f9d16f4d4d3004ce174f24\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:34:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:34:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s84mk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:34:30Z\\\"}}\" for 
pod \"openshift-multus\"/\"multus-additional-cni-plugins-7vsph\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:35:45Z is after 2025-08-24T17:21:41Z" Jan 28 18:35:45 crc kubenswrapper[4721]: I0128 18:35:45.858594 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:45 crc kubenswrapper[4721]: I0128 18:35:45.858631 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:45 crc kubenswrapper[4721]: I0128 18:35:45.858647 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:45 crc kubenswrapper[4721]: I0128 18:35:45.858667 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:45 crc kubenswrapper[4721]: I0128 18:35:45.858678 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:45Z","lastTransitionTime":"2026-01-28T18:35:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:35:45 crc kubenswrapper[4721]: I0128 18:35:45.961045 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:45 crc kubenswrapper[4721]: I0128 18:35:45.961080 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:45 crc kubenswrapper[4721]: I0128 18:35:45.961089 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:45 crc kubenswrapper[4721]: I0128 18:35:45.961103 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:45 crc kubenswrapper[4721]: I0128 18:35:45.961112 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:45Z","lastTransitionTime":"2026-01-28T18:35:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:35:46 crc kubenswrapper[4721]: I0128 18:35:46.063929 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:46 crc kubenswrapper[4721]: I0128 18:35:46.063974 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:46 crc kubenswrapper[4721]: I0128 18:35:46.063983 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:46 crc kubenswrapper[4721]: I0128 18:35:46.063997 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:46 crc kubenswrapper[4721]: I0128 18:35:46.064007 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:46Z","lastTransitionTime":"2026-01-28T18:35:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:35:46 crc kubenswrapper[4721]: I0128 18:35:46.166181 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:46 crc kubenswrapper[4721]: I0128 18:35:46.166220 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:46 crc kubenswrapper[4721]: I0128 18:35:46.166229 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:46 crc kubenswrapper[4721]: I0128 18:35:46.166243 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:46 crc kubenswrapper[4721]: I0128 18:35:46.166252 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:46Z","lastTransitionTime":"2026-01-28T18:35:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:35:46 crc kubenswrapper[4721]: I0128 18:35:46.268090 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:46 crc kubenswrapper[4721]: I0128 18:35:46.268136 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:46 crc kubenswrapper[4721]: I0128 18:35:46.268146 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:46 crc kubenswrapper[4721]: I0128 18:35:46.268161 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:46 crc kubenswrapper[4721]: I0128 18:35:46.268207 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:46Z","lastTransitionTime":"2026-01-28T18:35:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:35:46 crc kubenswrapper[4721]: I0128 18:35:46.370235 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:46 crc kubenswrapper[4721]: I0128 18:35:46.370272 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:46 crc kubenswrapper[4721]: I0128 18:35:46.370283 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:46 crc kubenswrapper[4721]: I0128 18:35:46.370299 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:46 crc kubenswrapper[4721]: I0128 18:35:46.370309 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:46Z","lastTransitionTime":"2026-01-28T18:35:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:35:46 crc kubenswrapper[4721]: I0128 18:35:46.472269 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:46 crc kubenswrapper[4721]: I0128 18:35:46.472339 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:46 crc kubenswrapper[4721]: I0128 18:35:46.472353 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:46 crc kubenswrapper[4721]: I0128 18:35:46.472392 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:46 crc kubenswrapper[4721]: I0128 18:35:46.472407 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:46Z","lastTransitionTime":"2026-01-28T18:35:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:35:46 crc kubenswrapper[4721]: I0128 18:35:46.528350 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 18:35:46 crc kubenswrapper[4721]: E0128 18:35:46.528482 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 18:35:46 crc kubenswrapper[4721]: I0128 18:35:46.528680 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 18:35:46 crc kubenswrapper[4721]: E0128 18:35:46.528742 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 18:35:46 crc kubenswrapper[4721]: I0128 18:35:46.528851 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 18:35:46 crc kubenswrapper[4721]: E0128 18:35:46.528897 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 18:35:46 crc kubenswrapper[4721]: I0128 18:35:46.528999 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jqvck" Jan 28 18:35:46 crc kubenswrapper[4721]: E0128 18:35:46.529053 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jqvck" podUID="f3440038-c980-4fb4-be99-235515ec221c" Jan 28 18:35:46 crc kubenswrapper[4721]: I0128 18:35:46.537346 4721 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-29 10:09:46.238820233 +0000 UTC Jan 28 18:35:46 crc kubenswrapper[4721]: I0128 18:35:46.574846 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:46 crc kubenswrapper[4721]: I0128 18:35:46.574889 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:46 crc kubenswrapper[4721]: I0128 18:35:46.574898 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:46 crc kubenswrapper[4721]: I0128 18:35:46.574912 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:46 crc kubenswrapper[4721]: I0128 18:35:46.574923 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:46Z","lastTransitionTime":"2026-01-28T18:35:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:35:46 crc kubenswrapper[4721]: I0128 18:35:46.677757 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:46 crc kubenswrapper[4721]: I0128 18:35:46.677792 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:46 crc kubenswrapper[4721]: I0128 18:35:46.677800 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:46 crc kubenswrapper[4721]: I0128 18:35:46.677846 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:46 crc kubenswrapper[4721]: I0128 18:35:46.677860 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:46Z","lastTransitionTime":"2026-01-28T18:35:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:35:46 crc kubenswrapper[4721]: I0128 18:35:46.779863 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:46 crc kubenswrapper[4721]: I0128 18:35:46.779913 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:46 crc kubenswrapper[4721]: I0128 18:35:46.779925 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:46 crc kubenswrapper[4721]: I0128 18:35:46.779942 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:46 crc kubenswrapper[4721]: I0128 18:35:46.779957 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:46Z","lastTransitionTime":"2026-01-28T18:35:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:35:46 crc kubenswrapper[4721]: I0128 18:35:46.881963 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:46 crc kubenswrapper[4721]: I0128 18:35:46.882034 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:46 crc kubenswrapper[4721]: I0128 18:35:46.882046 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:46 crc kubenswrapper[4721]: I0128 18:35:46.882062 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:46 crc kubenswrapper[4721]: I0128 18:35:46.882073 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:46Z","lastTransitionTime":"2026-01-28T18:35:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:35:46 crc kubenswrapper[4721]: I0128 18:35:46.984025 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:46 crc kubenswrapper[4721]: I0128 18:35:46.984066 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:46 crc kubenswrapper[4721]: I0128 18:35:46.984075 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:46 crc kubenswrapper[4721]: I0128 18:35:46.984088 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:46 crc kubenswrapper[4721]: I0128 18:35:46.984099 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:46Z","lastTransitionTime":"2026-01-28T18:35:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:35:47 crc kubenswrapper[4721]: I0128 18:35:47.086989 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:47 crc kubenswrapper[4721]: I0128 18:35:47.087038 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:47 crc kubenswrapper[4721]: I0128 18:35:47.087046 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:47 crc kubenswrapper[4721]: I0128 18:35:47.087058 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:47 crc kubenswrapper[4721]: I0128 18:35:47.087067 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:47Z","lastTransitionTime":"2026-01-28T18:35:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:35:47 crc kubenswrapper[4721]: I0128 18:35:47.189609 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:47 crc kubenswrapper[4721]: I0128 18:35:47.189656 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:47 crc kubenswrapper[4721]: I0128 18:35:47.189669 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:47 crc kubenswrapper[4721]: I0128 18:35:47.189695 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:47 crc kubenswrapper[4721]: I0128 18:35:47.189707 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:47Z","lastTransitionTime":"2026-01-28T18:35:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:35:47 crc kubenswrapper[4721]: I0128 18:35:47.292251 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:47 crc kubenswrapper[4721]: I0128 18:35:47.292310 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:47 crc kubenswrapper[4721]: I0128 18:35:47.292319 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:47 crc kubenswrapper[4721]: I0128 18:35:47.292333 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:47 crc kubenswrapper[4721]: I0128 18:35:47.292342 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:47Z","lastTransitionTime":"2026-01-28T18:35:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:35:47 crc kubenswrapper[4721]: I0128 18:35:47.394257 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:47 crc kubenswrapper[4721]: I0128 18:35:47.394308 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:47 crc kubenswrapper[4721]: I0128 18:35:47.394318 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:47 crc kubenswrapper[4721]: I0128 18:35:47.394332 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:47 crc kubenswrapper[4721]: I0128 18:35:47.394343 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:47Z","lastTransitionTime":"2026-01-28T18:35:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:35:47 crc kubenswrapper[4721]: I0128 18:35:47.423725 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:47 crc kubenswrapper[4721]: I0128 18:35:47.423754 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:47 crc kubenswrapper[4721]: I0128 18:35:47.423762 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:47 crc kubenswrapper[4721]: I0128 18:35:47.423774 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:47 crc kubenswrapper[4721]: I0128 18:35:47.423783 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:47Z","lastTransitionTime":"2026-01-28T18:35:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:35:47 crc kubenswrapper[4721]: E0128 18:35:47.434472 4721 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:35:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:35:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:35:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:35:47Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:35:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:35:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:35:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:35:47Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7e2b6ea9-0dbd-4f62-9b1e-c7fac2eb6b3f\\\",\\\"systemUUID\\\":\\\"09e691cb-0cac-419d-a3e2-104cada8c62f\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:35:47Z is after 2025-08-24T17:21:41Z" Jan 28 18:35:47 crc kubenswrapper[4721]: I0128 18:35:47.437935 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:47 crc kubenswrapper[4721]: I0128 18:35:47.437965 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 28 18:35:47 crc kubenswrapper[4721]: I0128 18:35:47.437973 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:47 crc kubenswrapper[4721]: I0128 18:35:47.437985 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:47 crc kubenswrapper[4721]: I0128 18:35:47.437996 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:47Z","lastTransitionTime":"2026-01-28T18:35:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:35:47 crc kubenswrapper[4721]: E0128 18:35:47.448036 4721 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:35:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:35:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:35:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:35:47Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:35:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:35:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:35:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:35:47Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7e2b6ea9-0dbd-4f62-9b1e-c7fac2eb6b3f\\\",\\\"systemUUID\\\":\\\"09e691cb-0cac-419d-a3e2-104cada8c62f\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:35:47Z is after 2025-08-24T17:21:41Z" Jan 28 18:35:47 crc kubenswrapper[4721]: I0128 18:35:47.451820 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:47 crc kubenswrapper[4721]: I0128 18:35:47.451872 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 28 18:35:47 crc kubenswrapper[4721]: I0128 18:35:47.451911 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:47 crc kubenswrapper[4721]: I0128 18:35:47.451935 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:47 crc kubenswrapper[4721]: I0128 18:35:47.451947 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:47Z","lastTransitionTime":"2026-01-28T18:35:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:35:47 crc kubenswrapper[4721]: E0128 18:35:47.465916 4721 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:35:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:35:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:35:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:35:47Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:35:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:35:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:35:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:35:47Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7e2b6ea9-0dbd-4f62-9b1e-c7fac2eb6b3f\\\",\\\"systemUUID\\\":\\\"09e691cb-0cac-419d-a3e2-104cada8c62f\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:35:47Z is after 2025-08-24T17:21:41Z" Jan 28 18:35:47 crc kubenswrapper[4721]: I0128 18:35:47.469067 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:47 crc kubenswrapper[4721]: I0128 18:35:47.469105 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 28 18:35:47 crc kubenswrapper[4721]: I0128 18:35:47.469116 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:47 crc kubenswrapper[4721]: I0128 18:35:47.469132 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:47 crc kubenswrapper[4721]: I0128 18:35:47.469143 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:47Z","lastTransitionTime":"2026-01-28T18:35:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:35:47 crc kubenswrapper[4721]: E0128 18:35:47.481546 4721 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:35:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:35:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:35:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:35:47Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:35:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:35:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:35:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:35:47Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7e2b6ea9-0dbd-4f62-9b1e-c7fac2eb6b3f\\\",\\\"systemUUID\\\":\\\"09e691cb-0cac-419d-a3e2-104cada8c62f\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:35:47Z is after 2025-08-24T17:21:41Z" Jan 28 18:35:47 crc kubenswrapper[4721]: I0128 18:35:47.486793 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:47 crc kubenswrapper[4721]: I0128 18:35:47.486848 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 28 18:35:47 crc kubenswrapper[4721]: I0128 18:35:47.486864 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:47 crc kubenswrapper[4721]: I0128 18:35:47.486884 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:47 crc kubenswrapper[4721]: I0128 18:35:47.486896 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:47Z","lastTransitionTime":"2026-01-28T18:35:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:35:47 crc kubenswrapper[4721]: E0128 18:35:47.500522 4721 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:35:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:35:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:35:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:35:47Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:35:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:35:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:35:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:35:47Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7e2b6ea9-0dbd-4f62-9b1e-c7fac2eb6b3f\\\",\\\"systemUUID\\\":\\\"09e691cb-0cac-419d-a3e2-104cada8c62f\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:35:47Z is after 2025-08-24T17:21:41Z" Jan 28 18:35:47 crc kubenswrapper[4721]: E0128 18:35:47.500685 4721 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 28 18:35:47 crc kubenswrapper[4721]: I0128 18:35:47.502518 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 28 18:35:47 crc kubenswrapper[4721]: I0128 18:35:47.502559 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:47 crc kubenswrapper[4721]: I0128 18:35:47.502568 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:47 crc kubenswrapper[4721]: I0128 18:35:47.502587 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:47 crc kubenswrapper[4721]: I0128 18:35:47.502596 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:47Z","lastTransitionTime":"2026-01-28T18:35:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:35:47 crc kubenswrapper[4721]: I0128 18:35:47.538485 4721 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-03 08:49:11.648916523 +0000 UTC Jan 28 18:35:47 crc kubenswrapper[4721]: I0128 18:35:47.604852 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:47 crc kubenswrapper[4721]: I0128 18:35:47.604893 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:47 crc kubenswrapper[4721]: I0128 18:35:47.604901 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:47 crc kubenswrapper[4721]: I0128 18:35:47.604919 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:47 crc kubenswrapper[4721]: I0128 18:35:47.604929 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:47Z","lastTransitionTime":"2026-01-28T18:35:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:35:47 crc kubenswrapper[4721]: I0128 18:35:47.707191 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:47 crc kubenswrapper[4721]: I0128 18:35:47.707230 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:47 crc kubenswrapper[4721]: I0128 18:35:47.707240 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:47 crc kubenswrapper[4721]: I0128 18:35:47.707254 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:47 crc kubenswrapper[4721]: I0128 18:35:47.707265 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:47Z","lastTransitionTime":"2026-01-28T18:35:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:35:47 crc kubenswrapper[4721]: I0128 18:35:47.810287 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:47 crc kubenswrapper[4721]: I0128 18:35:47.810355 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:47 crc kubenswrapper[4721]: I0128 18:35:47.810367 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:47 crc kubenswrapper[4721]: I0128 18:35:47.810384 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:47 crc kubenswrapper[4721]: I0128 18:35:47.810414 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:47Z","lastTransitionTime":"2026-01-28T18:35:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:35:47 crc kubenswrapper[4721]: I0128 18:35:47.913182 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:47 crc kubenswrapper[4721]: I0128 18:35:47.913224 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:47 crc kubenswrapper[4721]: I0128 18:35:47.913234 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:47 crc kubenswrapper[4721]: I0128 18:35:47.913248 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:47 crc kubenswrapper[4721]: I0128 18:35:47.913259 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:47Z","lastTransitionTime":"2026-01-28T18:35:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:35:48 crc kubenswrapper[4721]: I0128 18:35:48.018497 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:48 crc kubenswrapper[4721]: I0128 18:35:48.018533 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:48 crc kubenswrapper[4721]: I0128 18:35:48.018543 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:48 crc kubenswrapper[4721]: I0128 18:35:48.018561 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:48 crc kubenswrapper[4721]: I0128 18:35:48.018572 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:48Z","lastTransitionTime":"2026-01-28T18:35:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:35:48 crc kubenswrapper[4721]: I0128 18:35:48.120997 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:48 crc kubenswrapper[4721]: I0128 18:35:48.121035 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:48 crc kubenswrapper[4721]: I0128 18:35:48.121045 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:48 crc kubenswrapper[4721]: I0128 18:35:48.121062 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:48 crc kubenswrapper[4721]: I0128 18:35:48.121072 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:48Z","lastTransitionTime":"2026-01-28T18:35:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:35:48 crc kubenswrapper[4721]: I0128 18:35:48.223825 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:48 crc kubenswrapper[4721]: I0128 18:35:48.223871 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:48 crc kubenswrapper[4721]: I0128 18:35:48.223882 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:48 crc kubenswrapper[4721]: I0128 18:35:48.223900 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:48 crc kubenswrapper[4721]: I0128 18:35:48.223912 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:48Z","lastTransitionTime":"2026-01-28T18:35:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:35:48 crc kubenswrapper[4721]: I0128 18:35:48.326039 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:48 crc kubenswrapper[4721]: I0128 18:35:48.326073 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:48 crc kubenswrapper[4721]: I0128 18:35:48.326081 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:48 crc kubenswrapper[4721]: I0128 18:35:48.326094 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:48 crc kubenswrapper[4721]: I0128 18:35:48.326102 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:48Z","lastTransitionTime":"2026-01-28T18:35:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:35:48 crc kubenswrapper[4721]: I0128 18:35:48.428591 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:48 crc kubenswrapper[4721]: I0128 18:35:48.428631 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:48 crc kubenswrapper[4721]: I0128 18:35:48.428643 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:48 crc kubenswrapper[4721]: I0128 18:35:48.428660 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:48 crc kubenswrapper[4721]: I0128 18:35:48.428673 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:48Z","lastTransitionTime":"2026-01-28T18:35:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:35:48 crc kubenswrapper[4721]: I0128 18:35:48.495758 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f3440038-c980-4fb4-be99-235515ec221c-metrics-certs\") pod \"network-metrics-daemon-jqvck\" (UID: \"f3440038-c980-4fb4-be99-235515ec221c\") " pod="openshift-multus/network-metrics-daemon-jqvck" Jan 28 18:35:48 crc kubenswrapper[4721]: E0128 18:35:48.495891 4721 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 28 18:35:48 crc kubenswrapper[4721]: E0128 18:35:48.495941 4721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f3440038-c980-4fb4-be99-235515ec221c-metrics-certs podName:f3440038-c980-4fb4-be99-235515ec221c nodeName:}" failed. No retries permitted until 2026-01-28 18:36:52.495926532 +0000 UTC m=+178.221232092 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/f3440038-c980-4fb4-be99-235515ec221c-metrics-certs") pod "network-metrics-daemon-jqvck" (UID: "f3440038-c980-4fb4-be99-235515ec221c") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 28 18:35:48 crc kubenswrapper[4721]: I0128 18:35:48.527812 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 18:35:48 crc kubenswrapper[4721]: I0128 18:35:48.527905 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jqvck" Jan 28 18:35:48 crc kubenswrapper[4721]: I0128 18:35:48.527956 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 18:35:48 crc kubenswrapper[4721]: E0128 18:35:48.528100 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 18:35:48 crc kubenswrapper[4721]: I0128 18:35:48.528141 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 18:35:48 crc kubenswrapper[4721]: E0128 18:35:48.528265 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jqvck" podUID="f3440038-c980-4fb4-be99-235515ec221c" Jan 28 18:35:48 crc kubenswrapper[4721]: E0128 18:35:48.528370 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 18:35:48 crc kubenswrapper[4721]: E0128 18:35:48.528647 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 18:35:48 crc kubenswrapper[4721]: I0128 18:35:48.530303 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:48 crc kubenswrapper[4721]: I0128 18:35:48.530364 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:48 crc kubenswrapper[4721]: I0128 18:35:48.530382 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:48 crc kubenswrapper[4721]: I0128 18:35:48.530402 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:48 crc kubenswrapper[4721]: I0128 18:35:48.530419 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:48Z","lastTransitionTime":"2026-01-28T18:35:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:35:48 crc kubenswrapper[4721]: I0128 18:35:48.539460 4721 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-12 03:07:34.333947068 +0000 UTC Jan 28 18:35:48 crc kubenswrapper[4721]: I0128 18:35:48.632780 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:48 crc kubenswrapper[4721]: I0128 18:35:48.632818 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:48 crc kubenswrapper[4721]: I0128 18:35:48.632826 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:48 crc kubenswrapper[4721]: I0128 18:35:48.632840 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:48 crc kubenswrapper[4721]: I0128 18:35:48.632848 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:48Z","lastTransitionTime":"2026-01-28T18:35:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:35:48 crc kubenswrapper[4721]: I0128 18:35:48.736296 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:48 crc kubenswrapper[4721]: I0128 18:35:48.736354 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:48 crc kubenswrapper[4721]: I0128 18:35:48.736378 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:48 crc kubenswrapper[4721]: I0128 18:35:48.736407 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:48 crc kubenswrapper[4721]: I0128 18:35:48.736427 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:48Z","lastTransitionTime":"2026-01-28T18:35:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:35:48 crc kubenswrapper[4721]: I0128 18:35:48.839163 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:48 crc kubenswrapper[4721]: I0128 18:35:48.839233 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:48 crc kubenswrapper[4721]: I0128 18:35:48.839243 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:48 crc kubenswrapper[4721]: I0128 18:35:48.839256 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:48 crc kubenswrapper[4721]: I0128 18:35:48.839273 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:48Z","lastTransitionTime":"2026-01-28T18:35:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:35:48 crc kubenswrapper[4721]: I0128 18:35:48.941947 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:48 crc kubenswrapper[4721]: I0128 18:35:48.942002 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:48 crc kubenswrapper[4721]: I0128 18:35:48.942013 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:48 crc kubenswrapper[4721]: I0128 18:35:48.942031 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:48 crc kubenswrapper[4721]: I0128 18:35:48.942043 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:48Z","lastTransitionTime":"2026-01-28T18:35:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:35:49 crc kubenswrapper[4721]: I0128 18:35:49.043918 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:49 crc kubenswrapper[4721]: I0128 18:35:49.043950 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:49 crc kubenswrapper[4721]: I0128 18:35:49.043959 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:49 crc kubenswrapper[4721]: I0128 18:35:49.043974 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:49 crc kubenswrapper[4721]: I0128 18:35:49.043984 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:49Z","lastTransitionTime":"2026-01-28T18:35:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:35:49 crc kubenswrapper[4721]: I0128 18:35:49.146696 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:49 crc kubenswrapper[4721]: I0128 18:35:49.146744 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:49 crc kubenswrapper[4721]: I0128 18:35:49.146762 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:49 crc kubenswrapper[4721]: I0128 18:35:49.146776 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:49 crc kubenswrapper[4721]: I0128 18:35:49.146785 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:49Z","lastTransitionTime":"2026-01-28T18:35:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:35:49 crc kubenswrapper[4721]: I0128 18:35:49.248719 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:49 crc kubenswrapper[4721]: I0128 18:35:49.248775 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:49 crc kubenswrapper[4721]: I0128 18:35:49.248794 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:49 crc kubenswrapper[4721]: I0128 18:35:49.248819 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:49 crc kubenswrapper[4721]: I0128 18:35:49.248836 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:49Z","lastTransitionTime":"2026-01-28T18:35:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:35:49 crc kubenswrapper[4721]: I0128 18:35:49.350862 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:49 crc kubenswrapper[4721]: I0128 18:35:49.350892 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:49 crc kubenswrapper[4721]: I0128 18:35:49.350909 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:49 crc kubenswrapper[4721]: I0128 18:35:49.350922 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:49 crc kubenswrapper[4721]: I0128 18:35:49.350931 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:49Z","lastTransitionTime":"2026-01-28T18:35:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:35:49 crc kubenswrapper[4721]: I0128 18:35:49.452553 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:49 crc kubenswrapper[4721]: I0128 18:35:49.452597 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:49 crc kubenswrapper[4721]: I0128 18:35:49.452609 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:49 crc kubenswrapper[4721]: I0128 18:35:49.452625 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:49 crc kubenswrapper[4721]: I0128 18:35:49.452636 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:49Z","lastTransitionTime":"2026-01-28T18:35:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:35:49 crc kubenswrapper[4721]: I0128 18:35:49.529410 4721 scope.go:117] "RemoveContainer" containerID="693b094ac66f7858f7020708df804e9e12fa5a8c510841171e65ac69cb6a0e11" Jan 28 18:35:49 crc kubenswrapper[4721]: E0128 18:35:49.529608 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-wr282_openshift-ovn-kubernetes(70686e42-b434-4ff9-9753-cfc870beef82)\"" pod="openshift-ovn-kubernetes/ovnkube-node-wr282" podUID="70686e42-b434-4ff9-9753-cfc870beef82" Jan 28 18:35:49 crc kubenswrapper[4721]: I0128 18:35:49.539680 4721 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-15 20:54:07.7771095 +0000 UTC Jan 28 18:35:49 crc kubenswrapper[4721]: I0128 18:35:49.554899 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:49 crc kubenswrapper[4721]: I0128 18:35:49.554959 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:49 crc kubenswrapper[4721]: I0128 18:35:49.554972 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:49 crc kubenswrapper[4721]: I0128 18:35:49.554989 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:49 crc kubenswrapper[4721]: I0128 18:35:49.555000 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:49Z","lastTransitionTime":"2026-01-28T18:35:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:35:49 crc kubenswrapper[4721]: I0128 18:35:49.656959 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:49 crc kubenswrapper[4721]: I0128 18:35:49.657002 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:49 crc kubenswrapper[4721]: I0128 18:35:49.657014 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:49 crc kubenswrapper[4721]: I0128 18:35:49.657038 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:49 crc kubenswrapper[4721]: I0128 18:35:49.657052 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:49Z","lastTransitionTime":"2026-01-28T18:35:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:35:49 crc kubenswrapper[4721]: I0128 18:35:49.759665 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:49 crc kubenswrapper[4721]: I0128 18:35:49.759708 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:49 crc kubenswrapper[4721]: I0128 18:35:49.759717 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:49 crc kubenswrapper[4721]: I0128 18:35:49.759739 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:49 crc kubenswrapper[4721]: I0128 18:35:49.759748 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:49Z","lastTransitionTime":"2026-01-28T18:35:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:35:49 crc kubenswrapper[4721]: I0128 18:35:49.862402 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:49 crc kubenswrapper[4721]: I0128 18:35:49.862438 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:49 crc kubenswrapper[4721]: I0128 18:35:49.862451 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:49 crc kubenswrapper[4721]: I0128 18:35:49.862468 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:49 crc kubenswrapper[4721]: I0128 18:35:49.862481 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:49Z","lastTransitionTime":"2026-01-28T18:35:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:35:49 crc kubenswrapper[4721]: I0128 18:35:49.965825 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:49 crc kubenswrapper[4721]: I0128 18:35:49.965872 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:49 crc kubenswrapper[4721]: I0128 18:35:49.965883 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:49 crc kubenswrapper[4721]: I0128 18:35:49.965902 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:49 crc kubenswrapper[4721]: I0128 18:35:49.965914 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:49Z","lastTransitionTime":"2026-01-28T18:35:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:35:50 crc kubenswrapper[4721]: I0128 18:35:50.068698 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:50 crc kubenswrapper[4721]: I0128 18:35:50.068738 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:50 crc kubenswrapper[4721]: I0128 18:35:50.068747 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:50 crc kubenswrapper[4721]: I0128 18:35:50.068762 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:50 crc kubenswrapper[4721]: I0128 18:35:50.068770 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:50Z","lastTransitionTime":"2026-01-28T18:35:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:35:50 crc kubenswrapper[4721]: I0128 18:35:50.170722 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:50 crc kubenswrapper[4721]: I0128 18:35:50.170754 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:50 crc kubenswrapper[4721]: I0128 18:35:50.170778 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:50 crc kubenswrapper[4721]: I0128 18:35:50.170791 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:50 crc kubenswrapper[4721]: I0128 18:35:50.170800 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:50Z","lastTransitionTime":"2026-01-28T18:35:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:35:50 crc kubenswrapper[4721]: I0128 18:35:50.273744 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:50 crc kubenswrapper[4721]: I0128 18:35:50.273798 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:50 crc kubenswrapper[4721]: I0128 18:35:50.273809 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:50 crc kubenswrapper[4721]: I0128 18:35:50.273829 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:50 crc kubenswrapper[4721]: I0128 18:35:50.273841 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:50Z","lastTransitionTime":"2026-01-28T18:35:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:35:50 crc kubenswrapper[4721]: I0128 18:35:50.379583 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:50 crc kubenswrapper[4721]: I0128 18:35:50.379926 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:50 crc kubenswrapper[4721]: I0128 18:35:50.379948 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:50 crc kubenswrapper[4721]: I0128 18:35:50.379969 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:50 crc kubenswrapper[4721]: I0128 18:35:50.379988 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:50Z","lastTransitionTime":"2026-01-28T18:35:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:35:50 crc kubenswrapper[4721]: I0128 18:35:50.482338 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:50 crc kubenswrapper[4721]: I0128 18:35:50.482382 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:50 crc kubenswrapper[4721]: I0128 18:35:50.482394 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:50 crc kubenswrapper[4721]: I0128 18:35:50.482413 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:50 crc kubenswrapper[4721]: I0128 18:35:50.482426 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:50Z","lastTransitionTime":"2026-01-28T18:35:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:35:50 crc kubenswrapper[4721]: I0128 18:35:50.527752 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jqvck" Jan 28 18:35:50 crc kubenswrapper[4721]: I0128 18:35:50.527781 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 18:35:50 crc kubenswrapper[4721]: I0128 18:35:50.527811 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 18:35:50 crc kubenswrapper[4721]: I0128 18:35:50.527772 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 18:35:50 crc kubenswrapper[4721]: E0128 18:35:50.527906 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jqvck" podUID="f3440038-c980-4fb4-be99-235515ec221c" Jan 28 18:35:50 crc kubenswrapper[4721]: E0128 18:35:50.527979 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 18:35:50 crc kubenswrapper[4721]: E0128 18:35:50.528053 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 18:35:50 crc kubenswrapper[4721]: E0128 18:35:50.528106 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 18:35:50 crc kubenswrapper[4721]: I0128 18:35:50.539932 4721 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-05 11:54:45.80202427 +0000 UTC Jan 28 18:35:50 crc kubenswrapper[4721]: I0128 18:35:50.584644 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:50 crc kubenswrapper[4721]: I0128 18:35:50.584685 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:50 crc kubenswrapper[4721]: I0128 18:35:50.584695 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:50 crc kubenswrapper[4721]: I0128 18:35:50.584709 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:50 crc kubenswrapper[4721]: I0128 18:35:50.584718 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:50Z","lastTransitionTime":"2026-01-28T18:35:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:35:50 crc kubenswrapper[4721]: I0128 18:35:50.686932 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:50 crc kubenswrapper[4721]: I0128 18:35:50.686967 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:50 crc kubenswrapper[4721]: I0128 18:35:50.686977 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:50 crc kubenswrapper[4721]: I0128 18:35:50.686991 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:50 crc kubenswrapper[4721]: I0128 18:35:50.687001 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:50Z","lastTransitionTime":"2026-01-28T18:35:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:35:50 crc kubenswrapper[4721]: I0128 18:35:50.789888 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:50 crc kubenswrapper[4721]: I0128 18:35:50.789919 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:50 crc kubenswrapper[4721]: I0128 18:35:50.789931 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:50 crc kubenswrapper[4721]: I0128 18:35:50.789948 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:50 crc kubenswrapper[4721]: I0128 18:35:50.789959 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:50Z","lastTransitionTime":"2026-01-28T18:35:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:35:50 crc kubenswrapper[4721]: I0128 18:35:50.893398 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:50 crc kubenswrapper[4721]: I0128 18:35:50.893486 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:50 crc kubenswrapper[4721]: I0128 18:35:50.893498 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:50 crc kubenswrapper[4721]: I0128 18:35:50.893517 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:50 crc kubenswrapper[4721]: I0128 18:35:50.893531 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:50Z","lastTransitionTime":"2026-01-28T18:35:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:35:50 crc kubenswrapper[4721]: I0128 18:35:50.996804 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:50 crc kubenswrapper[4721]: I0128 18:35:50.996844 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:50 crc kubenswrapper[4721]: I0128 18:35:50.996857 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:50 crc kubenswrapper[4721]: I0128 18:35:50.996874 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:50 crc kubenswrapper[4721]: I0128 18:35:50.996885 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:50Z","lastTransitionTime":"2026-01-28T18:35:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:35:51 crc kubenswrapper[4721]: I0128 18:35:51.098603 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:51 crc kubenswrapper[4721]: I0128 18:35:51.098651 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:51 crc kubenswrapper[4721]: I0128 18:35:51.098665 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:51 crc kubenswrapper[4721]: I0128 18:35:51.098683 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:51 crc kubenswrapper[4721]: I0128 18:35:51.098695 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:51Z","lastTransitionTime":"2026-01-28T18:35:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:35:51 crc kubenswrapper[4721]: I0128 18:35:51.201646 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:51 crc kubenswrapper[4721]: I0128 18:35:51.201690 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:51 crc kubenswrapper[4721]: I0128 18:35:51.201704 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:51 crc kubenswrapper[4721]: I0128 18:35:51.201725 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:51 crc kubenswrapper[4721]: I0128 18:35:51.201751 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:51Z","lastTransitionTime":"2026-01-28T18:35:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:35:51 crc kubenswrapper[4721]: I0128 18:35:51.304354 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:51 crc kubenswrapper[4721]: I0128 18:35:51.304386 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:51 crc kubenswrapper[4721]: I0128 18:35:51.304394 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:51 crc kubenswrapper[4721]: I0128 18:35:51.304407 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:51 crc kubenswrapper[4721]: I0128 18:35:51.304416 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:51Z","lastTransitionTime":"2026-01-28T18:35:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:35:51 crc kubenswrapper[4721]: I0128 18:35:51.406405 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:51 crc kubenswrapper[4721]: I0128 18:35:51.406443 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:51 crc kubenswrapper[4721]: I0128 18:35:51.406454 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:51 crc kubenswrapper[4721]: I0128 18:35:51.406480 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:51 crc kubenswrapper[4721]: I0128 18:35:51.406490 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:51Z","lastTransitionTime":"2026-01-28T18:35:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:35:51 crc kubenswrapper[4721]: I0128 18:35:51.508461 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:51 crc kubenswrapper[4721]: I0128 18:35:51.508496 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:51 crc kubenswrapper[4721]: I0128 18:35:51.508504 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:51 crc kubenswrapper[4721]: I0128 18:35:51.508518 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:51 crc kubenswrapper[4721]: I0128 18:35:51.508526 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:51Z","lastTransitionTime":"2026-01-28T18:35:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:35:51 crc kubenswrapper[4721]: I0128 18:35:51.540129 4721 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-03 16:03:34.052836697 +0000 UTC Jan 28 18:35:51 crc kubenswrapper[4721]: I0128 18:35:51.611221 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:51 crc kubenswrapper[4721]: I0128 18:35:51.611276 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:51 crc kubenswrapper[4721]: I0128 18:35:51.611288 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:51 crc kubenswrapper[4721]: I0128 18:35:51.611306 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:51 crc kubenswrapper[4721]: I0128 18:35:51.611318 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:51Z","lastTransitionTime":"2026-01-28T18:35:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:35:51 crc kubenswrapper[4721]: I0128 18:35:51.713359 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:51 crc kubenswrapper[4721]: I0128 18:35:51.713393 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:51 crc kubenswrapper[4721]: I0128 18:35:51.713405 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:51 crc kubenswrapper[4721]: I0128 18:35:51.713420 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:51 crc kubenswrapper[4721]: I0128 18:35:51.713430 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:51Z","lastTransitionTime":"2026-01-28T18:35:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:35:51 crc kubenswrapper[4721]: I0128 18:35:51.815455 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:51 crc kubenswrapper[4721]: I0128 18:35:51.815504 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:51 crc kubenswrapper[4721]: I0128 18:35:51.815521 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:51 crc kubenswrapper[4721]: I0128 18:35:51.815542 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:51 crc kubenswrapper[4721]: I0128 18:35:51.815557 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:51Z","lastTransitionTime":"2026-01-28T18:35:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:35:51 crc kubenswrapper[4721]: I0128 18:35:51.917554 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:51 crc kubenswrapper[4721]: I0128 18:35:51.917601 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:51 crc kubenswrapper[4721]: I0128 18:35:51.917611 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:51 crc kubenswrapper[4721]: I0128 18:35:51.917625 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:51 crc kubenswrapper[4721]: I0128 18:35:51.917633 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:51Z","lastTransitionTime":"2026-01-28T18:35:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:35:52 crc kubenswrapper[4721]: I0128 18:35:52.019866 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:52 crc kubenswrapper[4721]: I0128 18:35:52.019914 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:52 crc kubenswrapper[4721]: I0128 18:35:52.019925 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:52 crc kubenswrapper[4721]: I0128 18:35:52.019941 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:52 crc kubenswrapper[4721]: I0128 18:35:52.019951 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:52Z","lastTransitionTime":"2026-01-28T18:35:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:35:52 crc kubenswrapper[4721]: I0128 18:35:52.122012 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:52 crc kubenswrapper[4721]: I0128 18:35:52.122051 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:52 crc kubenswrapper[4721]: I0128 18:35:52.122060 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:52 crc kubenswrapper[4721]: I0128 18:35:52.122075 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:52 crc kubenswrapper[4721]: I0128 18:35:52.122085 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:52Z","lastTransitionTime":"2026-01-28T18:35:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:35:52 crc kubenswrapper[4721]: I0128 18:35:52.224412 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:52 crc kubenswrapper[4721]: I0128 18:35:52.224454 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:52 crc kubenswrapper[4721]: I0128 18:35:52.224465 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:52 crc kubenswrapper[4721]: I0128 18:35:52.224482 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:52 crc kubenswrapper[4721]: I0128 18:35:52.224492 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:52Z","lastTransitionTime":"2026-01-28T18:35:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:35:52 crc kubenswrapper[4721]: I0128 18:35:52.327539 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:52 crc kubenswrapper[4721]: I0128 18:35:52.327610 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:52 crc kubenswrapper[4721]: I0128 18:35:52.327630 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:52 crc kubenswrapper[4721]: I0128 18:35:52.327667 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:52 crc kubenswrapper[4721]: I0128 18:35:52.327686 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:52Z","lastTransitionTime":"2026-01-28T18:35:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:35:52 crc kubenswrapper[4721]: I0128 18:35:52.431237 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:52 crc kubenswrapper[4721]: I0128 18:35:52.431279 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:52 crc kubenswrapper[4721]: I0128 18:35:52.431288 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:52 crc kubenswrapper[4721]: I0128 18:35:52.431306 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:52 crc kubenswrapper[4721]: I0128 18:35:52.431316 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:52Z","lastTransitionTime":"2026-01-28T18:35:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:35:52 crc kubenswrapper[4721]: I0128 18:35:52.528698 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 18:35:52 crc kubenswrapper[4721]: I0128 18:35:52.528788 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 18:35:52 crc kubenswrapper[4721]: I0128 18:35:52.528698 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 18:35:52 crc kubenswrapper[4721]: E0128 18:35:52.528889 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 18:35:52 crc kubenswrapper[4721]: I0128 18:35:52.528728 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jqvck" Jan 28 18:35:52 crc kubenswrapper[4721]: E0128 18:35:52.529027 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 18:35:52 crc kubenswrapper[4721]: E0128 18:35:52.529304 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 18:35:52 crc kubenswrapper[4721]: E0128 18:35:52.529319 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jqvck" podUID="f3440038-c980-4fb4-be99-235515ec221c" Jan 28 18:35:52 crc kubenswrapper[4721]: I0128 18:35:52.533301 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:52 crc kubenswrapper[4721]: I0128 18:35:52.533350 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:52 crc kubenswrapper[4721]: I0128 18:35:52.533366 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:52 crc kubenswrapper[4721]: I0128 18:35:52.533384 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:52 crc kubenswrapper[4721]: I0128 18:35:52.533396 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:52Z","lastTransitionTime":"2026-01-28T18:35:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:35:52 crc kubenswrapper[4721]: I0128 18:35:52.540751 4721 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-18 00:10:07.64537972 +0000 UTC Jan 28 18:35:52 crc kubenswrapper[4721]: I0128 18:35:52.635146 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:52 crc kubenswrapper[4721]: I0128 18:35:52.635201 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:52 crc kubenswrapper[4721]: I0128 18:35:52.635211 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:52 crc kubenswrapper[4721]: I0128 18:35:52.635225 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:52 crc kubenswrapper[4721]: I0128 18:35:52.635234 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:52Z","lastTransitionTime":"2026-01-28T18:35:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:35:52 crc kubenswrapper[4721]: I0128 18:35:52.738591 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:52 crc kubenswrapper[4721]: I0128 18:35:52.738622 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:52 crc kubenswrapper[4721]: I0128 18:35:52.738631 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:52 crc kubenswrapper[4721]: I0128 18:35:52.738644 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:52 crc kubenswrapper[4721]: I0128 18:35:52.738654 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:52Z","lastTransitionTime":"2026-01-28T18:35:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:35:52 crc kubenswrapper[4721]: I0128 18:35:52.841320 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:52 crc kubenswrapper[4721]: I0128 18:35:52.841371 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:52 crc kubenswrapper[4721]: I0128 18:35:52.841381 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:52 crc kubenswrapper[4721]: I0128 18:35:52.841397 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:52 crc kubenswrapper[4721]: I0128 18:35:52.841409 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:52Z","lastTransitionTime":"2026-01-28T18:35:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:35:52 crc kubenswrapper[4721]: I0128 18:35:52.944311 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:52 crc kubenswrapper[4721]: I0128 18:35:52.944364 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:52 crc kubenswrapper[4721]: I0128 18:35:52.944383 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:52 crc kubenswrapper[4721]: I0128 18:35:52.944405 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:52 crc kubenswrapper[4721]: I0128 18:35:52.944418 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:52Z","lastTransitionTime":"2026-01-28T18:35:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:35:53 crc kubenswrapper[4721]: I0128 18:35:53.052735 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:53 crc kubenswrapper[4721]: I0128 18:35:53.052810 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:53 crc kubenswrapper[4721]: I0128 18:35:53.052823 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:53 crc kubenswrapper[4721]: I0128 18:35:53.052849 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:53 crc kubenswrapper[4721]: I0128 18:35:53.052861 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:53Z","lastTransitionTime":"2026-01-28T18:35:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:35:53 crc kubenswrapper[4721]: I0128 18:35:53.155841 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:53 crc kubenswrapper[4721]: I0128 18:35:53.155883 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:53 crc kubenswrapper[4721]: I0128 18:35:53.155892 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:53 crc kubenswrapper[4721]: I0128 18:35:53.155906 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:53 crc kubenswrapper[4721]: I0128 18:35:53.155915 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:53Z","lastTransitionTime":"2026-01-28T18:35:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:35:53 crc kubenswrapper[4721]: I0128 18:35:53.258671 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:53 crc kubenswrapper[4721]: I0128 18:35:53.258708 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:53 crc kubenswrapper[4721]: I0128 18:35:53.258717 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:53 crc kubenswrapper[4721]: I0128 18:35:53.258731 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:53 crc kubenswrapper[4721]: I0128 18:35:53.258744 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:53Z","lastTransitionTime":"2026-01-28T18:35:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:35:53 crc kubenswrapper[4721]: I0128 18:35:53.361083 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:53 crc kubenswrapper[4721]: I0128 18:35:53.361134 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:53 crc kubenswrapper[4721]: I0128 18:35:53.361150 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:53 crc kubenswrapper[4721]: I0128 18:35:53.361195 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:53 crc kubenswrapper[4721]: I0128 18:35:53.361215 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:53Z","lastTransitionTime":"2026-01-28T18:35:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:35:53 crc kubenswrapper[4721]: I0128 18:35:53.463359 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:53 crc kubenswrapper[4721]: I0128 18:35:53.463413 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:53 crc kubenswrapper[4721]: I0128 18:35:53.463426 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:53 crc kubenswrapper[4721]: I0128 18:35:53.463444 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:53 crc kubenswrapper[4721]: I0128 18:35:53.463457 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:53Z","lastTransitionTime":"2026-01-28T18:35:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:35:53 crc kubenswrapper[4721]: I0128 18:35:53.541902 4721 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-07 20:23:43.089867267 +0000 UTC Jan 28 18:35:53 crc kubenswrapper[4721]: I0128 18:35:53.565800 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:53 crc kubenswrapper[4721]: I0128 18:35:53.565835 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:53 crc kubenswrapper[4721]: I0128 18:35:53.565844 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:53 crc kubenswrapper[4721]: I0128 18:35:53.565858 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:53 crc kubenswrapper[4721]: I0128 18:35:53.565867 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:53Z","lastTransitionTime":"2026-01-28T18:35:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:35:53 crc kubenswrapper[4721]: I0128 18:35:53.667975 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:53 crc kubenswrapper[4721]: I0128 18:35:53.668012 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:53 crc kubenswrapper[4721]: I0128 18:35:53.668022 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:53 crc kubenswrapper[4721]: I0128 18:35:53.668039 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:53 crc kubenswrapper[4721]: I0128 18:35:53.668048 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:53Z","lastTransitionTime":"2026-01-28T18:35:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:35:53 crc kubenswrapper[4721]: I0128 18:35:53.770288 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:53 crc kubenswrapper[4721]: I0128 18:35:53.770323 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:53 crc kubenswrapper[4721]: I0128 18:35:53.770333 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:53 crc kubenswrapper[4721]: I0128 18:35:53.770346 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:53 crc kubenswrapper[4721]: I0128 18:35:53.770358 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:53Z","lastTransitionTime":"2026-01-28T18:35:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:35:53 crc kubenswrapper[4721]: I0128 18:35:53.872860 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:53 crc kubenswrapper[4721]: I0128 18:35:53.872923 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:53 crc kubenswrapper[4721]: I0128 18:35:53.872932 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:53 crc kubenswrapper[4721]: I0128 18:35:53.872945 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:53 crc kubenswrapper[4721]: I0128 18:35:53.872954 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:53Z","lastTransitionTime":"2026-01-28T18:35:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:35:53 crc kubenswrapper[4721]: I0128 18:35:53.975701 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:53 crc kubenswrapper[4721]: I0128 18:35:53.975798 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:53 crc kubenswrapper[4721]: I0128 18:35:53.975824 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:53 crc kubenswrapper[4721]: I0128 18:35:53.975872 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:53 crc kubenswrapper[4721]: I0128 18:35:53.975899 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:53Z","lastTransitionTime":"2026-01-28T18:35:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:35:54 crc kubenswrapper[4721]: I0128 18:35:54.077950 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:54 crc kubenswrapper[4721]: I0128 18:35:54.078006 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:54 crc kubenswrapper[4721]: I0128 18:35:54.078019 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:54 crc kubenswrapper[4721]: I0128 18:35:54.078037 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:54 crc kubenswrapper[4721]: I0128 18:35:54.078054 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:54Z","lastTransitionTime":"2026-01-28T18:35:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:35:54 crc kubenswrapper[4721]: I0128 18:35:54.180389 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:54 crc kubenswrapper[4721]: I0128 18:35:54.180459 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:54 crc kubenswrapper[4721]: I0128 18:35:54.180475 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:54 crc kubenswrapper[4721]: I0128 18:35:54.180501 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:54 crc kubenswrapper[4721]: I0128 18:35:54.180523 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:54Z","lastTransitionTime":"2026-01-28T18:35:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:35:54 crc kubenswrapper[4721]: I0128 18:35:54.284207 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:54 crc kubenswrapper[4721]: I0128 18:35:54.284286 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:54 crc kubenswrapper[4721]: I0128 18:35:54.284303 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:54 crc kubenswrapper[4721]: I0128 18:35:54.284327 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:54 crc kubenswrapper[4721]: I0128 18:35:54.284345 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:54Z","lastTransitionTime":"2026-01-28T18:35:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
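Has your network provider started?"}

The kubelet keeps republishing this Ready=False condition on every status sync because the container runtime reports NetworkReady=false until a CNI configuration file shows up in /etc/kubernetes/cni/net.d/. A minimal Go sketch of that directory check follows; the extensions (.conf, .conflist, .json) are the conventional ones libcni discovers, assumed here rather than taken from the kubelet's actual readiness code:

    // cnicheck.go: report whether a CNI network config is present.
    package main

    import (
        "fmt"
        "os"
        "path/filepath"
    )

    func main() {
        dir := "/etc/kubernetes/cni/net.d" // directory named in the log above
        entries, err := os.ReadDir(dir)
        if err != nil {
            fmt.Printf("cannot read %s: %v\n", dir, err)
            os.Exit(1)
        }
        found := false
        for _, e := range entries {
            switch filepath.Ext(e.Name()) {
            case ".conf", ".conflist", ".json": // conventional CNI config extensions
                fmt.Println("found CNI config:", filepath.Join(dir, e.Name()))
                found = true
            }
        }
        if !found {
            // The state reported above: no config file, so NetworkReady stays false.
            fmt.Println("no CNI configuration file; network plugin not ready")
        }
    }

Until something writes a config there (on this cluster, presumably the OVN-Kubernetes node pod once it comes up later in the log), every status sync below repeats the same condition.
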
Jan 28 18:35:54 crc kubenswrapper[4721]: I0128 18:35:54.387123 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:54 crc kubenswrapper[4721]: I0128 18:35:54.387308 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:54 crc kubenswrapper[4721]: I0128 18:35:54.387346 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:54 crc kubenswrapper[4721]: I0128 18:35:54.387383 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:54 crc kubenswrapper[4721]: I0128 18:35:54.387406 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:54Z","lastTransitionTime":"2026-01-28T18:35:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:35:54 crc kubenswrapper[4721]: I0128 18:35:54.490193 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:54 crc kubenswrapper[4721]: I0128 18:35:54.490237 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:54 crc kubenswrapper[4721]: I0128 18:35:54.490247 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:54 crc kubenswrapper[4721]: I0128 18:35:54.490262 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:54 crc kubenswrapper[4721]: I0128 18:35:54.490272 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:54Z","lastTransitionTime":"2026-01-28T18:35:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:35:54 crc kubenswrapper[4721]: I0128 18:35:54.527715 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 18:35:54 crc kubenswrapper[4721]: I0128 18:35:54.527757 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 18:35:54 crc kubenswrapper[4721]: I0128 18:35:54.527778 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jqvck" Jan 28 18:35:54 crc kubenswrapper[4721]: I0128 18:35:54.527781 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 18:35:54 crc kubenswrapper[4721]: E0128 18:35:54.527875 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 18:35:54 crc kubenswrapper[4721]: E0128 18:35:54.527941 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jqvck" podUID="f3440038-c980-4fb4-be99-235515ec221c" Jan 28 18:35:54 crc kubenswrapper[4721]: E0128 18:35:54.528075 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 18:35:54 crc kubenswrapper[4721]: E0128 18:35:54.528237 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 18:35:54 crc kubenswrapper[4721]: I0128 18:35:54.542338 4721 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-08 18:10:46.138309915 +0000 UTC Jan 28 18:35:54 crc kubenswrapper[4721]: I0128 18:35:54.593217 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:54 crc kubenswrapper[4721]: I0128 18:35:54.593252 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:54 crc kubenswrapper[4721]: I0128 18:35:54.593263 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:54 crc kubenswrapper[4721]: I0128 18:35:54.593280 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:54 crc kubenswrapper[4721]: I0128 18:35:54.593296 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:54Z","lastTransitionTime":"2026-01-28T18:35:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:35:54 crc kubenswrapper[4721]: I0128 18:35:54.695302 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:54 crc kubenswrapper[4721]: I0128 18:35:54.695338 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:54 crc kubenswrapper[4721]: I0128 18:35:54.695347 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:54 crc kubenswrapper[4721]: I0128 18:35:54.695360 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:54 crc kubenswrapper[4721]: I0128 18:35:54.695369 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:54Z","lastTransitionTime":"2026-01-28T18:35:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:35:54 crc kubenswrapper[4721]: I0128 18:35:54.798808 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:54 crc kubenswrapper[4721]: I0128 18:35:54.798852 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:54 crc kubenswrapper[4721]: I0128 18:35:54.798861 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:54 crc kubenswrapper[4721]: I0128 18:35:54.798876 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:54 crc kubenswrapper[4721]: I0128 18:35:54.798886 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:54Z","lastTransitionTime":"2026-01-28T18:35:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:35:54 crc kubenswrapper[4721]: I0128 18:35:54.901214 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:54 crc kubenswrapper[4721]: I0128 18:35:54.901254 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:54 crc kubenswrapper[4721]: I0128 18:35:54.901264 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:54 crc kubenswrapper[4721]: I0128 18:35:54.901279 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:54 crc kubenswrapper[4721]: I0128 18:35:54.901291 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:54Z","lastTransitionTime":"2026-01-28T18:35:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:35:55 crc kubenswrapper[4721]: I0128 18:35:55.003601 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:55 crc kubenswrapper[4721]: I0128 18:35:55.003646 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:55 crc kubenswrapper[4721]: I0128 18:35:55.003662 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:55 crc kubenswrapper[4721]: I0128 18:35:55.003679 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:55 crc kubenswrapper[4721]: I0128 18:35:55.003691 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:55Z","lastTransitionTime":"2026-01-28T18:35:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:35:55 crc kubenswrapper[4721]: I0128 18:35:55.106460 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:55 crc kubenswrapper[4721]: I0128 18:35:55.106510 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:55 crc kubenswrapper[4721]: I0128 18:35:55.106522 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:55 crc kubenswrapper[4721]: I0128 18:35:55.106541 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:55 crc kubenswrapper[4721]: I0128 18:35:55.106557 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:55Z","lastTransitionTime":"2026-01-28T18:35:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:35:55 crc kubenswrapper[4721]: I0128 18:35:55.209520 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:55 crc kubenswrapper[4721]: I0128 18:35:55.209617 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:55 crc kubenswrapper[4721]: I0128 18:35:55.209638 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:55 crc kubenswrapper[4721]: I0128 18:35:55.209674 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:55 crc kubenswrapper[4721]: I0128 18:35:55.209697 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:55Z","lastTransitionTime":"2026-01-28T18:35:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:35:55 crc kubenswrapper[4721]: I0128 18:35:55.312325 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:55 crc kubenswrapper[4721]: I0128 18:35:55.312372 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:55 crc kubenswrapper[4721]: I0128 18:35:55.312382 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:55 crc kubenswrapper[4721]: I0128 18:35:55.312398 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:55 crc kubenswrapper[4721]: I0128 18:35:55.312407 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:55Z","lastTransitionTime":"2026-01-28T18:35:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:35:55 crc kubenswrapper[4721]: I0128 18:35:55.414649 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:55 crc kubenswrapper[4721]: I0128 18:35:55.414683 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:55 crc kubenswrapper[4721]: I0128 18:35:55.414692 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:55 crc kubenswrapper[4721]: I0128 18:35:55.414707 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:55 crc kubenswrapper[4721]: I0128 18:35:55.414716 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:55Z","lastTransitionTime":"2026-01-28T18:35:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
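Has your network provider started?"}

Aside on the certificate_manager lines in this stretch: the expiration stays fixed at 2026-02-24 05:53:03, but the logged rotation deadline jumps around (2025-11-07, 2026-01-08, 2025-11-29, ...). That is expected: client-go's certificate manager re-draws a jittered deadline at roughly 70-90% of the certificate's validity period on each pass, and rotates once a drawn deadline is in the past. A sketch of that computation, with notBefore assumed since the log only shows the expiration:

    // rotation.go: jittered rotation deadline, sketched after
    // client-go's certificate manager (70-90% of the validity window).
    package main

    import (
        "fmt"
        "math/rand"
        "time"
    )

    func nextRotationDeadline(notBefore, notAfter time.Time) time.Time {
        total := notAfter.Sub(notBefore)
        // Uniform draw in [70%, 90%] of the certificate lifetime.
        jittered := time.Duration(float64(total) * (0.7 + 0.2*rand.Float64()))
        return notBefore.Add(jittered)
    }

    func main() {
        notAfter, _ := time.Parse(time.RFC3339, "2026-02-24T05:53:03Z") // from the log
        notBefore := notAfter.AddDate(-1, 0, 0)                         // assumption: one-year cert
        for i := 0; i < 3; i++ {
            fmt.Println("rotation deadline:", nextRotationDeadline(notBefore, notAfter))
        }
    }

Each run prints three different deadlines, which is why consecutive log lines disagree; the "Rotating certificates" line at 18:35:58 below fires once a drawn deadline falls before the current time.
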
Jan 28 18:35:55 crc kubenswrapper[4721]: E0128 18:35:55.515076 4721 kubelet_node_status.go:497] "Node not becoming ready in time after startup" Jan 28 18:35:55 crc kubenswrapper[4721]: I0128 18:35:55.542447 4721 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-29 11:37:23.301725809 +0000 UTC Jan 28 18:35:55 crc kubenswrapper[4721]: I0128 18:35:55.570931 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-x8hw8" podStartSLOduration=85.570907796 podStartE2EDuration="1m25.570907796s" podCreationTimestamp="2026-01-28 18:34:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:35:55.558381027 +0000 UTC m=+121.283686607" watchObservedRunningTime="2026-01-28 18:35:55.570907796 +0000 UTC m=+121.296213366" Jan 28 18:35:55 crc kubenswrapper[4721]: I0128 18:35:55.583002 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=41.582980941 podStartE2EDuration="41.582980941s" podCreationTimestamp="2026-01-28 18:35:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:35:55.582230387 +0000 UTC m=+121.307535957" watchObservedRunningTime="2026-01-28 18:35:55.582980941 +0000 UTC m=+121.308286501" Jan 28 18:35:55 crc kubenswrapper[4721]: I0128 18:35:55.593657 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-rk2l2" podStartSLOduration=85.593639282 podStartE2EDuration="1m25.593639282s" podCreationTimestamp="2026-01-28 18:34:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:35:55.593052163 +0000 UTC m=+121.318357723" watchObservedRunningTime="2026-01-28 18:35:55.593639282 +0000 UTC m=+121.318944842" Jan 28 18:35:55 crc kubenswrapper[4721]: I0128 18:35:55.608095 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=92.60807884 podStartE2EDuration="1m32.60807884s" podCreationTimestamp="2026-01-28 18:34:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:35:55.607835883 +0000 UTC m=+121.333141453" watchObservedRunningTime="2026-01-28 18:35:55.60807884 +0000 UTC m=+121.333384400" Jan 28 18:35:55 crc kubenswrapper[4721]: I0128 18:35:55.698227 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-rgqdt" podStartSLOduration=85.69820631 podStartE2EDuration="1m25.69820631s" podCreationTimestamp="2026-01-28 18:34:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:35:55.677350662 +0000 UTC m=+121.402656232" watchObservedRunningTime="2026-01-28 18:35:55.69820631 +0000 UTC m=+121.423511880" Jan 28 18:35:55 crc kubenswrapper[4721]: I0128 18:35:55.698640 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=91.698634793 
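podStartE2EDuration="1m31.698634793s" podCreationTimestamp="2026-01-28 18:34:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:35:55.697930681 +0000 UTC m=+121.423236241" watchObservedRunningTime="2026-01-28 18:35:55.698634793 +0000 UTC m=+121.423940353"

In these startup-latency lines podStartSLOduration equals podStartE2EDuration because both pull timestamps are the zero value (0001-01-01): the SLO figure is the end-to-end startup time with image-pull time deducted, and nothing was pulled here. A simplified sketch of that bookkeeping (field names mirror the log keys; this is not the kubelet's actual tracker code):

    // sloduration.go: relate podStartSLOduration to podStartE2EDuration.
    package main

    import (
        "fmt"
        "time"
    )

    func sloDuration(created, observedRunning, firstPull, lastPull time.Time) time.Duration {
        e2e := observedRunning.Sub(created)
        if firstPull.IsZero() || lastPull.IsZero() {
            return e2e // no pull observed: SLO == E2E, as in the etcd-crc line above
        }
        return e2e - lastPull.Sub(firstPull) // deduct time spent pulling images
    }

    func main() {
        created, _ := time.Parse(time.RFC3339, "2026-01-28T18:34:24Z")
        running, _ := time.Parse(time.RFC3339, "2026-01-28T18:35:55Z")
        fmt.Println(sloDuration(created, running, time.Time{}, time.Time{})) // 1m31s
    }
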
Jan 28 18:35:55 crc kubenswrapper[4721]: I0128 18:35:55.714098 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=91.714078602 podStartE2EDuration="1m31.714078602s" podCreationTimestamp="2026-01-28 18:34:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:35:55.713430082 +0000 UTC m=+121.438735662" watchObservedRunningTime="2026-01-28 18:35:55.714078602 +0000 UTC m=+121.439384162" Jan 28 18:35:55 crc kubenswrapper[4721]: I0128 18:35:55.731104 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-7vsph" podStartSLOduration=85.731086111 podStartE2EDuration="1m25.731086111s" podCreationTimestamp="2026-01-28 18:34:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:35:55.730616606 +0000 UTC m=+121.455922176" watchObservedRunningTime="2026-01-28 18:35:55.731086111 +0000 UTC m=+121.456391671" Jan 28 18:35:55 crc kubenswrapper[4721]: I0128 18:35:55.757298 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-lf92l" podStartSLOduration=85.757278995 podStartE2EDuration="1m25.757278995s" podCreationTimestamp="2026-01-28 18:34:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:35:55.756867322 +0000 UTC m=+121.482172882" watchObservedRunningTime="2026-01-28 18:35:55.757278995 +0000 UTC m=+121.482584555" Jan 28 18:35:55 crc kubenswrapper[4721]: I0128 18:35:55.820912 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-76rx2" podStartSLOduration=85.82088921 podStartE2EDuration="1m25.82088921s" podCreationTimestamp="2026-01-28 18:34:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:35:55.820592431 +0000 UTC m=+121.545897991" watchObservedRunningTime="2026-01-28 18:35:55.82088921 +0000 UTC m=+121.546194770" Jan 28 18:35:55 crc kubenswrapper[4721]: I0128 18:35:55.853224 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=66.853205153 podStartE2EDuration="1m6.853205153s" podCreationTimestamp="2026-01-28 18:34:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:35:55.837782805 +0000 UTC m=+121.563088365" watchObservedRunningTime="2026-01-28 18:35:55.853205153 +0000 UTC m=+121.578510713" Jan 28 18:35:55 crc kubenswrapper[4721]: E0128 18:35:55.982458 4721 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" Jan 28 18:35:56 crc kubenswrapper[4721]: I0128 18:35:56.528344 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 18:35:56 crc kubenswrapper[4721]: I0128 18:35:56.528426 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jqvck" Jan 28 18:35:56 crc kubenswrapper[4721]: I0128 18:35:56.528435 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 18:35:56 crc kubenswrapper[4721]: I0128 18:35:56.528377 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 18:35:56 crc kubenswrapper[4721]: E0128 18:35:56.528666 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 18:35:56 crc kubenswrapper[4721]: E0128 18:35:56.528798 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 18:35:56 crc kubenswrapper[4721]: E0128 18:35:56.528864 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jqvck" podUID="f3440038-c980-4fb4-be99-235515ec221c" Jan 28 18:35:56 crc kubenswrapper[4721]: E0128 18:35:56.528975 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 18:35:56 crc kubenswrapper[4721]: I0128 18:35:56.543057 4721 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-29 05:17:25.252884316 +0000 UTC Jan 28 18:35:57 crc kubenswrapper[4721]: I0128 18:35:57.543866 4721 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-04 16:24:18.777084147 +0000 UTC Jan 28 18:35:57 crc kubenswrapper[4721]: I0128 18:35:57.561693 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:35:57 crc kubenswrapper[4721]: I0128 18:35:57.561798 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:35:57 crc kubenswrapper[4721]: I0128 18:35:57.561817 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:35:57 crc kubenswrapper[4721]: I0128 18:35:57.561838 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:35:57 crc kubenswrapper[4721]: I0128 18:35:57.561852 4721 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:35:57Z","lastTransitionTime":"2026-01-28T18:35:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:35:57 crc kubenswrapper[4721]: I0128 18:35:57.599997 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-9mff8"] Jan 28 18:35:57 crc kubenswrapper[4721]: I0128 18:35:57.600404 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-9mff8" Jan 28 18:35:57 crc kubenswrapper[4721]: I0128 18:35:57.602508 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Jan 28 18:35:57 crc kubenswrapper[4721]: I0128 18:35:57.602696 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Jan 28 18:35:57 crc kubenswrapper[4721]: I0128 18:35:57.602890 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Jan 28 18:35:57 crc kubenswrapper[4721]: I0128 18:35:57.603790 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Jan 28 18:35:57 crc kubenswrapper[4721]: I0128 18:35:57.698797 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/8afd97ad-c3f3-4b57-b703-727585cbe58e-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-9mff8\" (UID: \"8afd97ad-c3f3-4b57-b703-727585cbe58e\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-9mff8" Jan 28 18:35:57 crc kubenswrapper[4721]: I0128 18:35:57.698861 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/8afd97ad-c3f3-4b57-b703-727585cbe58e-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-9mff8\" (UID: \"8afd97ad-c3f3-4b57-b703-727585cbe58e\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-9mff8" Jan 28 18:35:57 crc kubenswrapper[4721]: I0128 18:35:57.698880 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/8afd97ad-c3f3-4b57-b703-727585cbe58e-service-ca\") pod \"cluster-version-operator-5c965bbfc6-9mff8\" (UID: \"8afd97ad-c3f3-4b57-b703-727585cbe58e\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-9mff8" Jan 28 18:35:57 crc kubenswrapper[4721]: I0128 18:35:57.698894 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8afd97ad-c3f3-4b57-b703-727585cbe58e-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-9mff8\" (UID: \"8afd97ad-c3f3-4b57-b703-727585cbe58e\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-9mff8" Jan 28 18:35:57 crc kubenswrapper[4721]: I0128 18:35:57.698909 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8afd97ad-c3f3-4b57-b703-727585cbe58e-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-9mff8\" (UID: \"8afd97ad-c3f3-4b57-b703-727585cbe58e\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-9mff8" Jan 28 18:35:57 crc kubenswrapper[4721]: I0128 18:35:57.799711 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/8afd97ad-c3f3-4b57-b703-727585cbe58e-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-9mff8\" (UID: \"8afd97ad-c3f3-4b57-b703-727585cbe58e\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-9mff8" Jan 28 
18:35:57 crc kubenswrapper[4721]: I0128 18:35:57.799802 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/8afd97ad-c3f3-4b57-b703-727585cbe58e-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-9mff8\" (UID: \"8afd97ad-c3f3-4b57-b703-727585cbe58e\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-9mff8" Jan 28 18:35:57 crc kubenswrapper[4721]: I0128 18:35:57.799821 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/8afd97ad-c3f3-4b57-b703-727585cbe58e-service-ca\") pod \"cluster-version-operator-5c965bbfc6-9mff8\" (UID: \"8afd97ad-c3f3-4b57-b703-727585cbe58e\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-9mff8" Jan 28 18:35:57 crc kubenswrapper[4721]: I0128 18:35:57.799840 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8afd97ad-c3f3-4b57-b703-727585cbe58e-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-9mff8\" (UID: \"8afd97ad-c3f3-4b57-b703-727585cbe58e\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-9mff8" Jan 28 18:35:57 crc kubenswrapper[4721]: I0128 18:35:57.799861 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8afd97ad-c3f3-4b57-b703-727585cbe58e-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-9mff8\" (UID: \"8afd97ad-c3f3-4b57-b703-727585cbe58e\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-9mff8" Jan 28 18:35:57 crc kubenswrapper[4721]: I0128 18:35:57.799875 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/8afd97ad-c3f3-4b57-b703-727585cbe58e-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-9mff8\" (UID: \"8afd97ad-c3f3-4b57-b703-727585cbe58e\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-9mff8" Jan 28 18:35:57 crc kubenswrapper[4721]: I0128 18:35:57.799835 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/8afd97ad-c3f3-4b57-b703-727585cbe58e-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-9mff8\" (UID: \"8afd97ad-c3f3-4b57-b703-727585cbe58e\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-9mff8" Jan 28 18:35:57 crc kubenswrapper[4721]: I0128 18:35:57.800869 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/8afd97ad-c3f3-4b57-b703-727585cbe58e-service-ca\") pod \"cluster-version-operator-5c965bbfc6-9mff8\" (UID: \"8afd97ad-c3f3-4b57-b703-727585cbe58e\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-9mff8" Jan 28 18:35:57 crc kubenswrapper[4721]: I0128 18:35:57.806802 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8afd97ad-c3f3-4b57-b703-727585cbe58e-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-9mff8\" (UID: \"8afd97ad-c3f3-4b57-b703-727585cbe58e\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-9mff8" Jan 28 18:35:57 crc kubenswrapper[4721]: I0128 18:35:57.818101 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" 
(UniqueName: \"kubernetes.io/projected/8afd97ad-c3f3-4b57-b703-727585cbe58e-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-9mff8\" (UID: \"8afd97ad-c3f3-4b57-b703-727585cbe58e\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-9mff8" Jan 28 18:35:57 crc kubenswrapper[4721]: I0128 18:35:57.912689 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-9mff8" Jan 28 18:35:58 crc kubenswrapper[4721]: I0128 18:35:58.109499 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-9mff8" event={"ID":"8afd97ad-c3f3-4b57-b703-727585cbe58e","Type":"ContainerStarted","Data":"389eb611358f1b6d7e437c1977359173caae4320bbb4e5f8d7cbb1f2aa8f9095"} Jan 28 18:35:58 crc kubenswrapper[4721]: I0128 18:35:58.109974 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-9mff8" event={"ID":"8afd97ad-c3f3-4b57-b703-727585cbe58e","Type":"ContainerStarted","Data":"6b16a86abd8099280e381b0bc96ccbcf2da66d9dd5e1b545450bdc0407e1ef67"} Jan 28 18:35:58 crc kubenswrapper[4721]: I0128 18:35:58.123690 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-9mff8" podStartSLOduration=88.123666739 podStartE2EDuration="1m28.123666739s" podCreationTimestamp="2026-01-28 18:34:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:35:58.123517274 +0000 UTC m=+123.848822844" watchObservedRunningTime="2026-01-28 18:35:58.123666739 +0000 UTC m=+123.848972299" Jan 28 18:35:58 crc kubenswrapper[4721]: I0128 18:35:58.528640 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 18:35:58 crc kubenswrapper[4721]: I0128 18:35:58.528684 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 18:35:58 crc kubenswrapper[4721]: E0128 18:35:58.528775 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 18:35:58 crc kubenswrapper[4721]: I0128 18:35:58.528947 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jqvck" Jan 28 18:35:58 crc kubenswrapper[4721]: I0128 18:35:58.528982 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 18:35:58 crc kubenswrapper[4721]: E0128 18:35:58.529309 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 18:35:58 crc kubenswrapper[4721]: E0128 18:35:58.529463 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jqvck" podUID="f3440038-c980-4fb4-be99-235515ec221c" Jan 28 18:35:58 crc kubenswrapper[4721]: E0128 18:35:58.529606 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 18:35:58 crc kubenswrapper[4721]: I0128 18:35:58.544674 4721 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-12 00:11:47.219474776 +0000 UTC Jan 28 18:35:58 crc kubenswrapper[4721]: I0128 18:35:58.544720 4721 certificate_manager.go:356] kubernetes.io/kubelet-serving: Rotating certificates Jan 28 18:35:58 crc kubenswrapper[4721]: I0128 18:35:58.551903 4721 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Jan 28 18:36:00 crc kubenswrapper[4721]: I0128 18:36:00.528540 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jqvck" Jan 28 18:36:00 crc kubenswrapper[4721]: I0128 18:36:00.528656 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 18:36:00 crc kubenswrapper[4721]: I0128 18:36:00.529017 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 18:36:00 crc kubenswrapper[4721]: E0128 18:36:00.529608 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jqvck" podUID="f3440038-c980-4fb4-be99-235515ec221c" Jan 28 18:36:00 crc kubenswrapper[4721]: I0128 18:36:00.529634 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 18:36:00 crc kubenswrapper[4721]: E0128 18:36:00.529682 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 18:36:00 crc kubenswrapper[4721]: E0128 18:36:00.529752 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 18:36:00 crc kubenswrapper[4721]: E0128 18:36:00.529878 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 18:36:00 crc kubenswrapper[4721]: E0128 18:36:00.984203 4721 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 28 18:36:02 crc kubenswrapper[4721]: I0128 18:36:02.528664 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 18:36:02 crc kubenswrapper[4721]: I0128 18:36:02.528664 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jqvck" Jan 28 18:36:02 crc kubenswrapper[4721]: E0128 18:36:02.529872 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jqvck" podUID="f3440038-c980-4fb4-be99-235515ec221c" Jan 28 18:36:02 crc kubenswrapper[4721]: I0128 18:36:02.528767 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 18:36:02 crc kubenswrapper[4721]: E0128 18:36:02.529943 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 18:36:02 crc kubenswrapper[4721]: I0128 18:36:02.528747 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 18:36:02 crc kubenswrapper[4721]: E0128 18:36:02.529748 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 18:36:02 crc kubenswrapper[4721]: E0128 18:36:02.529995 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 18:36:04 crc kubenswrapper[4721]: I0128 18:36:04.528599 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jqvck" Jan 28 18:36:04 crc kubenswrapper[4721]: I0128 18:36:04.528617 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 18:36:04 crc kubenswrapper[4721]: I0128 18:36:04.528639 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 18:36:04 crc kubenswrapper[4721]: I0128 18:36:04.528695 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 18:36:04 crc kubenswrapper[4721]: E0128 18:36:04.528837 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jqvck" podUID="f3440038-c980-4fb4-be99-235515ec221c" Jan 28 18:36:04 crc kubenswrapper[4721]: E0128 18:36:04.528946 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 18:36:04 crc kubenswrapper[4721]: E0128 18:36:04.529001 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 18:36:04 crc kubenswrapper[4721]: E0128 18:36:04.529075 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 18:36:04 crc kubenswrapper[4721]: I0128 18:36:04.529739 4721 scope.go:117] "RemoveContainer" containerID="693b094ac66f7858f7020708df804e9e12fa5a8c510841171e65ac69cb6a0e11" Jan 28 18:36:05 crc kubenswrapper[4721]: I0128 18:36:05.131131 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-wr282_70686e42-b434-4ff9-9753-cfc870beef82/ovnkube-controller/3.log" Jan 28 18:36:05 crc kubenswrapper[4721]: I0128 18:36:05.132998 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wr282" event={"ID":"70686e42-b434-4ff9-9753-cfc870beef82","Type":"ContainerStarted","Data":"7a26f6b8dd079402f04c84abf939388f7afb0bf78f0495cd4dfb46df9a301b16"} Jan 28 18:36:05 crc kubenswrapper[4721]: I0128 18:36:05.133944 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-wr282" Jan 28 18:36:05 crc kubenswrapper[4721]: I0128 18:36:05.135013 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-rgqdt_c0a22020-3f34-4895-beec-2ed5d829ea79/kube-multus/1.log" Jan 28 18:36:05 crc kubenswrapper[4721]: I0128 18:36:05.135319 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-rgqdt_c0a22020-3f34-4895-beec-2ed5d829ea79/kube-multus/0.log" Jan 28 18:36:05 crc kubenswrapper[4721]: I0128 18:36:05.135343 4721 generic.go:334] "Generic (PLEG): container finished" podID="c0a22020-3f34-4895-beec-2ed5d829ea79" containerID="2588c3d36133bd9b96114f5d12622916ac785bea9be47d12a3d76d8585c3e0ab" exitCode=1 Jan 28 18:36:05 crc kubenswrapper[4721]: I0128 18:36:05.135363 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-rgqdt" event={"ID":"c0a22020-3f34-4895-beec-2ed5d829ea79","Type":"ContainerDied","Data":"2588c3d36133bd9b96114f5d12622916ac785bea9be47d12a3d76d8585c3e0ab"} Jan 28 18:36:05 crc kubenswrapper[4721]: I0128 18:36:05.135382 4721 scope.go:117] "RemoveContainer" containerID="9d26691be7d95ffd613cc84f59222c67d29ab459d1c42ca07d20fe928d7fbc4a" Jan 28 18:36:05 crc kubenswrapper[4721]: I0128 18:36:05.135615 4721 scope.go:117] "RemoveContainer" containerID="2588c3d36133bd9b96114f5d12622916ac785bea9be47d12a3d76d8585c3e0ab" Jan 28 18:36:05 crc kubenswrapper[4721]: E0128 18:36:05.135721 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-rgqdt_openshift-multus(c0a22020-3f34-4895-beec-2ed5d829ea79)\"" pod="openshift-multus/multus-rgqdt" podUID="c0a22020-3f34-4895-beec-2ed5d829ea79" Jan 28 18:36:05 crc kubenswrapper[4721]: I0128 18:36:05.160239 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-wr282" podStartSLOduration=95.160224072 podStartE2EDuration="1m35.160224072s" podCreationTimestamp="2026-01-28 18:34:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:36:05.160017445 +0000 UTC m=+130.885323025" watchObservedRunningTime="2026-01-28 18:36:05.160224072 +0000 UTC m=+130.885529632" Jan 28 18:36:05 crc kubenswrapper[4721]: I0128 18:36:05.296420 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-jqvck"] Jan 28 18:36:05 crc 
kubenswrapper[4721]: I0128 18:36:05.296525 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jqvck" Jan 28 18:36:05 crc kubenswrapper[4721]: E0128 18:36:05.296626 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jqvck" podUID="f3440038-c980-4fb4-be99-235515ec221c" Jan 28 18:36:05 crc kubenswrapper[4721]: E0128 18:36:05.984692 4721 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 28 18:36:06 crc kubenswrapper[4721]: I0128 18:36:06.139612 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-rgqdt_c0a22020-3f34-4895-beec-2ed5d829ea79/kube-multus/1.log" Jan 28 18:36:06 crc kubenswrapper[4721]: I0128 18:36:06.528233 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 18:36:06 crc kubenswrapper[4721]: E0128 18:36:06.528378 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 18:36:06 crc kubenswrapper[4721]: I0128 18:36:06.528483 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 18:36:06 crc kubenswrapper[4721]: I0128 18:36:06.528541 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 18:36:06 crc kubenswrapper[4721]: E0128 18:36:06.528642 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 18:36:06 crc kubenswrapper[4721]: E0128 18:36:06.528699 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 18:36:07 crc kubenswrapper[4721]: I0128 18:36:07.528226 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-jqvck" Jan 28 18:36:07 crc kubenswrapper[4721]: E0128 18:36:07.528354 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jqvck" podUID="f3440038-c980-4fb4-be99-235515ec221c" Jan 28 18:36:08 crc kubenswrapper[4721]: I0128 18:36:08.528475 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 18:36:08 crc kubenswrapper[4721]: I0128 18:36:08.528512 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 18:36:08 crc kubenswrapper[4721]: I0128 18:36:08.528480 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 18:36:08 crc kubenswrapper[4721]: E0128 18:36:08.528651 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 18:36:08 crc kubenswrapper[4721]: E0128 18:36:08.528735 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 18:36:08 crc kubenswrapper[4721]: E0128 18:36:08.528852 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 18:36:09 crc kubenswrapper[4721]: I0128 18:36:09.528567 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jqvck" Jan 28 18:36:09 crc kubenswrapper[4721]: E0128 18:36:09.528715 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jqvck" podUID="f3440038-c980-4fb4-be99-235515ec221c" Jan 28 18:36:10 crc kubenswrapper[4721]: I0128 18:36:10.528268 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 18:36:10 crc kubenswrapper[4721]: I0128 18:36:10.528313 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 18:36:10 crc kubenswrapper[4721]: I0128 18:36:10.528361 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 18:36:10 crc kubenswrapper[4721]: E0128 18:36:10.528476 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 18:36:10 crc kubenswrapper[4721]: E0128 18:36:10.528656 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 18:36:10 crc kubenswrapper[4721]: E0128 18:36:10.528752 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 18:36:10 crc kubenswrapper[4721]: E0128 18:36:10.986029 4721 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 28 18:36:11 crc kubenswrapper[4721]: I0128 18:36:11.528069 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jqvck" Jan 28 18:36:11 crc kubenswrapper[4721]: E0128 18:36:11.528234 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jqvck" podUID="f3440038-c980-4fb4-be99-235515ec221c" Jan 28 18:36:12 crc kubenswrapper[4721]: I0128 18:36:12.528728 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 18:36:12 crc kubenswrapper[4721]: I0128 18:36:12.528795 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 18:36:12 crc kubenswrapper[4721]: I0128 18:36:12.528756 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 18:36:12 crc kubenswrapper[4721]: E0128 18:36:12.528908 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 18:36:12 crc kubenswrapper[4721]: E0128 18:36:12.529035 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 18:36:12 crc kubenswrapper[4721]: E0128 18:36:12.529144 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 18:36:13 crc kubenswrapper[4721]: I0128 18:36:13.528569 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jqvck" Jan 28 18:36:13 crc kubenswrapper[4721]: E0128 18:36:13.528771 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jqvck" podUID="f3440038-c980-4fb4-be99-235515ec221c" Jan 28 18:36:14 crc kubenswrapper[4721]: I0128 18:36:14.528652 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 18:36:14 crc kubenswrapper[4721]: I0128 18:36:14.528780 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 18:36:14 crc kubenswrapper[4721]: E0128 18:36:14.528820 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 18:36:14 crc kubenswrapper[4721]: E0128 18:36:14.528990 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 18:36:14 crc kubenswrapper[4721]: I0128 18:36:14.529048 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 18:36:14 crc kubenswrapper[4721]: E0128 18:36:14.529128 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 18:36:15 crc kubenswrapper[4721]: I0128 18:36:15.528733 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jqvck" Jan 28 18:36:15 crc kubenswrapper[4721]: E0128 18:36:15.529691 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jqvck" podUID="f3440038-c980-4fb4-be99-235515ec221c" Jan 28 18:36:15 crc kubenswrapper[4721]: E0128 18:36:15.986472 4721 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 28 18:36:16 crc kubenswrapper[4721]: I0128 18:36:16.527886 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 18:36:16 crc kubenswrapper[4721]: I0128 18:36:16.527961 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 18:36:16 crc kubenswrapper[4721]: I0128 18:36:16.527900 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 18:36:16 crc kubenswrapper[4721]: E0128 18:36:16.528476 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 18:36:16 crc kubenswrapper[4721]: E0128 18:36:16.528490 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 18:36:16 crc kubenswrapper[4721]: E0128 18:36:16.528631 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 18:36:17 crc kubenswrapper[4721]: I0128 18:36:17.903515 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 18:36:17 crc kubenswrapper[4721]: I0128 18:36:17.903585 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 18:36:17 crc kubenswrapper[4721]: E0128 18:36:17.903666 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 18:36:17 crc kubenswrapper[4721]: E0128 18:36:17.903762 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 18:36:17 crc kubenswrapper[4721]: I0128 18:36:17.903953 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jqvck" Jan 28 18:36:17 crc kubenswrapper[4721]: E0128 18:36:17.904012 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jqvck" podUID="f3440038-c980-4fb4-be99-235515ec221c" Jan 28 18:36:18 crc kubenswrapper[4721]: I0128 18:36:18.528579 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 18:36:18 crc kubenswrapper[4721]: E0128 18:36:18.528734 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 18:36:19 crc kubenswrapper[4721]: I0128 18:36:19.528352 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 18:36:19 crc kubenswrapper[4721]: I0128 18:36:19.528391 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 18:36:19 crc kubenswrapper[4721]: E0128 18:36:19.528621 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 18:36:19 crc kubenswrapper[4721]: I0128 18:36:19.528680 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jqvck" Jan 28 18:36:19 crc kubenswrapper[4721]: E0128 18:36:19.528772 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jqvck" podUID="f3440038-c980-4fb4-be99-235515ec221c" Jan 28 18:36:19 crc kubenswrapper[4721]: I0128 18:36:19.528786 4721 scope.go:117] "RemoveContainer" containerID="2588c3d36133bd9b96114f5d12622916ac785bea9be47d12a3d76d8585c3e0ab" Jan 28 18:36:19 crc kubenswrapper[4721]: E0128 18:36:19.528846 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 18:36:20 crc kubenswrapper[4721]: I0128 18:36:20.177827 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-rgqdt_c0a22020-3f34-4895-beec-2ed5d829ea79/kube-multus/1.log" Jan 28 18:36:20 crc kubenswrapper[4721]: I0128 18:36:20.178152 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-rgqdt" event={"ID":"c0a22020-3f34-4895-beec-2ed5d829ea79","Type":"ContainerStarted","Data":"09078904e276a9f5eb4aafabbe371ff67e22dd1b352aa67825ea2de56709d503"} Jan 28 18:36:20 crc kubenswrapper[4721]: I0128 18:36:20.528559 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 18:36:20 crc kubenswrapper[4721]: E0128 18:36:20.528682 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 18:36:20 crc kubenswrapper[4721]: E0128 18:36:20.987699 4721 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 28 18:36:21 crc kubenswrapper[4721]: I0128 18:36:21.527906 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jqvck" Jan 28 18:36:21 crc kubenswrapper[4721]: I0128 18:36:21.527944 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 18:36:21 crc kubenswrapper[4721]: I0128 18:36:21.527977 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 18:36:21 crc kubenswrapper[4721]: E0128 18:36:21.528047 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jqvck" podUID="f3440038-c980-4fb4-be99-235515ec221c" Jan 28 18:36:21 crc kubenswrapper[4721]: E0128 18:36:21.528118 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 18:36:21 crc kubenswrapper[4721]: E0128 18:36:21.528284 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 18:36:22 crc kubenswrapper[4721]: I0128 18:36:22.528599 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 18:36:22 crc kubenswrapper[4721]: E0128 18:36:22.528720 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 18:36:23 crc kubenswrapper[4721]: I0128 18:36:23.528327 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 18:36:23 crc kubenswrapper[4721]: E0128 18:36:23.528458 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 18:36:23 crc kubenswrapper[4721]: I0128 18:36:23.528620 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jqvck" Jan 28 18:36:23 crc kubenswrapper[4721]: E0128 18:36:23.528705 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jqvck" podUID="f3440038-c980-4fb4-be99-235515ec221c" Jan 28 18:36:23 crc kubenswrapper[4721]: I0128 18:36:23.528894 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 18:36:23 crc kubenswrapper[4721]: E0128 18:36:23.528953 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 18:36:24 crc kubenswrapper[4721]: I0128 18:36:24.528211 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 18:36:24 crc kubenswrapper[4721]: E0128 18:36:24.528345 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 18:36:25 crc kubenswrapper[4721]: I0128 18:36:25.528575 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 18:36:25 crc kubenswrapper[4721]: I0128 18:36:25.528590 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 18:36:25 crc kubenswrapper[4721]: E0128 18:36:25.529580 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 18:36:25 crc kubenswrapper[4721]: I0128 18:36:25.529613 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jqvck" Jan 28 18:36:25 crc kubenswrapper[4721]: E0128 18:36:25.529667 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 18:36:25 crc kubenswrapper[4721]: E0128 18:36:25.529786 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jqvck" podUID="f3440038-c980-4fb4-be99-235515ec221c" Jan 28 18:36:26 crc kubenswrapper[4721]: I0128 18:36:26.528531 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 18:36:26 crc kubenswrapper[4721]: I0128 18:36:26.530265 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Jan 28 18:36:26 crc kubenswrapper[4721]: I0128 18:36:26.531623 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Jan 28 18:36:27 crc kubenswrapper[4721]: I0128 18:36:27.528683 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 18:36:27 crc kubenswrapper[4721]: I0128 18:36:27.528683 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 18:36:27 crc kubenswrapper[4721]: I0128 18:36:27.529057 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-jqvck" Jan 28 18:36:27 crc kubenswrapper[4721]: I0128 18:36:27.531632 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Jan 28 18:36:27 crc kubenswrapper[4721]: I0128 18:36:27.531917 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Jan 28 18:36:27 crc kubenswrapper[4721]: I0128 18:36:27.532146 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Jan 28 18:36:27 crc kubenswrapper[4721]: I0128 18:36:27.532315 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Jan 28 18:36:28 crc kubenswrapper[4721]: I0128 18:36:28.011438 4721 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady" Jan 28 18:36:28 crc kubenswrapper[4721]: I0128 18:36:28.055209 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-c9dk6"] Jan 28 18:36:28 crc kubenswrapper[4721]: I0128 18:36:28.055794 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-8kxsr"] Jan 28 18:36:28 crc kubenswrapper[4721]: I0128 18:36:28.056003 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-c9dk6" Jan 28 18:36:28 crc kubenswrapper[4721]: I0128 18:36:28.056394 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-g474w"] Jan 28 18:36:28 crc kubenswrapper[4721]: I0128 18:36:28.056814 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-8kxsr" Jan 28 18:36:28 crc kubenswrapper[4721]: I0128 18:36:28.057100 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-g474w" Jan 28 18:36:28 crc kubenswrapper[4721]: I0128 18:36:28.063403 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-74cdf"] Jan 28 18:36:28 crc kubenswrapper[4721]: I0128 18:36:28.065147 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-74cdf" Jan 28 18:36:28 crc kubenswrapper[4721]: I0128 18:36:28.067880 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-6n8x8"] Jan 28 18:36:28 crc kubenswrapper[4721]: I0128 18:36:28.069187 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-6n8x8" Jan 28 18:36:28 crc kubenswrapper[4721]: I0128 18:36:28.080950 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-p8lnf"] Jan 28 18:36:28 crc kubenswrapper[4721]: I0128 18:36:28.083999 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-8c299"] Jan 28 18:36:28 crc kubenswrapper[4721]: I0128 18:36:28.092490 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-p8lnf" Jan 28 18:36:28 crc kubenswrapper[4721]: I0128 18:36:28.092599 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d62b5\" (UniqueName: \"kubernetes.io/projected/b56b3f24-3ef6-4506-ad1f-9498398f474f-kube-api-access-d62b5\") pod \"apiserver-7bbb656c7d-8kxsr\" (UID: \"b56b3f24-3ef6-4506-ad1f-9498398f474f\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-8kxsr" Jan 28 18:36:28 crc kubenswrapper[4721]: I0128 18:36:28.092642 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a8ac5f19-3f57-4e3a-8f53-dc493fcceea1-serving-cert\") pod \"route-controller-manager-6576b87f9c-6n8x8\" (UID: \"a8ac5f19-3f57-4e3a-8f53-dc493fcceea1\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-6n8x8" Jan 28 18:36:28 crc kubenswrapper[4721]: I0128 18:36:28.092674 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-49cdl\" (UniqueName: \"kubernetes.io/projected/13b4ddde-7262-4219-8aac-fb34883b9608-kube-api-access-49cdl\") pod \"controller-manager-879f6c89f-c9dk6\" (UID: \"13b4ddde-7262-4219-8aac-fb34883b9608\") " pod="openshift-controller-manager/controller-manager-879f6c89f-c9dk6" Jan 28 18:36:28 crc kubenswrapper[4721]: I0128 18:36:28.092701 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/b56b3f24-3ef6-4506-ad1f-9498398f474f-encryption-config\") pod \"apiserver-7bbb656c7d-8kxsr\" (UID: \"b56b3f24-3ef6-4506-ad1f-9498398f474f\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-8kxsr" Jan 28 18:36:28 crc kubenswrapper[4721]: I0128 18:36:28.092724 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/b56b3f24-3ef6-4506-ad1f-9498398f474f-audit-dir\") pod \"apiserver-7bbb656c7d-8kxsr\" (UID: \"b56b3f24-3ef6-4506-ad1f-9498398f474f\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-8kxsr" Jan 28 18:36:28 crc kubenswrapper[4721]: I0128 18:36:28.092747 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/13b4ddde-7262-4219-8aac-fb34883b9608-config\") pod \"controller-manager-879f6c89f-c9dk6\" (UID: \"13b4ddde-7262-4219-8aac-fb34883b9608\") " pod="openshift-controller-manager/controller-manager-879f6c89f-c9dk6" Jan 28 18:36:28 crc kubenswrapper[4721]: I0128 18:36:28.092769 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/49007c72-1df2-49db-9bbb-c90ee8207149-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-g474w\" (UID: \"49007c72-1df2-49db-9bbb-c90ee8207149\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-g474w" Jan 28 18:36:28 crc kubenswrapper[4721]: I0128 18:36:28.092792 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fc28j\" (UniqueName: \"kubernetes.io/projected/a8ac5f19-3f57-4e3a-8f53-dc493fcceea1-kube-api-access-fc28j\") pod \"route-controller-manager-6576b87f9c-6n8x8\" (UID: \"a8ac5f19-3f57-4e3a-8f53-dc493fcceea1\") " 
pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-6n8x8" Jan 28 18:36:28 crc kubenswrapper[4721]: I0128 18:36:28.092818 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e29aa9b1-ea23-453a-a624-634bf4f8c28b-trusted-ca-bundle\") pod \"apiserver-76f77b778f-74cdf\" (UID: \"e29aa9b1-ea23-453a-a624-634bf4f8c28b\") " pod="openshift-apiserver/apiserver-76f77b778f-74cdf" Jan 28 18:36:28 crc kubenswrapper[4721]: I0128 18:36:28.092844 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e29aa9b1-ea23-453a-a624-634bf4f8c28b-config\") pod \"apiserver-76f77b778f-74cdf\" (UID: \"e29aa9b1-ea23-453a-a624-634bf4f8c28b\") " pod="openshift-apiserver/apiserver-76f77b778f-74cdf" Jan 28 18:36:28 crc kubenswrapper[4721]: I0128 18:36:28.092882 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/13b4ddde-7262-4219-8aac-fb34883b9608-serving-cert\") pod \"controller-manager-879f6c89f-c9dk6\" (UID: \"13b4ddde-7262-4219-8aac-fb34883b9608\") " pod="openshift-controller-manager/controller-manager-879f6c89f-c9dk6" Jan 28 18:36:28 crc kubenswrapper[4721]: I0128 18:36:28.092915 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/49007c72-1df2-49db-9bbb-c90ee8207149-images\") pod \"machine-api-operator-5694c8668f-g474w\" (UID: \"49007c72-1df2-49db-9bbb-c90ee8207149\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-g474w" Jan 28 18:36:28 crc kubenswrapper[4721]: I0128 18:36:28.092937 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a8ac5f19-3f57-4e3a-8f53-dc493fcceea1-client-ca\") pod \"route-controller-manager-6576b87f9c-6n8x8\" (UID: \"a8ac5f19-3f57-4e3a-8f53-dc493fcceea1\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-6n8x8" Jan 28 18:36:28 crc kubenswrapper[4721]: I0128 18:36:28.092960 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/b56b3f24-3ef6-4506-ad1f-9498398f474f-etcd-client\") pod \"apiserver-7bbb656c7d-8kxsr\" (UID: \"b56b3f24-3ef6-4506-ad1f-9498398f474f\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-8kxsr" Jan 28 18:36:28 crc kubenswrapper[4721]: I0128 18:36:28.092981 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b56b3f24-3ef6-4506-ad1f-9498398f474f-serving-cert\") pod \"apiserver-7bbb656c7d-8kxsr\" (UID: \"b56b3f24-3ef6-4506-ad1f-9498398f474f\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-8kxsr" Jan 28 18:36:28 crc kubenswrapper[4721]: I0128 18:36:28.093003 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/e29aa9b1-ea23-453a-a624-634bf4f8c28b-image-import-ca\") pod \"apiserver-76f77b778f-74cdf\" (UID: \"e29aa9b1-ea23-453a-a624-634bf4f8c28b\") " pod="openshift-apiserver/apiserver-76f77b778f-74cdf" Jan 28 18:36:28 crc kubenswrapper[4721]: I0128 18:36:28.093026 4721 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f867s\" (UniqueName: \"kubernetes.io/projected/49007c72-1df2-49db-9bbb-c90ee8207149-kube-api-access-f867s\") pod \"machine-api-operator-5694c8668f-g474w\" (UID: \"49007c72-1df2-49db-9bbb-c90ee8207149\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-g474w" Jan 28 18:36:28 crc kubenswrapper[4721]: I0128 18:36:28.093050 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/13b4ddde-7262-4219-8aac-fb34883b9608-client-ca\") pod \"controller-manager-879f6c89f-c9dk6\" (UID: \"13b4ddde-7262-4219-8aac-fb34883b9608\") " pod="openshift-controller-manager/controller-manager-879f6c89f-c9dk6" Jan 28 18:36:28 crc kubenswrapper[4721]: I0128 18:36:28.093074 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/b56b3f24-3ef6-4506-ad1f-9498398f474f-audit-policies\") pod \"apiserver-7bbb656c7d-8kxsr\" (UID: \"b56b3f24-3ef6-4506-ad1f-9498398f474f\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-8kxsr" Jan 28 18:36:28 crc kubenswrapper[4721]: I0128 18:36:28.093098 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e29aa9b1-ea23-453a-a624-634bf4f8c28b-serving-cert\") pod \"apiserver-76f77b778f-74cdf\" (UID: \"e29aa9b1-ea23-453a-a624-634bf4f8c28b\") " pod="openshift-apiserver/apiserver-76f77b778f-74cdf" Jan 28 18:36:28 crc kubenswrapper[4721]: I0128 18:36:28.093119 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/b56b3f24-3ef6-4506-ad1f-9498398f474f-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-8kxsr\" (UID: \"b56b3f24-3ef6-4506-ad1f-9498398f474f\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-8kxsr" Jan 28 18:36:28 crc kubenswrapper[4721]: I0128 18:36:28.093143 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b56b3f24-3ef6-4506-ad1f-9498398f474f-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-8kxsr\" (UID: \"b56b3f24-3ef6-4506-ad1f-9498398f474f\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-8kxsr" Jan 28 18:36:28 crc kubenswrapper[4721]: I0128 18:36:28.093191 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/e29aa9b1-ea23-453a-a624-634bf4f8c28b-etcd-serving-ca\") pod \"apiserver-76f77b778f-74cdf\" (UID: \"e29aa9b1-ea23-453a-a624-634bf4f8c28b\") " pod="openshift-apiserver/apiserver-76f77b778f-74cdf" Jan 28 18:36:28 crc kubenswrapper[4721]: I0128 18:36:28.093216 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/e29aa9b1-ea23-453a-a624-634bf4f8c28b-audit-dir\") pod \"apiserver-76f77b778f-74cdf\" (UID: \"e29aa9b1-ea23-453a-a624-634bf4f8c28b\") " pod="openshift-apiserver/apiserver-76f77b778f-74cdf" Jan 28 18:36:28 crc kubenswrapper[4721]: I0128 18:36:28.093243 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: 
\"kubernetes.io/host-path/e29aa9b1-ea23-453a-a624-634bf4f8c28b-node-pullsecrets\") pod \"apiserver-76f77b778f-74cdf\" (UID: \"e29aa9b1-ea23-453a-a624-634bf4f8c28b\") " pod="openshift-apiserver/apiserver-76f77b778f-74cdf" Jan 28 18:36:28 crc kubenswrapper[4721]: I0128 18:36:28.093276 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/13b4ddde-7262-4219-8aac-fb34883b9608-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-c9dk6\" (UID: \"13b4ddde-7262-4219-8aac-fb34883b9608\") " pod="openshift-controller-manager/controller-manager-879f6c89f-c9dk6" Jan 28 18:36:28 crc kubenswrapper[4721]: I0128 18:36:28.093312 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8qs9w\" (UniqueName: \"kubernetes.io/projected/e29aa9b1-ea23-453a-a624-634bf4f8c28b-kube-api-access-8qs9w\") pod \"apiserver-76f77b778f-74cdf\" (UID: \"e29aa9b1-ea23-453a-a624-634bf4f8c28b\") " pod="openshift-apiserver/apiserver-76f77b778f-74cdf" Jan 28 18:36:28 crc kubenswrapper[4721]: I0128 18:36:28.093349 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/e29aa9b1-ea23-453a-a624-634bf4f8c28b-encryption-config\") pod \"apiserver-76f77b778f-74cdf\" (UID: \"e29aa9b1-ea23-453a-a624-634bf4f8c28b\") " pod="openshift-apiserver/apiserver-76f77b778f-74cdf" Jan 28 18:36:28 crc kubenswrapper[4721]: I0128 18:36:28.093374 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a8ac5f19-3f57-4e3a-8f53-dc493fcceea1-config\") pod \"route-controller-manager-6576b87f9c-6n8x8\" (UID: \"a8ac5f19-3f57-4e3a-8f53-dc493fcceea1\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-6n8x8" Jan 28 18:36:28 crc kubenswrapper[4721]: I0128 18:36:28.093400 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/e29aa9b1-ea23-453a-a624-634bf4f8c28b-etcd-client\") pod \"apiserver-76f77b778f-74cdf\" (UID: \"e29aa9b1-ea23-453a-a624-634bf4f8c28b\") " pod="openshift-apiserver/apiserver-76f77b778f-74cdf" Jan 28 18:36:28 crc kubenswrapper[4721]: I0128 18:36:28.093425 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/e29aa9b1-ea23-453a-a624-634bf4f8c28b-audit\") pod \"apiserver-76f77b778f-74cdf\" (UID: \"e29aa9b1-ea23-453a-a624-634bf4f8c28b\") " pod="openshift-apiserver/apiserver-76f77b778f-74cdf" Jan 28 18:36:28 crc kubenswrapper[4721]: I0128 18:36:28.093450 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/49007c72-1df2-49db-9bbb-c90ee8207149-config\") pod \"machine-api-operator-5694c8668f-g474w\" (UID: \"49007c72-1df2-49db-9bbb-c90ee8207149\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-g474w" Jan 28 18:36:28 crc kubenswrapper[4721]: I0128 18:36:28.093693 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-8c299" Jan 28 18:36:28 crc kubenswrapper[4721]: I0128 18:36:28.095283 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-bfsqt"] Jan 28 18:36:28 crc kubenswrapper[4721]: I0128 18:36:28.096032 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-bfsqt" Jan 28 18:36:28 crc kubenswrapper[4721]: I0128 18:36:28.098232 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-tjj76"] Jan 28 18:36:28 crc kubenswrapper[4721]: I0128 18:36:28.098817 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-ms7xm"] Jan 28 18:36:28 crc kubenswrapper[4721]: I0128 18:36:28.099197 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-cmtm6"] Jan 28 18:36:28 crc kubenswrapper[4721]: I0128 18:36:28.099345 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-tjj76" Jan 28 18:36:28 crc kubenswrapper[4721]: I0128 18:36:28.099588 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-cmtm6" Jan 28 18:36:28 crc kubenswrapper[4721]: I0128 18:36:28.099785 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-ms7xm" Jan 28 18:36:28 crc kubenswrapper[4721]: I0128 18:36:28.100157 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-r4gtf"] Jan 28 18:36:28 crc kubenswrapper[4721]: I0128 18:36:28.100538 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-r4gtf" Jan 28 18:36:28 crc kubenswrapper[4721]: I0128 18:36:28.110907 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-76z8x"] Jan 28 18:36:28 crc kubenswrapper[4721]: I0128 18:36:28.111605 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-ct2hz"] Jan 28 18:36:28 crc kubenswrapper[4721]: I0128 18:36:28.111643 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-76z8x" Jan 28 18:36:28 crc kubenswrapper[4721]: I0128 18:36:28.112963 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-ct2hz" Jan 28 18:36:28 crc kubenswrapper[4721]: I0128 18:36:28.118546 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Jan 28 18:36:28 crc kubenswrapper[4721]: I0128 18:36:28.119159 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Jan 28 18:36:28 crc kubenswrapper[4721]: I0128 18:36:28.119447 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Jan 28 18:36:28 crc kubenswrapper[4721]: I0128 18:36:28.119637 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Jan 28 18:36:28 crc kubenswrapper[4721]: I0128 18:36:28.119992 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Jan 28 18:36:28 crc kubenswrapper[4721]: I0128 18:36:28.120132 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 28 18:36:28 crc kubenswrapper[4721]: I0128 18:36:28.120444 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 28 18:36:28 crc kubenswrapper[4721]: I0128 18:36:28.120548 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Jan 28 18:36:28 crc kubenswrapper[4721]: I0128 18:36:28.120700 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Jan 28 18:36:28 crc kubenswrapper[4721]: I0128 18:36:28.120778 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 28 18:36:28 crc kubenswrapper[4721]: I0128 18:36:28.120886 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Jan 28 18:36:28 crc kubenswrapper[4721]: I0128 18:36:28.120984 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Jan 28 18:36:28 crc kubenswrapper[4721]: I0128 18:36:28.120713 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Jan 28 18:36:28 crc kubenswrapper[4721]: I0128 18:36:28.121203 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Jan 28 18:36:28 crc kubenswrapper[4721]: I0128 18:36:28.121210 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 28 18:36:28 crc kubenswrapper[4721]: I0128 18:36:28.121245 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Jan 28 18:36:28 crc kubenswrapper[4721]: I0128 18:36:28.121621 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Jan 28 18:36:28 crc kubenswrapper[4721]: I0128 18:36:28.121637 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-b42n2"] Jan 28 18:36:28 crc kubenswrapper[4721]: I0128 18:36:28.121826 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Jan 28 18:36:28 crc kubenswrapper[4721]: I0128 18:36:28.121937 4721 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Jan 28 18:36:28 crc kubenswrapper[4721]: I0128 18:36:28.122040 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 28 18:36:28 crc kubenswrapper[4721]: I0128 18:36:28.122110 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-b42n2" Jan 28 18:36:28 crc kubenswrapper[4721]: I0128 18:36:28.122160 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 28 18:36:28 crc kubenswrapper[4721]: I0128 18:36:28.122309 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 28 18:36:28 crc kubenswrapper[4721]: I0128 18:36:28.122435 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Jan 28 18:36:28 crc kubenswrapper[4721]: I0128 18:36:28.122543 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Jan 28 18:36:28 crc kubenswrapper[4721]: I0128 18:36:28.131794 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Jan 28 18:36:28 crc kubenswrapper[4721]: I0128 18:36:28.132241 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Jan 28 18:36:28 crc kubenswrapper[4721]: I0128 18:36:28.132478 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Jan 28 18:36:28 crc kubenswrapper[4721]: I0128 18:36:28.132709 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Jan 28 18:36:28 crc kubenswrapper[4721]: I0128 18:36:28.132891 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 28 18:36:28 crc kubenswrapper[4721]: I0128 18:36:28.136547 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Jan 28 18:36:28 crc kubenswrapper[4721]: I0128 18:36:28.136763 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Jan 28 18:36:28 crc kubenswrapper[4721]: I0128 18:36:28.136929 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Jan 28 18:36:28 crc kubenswrapper[4721]: I0128 18:36:28.137096 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Jan 28 18:36:28 crc kubenswrapper[4721]: I0128 18:36:28.137359 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Jan 28 18:36:28 crc kubenswrapper[4721]: I0128 18:36:28.150868 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Jan 28 18:36:28 crc kubenswrapper[4721]: I0128 18:36:28.151338 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 28 18:36:28 crc kubenswrapper[4721]: I0128 18:36:28.152094 4721 reflector.go:368] Caches populated for 
*v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Jan 28 18:36:28 crc kubenswrapper[4721]: I0128 18:36:28.152239 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Jan 28 18:36:28 crc kubenswrapper[4721]: I0128 18:36:28.156728 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Jan 28 18:36:28 crc kubenswrapper[4721]: I0128 18:36:28.156775 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Jan 28 18:36:28 crc kubenswrapper[4721]: I0128 18:36:28.156953 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 28 18:36:28 crc kubenswrapper[4721]: I0128 18:36:28.157057 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Jan 28 18:36:28 crc kubenswrapper[4721]: I0128 18:36:28.157090 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 28 18:36:28 crc kubenswrapper[4721]: I0128 18:36:28.157189 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Jan 28 18:36:28 crc kubenswrapper[4721]: I0128 18:36:28.157337 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 28 18:36:28 crc kubenswrapper[4721]: I0128 18:36:28.157516 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Jan 28 18:36:28 crc kubenswrapper[4721]: I0128 18:36:28.157898 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Jan 28 18:36:28 crc kubenswrapper[4721]: I0128 18:36:28.158070 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Jan 28 18:36:28 crc kubenswrapper[4721]: I0128 18:36:28.158208 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Jan 28 18:36:28 crc kubenswrapper[4721]: I0128 18:36:28.158302 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Jan 28 18:36:28 crc kubenswrapper[4721]: I0128 18:36:28.158213 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Jan 28 18:36:28 crc kubenswrapper[4721]: I0128 18:36:28.158495 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Jan 28 18:36:28 crc kubenswrapper[4721]: I0128 18:36:28.158255 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Jan 28 18:36:28 crc kubenswrapper[4721]: I0128 18:36:28.158406 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Jan 28 18:36:28 crc kubenswrapper[4721]: I0128 18:36:28.158441 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Jan 28 18:36:28 
crc kubenswrapper[4721]: I0128 18:36:28.158916 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Jan 28 18:36:28 crc kubenswrapper[4721]: I0128 18:36:28.158854 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Jan 28 18:36:28 crc kubenswrapper[4721]: I0128 18:36:28.159100 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Jan 28 18:36:28 crc kubenswrapper[4721]: I0128 18:36:28.159054 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Jan 28 18:36:28 crc kubenswrapper[4721]: I0128 18:36:28.159525 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Jan 28 18:36:28 crc kubenswrapper[4721]: I0128 18:36:28.159632 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Jan 28 18:36:28 crc kubenswrapper[4721]: I0128 18:36:28.159537 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Jan 28 18:36:28 crc kubenswrapper[4721]: I0128 18:36:28.159589 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Jan 28 18:36:28 crc kubenswrapper[4721]: I0128 18:36:28.159955 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Jan 28 18:36:28 crc kubenswrapper[4721]: I0128 18:36:28.159924 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Jan 28 18:36:28 crc kubenswrapper[4721]: I0128 18:36:28.160530 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Jan 28 18:36:28 crc kubenswrapper[4721]: I0128 18:36:28.160692 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Jan 28 18:36:28 crc kubenswrapper[4721]: I0128 18:36:28.161387 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Jan 28 18:36:28 crc kubenswrapper[4721]: I0128 18:36:28.162254 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-wqwcd"] Jan 28 18:36:28 crc kubenswrapper[4721]: I0128 18:36:28.162916 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-wqwcd"
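Each of the reflector.go "Caches populated for *v1.Secret from object-"<namespace>"/"<name>"" lines above marks a dedicated reflector finishing its initial list/watch for a single Secret or ConfigMap that some pod on this node references. A sketch of one way to build such a single-object watch with client-go, restricting an informer to one namespace and, via a field selector, to one object name (an illustration of the general technique, not the kubelet's internal code; the namespace and name are taken from the "etcd-client" entry above):

```go
package main

import (
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/fields"
	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	// Watch exactly one Secret: scope the factory to a namespace and
	// narrow the list/watch to a single metadata.name.
	factory := informers.NewSharedInformerFactoryWithOptions(cs, 0,
		informers.WithNamespace("openshift-oauth-apiserver"),
		informers.WithTweakListOptions(func(o *metav1.ListOptions) {
			o.FieldSelector = fields.OneTermEqualSelector("metadata.name", "etcd-client").String()
		}),
	)
	secretInformer := factory.Core().V1().Secrets().Informer()

	stop := make(chan struct{})
	defer close(stop)
	factory.Start(stop)
	factory.WaitForCacheSync(stop)
	// The cache is now populated, analogous to a
	// "Caches populated for *v1.Secret" reflector line.
	fmt.Println("synced:", secretInformer.HasSynced())
}
```

Keeping one narrow watch per referenced object, rather than one broad watch per resource type, bounds how much of the API server's data each node caches.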
Need to start a new one" pod="openshift-ingress/router-default-5444994796-wqwcd" Jan 28 18:36:28 crc kubenswrapper[4721]: I0128 18:36:28.166444 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Jan 28 18:36:28 crc kubenswrapper[4721]: I0128 18:36:28.166738 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Jan 28 18:36:28 crc kubenswrapper[4721]: I0128 18:36:28.166923 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Jan 28 18:36:28 crc kubenswrapper[4721]: I0128 18:36:28.167010 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Jan 28 18:36:28 crc kubenswrapper[4721]: I0128 18:36:28.167215 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Jan 28 18:36:28 crc kubenswrapper[4721]: I0128 18:36:28.167374 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Jan 28 18:36:28 crc kubenswrapper[4721]: I0128 18:36:28.167445 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Jan 28 18:36:28 crc kubenswrapper[4721]: I0128 18:36:28.167603 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Jan 28 18:36:28 crc kubenswrapper[4721]: I0128 18:36:28.167764 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Jan 28 18:36:28 crc kubenswrapper[4721]: I0128 18:36:28.167913 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Jan 28 18:36:28 crc kubenswrapper[4721]: I0128 18:36:28.167394 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Jan 28 18:36:28 crc kubenswrapper[4721]: I0128 18:36:28.168668 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Jan 28 18:36:28 crc kubenswrapper[4721]: I0128 18:36:28.168838 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Jan 28 18:36:28 crc kubenswrapper[4721]: I0128 18:36:28.168928 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Jan 28 18:36:28 crc kubenswrapper[4721]: I0128 18:36:28.176741 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-jtc8t"] Jan 28 18:36:28 crc kubenswrapper[4721]: I0128 18:36:28.177254 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-4x6c5"] Jan 28 18:36:28 crc kubenswrapper[4721]: I0128 18:36:28.177723 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-4x6c5" Jan 28 18:36:28 crc kubenswrapper[4721]: I0128 18:36:28.178083 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-jtc8t" Jan 28 18:36:28 crc kubenswrapper[4721]: I0128 18:36:28.180314 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-dznkc"] Jan 28 18:36:28 crc kubenswrapper[4721]: I0128 18:36:28.180905 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-qrt7r"] Jan 28 18:36:28 crc kubenswrapper[4721]: I0128 18:36:28.181606 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-qrt7r" Jan 28 18:36:28 crc kubenswrapper[4721]: I0128 18:36:28.180953 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-dznkc" Jan 28 18:36:28 crc kubenswrapper[4721]: I0128 18:36:28.194259 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-xwt9s"] Jan 28 18:36:28 crc kubenswrapper[4721]: I0128 18:36:28.194309 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/b56b3f24-3ef6-4506-ad1f-9498398f474f-audit-policies\") pod \"apiserver-7bbb656c7d-8kxsr\" (UID: \"b56b3f24-3ef6-4506-ad1f-9498398f474f\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-8kxsr" Jan 28 18:36:28 crc kubenswrapper[4721]: I0128 18:36:28.194354 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e29aa9b1-ea23-453a-a624-634bf4f8c28b-serving-cert\") pod \"apiserver-76f77b778f-74cdf\" (UID: \"e29aa9b1-ea23-453a-a624-634bf4f8c28b\") " pod="openshift-apiserver/apiserver-76f77b778f-74cdf" Jan 28 18:36:28 crc kubenswrapper[4721]: I0128 18:36:28.194390 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/b56b3f24-3ef6-4506-ad1f-9498398f474f-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-8kxsr\" (UID: \"b56b3f24-3ef6-4506-ad1f-9498398f474f\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-8kxsr" Jan 28 18:36:28 crc kubenswrapper[4721]: I0128 18:36:28.194415 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b56b3f24-3ef6-4506-ad1f-9498398f474f-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-8kxsr\" (UID: \"b56b3f24-3ef6-4506-ad1f-9498398f474f\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-8kxsr" Jan 28 18:36:28 crc kubenswrapper[4721]: I0128 18:36:28.194439 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/e29aa9b1-ea23-453a-a624-634bf4f8c28b-etcd-serving-ca\") pod \"apiserver-76f77b778f-74cdf\" (UID: \"e29aa9b1-ea23-453a-a624-634bf4f8c28b\") " pod="openshift-apiserver/apiserver-76f77b778f-74cdf" Jan 28 18:36:28 crc kubenswrapper[4721]: I0128 18:36:28.194563 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/e29aa9b1-ea23-453a-a624-634bf4f8c28b-audit-dir\") pod \"apiserver-76f77b778f-74cdf\" (UID: \"e29aa9b1-ea23-453a-a624-634bf4f8c28b\") " pod="openshift-apiserver/apiserver-76f77b778f-74cdf" Jan 28 18:36:28 crc kubenswrapper[4721]: I0128 
18:36:28.194598 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/e29aa9b1-ea23-453a-a624-634bf4f8c28b-node-pullsecrets\") pod \"apiserver-76f77b778f-74cdf\" (UID: \"e29aa9b1-ea23-453a-a624-634bf4f8c28b\") " pod="openshift-apiserver/apiserver-76f77b778f-74cdf" Jan 28 18:36:28 crc kubenswrapper[4721]: I0128 18:36:28.194648 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/13b4ddde-7262-4219-8aac-fb34883b9608-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-c9dk6\" (UID: \"13b4ddde-7262-4219-8aac-fb34883b9608\") " pod="openshift-controller-manager/controller-manager-879f6c89f-c9dk6" Jan 28 18:36:28 crc kubenswrapper[4721]: I0128 18:36:28.194712 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8qs9w\" (UniqueName: \"kubernetes.io/projected/e29aa9b1-ea23-453a-a624-634bf4f8c28b-kube-api-access-8qs9w\") pod \"apiserver-76f77b778f-74cdf\" (UID: \"e29aa9b1-ea23-453a-a624-634bf4f8c28b\") " pod="openshift-apiserver/apiserver-76f77b778f-74cdf" Jan 28 18:36:28 crc kubenswrapper[4721]: I0128 18:36:28.194793 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/e29aa9b1-ea23-453a-a624-634bf4f8c28b-encryption-config\") pod \"apiserver-76f77b778f-74cdf\" (UID: \"e29aa9b1-ea23-453a-a624-634bf4f8c28b\") " pod="openshift-apiserver/apiserver-76f77b778f-74cdf" Jan 28 18:36:28 crc kubenswrapper[4721]: I0128 18:36:28.194849 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/e29aa9b1-ea23-453a-a624-634bf4f8c28b-etcd-client\") pod \"apiserver-76f77b778f-74cdf\" (UID: \"e29aa9b1-ea23-453a-a624-634bf4f8c28b\") " pod="openshift-apiserver/apiserver-76f77b778f-74cdf" Jan 28 18:36:28 crc kubenswrapper[4721]: I0128 18:36:28.194887 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a8ac5f19-3f57-4e3a-8f53-dc493fcceea1-config\") pod \"route-controller-manager-6576b87f9c-6n8x8\" (UID: \"a8ac5f19-3f57-4e3a-8f53-dc493fcceea1\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-6n8x8" Jan 28 18:36:28 crc kubenswrapper[4721]: I0128 18:36:28.194916 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/e29aa9b1-ea23-453a-a624-634bf4f8c28b-audit\") pod \"apiserver-76f77b778f-74cdf\" (UID: \"e29aa9b1-ea23-453a-a624-634bf4f8c28b\") " pod="openshift-apiserver/apiserver-76f77b778f-74cdf" Jan 28 18:36:28 crc kubenswrapper[4721]: I0128 18:36:28.194963 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/49007c72-1df2-49db-9bbb-c90ee8207149-config\") pod \"machine-api-operator-5694c8668f-g474w\" (UID: \"49007c72-1df2-49db-9bbb-c90ee8207149\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-g474w" Jan 28 18:36:28 crc kubenswrapper[4721]: I0128 18:36:28.195039 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a8ac5f19-3f57-4e3a-8f53-dc493fcceea1-serving-cert\") pod \"route-controller-manager-6576b87f9c-6n8x8\" (UID: \"a8ac5f19-3f57-4e3a-8f53-dc493fcceea1\") " 
pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-6n8x8" Jan 28 18:36:28 crc kubenswrapper[4721]: I0128 18:36:28.195780 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-xwt9s" Jan 28 18:36:28 crc kubenswrapper[4721]: I0128 18:36:28.198577 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b56b3f24-3ef6-4506-ad1f-9498398f474f-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-8kxsr\" (UID: \"b56b3f24-3ef6-4506-ad1f-9498398f474f\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-8kxsr" Jan 28 18:36:28 crc kubenswrapper[4721]: I0128 18:36:28.199497 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/e29aa9b1-ea23-453a-a624-634bf4f8c28b-etcd-serving-ca\") pod \"apiserver-76f77b778f-74cdf\" (UID: \"e29aa9b1-ea23-453a-a624-634bf4f8c28b\") " pod="openshift-apiserver/apiserver-76f77b778f-74cdf" Jan 28 18:36:28 crc kubenswrapper[4721]: I0128 18:36:28.199566 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/e29aa9b1-ea23-453a-a624-634bf4f8c28b-audit-dir\") pod \"apiserver-76f77b778f-74cdf\" (UID: \"e29aa9b1-ea23-453a-a624-634bf4f8c28b\") " pod="openshift-apiserver/apiserver-76f77b778f-74cdf" Jan 28 18:36:28 crc kubenswrapper[4721]: I0128 18:36:28.199607 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/e29aa9b1-ea23-453a-a624-634bf4f8c28b-node-pullsecrets\") pod \"apiserver-76f77b778f-74cdf\" (UID: \"e29aa9b1-ea23-453a-a624-634bf4f8c28b\") " pod="openshift-apiserver/apiserver-76f77b778f-74cdf" Jan 28 18:36:28 crc kubenswrapper[4721]: I0128 18:36:28.200565 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a8ac5f19-3f57-4e3a-8f53-dc493fcceea1-config\") pod \"route-controller-manager-6576b87f9c-6n8x8\" (UID: \"a8ac5f19-3f57-4e3a-8f53-dc493fcceea1\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-6n8x8" Jan 28 18:36:28 crc kubenswrapper[4721]: I0128 18:36:28.202418 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/b56b3f24-3ef6-4506-ad1f-9498398f474f-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-8kxsr\" (UID: \"b56b3f24-3ef6-4506-ad1f-9498398f474f\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-8kxsr" Jan 28 18:36:28 crc kubenswrapper[4721]: I0128 18:36:28.203272 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d62b5\" (UniqueName: \"kubernetes.io/projected/b56b3f24-3ef6-4506-ad1f-9498398f474f-kube-api-access-d62b5\") pod \"apiserver-7bbb656c7d-8kxsr\" (UID: \"b56b3f24-3ef6-4506-ad1f-9498398f474f\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-8kxsr" Jan 28 18:36:28 crc kubenswrapper[4721]: I0128 18:36:28.204919 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/e29aa9b1-ea23-453a-a624-634bf4f8c28b-audit\") pod \"apiserver-76f77b778f-74cdf\" (UID: \"e29aa9b1-ea23-453a-a624-634bf4f8c28b\") " pod="openshift-apiserver/apiserver-76f77b778f-74cdf" Jan 28 18:36:28 crc kubenswrapper[4721]: I0128 18:36:28.205364 4721 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/b56b3f24-3ef6-4506-ad1f-9498398f474f-audit-policies\") pod \"apiserver-7bbb656c7d-8kxsr\" (UID: \"b56b3f24-3ef6-4506-ad1f-9498398f474f\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-8kxsr" Jan 28 18:36:28 crc kubenswrapper[4721]: I0128 18:36:28.205313 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-49cdl\" (UniqueName: \"kubernetes.io/projected/13b4ddde-7262-4219-8aac-fb34883b9608-kube-api-access-49cdl\") pod \"controller-manager-879f6c89f-c9dk6\" (UID: \"13b4ddde-7262-4219-8aac-fb34883b9608\") " pod="openshift-controller-manager/controller-manager-879f6c89f-c9dk6" Jan 28 18:36:28 crc kubenswrapper[4721]: I0128 18:36:28.206932 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/13b4ddde-7262-4219-8aac-fb34883b9608-config\") pod \"controller-manager-879f6c89f-c9dk6\" (UID: \"13b4ddde-7262-4219-8aac-fb34883b9608\") " pod="openshift-controller-manager/controller-manager-879f6c89f-c9dk6" Jan 28 18:36:28 crc kubenswrapper[4721]: I0128 18:36:28.206995 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/49007c72-1df2-49db-9bbb-c90ee8207149-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-g474w\" (UID: \"49007c72-1df2-49db-9bbb-c90ee8207149\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-g474w" Jan 28 18:36:28 crc kubenswrapper[4721]: I0128 18:36:28.207189 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fc28j\" (UniqueName: \"kubernetes.io/projected/a8ac5f19-3f57-4e3a-8f53-dc493fcceea1-kube-api-access-fc28j\") pod \"route-controller-manager-6576b87f9c-6n8x8\" (UID: \"a8ac5f19-3f57-4e3a-8f53-dc493fcceea1\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-6n8x8" Jan 28 18:36:28 crc kubenswrapper[4721]: I0128 18:36:28.210657 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-4qhmh"] Jan 28 18:36:28 crc kubenswrapper[4721]: I0128 18:36:28.211819 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-96x8n"] Jan 28 18:36:28 crc kubenswrapper[4721]: I0128 18:36:28.217981 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/b56b3f24-3ef6-4506-ad1f-9498398f474f-encryption-config\") pod \"apiserver-7bbb656c7d-8kxsr\" (UID: \"b56b3f24-3ef6-4506-ad1f-9498398f474f\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-8kxsr" Jan 28 18:36:28 crc kubenswrapper[4721]: I0128 18:36:28.218742 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/b56b3f24-3ef6-4506-ad1f-9498398f474f-audit-dir\") pod \"apiserver-7bbb656c7d-8kxsr\" (UID: \"b56b3f24-3ef6-4506-ad1f-9498398f474f\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-8kxsr" Jan 28 18:36:28 crc kubenswrapper[4721]: I0128 18:36:28.218656 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/13b4ddde-7262-4219-8aac-fb34883b9608-config\") pod \"controller-manager-879f6c89f-c9dk6\" (UID: \"13b4ddde-7262-4219-8aac-fb34883b9608\") 
" pod="openshift-controller-manager/controller-manager-879f6c89f-c9dk6" Jan 28 18:36:28 crc kubenswrapper[4721]: I0128 18:36:28.221419 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e29aa9b1-ea23-453a-a624-634bf4f8c28b-config\") pod \"apiserver-76f77b778f-74cdf\" (UID: \"e29aa9b1-ea23-453a-a624-634bf4f8c28b\") " pod="openshift-apiserver/apiserver-76f77b778f-74cdf" Jan 28 18:36:28 crc kubenswrapper[4721]: I0128 18:36:28.221481 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e29aa9b1-ea23-453a-a624-634bf4f8c28b-trusted-ca-bundle\") pod \"apiserver-76f77b778f-74cdf\" (UID: \"e29aa9b1-ea23-453a-a624-634bf4f8c28b\") " pod="openshift-apiserver/apiserver-76f77b778f-74cdf" Jan 28 18:36:28 crc kubenswrapper[4721]: I0128 18:36:28.221544 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/13b4ddde-7262-4219-8aac-fb34883b9608-serving-cert\") pod \"controller-manager-879f6c89f-c9dk6\" (UID: \"13b4ddde-7262-4219-8aac-fb34883b9608\") " pod="openshift-controller-manager/controller-manager-879f6c89f-c9dk6" Jan 28 18:36:28 crc kubenswrapper[4721]: I0128 18:36:28.221757 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/49007c72-1df2-49db-9bbb-c90ee8207149-images\") pod \"machine-api-operator-5694c8668f-g474w\" (UID: \"49007c72-1df2-49db-9bbb-c90ee8207149\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-g474w" Jan 28 18:36:28 crc kubenswrapper[4721]: I0128 18:36:28.225578 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-4qhmh" Jan 28 18:36:28 crc kubenswrapper[4721]: I0128 18:36:28.229070 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Jan 28 18:36:28 crc kubenswrapper[4721]: I0128 18:36:28.227719 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/49007c72-1df2-49db-9bbb-c90ee8207149-images\") pod \"machine-api-operator-5694c8668f-g474w\" (UID: \"49007c72-1df2-49db-9bbb-c90ee8207149\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-g474w" Jan 28 18:36:28 crc kubenswrapper[4721]: I0128 18:36:28.270369 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e29aa9b1-ea23-453a-a624-634bf4f8c28b-serving-cert\") pod \"apiserver-76f77b778f-74cdf\" (UID: \"e29aa9b1-ea23-453a-a624-634bf4f8c28b\") " pod="openshift-apiserver/apiserver-76f77b778f-74cdf" Jan 28 18:36:28 crc kubenswrapper[4721]: I0128 18:36:28.270424 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-96x8n" Jan 28 18:36:28 crc kubenswrapper[4721]: I0128 18:36:28.271032 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e29aa9b1-ea23-453a-a624-634bf4f8c28b-config\") pod \"apiserver-76f77b778f-74cdf\" (UID: \"e29aa9b1-ea23-453a-a624-634bf4f8c28b\") " pod="openshift-apiserver/apiserver-76f77b778f-74cdf" Jan 28 18:36:28 crc kubenswrapper[4721]: I0128 18:36:28.271302 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Jan 28 18:36:28 crc kubenswrapper[4721]: I0128 18:36:28.271682 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/49007c72-1df2-49db-9bbb-c90ee8207149-config\") pod \"machine-api-operator-5694c8668f-g474w\" (UID: \"49007c72-1df2-49db-9bbb-c90ee8207149\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-g474w" Jan 28 18:36:28 crc kubenswrapper[4721]: I0128 18:36:28.272358 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Jan 28 18:36:28 crc kubenswrapper[4721]: I0128 18:36:28.272497 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Jan 28 18:36:28 crc kubenswrapper[4721]: I0128 18:36:28.272693 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Jan 28 18:36:28 crc kubenswrapper[4721]: I0128 18:36:28.272703 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/b56b3f24-3ef6-4506-ad1f-9498398f474f-encryption-config\") pod \"apiserver-7bbb656c7d-8kxsr\" (UID: \"b56b3f24-3ef6-4506-ad1f-9498398f474f\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-8kxsr" Jan 28 18:36:28 crc kubenswrapper[4721]: I0128 18:36:28.273037 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Jan 28 18:36:28 crc kubenswrapper[4721]: I0128 18:36:28.272717 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a8ac5f19-3f57-4e3a-8f53-dc493fcceea1-client-ca\") pod \"route-controller-manager-6576b87f9c-6n8x8\" (UID: \"a8ac5f19-3f57-4e3a-8f53-dc493fcceea1\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-6n8x8" Jan 28 18:36:28 crc kubenswrapper[4721]: I0128 18:36:28.273138 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/b56b3f24-3ef6-4506-ad1f-9498398f474f-etcd-client\") pod \"apiserver-7bbb656c7d-8kxsr\" (UID: \"b56b3f24-3ef6-4506-ad1f-9498398f474f\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-8kxsr" Jan 28 18:36:28 crc kubenswrapper[4721]: I0128 18:36:28.273190 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Jan 28 18:36:28 crc kubenswrapper[4721]: I0128 18:36:28.273205 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b56b3f24-3ef6-4506-ad1f-9498398f474f-serving-cert\") pod \"apiserver-7bbb656c7d-8kxsr\" (UID: \"b56b3f24-3ef6-4506-ad1f-9498398f474f\") " 
pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-8kxsr" Jan 28 18:36:28 crc kubenswrapper[4721]: I0128 18:36:28.273234 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/e29aa9b1-ea23-453a-a624-634bf4f8c28b-image-import-ca\") pod \"apiserver-76f77b778f-74cdf\" (UID: \"e29aa9b1-ea23-453a-a624-634bf4f8c28b\") " pod="openshift-apiserver/apiserver-76f77b778f-74cdf" Jan 28 18:36:28 crc kubenswrapper[4721]: I0128 18:36:28.273263 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f867s\" (UniqueName: \"kubernetes.io/projected/49007c72-1df2-49db-9bbb-c90ee8207149-kube-api-access-f867s\") pod \"machine-api-operator-5694c8668f-g474w\" (UID: \"49007c72-1df2-49db-9bbb-c90ee8207149\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-g474w" Jan 28 18:36:28 crc kubenswrapper[4721]: I0128 18:36:28.273292 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/13b4ddde-7262-4219-8aac-fb34883b9608-client-ca\") pod \"controller-manager-879f6c89f-c9dk6\" (UID: \"13b4ddde-7262-4219-8aac-fb34883b9608\") " pod="openshift-controller-manager/controller-manager-879f6c89f-c9dk6" Jan 28 18:36:28 crc kubenswrapper[4721]: I0128 18:36:28.273383 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Jan 28 18:36:28 crc kubenswrapper[4721]: I0128 18:36:28.273434 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a8ac5f19-3f57-4e3a-8f53-dc493fcceea1-client-ca\") pod \"route-controller-manager-6576b87f9c-6n8x8\" (UID: \"a8ac5f19-3f57-4e3a-8f53-dc493fcceea1\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-6n8x8" Jan 28 18:36:28 crc kubenswrapper[4721]: I0128 18:36:28.273478 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-gdtgs"] Jan 28 18:36:28 crc kubenswrapper[4721]: I0128 18:36:28.274067 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Jan 28 18:36:28 crc kubenswrapper[4721]: I0128 18:36:28.274348 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-gdtgs" Jan 28 18:36:28 crc kubenswrapper[4721]: I0128 18:36:28.274435 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/13b4ddde-7262-4219-8aac-fb34883b9608-client-ca\") pod \"controller-manager-879f6c89f-c9dk6\" (UID: \"13b4ddde-7262-4219-8aac-fb34883b9608\") " pod="openshift-controller-manager/controller-manager-879f6c89f-c9dk6" Jan 28 18:36:28 crc kubenswrapper[4721]: I0128 18:36:28.275072 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Jan 28 18:36:28 crc kubenswrapper[4721]: I0128 18:36:28.275361 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/e29aa9b1-ea23-453a-a624-634bf4f8c28b-image-import-ca\") pod \"apiserver-76f77b778f-74cdf\" (UID: \"e29aa9b1-ea23-453a-a624-634bf4f8c28b\") " pod="openshift-apiserver/apiserver-76f77b778f-74cdf" Jan 28 18:36:28 crc kubenswrapper[4721]: I0128 18:36:28.276749 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/b56b3f24-3ef6-4506-ad1f-9498398f474f-etcd-client\") pod \"apiserver-7bbb656c7d-8kxsr\" (UID: \"b56b3f24-3ef6-4506-ad1f-9498398f474f\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-8kxsr" Jan 28 18:36:28 crc kubenswrapper[4721]: I0128 18:36:28.276842 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-dkq9z"] Jan 28 18:36:28 crc kubenswrapper[4721]: I0128 18:36:28.277768 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Jan 28 18:36:28 crc kubenswrapper[4721]: I0128 18:36:28.278047 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Jan 28 18:36:28 crc kubenswrapper[4721]: I0128 18:36:28.278290 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Jan 28 18:36:28 crc kubenswrapper[4721]: I0128 18:36:28.278774 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/13b4ddde-7262-4219-8aac-fb34883b9608-serving-cert\") pod \"controller-manager-879f6c89f-c9dk6\" (UID: \"13b4ddde-7262-4219-8aac-fb34883b9608\") " pod="openshift-controller-manager/controller-manager-879f6c89f-c9dk6" Jan 28 18:36:28 crc kubenswrapper[4721]: I0128 18:36:28.277846 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-dkq9z" Jan 28 18:36:28 crc kubenswrapper[4721]: I0128 18:36:28.279805 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a8ac5f19-3f57-4e3a-8f53-dc493fcceea1-serving-cert\") pod \"route-controller-manager-6576b87f9c-6n8x8\" (UID: \"a8ac5f19-3f57-4e3a-8f53-dc493fcceea1\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-6n8x8" Jan 28 18:36:28 crc kubenswrapper[4721]: I0128 18:36:28.280127 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Jan 28 18:36:28 crc kubenswrapper[4721]: I0128 18:36:28.280134 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/e29aa9b1-ea23-453a-a624-634bf4f8c28b-encryption-config\") pod \"apiserver-76f77b778f-74cdf\" (UID: \"e29aa9b1-ea23-453a-a624-634bf4f8c28b\") " pod="openshift-apiserver/apiserver-76f77b778f-74cdf" Jan 28 18:36:28 crc kubenswrapper[4721]: I0128 18:36:28.280408 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Jan 28 18:36:28 crc kubenswrapper[4721]: I0128 18:36:28.280808 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/e29aa9b1-ea23-453a-a624-634bf4f8c28b-etcd-client\") pod \"apiserver-76f77b778f-74cdf\" (UID: \"e29aa9b1-ea23-453a-a624-634bf4f8c28b\") " pod="openshift-apiserver/apiserver-76f77b778f-74cdf" Jan 28 18:36:28 crc kubenswrapper[4721]: I0128 18:36:28.280871 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/b56b3f24-3ef6-4506-ad1f-9498398f474f-audit-dir\") pod \"apiserver-7bbb656c7d-8kxsr\" (UID: \"b56b3f24-3ef6-4506-ad1f-9498398f474f\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-8kxsr" Jan 28 18:36:28 crc kubenswrapper[4721]: I0128 18:36:28.282755 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 28 18:36:28 crc kubenswrapper[4721]: I0128 18:36:28.292110 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-86g2n"] Jan 28 18:36:28 crc kubenswrapper[4721]: I0128 18:36:28.285071 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Jan 28 18:36:28 crc kubenswrapper[4721]: I0128 18:36:28.289451 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/49007c72-1df2-49db-9bbb-c90ee8207149-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-g474w\" (UID: \"49007c72-1df2-49db-9bbb-c90ee8207149\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-g474w" Jan 28 18:36:28 crc kubenswrapper[4721]: I0128 18:36:28.285508 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Jan 28 18:36:28 crc kubenswrapper[4721]: I0128 18:36:28.292712 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Jan 28 18:36:28 crc kubenswrapper[4721]: I0128 18:36:28.291389 4721 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/13b4ddde-7262-4219-8aac-fb34883b9608-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-c9dk6\" (UID: \"13b4ddde-7262-4219-8aac-fb34883b9608\") " pod="openshift-controller-manager/controller-manager-879f6c89f-c9dk6" Jan 28 18:36:28 crc kubenswrapper[4721]: I0128 18:36:28.292809 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-qqr56"] Jan 28 18:36:28 crc kubenswrapper[4721]: I0128 18:36:28.293313 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-86g2n" Jan 28 18:36:28 crc kubenswrapper[4721]: I0128 18:36:28.293321 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-qqr56" Jan 28 18:36:28 crc kubenswrapper[4721]: I0128 18:36:28.298878 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Jan 28 18:36:28 crc kubenswrapper[4721]: I0128 18:36:28.299413 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Jan 28 18:36:28 crc kubenswrapper[4721]: I0128 18:36:28.301293 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-hw64n"] Jan 28 18:36:28 crc kubenswrapper[4721]: I0128 18:36:28.301944 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-jrjx5"] Jan 28 18:36:28 crc kubenswrapper[4721]: I0128 18:36:28.302317 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-jrjx5" Jan 28 18:36:28 crc kubenswrapper[4721]: I0128 18:36:28.302393 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-hw64n" Jan 28 18:36:28 crc kubenswrapper[4721]: I0128 18:36:28.303674 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Jan 28 18:36:28 crc kubenswrapper[4721]: I0128 18:36:28.305493 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Jan 28 18:36:28 crc kubenswrapper[4721]: I0128 18:36:28.305503 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Jan 28 18:36:28 crc kubenswrapper[4721]: I0128 18:36:28.307025 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Jan 28 18:36:28 crc kubenswrapper[4721]: I0128 18:36:28.324429 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Jan 28 18:36:28 crc kubenswrapper[4721]: I0128 18:36:28.324486 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b56b3f24-3ef6-4506-ad1f-9498398f474f-serving-cert\") pod \"apiserver-7bbb656c7d-8kxsr\" (UID: \"b56b3f24-3ef6-4506-ad1f-9498398f474f\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-8kxsr" Jan 28 18:36:28 crc kubenswrapper[4721]: I0128 18:36:28.326459 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Jan 28 18:36:28 crc kubenswrapper[4721]: I0128 18:36:28.326652 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Jan 28 18:36:28 crc kubenswrapper[4721]: I0128 18:36:28.329399 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Jan 28 18:36:28 crc kubenswrapper[4721]: I0128 18:36:28.331406 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-hp7z2"] Jan 28 18:36:28 crc kubenswrapper[4721]: I0128 18:36:28.332390 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-hp7z2" Jan 28 18:36:28 crc kubenswrapper[4721]: I0128 18:36:28.332738 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-4nmsk"] Jan 28 18:36:28 crc kubenswrapper[4721]: I0128 18:36:28.333381 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e29aa9b1-ea23-453a-a624-634bf4f8c28b-trusted-ca-bundle\") pod \"apiserver-76f77b778f-74cdf\" (UID: \"e29aa9b1-ea23-453a-a624-634bf4f8c28b\") " pod="openshift-apiserver/apiserver-76f77b778f-74cdf" Jan 28 18:36:28 crc kubenswrapper[4721]: I0128 18:36:28.333642 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-4nmsk" Jan 28 18:36:28 crc kubenswrapper[4721]: I0128 18:36:28.347034 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Jan 28 18:36:28 crc kubenswrapper[4721]: I0128 18:36:28.347367 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-9lrf6"] Jan 28 18:36:28 crc kubenswrapper[4721]: I0128 18:36:28.348787 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-9lrf6" Jan 28 18:36:28 crc kubenswrapper[4721]: I0128 18:36:28.349856 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-g474w"] Jan 28 18:36:28 crc kubenswrapper[4721]: I0128 18:36:28.350611 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-c9dk6"] Jan 28 18:36:28 crc kubenswrapper[4721]: I0128 18:36:28.353230 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-8kxsr"] Jan 28 18:36:28 crc kubenswrapper[4721]: I0128 18:36:28.354522 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-s7d98"] Jan 28 18:36:28 crc kubenswrapper[4721]: I0128 18:36:28.356776 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-s7d98" Jan 28 18:36:28 crc kubenswrapper[4721]: I0128 18:36:28.360749 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Jan 28 18:36:28 crc kubenswrapper[4721]: I0128 18:36:28.362984 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-tjj76"] Jan 28 18:36:28 crc kubenswrapper[4721]: I0128 18:36:28.363521 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-cmtm6"] Jan 28 18:36:28 crc kubenswrapper[4721]: I0128 18:36:28.365490 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-76z8x"] Jan 28 18:36:28 crc kubenswrapper[4721]: I0128 18:36:28.367257 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-blkkj"] Jan 28 18:36:28 crc kubenswrapper[4721]: I0128 18:36:28.369217 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-blkkj"
Jan 28 18:36:28 crc kubenswrapper[4721]: I0128 18:36:28.370329 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-ms7xm"]
Jan 28 18:36:28 crc kubenswrapper[4721]: I0128 18:36:28.371130 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-74cdf"]
Jan 28 18:36:28 crc kubenswrapper[4721]: I0128 18:36:28.374474 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-7b9dz"]
Jan 28 18:36:28 crc kubenswrapper[4721]: I0128 18:36:28.374514 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x6cpq\" (UniqueName: \"kubernetes.io/projected/6e9fcebd-ee55-462a-ab16-b16840c83b25-kube-api-access-x6cpq\") pod \"cluster-image-registry-operator-dc59b4c8b-ms7xm\" (UID: \"6e9fcebd-ee55-462a-ab16-b16840c83b25\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-ms7xm"
Jan 28 18:36:28 crc kubenswrapper[4721]: I0128 18:36:28.374567 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w55rz\" (UniqueName: \"kubernetes.io/projected/597a1c26-12f4-401b-bd2b-1842722282f2-kube-api-access-w55rz\") pod \"machine-approver-56656f9798-8c299\" (UID: \"597a1c26-12f4-401b-bd2b-1842722282f2\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-8c299"
Jan 28 18:36:28 crc kubenswrapper[4721]: I0128 18:36:28.374593 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a1242828-5fbb-4f54-a17b-cb26ab9dbec8-serving-cert\") pod \"authentication-operator-69f744f599-r4gtf\" (UID: \"a1242828-5fbb-4f54-a17b-cb26ab9dbec8\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-r4gtf"
Jan 28 18:36:28 crc kubenswrapper[4721]: I0128 18:36:28.374617 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/597a1c26-12f4-401b-bd2b-1842722282f2-config\") pod \"machine-approver-56656f9798-8c299\" (UID: \"597a1c26-12f4-401b-bd2b-1842722282f2\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-8c299"
Jan 28 18:36:28 crc kubenswrapper[4721]: I0128 18:36:28.374657 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a1242828-5fbb-4f54-a17b-cb26ab9dbec8-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-r4gtf\" (UID: \"a1242828-5fbb-4f54-a17b-cb26ab9dbec8\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-r4gtf"
Jan 28 18:36:28 crc kubenswrapper[4721]: I0128 18:36:28.374683 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a1242828-5fbb-4f54-a17b-cb26ab9dbec8-config\") pod \"authentication-operator-69f744f599-r4gtf\" (UID: \"a1242828-5fbb-4f54-a17b-cb26ab9dbec8\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-r4gtf"
Jan 28 18:36:28 crc kubenswrapper[4721]: I0128 18:36:28.374726 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/6e9fcebd-ee55-462a-ab16-b16840c83b25-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-ms7xm\" (UID: \"6e9fcebd-ee55-462a-ab16-b16840c83b25\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-ms7xm"
Jan 28 18:36:28 crc kubenswrapper[4721]: I0128 18:36:28.374746 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nspb2\" (UniqueName: \"kubernetes.io/projected/5081acb0-d928-4278-8d1f-207f7c3c3289-kube-api-access-nspb2\") pod \"dns-operator-744455d44c-4x6c5\" (UID: \"5081acb0-d928-4278-8d1f-207f7c3c3289\") " pod="openshift-dns-operator/dns-operator-744455d44c-4x6c5"
Jan 28 18:36:28 crc kubenswrapper[4721]: I0128 18:36:28.374776 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ts578\" (UniqueName: \"kubernetes.io/projected/a1242828-5fbb-4f54-a17b-cb26ab9dbec8-kube-api-access-ts578\") pod \"authentication-operator-69f744f599-r4gtf\" (UID: \"a1242828-5fbb-4f54-a17b-cb26ab9dbec8\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-r4gtf"
Jan 28 18:36:28 crc kubenswrapper[4721]: I0128 18:36:28.374801 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a1242828-5fbb-4f54-a17b-cb26ab9dbec8-service-ca-bundle\") pod \"authentication-operator-69f744f599-r4gtf\" (UID: \"a1242828-5fbb-4f54-a17b-cb26ab9dbec8\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-r4gtf"
Jan 28 18:36:28 crc kubenswrapper[4721]: I0128 18:36:28.374825 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/6e9fcebd-ee55-462a-ab16-b16840c83b25-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-ms7xm\" (UID: \"6e9fcebd-ee55-462a-ab16-b16840c83b25\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-ms7xm"
Jan 28 18:36:28 crc kubenswrapper[4721]: I0128 18:36:28.374926 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/6e9fcebd-ee55-462a-ab16-b16840c83b25-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-ms7xm\" (UID: \"6e9fcebd-ee55-462a-ab16-b16840c83b25\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-ms7xm"
Jan 28 18:36:28 crc kubenswrapper[4721]: I0128 18:36:28.375049 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/597a1c26-12f4-401b-bd2b-1842722282f2-machine-approver-tls\") pod \"machine-approver-56656f9798-8c299\" (UID: \"597a1c26-12f4-401b-bd2b-1842722282f2\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-8c299"
Jan 28 18:36:28 crc kubenswrapper[4721]: I0128 18:36:28.375136 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/5081acb0-d928-4278-8d1f-207f7c3c3289-metrics-tls\") pod \"dns-operator-744455d44c-4x6c5\" (UID: \"5081acb0-d928-4278-8d1f-207f7c3c3289\") " pod="openshift-dns-operator/dns-operator-744455d44c-4x6c5"
Jan 28 18:36:28 crc kubenswrapper[4721]: I0128 18:36:28.375199 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/597a1c26-12f4-401b-bd2b-1842722282f2-auth-proxy-config\") pod \"machine-approver-56656f9798-8c299\" (UID: \"597a1c26-12f4-401b-bd2b-1842722282f2\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-8c299"
Jan 28 18:36:28 crc kubenswrapper[4721]: I0128 18:36:28.375347 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-7b9dz"
Jan 28 18:36:28 crc kubenswrapper[4721]: I0128 18:36:28.375582 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29493750-pb8r2"]
Jan 28 18:36:28 crc kubenswrapper[4721]: I0128 18:36:28.376257 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29493750-pb8r2"
Jan 28 18:36:28 crc kubenswrapper[4721]: I0128 18:36:28.377275 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-2k27q"]
Jan 28 18:36:28 crc kubenswrapper[4721]: I0128 18:36:28.377739 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-2k27q"
Jan 28 18:36:28 crc kubenswrapper[4721]: I0128 18:36:28.377887 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert"
Jan 28 18:36:28 crc kubenswrapper[4721]: I0128 18:36:28.378312 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-qp2vg"]
Jan 28 18:36:28 crc kubenswrapper[4721]: I0128 18:36:28.379432 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-qp2vg"
Jan 28 18:36:28 crc kubenswrapper[4721]: I0128 18:36:28.379602 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-ct2hz"]
Jan 28 18:36:28 crc kubenswrapper[4721]: I0128 18:36:28.380306 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-dkq9z"]
Jan 28 18:36:28 crc kubenswrapper[4721]: I0128 18:36:28.381295 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-b42n2"]
Jan 28 18:36:28 crc kubenswrapper[4721]: I0128 18:36:28.382279 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-xwt9s"]
Jan 28 18:36:28 crc kubenswrapper[4721]: I0128 18:36:28.383243 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-86g2n"]
Jan 28 18:36:28 crc kubenswrapper[4721]: I0128 18:36:28.384274 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-dznkc"]
Jan 28 18:36:28 crc kubenswrapper[4721]: I0128 18:36:28.385266 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-l59vq"]
Jan 28 18:36:28 crc kubenswrapper[4721]: I0128 18:36:28.386393 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-6n8x8"]
Jan 28 18:36:28 crc kubenswrapper[4721]: I0128 18:36:28.386865 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-l59vq"
Jan 28 18:36:28 crc kubenswrapper[4721]: I0128 18:36:28.387337 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-qqr56"]
Jan 28 18:36:28 crc kubenswrapper[4721]: I0128 18:36:28.388158 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-qrt7r"]
Jan 28 18:36:28 crc kubenswrapper[4721]: I0128 18:36:28.389758 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-gdtgs"]
Jan 28 18:36:28 crc kubenswrapper[4721]: I0128 18:36:28.391217 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-r4gtf"]
Jan 28 18:36:28 crc kubenswrapper[4721]: I0128 18:36:28.392152 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-jtc8t"]
Jan 28 18:36:28 crc kubenswrapper[4721]: I0128 18:36:28.393162 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-4x6c5"]
Jan 28 18:36:28 crc kubenswrapper[4721]: I0128 18:36:28.394097 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-96x8n"]
Jan 28 18:36:28 crc kubenswrapper[4721]: I0128 18:36:28.396390 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-hp7z2"]
Jan 28 18:36:28 crc kubenswrapper[4721]: I0128 18:36:28.397519 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-4qhmh"]
Jan 28 18:36:28 crc kubenswrapper[4721]: I0128 18:36:28.398558 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle"
Jan 28 18:36:28 crc kubenswrapper[4721]: I0128 18:36:28.398839 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-7b9dz"]
Jan 28 18:36:28 crc kubenswrapper[4721]: I0128 18:36:28.399737 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-blkkj"]
Jan 28 18:36:28 crc kubenswrapper[4721]: I0128 18:36:28.400857 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-bfsqt"]
Jan 28 18:36:28 crc kubenswrapper[4721]: I0128 18:36:28.402007 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-hw64n"]
Jan 28 18:36:28 crc kubenswrapper[4721]: I0128 18:36:28.403471 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-jrjx5"]
Jan 28 18:36:28 crc kubenswrapper[4721]: I0128 18:36:28.404570 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-p8lnf"]
Jan 28 18:36:28 crc kubenswrapper[4721]: I0128 18:36:28.405916 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-2k27q"]
Jan 28 18:36:28 crc kubenswrapper[4721]: I0128 18:36:28.406924 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-x9nr7"]
Jan 28 18:36:28 crc kubenswrapper[4721]: I0128 18:36:28.407824 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-x9nr7"
Jan 28 18:36:28 crc kubenswrapper[4721]: I0128 18:36:28.407988 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-9lrf6"]
Jan 28 18:36:28 crc kubenswrapper[4721]: I0128 18:36:28.409088 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-s7d98"]
Jan 28 18:36:28 crc kubenswrapper[4721]: I0128 18:36:28.410225 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-qp2vg"]
Jan 28 18:36:28 crc kubenswrapper[4721]: I0128 18:36:28.411259 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-xtdkt"]
Jan 28 18:36:28 crc kubenswrapper[4721]: I0128 18:36:28.412873 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-hjbqw"]
Jan 28 18:36:28 crc kubenswrapper[4721]: I0128 18:36:28.413361 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-hjbqw"
Jan 28 18:36:28 crc kubenswrapper[4721]: I0128 18:36:28.413714 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-4nmsk"]
Jan 28 18:36:28 crc kubenswrapper[4721]: I0128 18:36:28.413743 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-xtdkt"
Jan 28 18:36:28 crc kubenswrapper[4721]: I0128 18:36:28.417532 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29493750-pb8r2"]
Jan 28 18:36:28 crc kubenswrapper[4721]: I0128 18:36:28.419158 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt"
Jan 28 18:36:28 crc kubenswrapper[4721]: I0128 18:36:28.423828 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-xtdkt"]
Jan 28 18:36:28 crc kubenswrapper[4721]: I0128 18:36:28.427500 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-l59vq"]
Jan 28 18:36:28 crc kubenswrapper[4721]: I0128 18:36:28.431094 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-hjbqw"]
Jan 28 18:36:28 crc kubenswrapper[4721]: I0128 18:36:28.438046 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert"
Jan 28 18:36:28 crc kubenswrapper[4721]: I0128 18:36:28.460786 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client"
Jan 28 18:36:28 crc kubenswrapper[4721]: I0128 18:36:28.476088 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/597a1c26-12f4-401b-bd2b-1842722282f2-config\") pod \"machine-approver-56656f9798-8c299\" (UID: \"597a1c26-12f4-401b-bd2b-1842722282f2\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-8c299"
Jan 28 18:36:28 crc kubenswrapper[4721]: I0128 18:36:28.476125 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a1242828-5fbb-4f54-a17b-cb26ab9dbec8-serving-cert\") pod \"authentication-operator-69f744f599-r4gtf\" (UID: \"a1242828-5fbb-4f54-a17b-cb26ab9dbec8\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-r4gtf"
Jan 28 18:36:28 crc kubenswrapper[4721]: I0128 18:36:28.476215 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a1242828-5fbb-4f54-a17b-cb26ab9dbec8-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-r4gtf\" (UID: \"a1242828-5fbb-4f54-a17b-cb26ab9dbec8\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-r4gtf"
Jan 28 18:36:28 crc kubenswrapper[4721]: I0128 18:36:28.476235 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a1242828-5fbb-4f54-a17b-cb26ab9dbec8-config\") pod \"authentication-operator-69f744f599-r4gtf\" (UID: \"a1242828-5fbb-4f54-a17b-cb26ab9dbec8\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-r4gtf"
Jan 28 18:36:28 crc kubenswrapper[4721]: I0128 18:36:28.476260 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/6e9fcebd-ee55-462a-ab16-b16840c83b25-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-ms7xm\" (UID: \"6e9fcebd-ee55-462a-ab16-b16840c83b25\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-ms7xm"
Jan 28 18:36:28 crc kubenswrapper[4721]: I0128 18:36:28.476278 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nspb2\" (UniqueName: \"kubernetes.io/projected/5081acb0-d928-4278-8d1f-207f7c3c3289-kube-api-access-nspb2\") pod \"dns-operator-744455d44c-4x6c5\" (UID: \"5081acb0-d928-4278-8d1f-207f7c3c3289\") " pod="openshift-dns-operator/dns-operator-744455d44c-4x6c5"
Jan 28 18:36:28 crc kubenswrapper[4721]: I0128 18:36:28.476304 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ts578\" (UniqueName: \"kubernetes.io/projected/a1242828-5fbb-4f54-a17b-cb26ab9dbec8-kube-api-access-ts578\") pod \"authentication-operator-69f744f599-r4gtf\" (UID: \"a1242828-5fbb-4f54-a17b-cb26ab9dbec8\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-r4gtf"
Jan 28 18:36:28 crc kubenswrapper[4721]: I0128 18:36:28.476327 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a1242828-5fbb-4f54-a17b-cb26ab9dbec8-service-ca-bundle\") pod \"authentication-operator-69f744f599-r4gtf\" (UID: \"a1242828-5fbb-4f54-a17b-cb26ab9dbec8\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-r4gtf"
Jan 28 18:36:28 crc kubenswrapper[4721]: I0128 18:36:28.476343 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/6e9fcebd-ee55-462a-ab16-b16840c83b25-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-ms7xm\" (UID: \"6e9fcebd-ee55-462a-ab16-b16840c83b25\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-ms7xm"
Jan 28 18:36:28 crc kubenswrapper[4721]: I0128 18:36:28.476377 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/6e9fcebd-ee55-462a-ab16-b16840c83b25-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-ms7xm\" (UID: \"6e9fcebd-ee55-462a-ab16-b16840c83b25\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-ms7xm"
Jan 28 18:36:28 crc kubenswrapper[4721]: I0128 18:36:28.476400 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/597a1c26-12f4-401b-bd2b-1842722282f2-machine-approver-tls\") pod \"machine-approver-56656f9798-8c299\" (UID: \"597a1c26-12f4-401b-bd2b-1842722282f2\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-8c299"
Jan 28 18:36:28 crc kubenswrapper[4721]: I0128 18:36:28.476423 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/5081acb0-d928-4278-8d1f-207f7c3c3289-metrics-tls\") pod \"dns-operator-744455d44c-4x6c5\" (UID: \"5081acb0-d928-4278-8d1f-207f7c3c3289\") " pod="openshift-dns-operator/dns-operator-744455d44c-4x6c5"
Jan 28 18:36:28 crc kubenswrapper[4721]: I0128 18:36:28.476453 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/597a1c26-12f4-401b-bd2b-1842722282f2-auth-proxy-config\") pod \"machine-approver-56656f9798-8c299\" (UID: \"597a1c26-12f4-401b-bd2b-1842722282f2\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-8c299"
Jan 28 18:36:28 crc kubenswrapper[4721]: I0128 18:36:28.476474 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x6cpq\" (UniqueName: \"kubernetes.io/projected/6e9fcebd-ee55-462a-ab16-b16840c83b25-kube-api-access-x6cpq\") pod \"cluster-image-registry-operator-dc59b4c8b-ms7xm\" (UID: \"6e9fcebd-ee55-462a-ab16-b16840c83b25\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-ms7xm"
Jan 28 18:36:28 crc kubenswrapper[4721]: I0128 18:36:28.476494 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w55rz\" (UniqueName: \"kubernetes.io/projected/597a1c26-12f4-401b-bd2b-1842722282f2-kube-api-access-w55rz\") pod \"machine-approver-56656f9798-8c299\" (UID: \"597a1c26-12f4-401b-bd2b-1842722282f2\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-8c299"
Jan 28 18:36:28 crc kubenswrapper[4721]: I0128 18:36:28.476915 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/597a1c26-12f4-401b-bd2b-1842722282f2-config\") pod \"machine-approver-56656f9798-8c299\" (UID: \"597a1c26-12f4-401b-bd2b-1842722282f2\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-8c299"
Jan 28 18:36:28 crc kubenswrapper[4721]: I0128 18:36:28.478659 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a1242828-5fbb-4f54-a17b-cb26ab9dbec8-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-r4gtf\" (UID: \"a1242828-5fbb-4f54-a17b-cb26ab9dbec8\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-r4gtf"
Jan 28 18:36:28 crc kubenswrapper[4721]: I0128 18:36:28.478707 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a1242828-5fbb-4f54-a17b-cb26ab9dbec8-service-ca-bundle\") pod \"authentication-operator-69f744f599-r4gtf\" (UID: \"a1242828-5fbb-4f54-a17b-cb26ab9dbec8\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-r4gtf"
Jan 28 18:36:28 crc kubenswrapper[4721]: I0128 18:36:28.478766 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/597a1c26-12f4-401b-bd2b-1842722282f2-auth-proxy-config\") pod \"machine-approver-56656f9798-8c299\" (UID: \"597a1c26-12f4-401b-bd2b-1842722282f2\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-8c299"
Jan 28 18:36:28 crc kubenswrapper[4721]: I0128 18:36:28.479121 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a1242828-5fbb-4f54-a17b-cb26ab9dbec8-config\") pod \"authentication-operator-69f744f599-r4gtf\" (UID: \"a1242828-5fbb-4f54-a17b-cb26ab9dbec8\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-r4gtf"
Jan 28 18:36:28 crc kubenswrapper[4721]: I0128 18:36:28.479777 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/6e9fcebd-ee55-462a-ab16-b16840c83b25-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-ms7xm\" (UID: \"6e9fcebd-ee55-462a-ab16-b16840c83b25\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-ms7xm"
Jan 28 18:36:28 crc kubenswrapper[4721]: I0128 18:36:28.479784 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn"
Jan 28 18:36:28 crc kubenswrapper[4721]: I0128 18:36:28.481454 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/6e9fcebd-ee55-462a-ab16-b16840c83b25-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-ms7xm\" (UID: \"6e9fcebd-ee55-462a-ab16-b16840c83b25\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-ms7xm"
Jan 28 18:36:28 crc kubenswrapper[4721]: I0128 18:36:28.481530 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/597a1c26-12f4-401b-bd2b-1842722282f2-machine-approver-tls\") pod \"machine-approver-56656f9798-8c299\" (UID: \"597a1c26-12f4-401b-bd2b-1842722282f2\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-8c299"
Jan 28 18:36:28 crc kubenswrapper[4721]: I0128 18:36:28.482486 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/5081acb0-d928-4278-8d1f-207f7c3c3289-metrics-tls\") pod \"dns-operator-744455d44c-4x6c5\" (UID: \"5081acb0-d928-4278-8d1f-207f7c3c3289\") " pod="openshift-dns-operator/dns-operator-744455d44c-4x6c5"
Jan 28 18:36:28 crc kubenswrapper[4721]: I0128 18:36:28.483047 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a1242828-5fbb-4f54-a17b-cb26ab9dbec8-serving-cert\") pod \"authentication-operator-69f744f599-r4gtf\" (UID: \"a1242828-5fbb-4f54-a17b-cb26ab9dbec8\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-r4gtf"
Jan 28 18:36:28 crc kubenswrapper[4721]: I0128 18:36:28.499158 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle"
Jan 28 18:36:28 crc kubenswrapper[4721]: I0128 18:36:28.520322 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt"
Jan 28 18:36:28 crc kubenswrapper[4721]: I0128 18:36:28.539508 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt"
Jan 28 18:36:28 crc kubenswrapper[4721]: I0128 18:36:28.559101 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config"
Jan 28 18:36:28 crc kubenswrapper[4721]: I0128 18:36:28.578538 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert"
Jan 28 18:36:28 crc kubenswrapper[4721]: I0128 18:36:28.599480 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r"
Jan 28 18:36:28 crc kubenswrapper[4721]: I0128 18:36:28.618728 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config"
Jan 28 18:36:28 crc kubenswrapper[4721]: I0128 18:36:28.639310 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt"
Jan 28 18:36:28 crc kubenswrapper[4721]: I0128 18:36:28.658616 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw"
Jan 28 18:36:28 crc kubenswrapper[4721]: I0128 18:36:28.678803 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert"
object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Jan 28 18:36:28 crc kubenswrapper[4721]: I0128 18:36:28.698808 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Jan 28 18:36:28 crc kubenswrapper[4721]: I0128 18:36:28.735296 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8qs9w\" (UniqueName: \"kubernetes.io/projected/e29aa9b1-ea23-453a-a624-634bf4f8c28b-kube-api-access-8qs9w\") pod \"apiserver-76f77b778f-74cdf\" (UID: \"e29aa9b1-ea23-453a-a624-634bf4f8c28b\") " pod="openshift-apiserver/apiserver-76f77b778f-74cdf" Jan 28 18:36:28 crc kubenswrapper[4721]: I0128 18:36:28.754289 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d62b5\" (UniqueName: \"kubernetes.io/projected/b56b3f24-3ef6-4506-ad1f-9498398f474f-kube-api-access-d62b5\") pod \"apiserver-7bbb656c7d-8kxsr\" (UID: \"b56b3f24-3ef6-4506-ad1f-9498398f474f\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-8kxsr" Jan 28 18:36:28 crc kubenswrapper[4721]: I0128 18:36:28.777870 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-49cdl\" (UniqueName: \"kubernetes.io/projected/13b4ddde-7262-4219-8aac-fb34883b9608-kube-api-access-49cdl\") pod \"controller-manager-879f6c89f-c9dk6\" (UID: \"13b4ddde-7262-4219-8aac-fb34883b9608\") " pod="openshift-controller-manager/controller-manager-879f6c89f-c9dk6" Jan 28 18:36:28 crc kubenswrapper[4721]: I0128 18:36:28.778216 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-74cdf" Jan 28 18:36:28 crc kubenswrapper[4721]: I0128 18:36:28.778728 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Jan 28 18:36:28 crc kubenswrapper[4721]: I0128 18:36:28.798616 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Jan 28 18:36:28 crc kubenswrapper[4721]: I0128 18:36:28.825424 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Jan 28 18:36:28 crc kubenswrapper[4721]: I0128 18:36:28.840197 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Jan 28 18:36:28 crc kubenswrapper[4721]: I0128 18:36:28.860018 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Jan 28 18:36:28 crc kubenswrapper[4721]: I0128 18:36:28.879144 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Jan 28 18:36:28 crc kubenswrapper[4721]: I0128 18:36:28.940163 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Jan 28 18:36:28 crc kubenswrapper[4721]: I0128 18:36:28.941075 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fc28j\" (UniqueName: \"kubernetes.io/projected/a8ac5f19-3f57-4e3a-8f53-dc493fcceea1-kube-api-access-fc28j\") pod \"route-controller-manager-6576b87f9c-6n8x8\" (UID: \"a8ac5f19-3f57-4e3a-8f53-dc493fcceea1\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-6n8x8" Jan 28 18:36:28 crc kubenswrapper[4721]: I0128 18:36:28.964645 
4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-74cdf"] Jan 28 18:36:28 crc kubenswrapper[4721]: I0128 18:36:28.978106 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Jan 28 18:36:28 crc kubenswrapper[4721]: I0128 18:36:28.997932 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-c9dk6" Jan 28 18:36:29 crc kubenswrapper[4721]: I0128 18:36:29.000099 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Jan 28 18:36:29 crc kubenswrapper[4721]: I0128 18:36:29.019687 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Jan 28 18:36:29 crc kubenswrapper[4721]: I0128 18:36:29.021825 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-8kxsr" Jan 28 18:36:29 crc kubenswrapper[4721]: I0128 18:36:29.058230 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f867s\" (UniqueName: \"kubernetes.io/projected/49007c72-1df2-49db-9bbb-c90ee8207149-kube-api-access-f867s\") pod \"machine-api-operator-5694c8668f-g474w\" (UID: \"49007c72-1df2-49db-9bbb-c90ee8207149\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-g474w" Jan 28 18:36:29 crc kubenswrapper[4721]: I0128 18:36:29.059847 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Jan 28 18:36:29 crc kubenswrapper[4721]: I0128 18:36:29.068857 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-g474w" Jan 28 18:36:29 crc kubenswrapper[4721]: I0128 18:36:29.079112 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Jan 28 18:36:29 crc kubenswrapper[4721]: I0128 18:36:29.099413 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-6n8x8" Jan 28 18:36:29 crc kubenswrapper[4721]: I0128 18:36:29.099924 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Jan 28 18:36:29 crc kubenswrapper[4721]: I0128 18:36:29.119882 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Jan 28 18:36:29 crc kubenswrapper[4721]: I0128 18:36:29.140210 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Jan 28 18:36:29 crc kubenswrapper[4721]: I0128 18:36:29.158854 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Jan 28 18:36:29 crc kubenswrapper[4721]: I0128 18:36:29.178323 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Jan 28 18:36:29 crc kubenswrapper[4721]: I0128 18:36:29.199095 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Jan 28 18:36:29 crc kubenswrapper[4721]: I0128 18:36:29.217664 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-c9dk6"] Jan 28 18:36:29 crc kubenswrapper[4721]: I0128 18:36:29.221792 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Jan 28 18:36:29 crc kubenswrapper[4721]: I0128 18:36:29.243934 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Jan 28 18:36:29 crc kubenswrapper[4721]: I0128 18:36:29.244918 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-74cdf" event={"ID":"e29aa9b1-ea23-453a-a624-634bf4f8c28b","Type":"ContainerStarted","Data":"8e576026adcd774a440c0019547584a6700e1a36af96f473d5375a65ce783fdd"} Jan 28 18:36:29 crc kubenswrapper[4721]: I0128 18:36:29.244955 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-74cdf" event={"ID":"e29aa9b1-ea23-453a-a624-634bf4f8c28b","Type":"ContainerStarted","Data":"6e98183112f35c34ac802ad40452744961f344bbf24536aa9c6f0c4655310017"} Jan 28 18:36:29 crc kubenswrapper[4721]: I0128 18:36:29.263149 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Jan 28 18:36:29 crc kubenswrapper[4721]: I0128 18:36:29.265778 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-8kxsr"] Jan 28 18:36:29 crc kubenswrapper[4721]: I0128 18:36:29.282335 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Jan 28 18:36:29 crc kubenswrapper[4721]: I0128 18:36:29.297604 4721 request.go:700] Waited for 1.003723698s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver-operator/configmaps?fieldSelector=metadata.name%3Dkube-apiserver-operator-config&limit=500&resourceVersion=0 Jan 28 18:36:29 crc 
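The request.go:700 line above is client-go's rate limiter reporting that a GET of a ConfigMap sat queued for just over a second; as the message itself says, this is client-side throttling (the client's QPS/burst budget), not server-side API Priority and Fairness. A quick, illustrative way to pull every such wait out of a dump (the regex is an assumption about this message format):

import re

THROTTLED = re.compile(
    r'Waited for (?P<secs>[\d.]+)s due to client-side throttling.*?'
    r'request: (?P<verb>[A-Z]+):(?P<url>\S+)',
    re.S,
)

def throttle_waits(dump: str):
    """Return [(seconds, verb, url)] for each client-side throttling report, longest first."""
    hits = [(float(m['secs']), m['verb'], m['url']) for m in THROTTLED.finditer(dump)]
    return sorted(hits, reverse=True)

This section contains two such waits (1.00s for the GET above and, further down, 1.82s for a POST of a service-account token), which is ordinary while dozens of pods start at once.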
Jan 28 18:36:29 crc kubenswrapper[4721]: I0128 18:36:29.299888 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config"
Jan 28 18:36:29 crc kubenswrapper[4721]: I0128 18:36:29.316419 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-g474w"]
Jan 28 18:36:29 crc kubenswrapper[4721]: I0128 18:36:29.320956 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt"
Jan 28 18:36:29 crc kubenswrapper[4721]: W0128 18:36:29.334825 4721 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod49007c72_1df2_49db_9bbb_c90ee8207149.slice/crio-8609d891b72139fdc1c64c002d704315cd1e454ab0d42bf224a17e2ba7f8bfe9 WatchSource:0}: Error finding container 8609d891b72139fdc1c64c002d704315cd1e454ab0d42bf224a17e2ba7f8bfe9: Status 404 returned error can't find the container with id 8609d891b72139fdc1c64c002d704315cd1e454ab0d42bf224a17e2ba7f8bfe9
Jan 28 18:36:29 crc kubenswrapper[4721]: I0128 18:36:29.339135 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt"
Jan 28 18:36:29 crc kubenswrapper[4721]: I0128 18:36:29.359751 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls"
Jan 28 18:36:29 crc kubenswrapper[4721]: I0128 18:36:29.378419 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx"
Jan 28 18:36:29 crc kubenswrapper[4721]: I0128 18:36:29.384495 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-6n8x8"]
Jan 28 18:36:29 crc kubenswrapper[4721]: W0128 18:36:29.396822 4721 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda8ac5f19_3f57_4e3a_8f53_dc493fcceea1.slice/crio-236d9b13a4c4bd07c6ea135f049bb2bd9433a0e6f3dc793c5af396e081007866 WatchSource:0}: Error finding container 236d9b13a4c4bd07c6ea135f049bb2bd9433a0e6f3dc793c5af396e081007866: Status 404 returned error can't find the container with id 236d9b13a4c4bd07c6ea135f049bb2bd9433a0e6f3dc793c5af396e081007866
Jan 28 18:36:29 crc kubenswrapper[4721]: I0128 18:36:29.398366 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls"
Jan 28 18:36:29 crc kubenswrapper[4721]: I0128 18:36:29.418288 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert"
Jan 28 18:36:29 crc kubenswrapper[4721]: I0128 18:36:29.438559 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt"
Jan 28 18:36:29 crc kubenswrapper[4721]: I0128 18:36:29.460833 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk"
Jan 28 18:36:29 crc kubenswrapper[4721]: I0128 18:36:29.479253 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert"
Jan 28 18:36:29 crc kubenswrapper[4721]: I0128 18:36:29.499947 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt"
Jan 28 18:36:29 crc kubenswrapper[4721]: I0128 18:36:29.518397 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert"
Jan 28 18:36:29 crc kubenswrapper[4721]: I0128 18:36:29.540265 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert"
Jan 28 18:36:29 crc kubenswrapper[4721]: I0128 18:36:29.558913 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret"
Jan 28 18:36:29 crc kubenswrapper[4721]: I0128 18:36:29.578803 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf"
Jan 28 18:36:29 crc kubenswrapper[4721]: I0128 18:36:29.598632 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert"
Jan 28 18:36:29 crc kubenswrapper[4721]: I0128 18:36:29.619943 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key"
Jan 28 18:36:29 crc kubenswrapper[4721]: I0128 18:36:29.638851 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt"
Jan 28 18:36:29 crc kubenswrapper[4721]: I0128 18:36:29.658494 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c"
Jan 28 18:36:29 crc kubenswrapper[4721]: I0128 18:36:29.679479 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle"
Jan 28 18:36:29 crc kubenswrapper[4721]: I0128 18:36:29.699375 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt"
Jan 28 18:36:29 crc kubenswrapper[4721]: I0128 18:36:29.720095 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Jan 28 18:36:29 crc kubenswrapper[4721]: I0128 18:36:29.738336 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Jan 28 18:36:29 crc kubenswrapper[4721]: I0128 18:36:29.759143 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert"
Jan 28 18:36:29 crc kubenswrapper[4721]: I0128 18:36:29.778988 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt"
Jan 28 18:36:29 crc kubenswrapper[4721]: I0128 18:36:29.797973 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt"
Jan 28 18:36:29 crc kubenswrapper[4721]: I0128 18:36:29.819470 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl"
Jan 28 18:36:29 crc kubenswrapper[4721]: I0128 18:36:29.839157 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config"
Jan 28 18:36:29 crc kubenswrapper[4721]: I0128 18:36:29.858667 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt"
Jan 28 18:36:29 crc kubenswrapper[4721]: I0128 18:36:29.879294 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg"
Jan 28 18:36:29 crc kubenswrapper[4721]: I0128 18:36:29.898951 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics"
Jan 28 18:36:29 crc kubenswrapper[4721]: I0128 18:36:29.923683 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca"
Jan 28 18:36:29 crc kubenswrapper[4721]: I0128 18:36:29.938789 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt"
Jan 28 18:36:29 crc kubenswrapper[4721]: I0128 18:36:29.958162 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh"
Jan 28 18:36:29 crc kubenswrapper[4721]: I0128 18:36:29.978583 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default"
Jan 28 18:36:29 crc kubenswrapper[4721]: I0128 18:36:29.999067 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls"
Jan 28 18:36:30 crc kubenswrapper[4721]: I0128 18:36:30.018574 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd"
Jan 28 18:36:30 crc kubenswrapper[4721]: I0128 18:36:30.038722 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token"
Jan 28 18:36:30 crc kubenswrapper[4721]: I0128 18:36:30.059142 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls"
Jan 28 18:36:30 crc kubenswrapper[4721]: I0128 18:36:30.078773 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx"
Jan 28 18:36:30 crc kubenswrapper[4721]: I0128 18:36:30.099012 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt"
Jan 28 18:36:30 crc kubenswrapper[4721]: I0128 18:36:30.118522 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert"
Jan 28 18:36:30 crc kubenswrapper[4721]: I0128 18:36:30.138782 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt"
Jan 28 18:36:30 crc kubenswrapper[4721]: I0128 18:36:30.160632 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt"
Jan 28 18:36:30 crc kubenswrapper[4721]: I0128 18:36:30.179116 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt"
Jan 28 18:36:30 crc kubenswrapper[4721]: I0128 18:36:30.199749 4721 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k"
Jan 28 18:36:30 crc kubenswrapper[4721]: I0128 18:36:30.250542 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ts578\" (UniqueName: \"kubernetes.io/projected/a1242828-5fbb-4f54-a17b-cb26ab9dbec8-kube-api-access-ts578\") pod \"authentication-operator-69f744f599-r4gtf\" (UID: \"a1242828-5fbb-4f54-a17b-cb26ab9dbec8\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-r4gtf"
Jan 28 18:36:30 crc kubenswrapper[4721]: I0128 18:36:30.254603 4721 generic.go:334] "Generic (PLEG): container finished" podID="b56b3f24-3ef6-4506-ad1f-9498398f474f" containerID="b3f85f16dfcb525b2eceaea6cc78086339df296f26b64f5dc9a5719f6f49275a" exitCode=0
Jan 28 18:36:30 crc kubenswrapper[4721]: I0128 18:36:30.254671 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-8kxsr" event={"ID":"b56b3f24-3ef6-4506-ad1f-9498398f474f","Type":"ContainerDied","Data":"b3f85f16dfcb525b2eceaea6cc78086339df296f26b64f5dc9a5719f6f49275a"}
Jan 28 18:36:30 crc kubenswrapper[4721]: I0128 18:36:30.254703 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-8kxsr" event={"ID":"b56b3f24-3ef6-4506-ad1f-9498398f474f","Type":"ContainerStarted","Data":"5d63ea0cba63b7bff58c45ff208d46ef076e92da199ec25e4c3c3b6f3067a80a"}
Jan 28 18:36:30 crc kubenswrapper[4721]: I0128 18:36:30.255379 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w55rz\" (UniqueName: \"kubernetes.io/projected/597a1c26-12f4-401b-bd2b-1842722282f2-kube-api-access-w55rz\") pod \"machine-approver-56656f9798-8c299\" (UID: \"597a1c26-12f4-401b-bd2b-1842722282f2\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-8c299"
Jan 28 18:36:30 crc kubenswrapper[4721]: I0128 18:36:30.266162 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-c9dk6" event={"ID":"13b4ddde-7262-4219-8aac-fb34883b9608","Type":"ContainerStarted","Data":"00a75734892b0f995f4ecca4e1c2197943c9c19a58ad893ce00a141221eb8b75"}
Jan 28 18:36:30 crc kubenswrapper[4721]: I0128 18:36:30.266219 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-c9dk6" event={"ID":"13b4ddde-7262-4219-8aac-fb34883b9608","Type":"ContainerStarted","Data":"dfad2cedaf51c86061bb343f8931c6d0ac0f135b0212ccec479018e53202c572"}
Jan 28 18:36:30 crc kubenswrapper[4721]: I0128 18:36:30.266784 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-c9dk6"
Jan 28 18:36:30 crc kubenswrapper[4721]: I0128 18:36:30.269271 4721 generic.go:334] "Generic (PLEG): container finished" podID="e29aa9b1-ea23-453a-a624-634bf4f8c28b" containerID="8e576026adcd774a440c0019547584a6700e1a36af96f473d5375a65ce783fdd" exitCode=0
Jan 28 18:36:30 crc kubenswrapper[4721]: I0128 18:36:30.269351 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-74cdf" event={"ID":"e29aa9b1-ea23-453a-a624-634bf4f8c28b","Type":"ContainerDied","Data":"8e576026adcd774a440c0019547584a6700e1a36af96f473d5375a65ce783fdd"}
Jan 28 18:36:30 crc kubenswrapper[4721]: I0128 18:36:30.269377 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-74cdf" event={"ID":"e29aa9b1-ea23-453a-a624-634bf4f8c28b","Type":"ContainerStarted","Data":"23f3d71d4a49eec83db88186aabd7bbfa814234e3f38d06bd4736a9464d3b4a9"}
Jan 28 18:36:30 crc kubenswrapper[4721]: I0128 18:36:30.269390 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-74cdf" event={"ID":"e29aa9b1-ea23-453a-a624-634bf4f8c28b","Type":"ContainerStarted","Data":"aca87951810c8367893fb75ca2042fe055dbda9c240e9810df33f3aeed170ed2"}
Jan 28 18:36:30 crc kubenswrapper[4721]: I0128 18:36:30.271369 4721 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-c9dk6 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.5:8443/healthz\": dial tcp 10.217.0.5:8443: connect: connection refused" start-of-body=
Jan 28 18:36:30 crc kubenswrapper[4721]: I0128 18:36:30.271417 4721 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-c9dk6" podUID="13b4ddde-7262-4219-8aac-fb34883b9608" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.5:8443/healthz\": dial tcp 10.217.0.5:8443: connect: connection refused"
Jan 28 18:36:30 crc kubenswrapper[4721]: I0128 18:36:30.274464 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-6n8x8" event={"ID":"a8ac5f19-3f57-4e3a-8f53-dc493fcceea1","Type":"ContainerStarted","Data":"c1a0dc6e5b5a7283b3189a83a4d6ce388eeef0edc349858942b194400384cfd4"}
Jan 28 18:36:30 crc kubenswrapper[4721]: I0128 18:36:30.274519 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-6n8x8" event={"ID":"a8ac5f19-3f57-4e3a-8f53-dc493fcceea1","Type":"ContainerStarted","Data":"236d9b13a4c4bd07c6ea135f049bb2bd9433a0e6f3dc793c5af396e081007866"}
Jan 28 18:36:30 crc kubenswrapper[4721]: I0128 18:36:30.275987 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-6n8x8"
Jan 28 18:36:30 crc kubenswrapper[4721]: I0128 18:36:30.276162 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nspb2\" (UniqueName: \"kubernetes.io/projected/5081acb0-d928-4278-8d1f-207f7c3c3289-kube-api-access-nspb2\") pod \"dns-operator-744455d44c-4x6c5\" (UID: \"5081acb0-d928-4278-8d1f-207f7c3c3289\") " pod="openshift-dns-operator/dns-operator-744455d44c-4x6c5"
Jan 28 18:36:30 crc kubenswrapper[4721]: I0128 18:36:30.276498 4721 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-6n8x8 container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.9:8443/healthz\": dial tcp 10.217.0.9:8443: connect: connection refused" start-of-body=
Jan 28 18:36:30 crc kubenswrapper[4721]: I0128 18:36:30.276549 4721 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-6n8x8" podUID="a8ac5f19-3f57-4e3a-8f53-dc493fcceea1" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.9:8443/healthz\": dial tcp 10.217.0.9:8443: connect: connection refused"
Jan 28 18:36:30 crc kubenswrapper[4721]: I0128 18:36:30.277586 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-g474w" event={"ID":"49007c72-1df2-49db-9bbb-c90ee8207149","Type":"ContainerStarted","Data":"f6f81874d8043c471a3348dd680dba59b45a1354728e938a813bf7d96c6675ef"}
Jan 28 18:36:30 crc kubenswrapper[4721]: I0128 18:36:30.277653 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-g474w" event={"ID":"49007c72-1df2-49db-9bbb-c90ee8207149","Type":"ContainerStarted","Data":"c5ee873328d38375c9935404b7f5cd7f0662a290572903eb46c589e9cead1d3f"}
Jan 28 18:36:30 crc kubenswrapper[4721]: I0128 18:36:30.277669 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-g474w" event={"ID":"49007c72-1df2-49db-9bbb-c90ee8207149","Type":"ContainerStarted","Data":"8609d891b72139fdc1c64c002d704315cd1e454ab0d42bf224a17e2ba7f8bfe9"}
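The probe failures above are the expected start-up pattern rather than an error: the PLEG has just reported ContainerStarted for controller-manager and route-controller-manager, and the first readiness probes hit https://10.217.0.x:8443/healthz before each server is listening, so they fail with connection refused until a later probe passes. The "container finished ... exitCode=0" events alongside them are likely init containers completing normally. A short, illustrative tally of probe failures per pod (matching on the prober.go:107 message text is an assumption):

import re
from collections import Counter

PROBE_FAILED = re.compile(r'"Probe failed" probeType="(?P<kind>\w+)" pod="(?P<pod>[^"]+)"')

def probe_failures(dump: str) -> Counter:
    """Count probe failures per (probe kind, namespace/pod) pair."""
    return Counter((m['kind'], m['pod']) for m in PROBE_FAILED.finditer(dump))

A pod that keeps accumulating Readiness failures long after its ContainerStarted event is the one worth investigating; a single failure immediately after start, as here, normally clears on the next probe interval.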
pod="openshift-machine-api/machine-api-operator-5694c8668f-g474w" event={"ID":"49007c72-1df2-49db-9bbb-c90ee8207149","Type":"ContainerStarted","Data":"8609d891b72139fdc1c64c002d704315cd1e454ab0d42bf224a17e2ba7f8bfe9"} Jan 28 18:36:30 crc kubenswrapper[4721]: I0128 18:36:30.296321 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/6e9fcebd-ee55-462a-ab16-b16840c83b25-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-ms7xm\" (UID: \"6e9fcebd-ee55-462a-ab16-b16840c83b25\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-ms7xm" Jan 28 18:36:30 crc kubenswrapper[4721]: I0128 18:36:30.297753 4721 request.go:700] Waited for 1.820667093s due to client-side throttling, not priority and fairness, request: POST:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/serviceaccounts/cluster-image-registry-operator/token Jan 28 18:36:30 crc kubenswrapper[4721]: I0128 18:36:30.318993 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x6cpq\" (UniqueName: \"kubernetes.io/projected/6e9fcebd-ee55-462a-ab16-b16840c83b25-kube-api-access-x6cpq\") pod \"cluster-image-registry-operator-dc59b4c8b-ms7xm\" (UID: \"6e9fcebd-ee55-462a-ab16-b16840c83b25\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-ms7xm" Jan 28 18:36:30 crc kubenswrapper[4721]: I0128 18:36:30.402409 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vbv5f\" (UniqueName: \"kubernetes.io/projected/8508a38e-342a-4dab-956c-cc847d18e6bc-kube-api-access-vbv5f\") pod \"openshift-apiserver-operator-796bbdcf4f-p8lnf\" (UID: \"8508a38e-342a-4dab-956c-cc847d18e6bc\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-p8lnf" Jan 28 18:36:30 crc kubenswrapper[4721]: I0128 18:36:30.402469 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9260fa7e-9c98-4777-9625-3ac5501c883c-service-ca-bundle\") pod \"router-default-5444994796-wqwcd\" (UID: \"9260fa7e-9c98-4777-9625-3ac5501c883c\") " pod="openshift-ingress/router-default-5444994796-wqwcd" Jan 28 18:36:30 crc kubenswrapper[4721]: I0128 18:36:30.402497 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6e7be82c-acf6-4120-8f43-221b6ef958c8-serving-cert\") pod \"console-operator-58897d9998-76z8x\" (UID: \"6e7be82c-acf6-4120-8f43-221b6ef958c8\") " pod="openshift-console-operator/console-operator-58897d9998-76z8x" Jan 28 18:36:30 crc kubenswrapper[4721]: I0128 18:36:30.402517 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/52b4f91f-7c7b-401a-82b0-8907f6880677-trusted-ca-bundle\") pod \"console-f9d7485db-ct2hz\" (UID: \"52b4f91f-7c7b-401a-82b0-8907f6880677\") " pod="openshift-console/console-f9d7485db-ct2hz" Jan 28 18:36:30 crc kubenswrapper[4721]: I0128 18:36:30.402536 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/26a0a4f9-321f-4196-88ce-888b82380eb6-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-jtc8t\" (UID: 
\"26a0a4f9-321f-4196-88ce-888b82380eb6\") " pod="openshift-authentication/oauth-openshift-558db77b4-jtc8t" Jan 28 18:36:30 crc kubenswrapper[4721]: I0128 18:36:30.402557 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9h5lx\" (UniqueName: \"kubernetes.io/projected/26a0a4f9-321f-4196-88ce-888b82380eb6-kube-api-access-9h5lx\") pod \"oauth-openshift-558db77b4-jtc8t\" (UID: \"26a0a4f9-321f-4196-88ce-888b82380eb6\") " pod="openshift-authentication/oauth-openshift-558db77b4-jtc8t" Jan 28 18:36:30 crc kubenswrapper[4721]: I0128 18:36:30.402685 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8wf27\" (UniqueName: \"kubernetes.io/projected/52b4f91f-7c7b-401a-82b0-8907f6880677-kube-api-access-8wf27\") pod \"console-f9d7485db-ct2hz\" (UID: \"52b4f91f-7c7b-401a-82b0-8907f6880677\") " pod="openshift-console/console-f9d7485db-ct2hz" Jan 28 18:36:30 crc kubenswrapper[4721]: I0128 18:36:30.402745 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/26a0a4f9-321f-4196-88ce-888b82380eb6-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-jtc8t\" (UID: \"26a0a4f9-321f-4196-88ce-888b82380eb6\") " pod="openshift-authentication/oauth-openshift-558db77b4-jtc8t" Jan 28 18:36:30 crc kubenswrapper[4721]: I0128 18:36:30.402779 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fd0ecfef-29a6-474c-a266-ed16b5548797-serving-cert\") pod \"etcd-operator-b45778765-qrt7r\" (UID: \"fd0ecfef-29a6-474c-a266-ed16b5548797\") " pod="openshift-etcd-operator/etcd-operator-b45778765-qrt7r" Jan 28 18:36:30 crc kubenswrapper[4721]: I0128 18:36:30.402808 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fd0ecfef-29a6-474c-a266-ed16b5548797-config\") pod \"etcd-operator-b45778765-qrt7r\" (UID: \"fd0ecfef-29a6-474c-a266-ed16b5548797\") " pod="openshift-etcd-operator/etcd-operator-b45778765-qrt7r" Jan 28 18:36:30 crc kubenswrapper[4721]: I0128 18:36:30.402839 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/fd0ecfef-29a6-474c-a266-ed16b5548797-etcd-ca\") pod \"etcd-operator-b45778765-qrt7r\" (UID: \"fd0ecfef-29a6-474c-a266-ed16b5548797\") " pod="openshift-etcd-operator/etcd-operator-b45778765-qrt7r" Jan 28 18:36:30 crc kubenswrapper[4721]: I0128 18:36:30.402885 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/6c3bbd44-e901-4e2a-b48c-f1e1ddb965dc-installation-pull-secrets\") pod \"image-registry-697d97f7c8-b42n2\" (UID: \"6c3bbd44-e901-4e2a-b48c-f1e1ddb965dc\") " pod="openshift-image-registry/image-registry-697d97f7c8-b42n2" Jan 28 18:36:30 crc kubenswrapper[4721]: I0128 18:36:30.402918 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/52b4f91f-7c7b-401a-82b0-8907f6880677-oauth-serving-cert\") pod \"console-f9d7485db-ct2hz\" (UID: \"52b4f91f-7c7b-401a-82b0-8907f6880677\") " pod="openshift-console/console-f9d7485db-ct2hz" Jan 28 
18:36:30 crc kubenswrapper[4721]: I0128 18:36:30.402988 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/26a0a4f9-321f-4196-88ce-888b82380eb6-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-jtc8t\" (UID: \"26a0a4f9-321f-4196-88ce-888b82380eb6\") " pod="openshift-authentication/oauth-openshift-558db77b4-jtc8t" Jan 28 18:36:30 crc kubenswrapper[4721]: I0128 18:36:30.403019 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/26a0a4f9-321f-4196-88ce-888b82380eb6-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-jtc8t\" (UID: \"26a0a4f9-321f-4196-88ce-888b82380eb6\") " pod="openshift-authentication/oauth-openshift-558db77b4-jtc8t" Jan 28 18:36:30 crc kubenswrapper[4721]: I0128 18:36:30.403093 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/47acc23c-4409-4e15-a231-5c095917842d-serving-cert\") pod \"openshift-config-operator-7777fb866f-bfsqt\" (UID: \"47acc23c-4409-4e15-a231-5c095917842d\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-bfsqt" Jan 28 18:36:30 crc kubenswrapper[4721]: I0128 18:36:30.403124 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/6c3bbd44-e901-4e2a-b48c-f1e1ddb965dc-registry-certificates\") pod \"image-registry-697d97f7c8-b42n2\" (UID: \"6c3bbd44-e901-4e2a-b48c-f1e1ddb965dc\") " pod="openshift-image-registry/image-registry-697d97f7c8-b42n2" Jan 28 18:36:30 crc kubenswrapper[4721]: I0128 18:36:30.403152 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/52b4f91f-7c7b-401a-82b0-8907f6880677-service-ca\") pod \"console-f9d7485db-ct2hz\" (UID: \"52b4f91f-7c7b-401a-82b0-8907f6880677\") " pod="openshift-console/console-f9d7485db-ct2hz" Jan 28 18:36:30 crc kubenswrapper[4721]: I0128 18:36:30.403229 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/6c3bbd44-e901-4e2a-b48c-f1e1ddb965dc-trusted-ca\") pod \"image-registry-697d97f7c8-b42n2\" (UID: \"6c3bbd44-e901-4e2a-b48c-f1e1ddb965dc\") " pod="openshift-image-registry/image-registry-697d97f7c8-b42n2" Jan 28 18:36:30 crc kubenswrapper[4721]: I0128 18:36:30.403269 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9260fa7e-9c98-4777-9625-3ac5501c883c-metrics-certs\") pod \"router-default-5444994796-wqwcd\" (UID: \"9260fa7e-9c98-4777-9625-3ac5501c883c\") " pod="openshift-ingress/router-default-5444994796-wqwcd" Jan 28 18:36:30 crc kubenswrapper[4721]: I0128 18:36:30.403304 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/26a0a4f9-321f-4196-88ce-888b82380eb6-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-jtc8t\" (UID: \"26a0a4f9-321f-4196-88ce-888b82380eb6\") " pod="openshift-authentication/oauth-openshift-558db77b4-jtc8t" Jan 28 18:36:30 crc kubenswrapper[4721]: I0128 18:36:30.403332 
4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2cpq2\" (UniqueName: \"kubernetes.io/projected/b30c15c2-ac57-4e56-a55b-5b9de02e097f-kube-api-access-2cpq2\") pod \"downloads-7954f5f757-cmtm6\" (UID: \"b30c15c2-ac57-4e56-a55b-5b9de02e097f\") " pod="openshift-console/downloads-7954f5f757-cmtm6" Jan 28 18:36:30 crc kubenswrapper[4721]: I0128 18:36:30.403364 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kq6zs\" (UniqueName: \"kubernetes.io/projected/6c3bbd44-e901-4e2a-b48c-f1e1ddb965dc-kube-api-access-kq6zs\") pod \"image-registry-697d97f7c8-b42n2\" (UID: \"6c3bbd44-e901-4e2a-b48c-f1e1ddb965dc\") " pod="openshift-image-registry/image-registry-697d97f7c8-b42n2" Jan 28 18:36:30 crc kubenswrapper[4721]: I0128 18:36:30.403415 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/6c3bbd44-e901-4e2a-b48c-f1e1ddb965dc-bound-sa-token\") pod \"image-registry-697d97f7c8-b42n2\" (UID: \"6c3bbd44-e901-4e2a-b48c-f1e1ddb965dc\") " pod="openshift-image-registry/image-registry-697d97f7c8-b42n2" Jan 28 18:36:30 crc kubenswrapper[4721]: I0128 18:36:30.403446 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/995bfe33-c190-48b3-bb6c-9c6cb81d8359-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-xwt9s\" (UID: \"995bfe33-c190-48b3-bb6c-9c6cb81d8359\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-xwt9s" Jan 28 18:36:30 crc kubenswrapper[4721]: I0128 18:36:30.403529 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/47acc23c-4409-4e15-a231-5c095917842d-available-featuregates\") pod \"openshift-config-operator-7777fb866f-bfsqt\" (UID: \"47acc23c-4409-4e15-a231-5c095917842d\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-bfsqt" Jan 28 18:36:30 crc kubenswrapper[4721]: I0128 18:36:30.403621 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/26a0a4f9-321f-4196-88ce-888b82380eb6-audit-dir\") pod \"oauth-openshift-558db77b4-jtc8t\" (UID: \"26a0a4f9-321f-4196-88ce-888b82380eb6\") " pod="openshift-authentication/oauth-openshift-558db77b4-jtc8t" Jan 28 18:36:30 crc kubenswrapper[4721]: I0128 18:36:30.403772 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/fd0ecfef-29a6-474c-a266-ed16b5548797-etcd-service-ca\") pod \"etcd-operator-b45778765-qrt7r\" (UID: \"fd0ecfef-29a6-474c-a266-ed16b5548797\") " pod="openshift-etcd-operator/etcd-operator-b45778765-qrt7r" Jan 28 18:36:30 crc kubenswrapper[4721]: I0128 18:36:30.403907 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/6c3bbd44-e901-4e2a-b48c-f1e1ddb965dc-ca-trust-extracted\") pod \"image-registry-697d97f7c8-b42n2\" (UID: \"6c3bbd44-e901-4e2a-b48c-f1e1ddb965dc\") " pod="openshift-image-registry/image-registry-697d97f7c8-b42n2" Jan 28 18:36:30 crc kubenswrapper[4721]: I0128 18:36:30.403961 4721 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/26a0a4f9-321f-4196-88ce-888b82380eb6-audit-policies\") pod \"oauth-openshift-558db77b4-jtc8t\" (UID: \"26a0a4f9-321f-4196-88ce-888b82380eb6\") " pod="openshift-authentication/oauth-openshift-558db77b4-jtc8t" Jan 28 18:36:30 crc kubenswrapper[4721]: I0128 18:36:30.404550 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8508a38e-342a-4dab-956c-cc847d18e6bc-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-p8lnf\" (UID: \"8508a38e-342a-4dab-956c-cc847d18e6bc\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-p8lnf" Jan 28 18:36:30 crc kubenswrapper[4721]: I0128 18:36:30.404627 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/52b4f91f-7c7b-401a-82b0-8907f6880677-console-serving-cert\") pod \"console-f9d7485db-ct2hz\" (UID: \"52b4f91f-7c7b-401a-82b0-8907f6880677\") " pod="openshift-console/console-f9d7485db-ct2hz" Jan 28 18:36:30 crc kubenswrapper[4721]: I0128 18:36:30.404661 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kvq8j\" (UniqueName: \"kubernetes.io/projected/9260fa7e-9c98-4777-9625-3ac5501c883c-kube-api-access-kvq8j\") pod \"router-default-5444994796-wqwcd\" (UID: \"9260fa7e-9c98-4777-9625-3ac5501c883c\") " pod="openshift-ingress/router-default-5444994796-wqwcd" Jan 28 18:36:30 crc kubenswrapper[4721]: I0128 18:36:30.404689 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/592908e7-063e-4a05-8bfa-19d925c28be7-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-tjj76\" (UID: \"592908e7-063e-4a05-8bfa-19d925c28be7\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-tjj76" Jan 28 18:36:30 crc kubenswrapper[4721]: I0128 18:36:30.404716 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-48bt8\" (UniqueName: \"kubernetes.io/projected/47acc23c-4409-4e15-a231-5c095917842d-kube-api-access-48bt8\") pod \"openshift-config-operator-7777fb866f-bfsqt\" (UID: \"47acc23c-4409-4e15-a231-5c095917842d\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-bfsqt" Jan 28 18:36:30 crc kubenswrapper[4721]: I0128 18:36:30.404759 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/9260fa7e-9c98-4777-9625-3ac5501c883c-default-certificate\") pod \"router-default-5444994796-wqwcd\" (UID: \"9260fa7e-9c98-4777-9625-3ac5501c883c\") " pod="openshift-ingress/router-default-5444994796-wqwcd" Jan 28 18:36:30 crc kubenswrapper[4721]: I0128 18:36:30.404830 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/26a0a4f9-321f-4196-88ce-888b82380eb6-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-jtc8t\" (UID: \"26a0a4f9-321f-4196-88ce-888b82380eb6\") " pod="openshift-authentication/oauth-openshift-558db77b4-jtc8t" Jan 28 18:36:30 crc kubenswrapper[4721]: I0128 18:36:30.404855 4721 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/995bfe33-c190-48b3-bb6c-9c6cb81d8359-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-xwt9s\" (UID: \"995bfe33-c190-48b3-bb6c-9c6cb81d8359\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-xwt9s" Jan 28 18:36:30 crc kubenswrapper[4721]: I0128 18:36:30.404875 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wmmwb\" (UniqueName: \"kubernetes.io/projected/6e7be82c-acf6-4120-8f43-221b6ef958c8-kube-api-access-wmmwb\") pod \"console-operator-58897d9998-76z8x\" (UID: \"6e7be82c-acf6-4120-8f43-221b6ef958c8\") " pod="openshift-console-operator/console-operator-58897d9998-76z8x" Jan 28 18:36:30 crc kubenswrapper[4721]: I0128 18:36:30.404903 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/9260fa7e-9c98-4777-9625-3ac5501c883c-stats-auth\") pod \"router-default-5444994796-wqwcd\" (UID: \"9260fa7e-9c98-4777-9625-3ac5501c883c\") " pod="openshift-ingress/router-default-5444994796-wqwcd" Jan 28 18:36:30 crc kubenswrapper[4721]: I0128 18:36:30.405377 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9txr8\" (UniqueName: \"kubernetes.io/projected/592908e7-063e-4a05-8bfa-19d925c28be7-kube-api-access-9txr8\") pod \"openshift-controller-manager-operator-756b6f6bc6-tjj76\" (UID: \"592908e7-063e-4a05-8bfa-19d925c28be7\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-tjj76" Jan 28 18:36:30 crc kubenswrapper[4721]: I0128 18:36:30.405418 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mhc9s\" (UniqueName: \"kubernetes.io/projected/fd0ecfef-29a6-474c-a266-ed16b5548797-kube-api-access-mhc9s\") pod \"etcd-operator-b45778765-qrt7r\" (UID: \"fd0ecfef-29a6-474c-a266-ed16b5548797\") " pod="openshift-etcd-operator/etcd-operator-b45778765-qrt7r" Jan 28 18:36:30 crc kubenswrapper[4721]: I0128 18:36:30.405447 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8508a38e-342a-4dab-956c-cc847d18e6bc-config\") pod \"openshift-apiserver-operator-796bbdcf4f-p8lnf\" (UID: \"8508a38e-342a-4dab-956c-cc847d18e6bc\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-p8lnf" Jan 28 18:36:30 crc kubenswrapper[4721]: I0128 18:36:30.405475 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7dd85c51-680b-4af2-8fac-8b9d94f7f2b6-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-dznkc\" (UID: \"7dd85c51-680b-4af2-8fac-8b9d94f7f2b6\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-dznkc" Jan 28 18:36:30 crc kubenswrapper[4721]: I0128 18:36:30.405536 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/26a0a4f9-321f-4196-88ce-888b82380eb6-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-jtc8t\" (UID: \"26a0a4f9-321f-4196-88ce-888b82380eb6\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-jtc8t" Jan 28 18:36:30 crc kubenswrapper[4721]: I0128 18:36:30.405559 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/fd0ecfef-29a6-474c-a266-ed16b5548797-etcd-client\") pod \"etcd-operator-b45778765-qrt7r\" (UID: \"fd0ecfef-29a6-474c-a266-ed16b5548797\") " pod="openshift-etcd-operator/etcd-operator-b45778765-qrt7r" Jan 28 18:36:30 crc kubenswrapper[4721]: I0128 18:36:30.405681 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/52b4f91f-7c7b-401a-82b0-8907f6880677-console-oauth-config\") pod \"console-f9d7485db-ct2hz\" (UID: \"52b4f91f-7c7b-401a-82b0-8907f6880677\") " pod="openshift-console/console-f9d7485db-ct2hz" Jan 28 18:36:30 crc kubenswrapper[4721]: I0128 18:36:30.405741 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/26a0a4f9-321f-4196-88ce-888b82380eb6-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-jtc8t\" (UID: \"26a0a4f9-321f-4196-88ce-888b82380eb6\") " pod="openshift-authentication/oauth-openshift-558db77b4-jtc8t" Jan 28 18:36:30 crc kubenswrapper[4721]: I0128 18:36:30.405803 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/995bfe33-c190-48b3-bb6c-9c6cb81d8359-config\") pod \"kube-controller-manager-operator-78b949d7b-xwt9s\" (UID: \"995bfe33-c190-48b3-bb6c-9c6cb81d8359\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-xwt9s" Jan 28 18:36:30 crc kubenswrapper[4721]: I0128 18:36:30.405839 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6e7be82c-acf6-4120-8f43-221b6ef958c8-config\") pod \"console-operator-58897d9998-76z8x\" (UID: \"6e7be82c-acf6-4120-8f43-221b6ef958c8\") " pod="openshift-console-operator/console-operator-58897d9998-76z8x" Jan 28 18:36:30 crc kubenswrapper[4721]: I0128 18:36:30.405870 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/6e7be82c-acf6-4120-8f43-221b6ef958c8-trusted-ca\") pod \"console-operator-58897d9998-76z8x\" (UID: \"6e7be82c-acf6-4120-8f43-221b6ef958c8\") " pod="openshift-console-operator/console-operator-58897d9998-76z8x" Jan 28 18:36:30 crc kubenswrapper[4721]: I0128 18:36:30.406241 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/26a0a4f9-321f-4196-88ce-888b82380eb6-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-jtc8t\" (UID: \"26a0a4f9-321f-4196-88ce-888b82380eb6\") " pod="openshift-authentication/oauth-openshift-558db77b4-jtc8t" Jan 28 18:36:30 crc kubenswrapper[4721]: I0128 18:36:30.406312 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/592908e7-063e-4a05-8bfa-19d925c28be7-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-tjj76\" (UID: \"592908e7-063e-4a05-8bfa-19d925c28be7\") " 
pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-tjj76" Jan 28 18:36:30 crc kubenswrapper[4721]: I0128 18:36:30.406383 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/26a0a4f9-321f-4196-88ce-888b82380eb6-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-jtc8t\" (UID: \"26a0a4f9-321f-4196-88ce-888b82380eb6\") " pod="openshift-authentication/oauth-openshift-558db77b4-jtc8t" Jan 28 18:36:30 crc kubenswrapper[4721]: I0128 18:36:30.406570 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/6c3bbd44-e901-4e2a-b48c-f1e1ddb965dc-registry-tls\") pod \"image-registry-697d97f7c8-b42n2\" (UID: \"6c3bbd44-e901-4e2a-b48c-f1e1ddb965dc\") " pod="openshift-image-registry/image-registry-697d97f7c8-b42n2" Jan 28 18:36:30 crc kubenswrapper[4721]: I0128 18:36:30.406616 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/52b4f91f-7c7b-401a-82b0-8907f6880677-console-config\") pod \"console-f9d7485db-ct2hz\" (UID: \"52b4f91f-7c7b-401a-82b0-8907f6880677\") " pod="openshift-console/console-f9d7485db-ct2hz" Jan 28 18:36:30 crc kubenswrapper[4721]: I0128 18:36:30.406674 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-b42n2\" (UID: \"6c3bbd44-e901-4e2a-b48c-f1e1ddb965dc\") " pod="openshift-image-registry/image-registry-697d97f7c8-b42n2" Jan 28 18:36:30 crc kubenswrapper[4721]: E0128 18:36:30.407014 4721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 18:36:30.906999143 +0000 UTC m=+156.632304703 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-b42n2" (UID: "6c3bbd44-e901-4e2a-b48c-f1e1ddb965dc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 18:36:30 crc kubenswrapper[4721]: I0128 18:36:30.407529 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7dd85c51-680b-4af2-8fac-8b9d94f7f2b6-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-dznkc\" (UID: \"7dd85c51-680b-4af2-8fac-8b9d94f7f2b6\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-dznkc" Jan 28 18:36:30 crc kubenswrapper[4721]: I0128 18:36:30.407586 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/26a0a4f9-321f-4196-88ce-888b82380eb6-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-jtc8t\" (UID: \"26a0a4f9-321f-4196-88ce-888b82380eb6\") " pod="openshift-authentication/oauth-openshift-558db77b4-jtc8t" Jan 28 18:36:30 crc kubenswrapper[4721]: I0128 18:36:30.407612 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7dd85c51-680b-4af2-8fac-8b9d94f7f2b6-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-dznkc\" (UID: \"7dd85c51-680b-4af2-8fac-8b9d94f7f2b6\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-dznkc" Jan 28 18:36:30 crc kubenswrapper[4721]: I0128 18:36:30.423676 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-8c299" Jan 28 18:36:30 crc kubenswrapper[4721]: I0128 18:36:30.472246 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-r4gtf" Jan 28 18:36:30 crc kubenswrapper[4721]: I0128 18:36:30.508181 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-4x6c5" Jan 28 18:36:30 crc kubenswrapper[4721]: E0128 18:36:30.509153 4721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 18:36:31.009123707 +0000 UTC m=+156.734429277 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 18:36:30 crc kubenswrapper[4721]: I0128 18:36:30.508967 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 18:36:30 crc kubenswrapper[4721]: I0128 18:36:30.511920 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/6c3bbd44-e901-4e2a-b48c-f1e1ddb965dc-bound-sa-token\") pod \"image-registry-697d97f7c8-b42n2\" (UID: \"6c3bbd44-e901-4e2a-b48c-f1e1ddb965dc\") " pod="openshift-image-registry/image-registry-697d97f7c8-b42n2" Jan 28 18:36:30 crc kubenswrapper[4721]: I0128 18:36:30.511966 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/5f55d8f7-6300-4d3c-8cdf-d4e2d106fa5f-webhook-cert\") pod \"packageserver-d55dfcdfc-blkkj\" (UID: \"5f55d8f7-6300-4d3c-8cdf-d4e2d106fa5f\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-blkkj" Jan 28 18:36:30 crc kubenswrapper[4721]: I0128 18:36:30.511989 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/4f8f829a-0e7b-4ad6-9dc4-ce845d2e9d26-signing-key\") pod \"service-ca-9c57cc56f-7b9dz\" (UID: \"4f8f829a-0e7b-4ad6-9dc4-ce845d2e9d26\") " pod="openshift-service-ca/service-ca-9c57cc56f-7b9dz" Jan 28 18:36:30 crc kubenswrapper[4721]: I0128 18:36:30.512012 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/4657c92e-5f11-45b4-bf64-91d04c42ace3-images\") pod \"machine-config-operator-74547568cd-qqr56\" (UID: \"4657c92e-5f11-45b4-bf64-91d04c42ace3\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-qqr56" Jan 28 18:36:30 crc kubenswrapper[4721]: I0128 18:36:30.512032 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d8a50493-8d3e-4391-ad78-0bd93ce8157e-config\") pod \"service-ca-operator-777779d784-2k27q\" (UID: \"d8a50493-8d3e-4391-ad78-0bd93ce8157e\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-2k27q" Jan 28 18:36:30 crc kubenswrapper[4721]: I0128 18:36:30.512131 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/26a0a4f9-321f-4196-88ce-888b82380eb6-audit-dir\") pod \"oauth-openshift-558db77b4-jtc8t\" (UID: \"26a0a4f9-321f-4196-88ce-888b82380eb6\") " pod="openshift-authentication/oauth-openshift-558db77b4-jtc8t" Jan 28 18:36:30 crc kubenswrapper[4721]: I0128 18:36:30.512645 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: 
\"kubernetes.io/host-path/26a0a4f9-321f-4196-88ce-888b82380eb6-audit-dir\") pod \"oauth-openshift-558db77b4-jtc8t\" (UID: \"26a0a4f9-321f-4196-88ce-888b82380eb6\") " pod="openshift-authentication/oauth-openshift-558db77b4-jtc8t" Jan 28 18:36:30 crc kubenswrapper[4721]: I0128 18:36:30.512693 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/fd0ecfef-29a6-474c-a266-ed16b5548797-etcd-service-ca\") pod \"etcd-operator-b45778765-qrt7r\" (UID: \"fd0ecfef-29a6-474c-a266-ed16b5548797\") " pod="openshift-etcd-operator/etcd-operator-b45778765-qrt7r" Jan 28 18:36:30 crc kubenswrapper[4721]: I0128 18:36:30.512731 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/6c3bbd44-e901-4e2a-b48c-f1e1ddb965dc-ca-trust-extracted\") pod \"image-registry-697d97f7c8-b42n2\" (UID: \"6c3bbd44-e901-4e2a-b48c-f1e1ddb965dc\") " pod="openshift-image-registry/image-registry-697d97f7c8-b42n2" Jan 28 18:36:30 crc kubenswrapper[4721]: I0128 18:36:30.512750 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8508a38e-342a-4dab-956c-cc847d18e6bc-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-p8lnf\" (UID: \"8508a38e-342a-4dab-956c-cc847d18e6bc\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-p8lnf" Jan 28 18:36:30 crc kubenswrapper[4721]: I0128 18:36:30.512769 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/eaf91cbf-9c86-4235-89c5-3ba0c1ea7d6f-bound-sa-token\") pod \"ingress-operator-5b745b69d9-96x8n\" (UID: \"eaf91cbf-9c86-4235-89c5-3ba0c1ea7d6f\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-96x8n" Jan 28 18:36:30 crc kubenswrapper[4721]: I0128 18:36:30.512790 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-48bt8\" (UniqueName: \"kubernetes.io/projected/47acc23c-4409-4e15-a231-5c095917842d-kube-api-access-48bt8\") pod \"openshift-config-operator-7777fb866f-bfsqt\" (UID: \"47acc23c-4409-4e15-a231-5c095917842d\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-bfsqt" Jan 28 18:36:30 crc kubenswrapper[4721]: I0128 18:36:30.512806 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vpvgf\" (UniqueName: \"kubernetes.io/projected/f8ec1447-58ab-4c73-bc49-0da5b940c6cf-kube-api-access-vpvgf\") pod \"cluster-samples-operator-665b6dd947-4qhmh\" (UID: \"f8ec1447-58ab-4c73-bc49-0da5b940c6cf\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-4qhmh" Jan 28 18:36:30 crc kubenswrapper[4721]: I0128 18:36:30.512832 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/9260fa7e-9c98-4777-9625-3ac5501c883c-default-certificate\") pod \"router-default-5444994796-wqwcd\" (UID: \"9260fa7e-9c98-4777-9625-3ac5501c883c\") " pod="openshift-ingress/router-default-5444994796-wqwcd" Jan 28 18:36:30 crc kubenswrapper[4721]: I0128 18:36:30.512848 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/26a0a4f9-321f-4196-88ce-888b82380eb6-v4-0-config-system-cliconfig\") pod 
\"oauth-openshift-558db77b4-jtc8t\" (UID: \"26a0a4f9-321f-4196-88ce-888b82380eb6\") " pod="openshift-authentication/oauth-openshift-558db77b4-jtc8t" Jan 28 18:36:30 crc kubenswrapper[4721]: I0128 18:36:30.512866 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/995bfe33-c190-48b3-bb6c-9c6cb81d8359-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-xwt9s\" (UID: \"995bfe33-c190-48b3-bb6c-9c6cb81d8359\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-xwt9s" Jan 28 18:36:30 crc kubenswrapper[4721]: I0128 18:36:30.512892 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/9260fa7e-9c98-4777-9625-3ac5501c883c-stats-auth\") pod \"router-default-5444994796-wqwcd\" (UID: \"9260fa7e-9c98-4777-9625-3ac5501c883c\") " pod="openshift-ingress/router-default-5444994796-wqwcd" Jan 28 18:36:30 crc kubenswrapper[4721]: I0128 18:36:30.512909 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mhc9s\" (UniqueName: \"kubernetes.io/projected/fd0ecfef-29a6-474c-a266-ed16b5548797-kube-api-access-mhc9s\") pod \"etcd-operator-b45778765-qrt7r\" (UID: \"fd0ecfef-29a6-474c-a266-ed16b5548797\") " pod="openshift-etcd-operator/etcd-operator-b45778765-qrt7r" Jan 28 18:36:30 crc kubenswrapper[4721]: I0128 18:36:30.512963 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8508a38e-342a-4dab-956c-cc847d18e6bc-config\") pod \"openshift-apiserver-operator-796bbdcf4f-p8lnf\" (UID: \"8508a38e-342a-4dab-956c-cc847d18e6bc\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-p8lnf" Jan 28 18:36:30 crc kubenswrapper[4721]: I0128 18:36:30.512982 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7dd85c51-680b-4af2-8fac-8b9d94f7f2b6-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-dznkc\" (UID: \"7dd85c51-680b-4af2-8fac-8b9d94f7f2b6\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-dznkc" Jan 28 18:36:30 crc kubenswrapper[4721]: I0128 18:36:30.513005 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0db061c0-7df0-4ca1-a388-c69dd9344b9c-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-dkq9z\" (UID: \"0db061c0-7df0-4ca1-a388-c69dd9344b9c\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-dkq9z" Jan 28 18:36:30 crc kubenswrapper[4721]: I0128 18:36:30.513048 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/26a0a4f9-321f-4196-88ce-888b82380eb6-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-jtc8t\" (UID: \"26a0a4f9-321f-4196-88ce-888b82380eb6\") " pod="openshift-authentication/oauth-openshift-558db77b4-jtc8t" Jan 28 18:36:30 crc kubenswrapper[4721]: I0128 18:36:30.513073 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/fd0ecfef-29a6-474c-a266-ed16b5548797-etcd-client\") pod \"etcd-operator-b45778765-qrt7r\" (UID: \"fd0ecfef-29a6-474c-a266-ed16b5548797\") " 
pod="openshift-etcd-operator/etcd-operator-b45778765-qrt7r" Jan 28 18:36:30 crc kubenswrapper[4721]: I0128 18:36:30.513093 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/52b4f91f-7c7b-401a-82b0-8907f6880677-console-oauth-config\") pod \"console-f9d7485db-ct2hz\" (UID: \"52b4f91f-7c7b-401a-82b0-8907f6880677\") " pod="openshift-console/console-f9d7485db-ct2hz" Jan 28 18:36:30 crc kubenswrapper[4721]: I0128 18:36:30.513111 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/995bfe33-c190-48b3-bb6c-9c6cb81d8359-config\") pod \"kube-controller-manager-operator-78b949d7b-xwt9s\" (UID: \"995bfe33-c190-48b3-bb6c-9c6cb81d8359\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-xwt9s" Jan 28 18:36:30 crc kubenswrapper[4721]: I0128 18:36:30.513128 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mzfxh\" (UniqueName: \"kubernetes.io/projected/a776531f-ebd1-491e-b6d7-378a11aad9d8-kube-api-access-mzfxh\") pod \"csi-hostpathplugin-xtdkt\" (UID: \"a776531f-ebd1-491e-b6d7-378a11aad9d8\") " pod="hostpath-provisioner/csi-hostpathplugin-xtdkt" Jan 28 18:36:30 crc kubenswrapper[4721]: I0128 18:36:30.513156 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q9mrq\" (UniqueName: \"kubernetes.io/projected/d0a361ba-f31a-477c-a532-136ebf0b025b-kube-api-access-q9mrq\") pod \"olm-operator-6b444d44fb-hp7z2\" (UID: \"d0a361ba-f31a-477c-a532-136ebf0b025b\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-hp7z2" Jan 28 18:36:30 crc kubenswrapper[4721]: I0128 18:36:30.513191 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/52b4f91f-7c7b-401a-82b0-8907f6880677-console-config\") pod \"console-f9d7485db-ct2hz\" (UID: \"52b4f91f-7c7b-401a-82b0-8907f6880677\") " pod="openshift-console/console-f9d7485db-ct2hz" Jan 28 18:36:30 crc kubenswrapper[4721]: I0128 18:36:30.513210 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/a776531f-ebd1-491e-b6d7-378a11aad9d8-mountpoint-dir\") pod \"csi-hostpathplugin-xtdkt\" (UID: \"a776531f-ebd1-491e-b6d7-378a11aad9d8\") " pod="hostpath-provisioner/csi-hostpathplugin-xtdkt" Jan 28 18:36:30 crc kubenswrapper[4721]: I0128 18:36:30.513227 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xf7g7\" (UniqueName: \"kubernetes.io/projected/4657c92e-5f11-45b4-bf64-91d04c42ace3-kube-api-access-xf7g7\") pod \"machine-config-operator-74547568cd-qqr56\" (UID: \"4657c92e-5f11-45b4-bf64-91d04c42ace3\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-qqr56" Jan 28 18:36:30 crc kubenswrapper[4721]: I0128 18:36:30.513298 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7dd85c51-680b-4af2-8fac-8b9d94f7f2b6-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-dznkc\" (UID: \"7dd85c51-680b-4af2-8fac-8b9d94f7f2b6\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-dznkc" Jan 28 18:36:30 crc kubenswrapper[4721]: I0128 
18:36:30.513327 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/8355e616-674b-4bc2-a727-76609df63630-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-jrjx5\" (UID: \"8355e616-674b-4bc2-a727-76609df63630\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-jrjx5" Jan 28 18:36:30 crc kubenswrapper[4721]: I0128 18:36:30.513342 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/eaf91cbf-9c86-4235-89c5-3ba0c1ea7d6f-trusted-ca\") pod \"ingress-operator-5b745b69d9-96x8n\" (UID: \"eaf91cbf-9c86-4235-89c5-3ba0c1ea7d6f\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-96x8n" Jan 28 18:36:30 crc kubenswrapper[4721]: I0128 18:36:30.513358 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/db74784c-afbc-482a-8e2d-18c5bb898a9b-secret-volume\") pod \"collect-profiles-29493750-pb8r2\" (UID: \"db74784c-afbc-482a-8e2d-18c5bb898a9b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493750-pb8r2" Jan 28 18:36:30 crc kubenswrapper[4721]: I0128 18:36:30.513375 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/519e974b-b132-4b21-a47d-759e40bdbc72-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-86g2n\" (UID: \"519e974b-b132-4b21-a47d-759e40bdbc72\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-86g2n" Jan 28 18:36:30 crc kubenswrapper[4721]: I0128 18:36:30.513401 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9260fa7e-9c98-4777-9625-3ac5501c883c-service-ca-bundle\") pod \"router-default-5444994796-wqwcd\" (UID: \"9260fa7e-9c98-4777-9625-3ac5501c883c\") " pod="openshift-ingress/router-default-5444994796-wqwcd" Jan 28 18:36:30 crc kubenswrapper[4721]: I0128 18:36:30.513418 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ndcsj\" (UniqueName: \"kubernetes.io/projected/e6508511-52da-41f5-a939-98342be6441e-kube-api-access-ndcsj\") pod \"machine-config-server-x9nr7\" (UID: \"e6508511-52da-41f5-a939-98342be6441e\") " pod="openshift-machine-config-operator/machine-config-server-x9nr7" Jan 28 18:36:30 crc kubenswrapper[4721]: I0128 18:36:30.513434 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wn9gb\" (UniqueName: \"kubernetes.io/projected/86346561-5414-4c01-a202-6964f19b52db-kube-api-access-wn9gb\") pod \"ingress-canary-hjbqw\" (UID: \"86346561-5414-4c01-a202-6964f19b52db\") " pod="openshift-ingress-canary/ingress-canary-hjbqw" Jan 28 18:36:30 crc kubenswrapper[4721]: I0128 18:36:30.513449 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/f8ec1447-58ab-4c73-bc49-0da5b940c6cf-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-4qhmh\" (UID: \"f8ec1447-58ab-4c73-bc49-0da5b940c6cf\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-4qhmh" Jan 28 18:36:30 crc 
kubenswrapper[4721]: I0128 18:36:30.513466 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/26a0a4f9-321f-4196-88ce-888b82380eb6-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-jtc8t\" (UID: \"26a0a4f9-321f-4196-88ce-888b82380eb6\") " pod="openshift-authentication/oauth-openshift-558db77b4-jtc8t" Jan 28 18:36:30 crc kubenswrapper[4721]: I0128 18:36:30.513496 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8wf27\" (UniqueName: \"kubernetes.io/projected/52b4f91f-7c7b-401a-82b0-8907f6880677-kube-api-access-8wf27\") pod \"console-f9d7485db-ct2hz\" (UID: \"52b4f91f-7c7b-401a-82b0-8907f6880677\") " pod="openshift-console/console-f9d7485db-ct2hz" Jan 28 18:36:30 crc kubenswrapper[4721]: I0128 18:36:30.513510 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fd0ecfef-29a6-474c-a266-ed16b5548797-serving-cert\") pod \"etcd-operator-b45778765-qrt7r\" (UID: \"fd0ecfef-29a6-474c-a266-ed16b5548797\") " pod="openshift-etcd-operator/etcd-operator-b45778765-qrt7r" Jan 28 18:36:30 crc kubenswrapper[4721]: I0128 18:36:30.513525 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fd0ecfef-29a6-474c-a266-ed16b5548797-config\") pod \"etcd-operator-b45778765-qrt7r\" (UID: \"fd0ecfef-29a6-474c-a266-ed16b5548797\") " pod="openshift-etcd-operator/etcd-operator-b45778765-qrt7r" Jan 28 18:36:30 crc kubenswrapper[4721]: I0128 18:36:30.513541 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h7t5l\" (UniqueName: \"kubernetes.io/projected/6e54e2fb-d821-4c19-a076-c47b738d1a48-kube-api-access-h7t5l\") pod \"machine-config-controller-84d6567774-hw64n\" (UID: \"6e54e2fb-d821-4c19-a076-c47b738d1a48\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-hw64n" Jan 28 18:36:30 crc kubenswrapper[4721]: I0128 18:36:30.513555 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/a776531f-ebd1-491e-b6d7-378a11aad9d8-registration-dir\") pod \"csi-hostpathplugin-xtdkt\" (UID: \"a776531f-ebd1-491e-b6d7-378a11aad9d8\") " pod="hostpath-provisioner/csi-hostpathplugin-xtdkt" Jan 28 18:36:30 crc kubenswrapper[4721]: I0128 18:36:30.513573 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/52b4f91f-7c7b-401a-82b0-8907f6880677-oauth-serving-cert\") pod \"console-f9d7485db-ct2hz\" (UID: \"52b4f91f-7c7b-401a-82b0-8907f6880677\") " pod="openshift-console/console-f9d7485db-ct2hz" Jan 28 18:36:30 crc kubenswrapper[4721]: I0128 18:36:30.513590 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mbdrw\" (UniqueName: \"kubernetes.io/projected/5f55d8f7-6300-4d3c-8cdf-d4e2d106fa5f-kube-api-access-mbdrw\") pod \"packageserver-d55dfcdfc-blkkj\" (UID: \"5f55d8f7-6300-4d3c-8cdf-d4e2d106fa5f\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-blkkj" Jan 28 18:36:30 crc kubenswrapper[4721]: I0128 18:36:30.513608 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" 
(UniqueName: \"kubernetes.io/secret/26a0a4f9-321f-4196-88ce-888b82380eb6-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-jtc8t\" (UID: \"26a0a4f9-321f-4196-88ce-888b82380eb6\") " pod="openshift-authentication/oauth-openshift-558db77b4-jtc8t" Jan 28 18:36:30 crc kubenswrapper[4721]: I0128 18:36:30.513625 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/26a0a4f9-321f-4196-88ce-888b82380eb6-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-jtc8t\" (UID: \"26a0a4f9-321f-4196-88ce-888b82380eb6\") " pod="openshift-authentication/oauth-openshift-558db77b4-jtc8t" Jan 28 18:36:30 crc kubenswrapper[4721]: I0128 18:36:30.513643 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v2k9d\" (UniqueName: \"kubernetes.io/projected/a75256c5-8c48-43f3-9faf-15d661a26980-kube-api-access-v2k9d\") pod \"migrator-59844c95c7-gdtgs\" (UID: \"a75256c5-8c48-43f3-9faf-15d661a26980\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-gdtgs" Jan 28 18:36:30 crc kubenswrapper[4721]: I0128 18:36:30.513659 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/519e974b-b132-4b21-a47d-759e40bdbc72-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-86g2n\" (UID: \"519e974b-b132-4b21-a47d-759e40bdbc72\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-86g2n" Jan 28 18:36:30 crc kubenswrapper[4721]: I0128 18:36:30.513678 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2nch6\" (UniqueName: \"kubernetes.io/projected/8355e616-674b-4bc2-a727-76609df63630-kube-api-access-2nch6\") pod \"control-plane-machine-set-operator-78cbb6b69f-jrjx5\" (UID: \"8355e616-674b-4bc2-a727-76609df63630\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-jrjx5" Jan 28 18:36:30 crc kubenswrapper[4721]: I0128 18:36:30.513702 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/e6508511-52da-41f5-a939-98342be6441e-certs\") pod \"machine-config-server-x9nr7\" (UID: \"e6508511-52da-41f5-a939-98342be6441e\") " pod="openshift-machine-config-operator/machine-config-server-x9nr7" Jan 28 18:36:30 crc kubenswrapper[4721]: I0128 18:36:30.513730 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/6c3bbd44-e901-4e2a-b48c-f1e1ddb965dc-trusted-ca\") pod \"image-registry-697d97f7c8-b42n2\" (UID: \"6c3bbd44-e901-4e2a-b48c-f1e1ddb965dc\") " pod="openshift-image-registry/image-registry-697d97f7c8-b42n2" Jan 28 18:36:30 crc kubenswrapper[4721]: I0128 18:36:30.513749 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/4657c92e-5f11-45b4-bf64-91d04c42ace3-auth-proxy-config\") pod \"machine-config-operator-74547568cd-qqr56\" (UID: \"4657c92e-5f11-45b4-bf64-91d04c42ace3\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-qqr56" Jan 28 18:36:30 crc kubenswrapper[4721]: I0128 18:36:30.513764 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: 
\"kubernetes.io/host-path/a776531f-ebd1-491e-b6d7-378a11aad9d8-plugins-dir\") pod \"csi-hostpathplugin-xtdkt\" (UID: \"a776531f-ebd1-491e-b6d7-378a11aad9d8\") " pod="hostpath-provisioner/csi-hostpathplugin-xtdkt" Jan 28 18:36:30 crc kubenswrapper[4721]: I0128 18:36:30.513782 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/519e974b-b132-4b21-a47d-759e40bdbc72-config\") pod \"kube-apiserver-operator-766d6c64bb-86g2n\" (UID: \"519e974b-b132-4b21-a47d-759e40bdbc72\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-86g2n" Jan 28 18:36:30 crc kubenswrapper[4721]: I0128 18:36:30.513844 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9260fa7e-9c98-4777-9625-3ac5501c883c-metrics-certs\") pod \"router-default-5444994796-wqwcd\" (UID: \"9260fa7e-9c98-4777-9625-3ac5501c883c\") " pod="openshift-ingress/router-default-5444994796-wqwcd" Jan 28 18:36:30 crc kubenswrapper[4721]: I0128 18:36:30.513867 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/26a0a4f9-321f-4196-88ce-888b82380eb6-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-jtc8t\" (UID: \"26a0a4f9-321f-4196-88ce-888b82380eb6\") " pod="openshift-authentication/oauth-openshift-558db77b4-jtc8t" Jan 28 18:36:30 crc kubenswrapper[4721]: I0128 18:36:30.513911 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kq6zs\" (UniqueName: \"kubernetes.io/projected/6c3bbd44-e901-4e2a-b48c-f1e1ddb965dc-kube-api-access-kq6zs\") pod \"image-registry-697d97f7c8-b42n2\" (UID: \"6c3bbd44-e901-4e2a-b48c-f1e1ddb965dc\") " pod="openshift-image-registry/image-registry-697d97f7c8-b42n2" Jan 28 18:36:30 crc kubenswrapper[4721]: I0128 18:36:30.513944 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/77dd4c6e-7dd3-4378-be3f-74f0c43fb371-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-s7d98\" (UID: \"77dd4c6e-7dd3-4378-be3f-74f0c43fb371\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-s7d98" Jan 28 18:36:30 crc kubenswrapper[4721]: I0128 18:36:30.513968 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/995bfe33-c190-48b3-bb6c-9c6cb81d8359-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-xwt9s\" (UID: \"995bfe33-c190-48b3-bb6c-9c6cb81d8359\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-xwt9s" Jan 28 18:36:30 crc kubenswrapper[4721]: I0128 18:36:30.513992 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fsnng\" (UniqueName: \"kubernetes.io/projected/0db061c0-7df0-4ca1-a388-c69dd9344b9c-kube-api-access-fsnng\") pod \"kube-storage-version-migrator-operator-b67b599dd-dkq9z\" (UID: \"0db061c0-7df0-4ca1-a388-c69dd9344b9c\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-dkq9z" Jan 28 18:36:30 crc kubenswrapper[4721]: I0128 18:36:30.514029 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: 
\"kubernetes.io/empty-dir/47acc23c-4409-4e15-a231-5c095917842d-available-featuregates\") pod \"openshift-config-operator-7777fb866f-bfsqt\" (UID: \"47acc23c-4409-4e15-a231-5c095917842d\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-bfsqt" Jan 28 18:36:30 crc kubenswrapper[4721]: I0128 18:36:30.514052 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7x9n7\" (UniqueName: \"kubernetes.io/projected/77dd4c6e-7dd3-4378-be3f-74f0c43fb371-kube-api-access-7x9n7\") pod \"multus-admission-controller-857f4d67dd-s7d98\" (UID: \"77dd4c6e-7dd3-4378-be3f-74f0c43fb371\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-s7d98" Jan 28 18:36:30 crc kubenswrapper[4721]: I0128 18:36:30.514083 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lvdkn\" (UniqueName: \"kubernetes.io/projected/70ba75a9-4e0e-4fb2-9986-030f8a02d39c-kube-api-access-lvdkn\") pod \"catalog-operator-68c6474976-4nmsk\" (UID: \"70ba75a9-4e0e-4fb2-9986-030f8a02d39c\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-4nmsk" Jan 28 18:36:30 crc kubenswrapper[4721]: I0128 18:36:30.514237 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a58cf121-cb7e-4eb7-9634-b72173bfa945-config-volume\") pod \"dns-default-l59vq\" (UID: \"a58cf121-cb7e-4eb7-9634-b72173bfa945\") " pod="openshift-dns/dns-default-l59vq" Jan 28 18:36:30 crc kubenswrapper[4721]: I0128 18:36:30.514281 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/26a0a4f9-321f-4196-88ce-888b82380eb6-audit-policies\") pod \"oauth-openshift-558db77b4-jtc8t\" (UID: \"26a0a4f9-321f-4196-88ce-888b82380eb6\") " pod="openshift-authentication/oauth-openshift-558db77b4-jtc8t" Jan 28 18:36:30 crc kubenswrapper[4721]: I0128 18:36:30.514334 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/d0a361ba-f31a-477c-a532-136ebf0b025b-profile-collector-cert\") pod \"olm-operator-6b444d44fb-hp7z2\" (UID: \"d0a361ba-f31a-477c-a532-136ebf0b025b\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-hp7z2" Jan 28 18:36:30 crc kubenswrapper[4721]: I0128 18:36:30.514358 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/a776531f-ebd1-491e-b6d7-378a11aad9d8-csi-data-dir\") pod \"csi-hostpathplugin-xtdkt\" (UID: \"a776531f-ebd1-491e-b6d7-378a11aad9d8\") " pod="hostpath-provisioner/csi-hostpathplugin-xtdkt" Jan 28 18:36:30 crc kubenswrapper[4721]: I0128 18:36:30.514382 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/52b4f91f-7c7b-401a-82b0-8907f6880677-console-serving-cert\") pod \"console-f9d7485db-ct2hz\" (UID: \"52b4f91f-7c7b-401a-82b0-8907f6880677\") " pod="openshift-console/console-f9d7485db-ct2hz" Jan 28 18:36:30 crc kubenswrapper[4721]: I0128 18:36:30.514406 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/12a4be20-2607-4502-b20d-b579c9987b57-marketplace-operator-metrics\") pod 
\"marketplace-operator-79b997595-qp2vg\" (UID: \"12a4be20-2607-4502-b20d-b579c9987b57\") " pod="openshift-marketplace/marketplace-operator-79b997595-qp2vg" Jan 28 18:36:30 crc kubenswrapper[4721]: I0128 18:36:30.514433 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kvq8j\" (UniqueName: \"kubernetes.io/projected/9260fa7e-9c98-4777-9625-3ac5501c883c-kube-api-access-kvq8j\") pod \"router-default-5444994796-wqwcd\" (UID: \"9260fa7e-9c98-4777-9625-3ac5501c883c\") " pod="openshift-ingress/router-default-5444994796-wqwcd" Jan 28 18:36:30 crc kubenswrapper[4721]: I0128 18:36:30.514455 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/592908e7-063e-4a05-8bfa-19d925c28be7-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-tjj76\" (UID: \"592908e7-063e-4a05-8bfa-19d925c28be7\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-tjj76" Jan 28 18:36:30 crc kubenswrapper[4721]: I0128 18:36:30.514477 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/4f8f829a-0e7b-4ad6-9dc4-ce845d2e9d26-signing-cabundle\") pod \"service-ca-9c57cc56f-7b9dz\" (UID: \"4f8f829a-0e7b-4ad6-9dc4-ce845d2e9d26\") " pod="openshift-service-ca/service-ca-9c57cc56f-7b9dz" Jan 28 18:36:30 crc kubenswrapper[4721]: I0128 18:36:30.514504 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wmmwb\" (UniqueName: \"kubernetes.io/projected/6e7be82c-acf6-4120-8f43-221b6ef958c8-kube-api-access-wmmwb\") pod \"console-operator-58897d9998-76z8x\" (UID: \"6e7be82c-acf6-4120-8f43-221b6ef958c8\") " pod="openshift-console-operator/console-operator-58897d9998-76z8x" Jan 28 18:36:30 crc kubenswrapper[4721]: I0128 18:36:30.514526 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mzs64\" (UniqueName: \"kubernetes.io/projected/d8a50493-8d3e-4391-ad78-0bd93ce8157e-kube-api-access-mzs64\") pod \"service-ca-operator-777779d784-2k27q\" (UID: \"d8a50493-8d3e-4391-ad78-0bd93ce8157e\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-2k27q" Jan 28 18:36:30 crc kubenswrapper[4721]: I0128 18:36:30.514548 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9txr8\" (UniqueName: \"kubernetes.io/projected/592908e7-063e-4a05-8bfa-19d925c28be7-kube-api-access-9txr8\") pod \"openshift-controller-manager-operator-756b6f6bc6-tjj76\" (UID: \"592908e7-063e-4a05-8bfa-19d925c28be7\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-tjj76" Jan 28 18:36:30 crc kubenswrapper[4721]: I0128 18:36:30.514570 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/a776531f-ebd1-491e-b6d7-378a11aad9d8-socket-dir\") pod \"csi-hostpathplugin-xtdkt\" (UID: \"a776531f-ebd1-491e-b6d7-378a11aad9d8\") " pod="hostpath-provisioner/csi-hostpathplugin-xtdkt" Jan 28 18:36:30 crc kubenswrapper[4721]: I0128 18:36:30.514591 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/5f55d8f7-6300-4d3c-8cdf-d4e2d106fa5f-tmpfs\") pod \"packageserver-d55dfcdfc-blkkj\" (UID: 
\"5f55d8f7-6300-4d3c-8cdf-d4e2d106fa5f\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-blkkj" Jan 28 18:36:30 crc kubenswrapper[4721]: I0128 18:36:30.514615 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5d64p\" (UniqueName: \"kubernetes.io/projected/eaf91cbf-9c86-4235-89c5-3ba0c1ea7d6f-kube-api-access-5d64p\") pod \"ingress-operator-5b745b69d9-96x8n\" (UID: \"eaf91cbf-9c86-4235-89c5-3ba0c1ea7d6f\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-96x8n" Jan 28 18:36:30 crc kubenswrapper[4721]: I0128 18:36:30.514652 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/5f55d8f7-6300-4d3c-8cdf-d4e2d106fa5f-apiservice-cert\") pod \"packageserver-d55dfcdfc-blkkj\" (UID: \"5f55d8f7-6300-4d3c-8cdf-d4e2d106fa5f\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-blkkj" Jan 28 18:36:30 crc kubenswrapper[4721]: I0128 18:36:30.514683 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/26a0a4f9-321f-4196-88ce-888b82380eb6-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-jtc8t\" (UID: \"26a0a4f9-321f-4196-88ce-888b82380eb6\") " pod="openshift-authentication/oauth-openshift-558db77b4-jtc8t" Jan 28 18:36:30 crc kubenswrapper[4721]: I0128 18:36:30.514705 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6e7be82c-acf6-4120-8f43-221b6ef958c8-config\") pod \"console-operator-58897d9998-76z8x\" (UID: \"6e7be82c-acf6-4120-8f43-221b6ef958c8\") " pod="openshift-console-operator/console-operator-58897d9998-76z8x" Jan 28 18:36:30 crc kubenswrapper[4721]: I0128 18:36:30.514725 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/6e7be82c-acf6-4120-8f43-221b6ef958c8-trusted-ca\") pod \"console-operator-58897d9998-76z8x\" (UID: \"6e7be82c-acf6-4120-8f43-221b6ef958c8\") " pod="openshift-console-operator/console-operator-58897d9998-76z8x" Jan 28 18:36:30 crc kubenswrapper[4721]: I0128 18:36:30.514746 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0db061c0-7df0-4ca1-a388-c69dd9344b9c-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-dkq9z\" (UID: \"0db061c0-7df0-4ca1-a388-c69dd9344b9c\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-dkq9z" Jan 28 18:36:30 crc kubenswrapper[4721]: I0128 18:36:30.514784 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/26a0a4f9-321f-4196-88ce-888b82380eb6-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-jtc8t\" (UID: \"26a0a4f9-321f-4196-88ce-888b82380eb6\") " pod="openshift-authentication/oauth-openshift-558db77b4-jtc8t" Jan 28 18:36:30 crc kubenswrapper[4721]: I0128 18:36:30.514806 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/592908e7-063e-4a05-8bfa-19d925c28be7-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-tjj76\" (UID: \"592908e7-063e-4a05-8bfa-19d925c28be7\") " 
pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-tjj76" Jan 28 18:36:30 crc kubenswrapper[4721]: I0128 18:36:30.514828 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/6e54e2fb-d821-4c19-a076-c47b738d1a48-proxy-tls\") pod \"machine-config-controller-84d6567774-hw64n\" (UID: \"6e54e2fb-d821-4c19-a076-c47b738d1a48\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-hw64n" Jan 28 18:36:30 crc kubenswrapper[4721]: I0128 18:36:30.514848 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/4657c92e-5f11-45b4-bf64-91d04c42ace3-proxy-tls\") pod \"machine-config-operator-74547568cd-qqr56\" (UID: \"4657c92e-5f11-45b4-bf64-91d04c42ace3\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-qqr56" Jan 28 18:36:30 crc kubenswrapper[4721]: I0128 18:36:30.514869 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/70ba75a9-4e0e-4fb2-9986-030f8a02d39c-srv-cert\") pod \"catalog-operator-68c6474976-4nmsk\" (UID: \"70ba75a9-4e0e-4fb2-9986-030f8a02d39c\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-4nmsk" Jan 28 18:36:30 crc kubenswrapper[4721]: I0128 18:36:30.514905 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/26a0a4f9-321f-4196-88ce-888b82380eb6-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-jtc8t\" (UID: \"26a0a4f9-321f-4196-88ce-888b82380eb6\") " pod="openshift-authentication/oauth-openshift-558db77b4-jtc8t" Jan 28 18:36:30 crc kubenswrapper[4721]: I0128 18:36:30.514934 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-b42n2\" (UID: \"6c3bbd44-e901-4e2a-b48c-f1e1ddb965dc\") " pod="openshift-image-registry/image-registry-697d97f7c8-b42n2" Jan 28 18:36:30 crc kubenswrapper[4721]: I0128 18:36:30.514941 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8508a38e-342a-4dab-956c-cc847d18e6bc-config\") pod \"openshift-apiserver-operator-796bbdcf4f-p8lnf\" (UID: \"8508a38e-342a-4dab-956c-cc847d18e6bc\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-p8lnf" Jan 28 18:36:30 crc kubenswrapper[4721]: I0128 18:36:30.514956 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/6c3bbd44-e901-4e2a-b48c-f1e1ddb965dc-registry-tls\") pod \"image-registry-697d97f7c8-b42n2\" (UID: \"6c3bbd44-e901-4e2a-b48c-f1e1ddb965dc\") " pod="openshift-image-registry/image-registry-697d97f7c8-b42n2" Jan 28 18:36:30 crc kubenswrapper[4721]: I0128 18:36:30.516353 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/eaf91cbf-9c86-4235-89c5-3ba0c1ea7d6f-metrics-tls\") pod \"ingress-operator-5b745b69d9-96x8n\" (UID: \"eaf91cbf-9c86-4235-89c5-3ba0c1ea7d6f\") " 
pod="openshift-ingress-operator/ingress-operator-5b745b69d9-96x8n" Jan 28 18:36:30 crc kubenswrapper[4721]: I0128 18:36:30.516387 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8bkwq\" (UniqueName: \"kubernetes.io/projected/36392dfb-bda3-46da-b8ba-ebc27ab22e00-kube-api-access-8bkwq\") pod \"package-server-manager-789f6589d5-9lrf6\" (UID: \"36392dfb-bda3-46da-b8ba-ebc27ab22e00\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-9lrf6" Jan 28 18:36:30 crc kubenswrapper[4721]: I0128 18:36:30.516416 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/70ba75a9-4e0e-4fb2-9986-030f8a02d39c-profile-collector-cert\") pod \"catalog-operator-68c6474976-4nmsk\" (UID: \"70ba75a9-4e0e-4fb2-9986-030f8a02d39c\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-4nmsk" Jan 28 18:36:30 crc kubenswrapper[4721]: I0128 18:36:30.516476 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d8a50493-8d3e-4391-ad78-0bd93ce8157e-serving-cert\") pod \"service-ca-operator-777779d784-2k27q\" (UID: \"d8a50493-8d3e-4391-ad78-0bd93ce8157e\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-2k27q" Jan 28 18:36:30 crc kubenswrapper[4721]: I0128 18:36:30.516486 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/26a0a4f9-321f-4196-88ce-888b82380eb6-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-jtc8t\" (UID: \"26a0a4f9-321f-4196-88ce-888b82380eb6\") " pod="openshift-authentication/oauth-openshift-558db77b4-jtc8t" Jan 28 18:36:30 crc kubenswrapper[4721]: I0128 18:36:30.516525 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9x5w6\" (UniqueName: \"kubernetes.io/projected/12a4be20-2607-4502-b20d-b579c9987b57-kube-api-access-9x5w6\") pod \"marketplace-operator-79b997595-qp2vg\" (UID: \"12a4be20-2607-4502-b20d-b579c9987b57\") " pod="openshift-marketplace/marketplace-operator-79b997595-qp2vg" Jan 28 18:36:30 crc kubenswrapper[4721]: I0128 18:36:30.516996 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/db74784c-afbc-482a-8e2d-18c5bb898a9b-config-volume\") pod \"collect-profiles-29493750-pb8r2\" (UID: \"db74784c-afbc-482a-8e2d-18c5bb898a9b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493750-pb8r2" Jan 28 18:36:30 crc kubenswrapper[4721]: I0128 18:36:30.517070 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a58cf121-cb7e-4eb7-9634-b72173bfa945-metrics-tls\") pod \"dns-default-l59vq\" (UID: \"a58cf121-cb7e-4eb7-9634-b72173bfa945\") " pod="openshift-dns/dns-default-l59vq" Jan 28 18:36:30 crc kubenswrapper[4721]: I0128 18:36:30.517104 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xqwfz\" (UniqueName: \"kubernetes.io/projected/a58cf121-cb7e-4eb7-9634-b72173bfa945-kube-api-access-xqwfz\") pod \"dns-default-l59vq\" (UID: \"a58cf121-cb7e-4eb7-9634-b72173bfa945\") " pod="openshift-dns/dns-default-l59vq" Jan 28 18:36:30 
crc kubenswrapper[4721]: I0128 18:36:30.517130 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rk7q8\" (UniqueName: \"kubernetes.io/projected/4f8f829a-0e7b-4ad6-9dc4-ce845d2e9d26-kube-api-access-rk7q8\") pod \"service-ca-9c57cc56f-7b9dz\" (UID: \"4f8f829a-0e7b-4ad6-9dc4-ce845d2e9d26\") " pod="openshift-service-ca/service-ca-9c57cc56f-7b9dz" Jan 28 18:36:30 crc kubenswrapper[4721]: I0128 18:36:30.517181 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/26a0a4f9-321f-4196-88ce-888b82380eb6-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-jtc8t\" (UID: \"26a0a4f9-321f-4196-88ce-888b82380eb6\") " pod="openshift-authentication/oauth-openshift-558db77b4-jtc8t" Jan 28 18:36:30 crc kubenswrapper[4721]: I0128 18:36:30.517211 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7dd85c51-680b-4af2-8fac-8b9d94f7f2b6-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-dznkc\" (UID: \"7dd85c51-680b-4af2-8fac-8b9d94f7f2b6\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-dznkc" Jan 28 18:36:30 crc kubenswrapper[4721]: I0128 18:36:30.517268 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vbv5f\" (UniqueName: \"kubernetes.io/projected/8508a38e-342a-4dab-956c-cc847d18e6bc-kube-api-access-vbv5f\") pod \"openshift-apiserver-operator-796bbdcf4f-p8lnf\" (UID: \"8508a38e-342a-4dab-956c-cc847d18e6bc\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-p8lnf" Jan 28 18:36:30 crc kubenswrapper[4721]: I0128 18:36:30.517295 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/36392dfb-bda3-46da-b8ba-ebc27ab22e00-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-9lrf6\" (UID: \"36392dfb-bda3-46da-b8ba-ebc27ab22e00\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-9lrf6" Jan 28 18:36:30 crc kubenswrapper[4721]: I0128 18:36:30.517345 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6e7be82c-acf6-4120-8f43-221b6ef958c8-serving-cert\") pod \"console-operator-58897d9998-76z8x\" (UID: \"6e7be82c-acf6-4120-8f43-221b6ef958c8\") " pod="openshift-console-operator/console-operator-58897d9998-76z8x" Jan 28 18:36:30 crc kubenswrapper[4721]: I0128 18:36:30.517367 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/fd0ecfef-29a6-474c-a266-ed16b5548797-etcd-service-ca\") pod \"etcd-operator-b45778765-qrt7r\" (UID: \"fd0ecfef-29a6-474c-a266-ed16b5548797\") " pod="openshift-etcd-operator/etcd-operator-b45778765-qrt7r" Jan 28 18:36:30 crc kubenswrapper[4721]: I0128 18:36:30.517454 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/86346561-5414-4c01-a202-6964f19b52db-cert\") pod \"ingress-canary-hjbqw\" (UID: \"86346561-5414-4c01-a202-6964f19b52db\") " pod="openshift-ingress-canary/ingress-canary-hjbqw" Jan 28 18:36:30 crc kubenswrapper[4721]: I0128 18:36:30.517487 4721 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/52b4f91f-7c7b-401a-82b0-8907f6880677-trusted-ca-bundle\") pod \"console-f9d7485db-ct2hz\" (UID: \"52b4f91f-7c7b-401a-82b0-8907f6880677\") " pod="openshift-console/console-f9d7485db-ct2hz" Jan 28 18:36:30 crc kubenswrapper[4721]: I0128 18:36:30.517514 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9h5lx\" (UniqueName: \"kubernetes.io/projected/26a0a4f9-321f-4196-88ce-888b82380eb6-kube-api-access-9h5lx\") pod \"oauth-openshift-558db77b4-jtc8t\" (UID: \"26a0a4f9-321f-4196-88ce-888b82380eb6\") " pod="openshift-authentication/oauth-openshift-558db77b4-jtc8t" Jan 28 18:36:30 crc kubenswrapper[4721]: I0128 18:36:30.517543 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dcv5n\" (UniqueName: \"kubernetes.io/projected/db74784c-afbc-482a-8e2d-18c5bb898a9b-kube-api-access-dcv5n\") pod \"collect-profiles-29493750-pb8r2\" (UID: \"db74784c-afbc-482a-8e2d-18c5bb898a9b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493750-pb8r2" Jan 28 18:36:30 crc kubenswrapper[4721]: I0128 18:36:30.517569 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/26a0a4f9-321f-4196-88ce-888b82380eb6-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-jtc8t\" (UID: \"26a0a4f9-321f-4196-88ce-888b82380eb6\") " pod="openshift-authentication/oauth-openshift-558db77b4-jtc8t" Jan 28 18:36:30 crc kubenswrapper[4721]: I0128 18:36:30.517597 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/fd0ecfef-29a6-474c-a266-ed16b5548797-etcd-ca\") pod \"etcd-operator-b45778765-qrt7r\" (UID: \"fd0ecfef-29a6-474c-a266-ed16b5548797\") " pod="openshift-etcd-operator/etcd-operator-b45778765-qrt7r" Jan 28 18:36:30 crc kubenswrapper[4721]: I0128 18:36:30.517640 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/6c3bbd44-e901-4e2a-b48c-f1e1ddb965dc-installation-pull-secrets\") pod \"image-registry-697d97f7c8-b42n2\" (UID: \"6c3bbd44-e901-4e2a-b48c-f1e1ddb965dc\") " pod="openshift-image-registry/image-registry-697d97f7c8-b42n2" Jan 28 18:36:30 crc kubenswrapper[4721]: I0128 18:36:30.517669 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/12a4be20-2607-4502-b20d-b579c9987b57-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-qp2vg\" (UID: \"12a4be20-2607-4502-b20d-b579c9987b57\") " pod="openshift-marketplace/marketplace-operator-79b997595-qp2vg" Jan 28 18:36:30 crc kubenswrapper[4721]: I0128 18:36:30.517703 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/47acc23c-4409-4e15-a231-5c095917842d-serving-cert\") pod \"openshift-config-operator-7777fb866f-bfsqt\" (UID: \"47acc23c-4409-4e15-a231-5c095917842d\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-bfsqt" Jan 28 18:36:30 crc kubenswrapper[4721]: I0128 18:36:30.517734 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: 
\"kubernetes.io/configmap/6c3bbd44-e901-4e2a-b48c-f1e1ddb965dc-registry-certificates\") pod \"image-registry-697d97f7c8-b42n2\" (UID: \"6c3bbd44-e901-4e2a-b48c-f1e1ddb965dc\") " pod="openshift-image-registry/image-registry-697d97f7c8-b42n2" Jan 28 18:36:30 crc kubenswrapper[4721]: I0128 18:36:30.517758 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/52b4f91f-7c7b-401a-82b0-8907f6880677-service-ca\") pod \"console-f9d7485db-ct2hz\" (UID: \"52b4f91f-7c7b-401a-82b0-8907f6880677\") " pod="openshift-console/console-f9d7485db-ct2hz" Jan 28 18:36:30 crc kubenswrapper[4721]: I0128 18:36:30.517785 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/e6508511-52da-41f5-a939-98342be6441e-node-bootstrap-token\") pod \"machine-config-server-x9nr7\" (UID: \"e6508511-52da-41f5-a939-98342be6441e\") " pod="openshift-machine-config-operator/machine-config-server-x9nr7" Jan 28 18:36:30 crc kubenswrapper[4721]: I0128 18:36:30.517810 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/6e54e2fb-d821-4c19-a076-c47b738d1a48-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-hw64n\" (UID: \"6e54e2fb-d821-4c19-a076-c47b738d1a48\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-hw64n" Jan 28 18:36:30 crc kubenswrapper[4721]: I0128 18:36:30.517836 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/d0a361ba-f31a-477c-a532-136ebf0b025b-srv-cert\") pod \"olm-operator-6b444d44fb-hp7z2\" (UID: \"d0a361ba-f31a-477c-a532-136ebf0b025b\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-hp7z2" Jan 28 18:36:30 crc kubenswrapper[4721]: I0128 18:36:30.517904 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2cpq2\" (UniqueName: \"kubernetes.io/projected/b30c15c2-ac57-4e56-a55b-5b9de02e097f-kube-api-access-2cpq2\") pod \"downloads-7954f5f757-cmtm6\" (UID: \"b30c15c2-ac57-4e56-a55b-5b9de02e097f\") " pod="openshift-console/downloads-7954f5f757-cmtm6" Jan 28 18:36:30 crc kubenswrapper[4721]: I0128 18:36:30.518151 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/6c3bbd44-e901-4e2a-b48c-f1e1ddb965dc-ca-trust-extracted\") pod \"image-registry-697d97f7c8-b42n2\" (UID: \"6c3bbd44-e901-4e2a-b48c-f1e1ddb965dc\") " pod="openshift-image-registry/image-registry-697d97f7c8-b42n2" Jan 28 18:36:30 crc kubenswrapper[4721]: I0128 18:36:30.518637 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/26a0a4f9-321f-4196-88ce-888b82380eb6-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-jtc8t\" (UID: \"26a0a4f9-321f-4196-88ce-888b82380eb6\") " pod="openshift-authentication/oauth-openshift-558db77b4-jtc8t" Jan 28 18:36:30 crc kubenswrapper[4721]: E0128 18:36:30.520231 4721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-01-28 18:36:31.020212246 +0000 UTC m=+156.745517806 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-b42n2" (UID: "6c3bbd44-e901-4e2a-b48c-f1e1ddb965dc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 18:36:30 crc kubenswrapper[4721]: I0128 18:36:30.520475 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/47acc23c-4409-4e15-a231-5c095917842d-available-featuregates\") pod \"openshift-config-operator-7777fb866f-bfsqt\" (UID: \"47acc23c-4409-4e15-a231-5c095917842d\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-bfsqt" Jan 28 18:36:30 crc kubenswrapper[4721]: I0128 18:36:30.520936 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/26a0a4f9-321f-4196-88ce-888b82380eb6-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-jtc8t\" (UID: \"26a0a4f9-321f-4196-88ce-888b82380eb6\") " pod="openshift-authentication/oauth-openshift-558db77b4-jtc8t" Jan 28 18:36:30 crc kubenswrapper[4721]: I0128 18:36:30.521355 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8508a38e-342a-4dab-956c-cc847d18e6bc-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-p8lnf\" (UID: \"8508a38e-342a-4dab-956c-cc847d18e6bc\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-p8lnf" Jan 28 18:36:30 crc kubenswrapper[4721]: I0128 18:36:30.521584 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fd0ecfef-29a6-474c-a266-ed16b5548797-config\") pod \"etcd-operator-b45778765-qrt7r\" (UID: \"fd0ecfef-29a6-474c-a266-ed16b5548797\") " pod="openshift-etcd-operator/etcd-operator-b45778765-qrt7r" Jan 28 18:36:30 crc kubenswrapper[4721]: I0128 18:36:30.522046 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9260fa7e-9c98-4777-9625-3ac5501c883c-service-ca-bundle\") pod \"router-default-5444994796-wqwcd\" (UID: \"9260fa7e-9c98-4777-9625-3ac5501c883c\") " pod="openshift-ingress/router-default-5444994796-wqwcd" Jan 28 18:36:30 crc kubenswrapper[4721]: I0128 18:36:30.522768 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/52b4f91f-7c7b-401a-82b0-8907f6880677-console-serving-cert\") pod \"console-f9d7485db-ct2hz\" (UID: \"52b4f91f-7c7b-401a-82b0-8907f6880677\") " pod="openshift-console/console-f9d7485db-ct2hz" Jan 28 18:36:30 crc kubenswrapper[4721]: I0128 18:36:30.523487 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/26a0a4f9-321f-4196-88ce-888b82380eb6-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-jtc8t\" (UID: \"26a0a4f9-321f-4196-88ce-888b82380eb6\") " pod="openshift-authentication/oauth-openshift-558db77b4-jtc8t" Jan 28 18:36:30 crc kubenswrapper[4721]: I0128 18:36:30.523515 4721 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7dd85c51-680b-4af2-8fac-8b9d94f7f2b6-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-dznkc\" (UID: \"7dd85c51-680b-4af2-8fac-8b9d94f7f2b6\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-dznkc" Jan 28 18:36:30 crc kubenswrapper[4721]: I0128 18:36:30.523646 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/26a0a4f9-321f-4196-88ce-888b82380eb6-audit-policies\") pod \"oauth-openshift-558db77b4-jtc8t\" (UID: \"26a0a4f9-321f-4196-88ce-888b82380eb6\") " pod="openshift-authentication/oauth-openshift-558db77b4-jtc8t" Jan 28 18:36:30 crc kubenswrapper[4721]: I0128 18:36:30.524371 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/592908e7-063e-4a05-8bfa-19d925c28be7-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-tjj76\" (UID: \"592908e7-063e-4a05-8bfa-19d925c28be7\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-tjj76" Jan 28 18:36:30 crc kubenswrapper[4721]: I0128 18:36:30.524778 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/26a0a4f9-321f-4196-88ce-888b82380eb6-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-jtc8t\" (UID: \"26a0a4f9-321f-4196-88ce-888b82380eb6\") " pod="openshift-authentication/oauth-openshift-558db77b4-jtc8t" Jan 28 18:36:30 crc kubenswrapper[4721]: I0128 18:36:30.525448 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/52b4f91f-7c7b-401a-82b0-8907f6880677-trusted-ca-bundle\") pod \"console-f9d7485db-ct2hz\" (UID: \"52b4f91f-7c7b-401a-82b0-8907f6880677\") " pod="openshift-console/console-f9d7485db-ct2hz" Jan 28 18:36:30 crc kubenswrapper[4721]: I0128 18:36:30.525542 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/52b4f91f-7c7b-401a-82b0-8907f6880677-service-ca\") pod \"console-f9d7485db-ct2hz\" (UID: \"52b4f91f-7c7b-401a-82b0-8907f6880677\") " pod="openshift-console/console-f9d7485db-ct2hz" Jan 28 18:36:30 crc kubenswrapper[4721]: I0128 18:36:30.525597 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/fd0ecfef-29a6-474c-a266-ed16b5548797-etcd-ca\") pod \"etcd-operator-b45778765-qrt7r\" (UID: \"fd0ecfef-29a6-474c-a266-ed16b5548797\") " pod="openshift-etcd-operator/etcd-operator-b45778765-qrt7r" Jan 28 18:36:30 crc kubenswrapper[4721]: I0128 18:36:30.525686 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/995bfe33-c190-48b3-bb6c-9c6cb81d8359-config\") pod \"kube-controller-manager-operator-78b949d7b-xwt9s\" (UID: \"995bfe33-c190-48b3-bb6c-9c6cb81d8359\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-xwt9s" Jan 28 18:36:30 crc kubenswrapper[4721]: I0128 18:36:30.526938 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/26a0a4f9-321f-4196-88ce-888b82380eb6-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-jtc8t\" 
(UID: \"26a0a4f9-321f-4196-88ce-888b82380eb6\") " pod="openshift-authentication/oauth-openshift-558db77b4-jtc8t" Jan 28 18:36:30 crc kubenswrapper[4721]: I0128 18:36:30.528513 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/9260fa7e-9c98-4777-9625-3ac5501c883c-default-certificate\") pod \"router-default-5444994796-wqwcd\" (UID: \"9260fa7e-9c98-4777-9625-3ac5501c883c\") " pod="openshift-ingress/router-default-5444994796-wqwcd" Jan 28 18:36:30 crc kubenswrapper[4721]: I0128 18:36:30.529804 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/fd0ecfef-29a6-474c-a266-ed16b5548797-etcd-client\") pod \"etcd-operator-b45778765-qrt7r\" (UID: \"fd0ecfef-29a6-474c-a266-ed16b5548797\") " pod="openshift-etcd-operator/etcd-operator-b45778765-qrt7r" Jan 28 18:36:30 crc kubenswrapper[4721]: I0128 18:36:30.529888 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/52b4f91f-7c7b-401a-82b0-8907f6880677-console-oauth-config\") pod \"console-f9d7485db-ct2hz\" (UID: \"52b4f91f-7c7b-401a-82b0-8907f6880677\") " pod="openshift-console/console-f9d7485db-ct2hz" Jan 28 18:36:30 crc kubenswrapper[4721]: I0128 18:36:30.530825 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/26a0a4f9-321f-4196-88ce-888b82380eb6-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-jtc8t\" (UID: \"26a0a4f9-321f-4196-88ce-888b82380eb6\") " pod="openshift-authentication/oauth-openshift-558db77b4-jtc8t" Jan 28 18:36:30 crc kubenswrapper[4721]: I0128 18:36:30.531030 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6e7be82c-acf6-4120-8f43-221b6ef958c8-config\") pod \"console-operator-58897d9998-76z8x\" (UID: \"6e7be82c-acf6-4120-8f43-221b6ef958c8\") " pod="openshift-console-operator/console-operator-58897d9998-76z8x" Jan 28 18:36:30 crc kubenswrapper[4721]: I0128 18:36:30.531943 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/6e7be82c-acf6-4120-8f43-221b6ef958c8-trusted-ca\") pod \"console-operator-58897d9998-76z8x\" (UID: \"6e7be82c-acf6-4120-8f43-221b6ef958c8\") " pod="openshift-console-operator/console-operator-58897d9998-76z8x" Jan 28 18:36:30 crc kubenswrapper[4721]: I0128 18:36:30.532444 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/6c3bbd44-e901-4e2a-b48c-f1e1ddb965dc-trusted-ca\") pod \"image-registry-697d97f7c8-b42n2\" (UID: \"6c3bbd44-e901-4e2a-b48c-f1e1ddb965dc\") " pod="openshift-image-registry/image-registry-697d97f7c8-b42n2" Jan 28 18:36:30 crc kubenswrapper[4721]: I0128 18:36:30.532507 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/52b4f91f-7c7b-401a-82b0-8907f6880677-oauth-serving-cert\") pod \"console-f9d7485db-ct2hz\" (UID: \"52b4f91f-7c7b-401a-82b0-8907f6880677\") " pod="openshift-console/console-f9d7485db-ct2hz" Jan 28 18:36:30 crc kubenswrapper[4721]: I0128 18:36:30.533088 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/592908e7-063e-4a05-8bfa-19d925c28be7-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-tjj76\" (UID: \"592908e7-063e-4a05-8bfa-19d925c28be7\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-tjj76" Jan 28 18:36:30 crc kubenswrapper[4721]: I0128 18:36:30.533812 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/995bfe33-c190-48b3-bb6c-9c6cb81d8359-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-xwt9s\" (UID: \"995bfe33-c190-48b3-bb6c-9c6cb81d8359\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-xwt9s" Jan 28 18:36:30 crc kubenswrapper[4721]: I0128 18:36:30.533871 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/26a0a4f9-321f-4196-88ce-888b82380eb6-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-jtc8t\" (UID: \"26a0a4f9-321f-4196-88ce-888b82380eb6\") " pod="openshift-authentication/oauth-openshift-558db77b4-jtc8t" Jan 28 18:36:30 crc kubenswrapper[4721]: I0128 18:36:30.534673 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/6c3bbd44-e901-4e2a-b48c-f1e1ddb965dc-installation-pull-secrets\") pod \"image-registry-697d97f7c8-b42n2\" (UID: \"6c3bbd44-e901-4e2a-b48c-f1e1ddb965dc\") " pod="openshift-image-registry/image-registry-697d97f7c8-b42n2" Jan 28 18:36:30 crc kubenswrapper[4721]: I0128 18:36:30.535443 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/26a0a4f9-321f-4196-88ce-888b82380eb6-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-jtc8t\" (UID: \"26a0a4f9-321f-4196-88ce-888b82380eb6\") " pod="openshift-authentication/oauth-openshift-558db77b4-jtc8t" Jan 28 18:36:30 crc kubenswrapper[4721]: I0128 18:36:30.536357 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9260fa7e-9c98-4777-9625-3ac5501c883c-metrics-certs\") pod \"router-default-5444994796-wqwcd\" (UID: \"9260fa7e-9c98-4777-9625-3ac5501c883c\") " pod="openshift-ingress/router-default-5444994796-wqwcd" Jan 28 18:36:30 crc kubenswrapper[4721]: I0128 18:36:30.536314 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/6c3bbd44-e901-4e2a-b48c-f1e1ddb965dc-registry-tls\") pod \"image-registry-697d97f7c8-b42n2\" (UID: \"6c3bbd44-e901-4e2a-b48c-f1e1ddb965dc\") " pod="openshift-image-registry/image-registry-697d97f7c8-b42n2" Jan 28 18:36:30 crc kubenswrapper[4721]: I0128 18:36:30.536753 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/6c3bbd44-e901-4e2a-b48c-f1e1ddb965dc-registry-certificates\") pod \"image-registry-697d97f7c8-b42n2\" (UID: \"6c3bbd44-e901-4e2a-b48c-f1e1ddb965dc\") " pod="openshift-image-registry/image-registry-697d97f7c8-b42n2" Jan 28 18:36:30 crc kubenswrapper[4721]: I0128 18:36:30.537614 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/52b4f91f-7c7b-401a-82b0-8907f6880677-console-config\") pod \"console-f9d7485db-ct2hz\" (UID: 
\"52b4f91f-7c7b-401a-82b0-8907f6880677\") " pod="openshift-console/console-f9d7485db-ct2hz" Jan 28 18:36:30 crc kubenswrapper[4721]: I0128 18:36:30.538135 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-ms7xm" Jan 28 18:36:30 crc kubenswrapper[4721]: I0128 18:36:30.539648 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fd0ecfef-29a6-474c-a266-ed16b5548797-serving-cert\") pod \"etcd-operator-b45778765-qrt7r\" (UID: \"fd0ecfef-29a6-474c-a266-ed16b5548797\") " pod="openshift-etcd-operator/etcd-operator-b45778765-qrt7r" Jan 28 18:36:30 crc kubenswrapper[4721]: I0128 18:36:30.540464 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/26a0a4f9-321f-4196-88ce-888b82380eb6-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-jtc8t\" (UID: \"26a0a4f9-321f-4196-88ce-888b82380eb6\") " pod="openshift-authentication/oauth-openshift-558db77b4-jtc8t" Jan 28 18:36:30 crc kubenswrapper[4721]: I0128 18:36:30.541211 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/47acc23c-4409-4e15-a231-5c095917842d-serving-cert\") pod \"openshift-config-operator-7777fb866f-bfsqt\" (UID: \"47acc23c-4409-4e15-a231-5c095917842d\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-bfsqt" Jan 28 18:36:30 crc kubenswrapper[4721]: I0128 18:36:30.541378 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/9260fa7e-9c98-4777-9625-3ac5501c883c-stats-auth\") pod \"router-default-5444994796-wqwcd\" (UID: \"9260fa7e-9c98-4777-9625-3ac5501c883c\") " pod="openshift-ingress/router-default-5444994796-wqwcd" Jan 28 18:36:30 crc kubenswrapper[4721]: I0128 18:36:30.541758 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7dd85c51-680b-4af2-8fac-8b9d94f7f2b6-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-dznkc\" (UID: \"7dd85c51-680b-4af2-8fac-8b9d94f7f2b6\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-dznkc" Jan 28 18:36:30 crc kubenswrapper[4721]: I0128 18:36:30.541873 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6e7be82c-acf6-4120-8f43-221b6ef958c8-serving-cert\") pod \"console-operator-58897d9998-76z8x\" (UID: \"6e7be82c-acf6-4120-8f43-221b6ef958c8\") " pod="openshift-console-operator/console-operator-58897d9998-76z8x" Jan 28 18:36:30 crc kubenswrapper[4721]: I0128 18:36:30.544496 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/26a0a4f9-321f-4196-88ce-888b82380eb6-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-jtc8t\" (UID: \"26a0a4f9-321f-4196-88ce-888b82380eb6\") " pod="openshift-authentication/oauth-openshift-558db77b4-jtc8t" Jan 28 18:36:30 crc kubenswrapper[4721]: I0128 18:36:30.553997 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/6c3bbd44-e901-4e2a-b48c-f1e1ddb965dc-bound-sa-token\") pod \"image-registry-697d97f7c8-b42n2\" (UID: \"6c3bbd44-e901-4e2a-b48c-f1e1ddb965dc\") " 
pod="openshift-image-registry/image-registry-697d97f7c8-b42n2" Jan 28 18:36:30 crc kubenswrapper[4721]: I0128 18:36:30.596100 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wmmwb\" (UniqueName: \"kubernetes.io/projected/6e7be82c-acf6-4120-8f43-221b6ef958c8-kube-api-access-wmmwb\") pod \"console-operator-58897d9998-76z8x\" (UID: \"6e7be82c-acf6-4120-8f43-221b6ef958c8\") " pod="openshift-console-operator/console-operator-58897d9998-76z8x" Jan 28 18:36:30 crc kubenswrapper[4721]: I0128 18:36:30.612138 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kq6zs\" (UniqueName: \"kubernetes.io/projected/6c3bbd44-e901-4e2a-b48c-f1e1ddb965dc-kube-api-access-kq6zs\") pod \"image-registry-697d97f7c8-b42n2\" (UID: \"6c3bbd44-e901-4e2a-b48c-f1e1ddb965dc\") " pod="openshift-image-registry/image-registry-697d97f7c8-b42n2" Jan 28 18:36:30 crc kubenswrapper[4721]: I0128 18:36:30.619250 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 18:36:30 crc kubenswrapper[4721]: E0128 18:36:30.619511 4721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 18:36:31.119480634 +0000 UTC m=+156.844786194 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 18:36:30 crc kubenswrapper[4721]: I0128 18:36:30.619590 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/a776531f-ebd1-491e-b6d7-378a11aad9d8-mountpoint-dir\") pod \"csi-hostpathplugin-xtdkt\" (UID: \"a776531f-ebd1-491e-b6d7-378a11aad9d8\") " pod="hostpath-provisioner/csi-hostpathplugin-xtdkt" Jan 28 18:36:30 crc kubenswrapper[4721]: I0128 18:36:30.619676 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xf7g7\" (UniqueName: \"kubernetes.io/projected/4657c92e-5f11-45b4-bf64-91d04c42ace3-kube-api-access-xf7g7\") pod \"machine-config-operator-74547568cd-qqr56\" (UID: \"4657c92e-5f11-45b4-bf64-91d04c42ace3\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-qqr56" Jan 28 18:36:30 crc kubenswrapper[4721]: I0128 18:36:30.619755 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/519e974b-b132-4b21-a47d-759e40bdbc72-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-86g2n\" (UID: \"519e974b-b132-4b21-a47d-759e40bdbc72\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-86g2n" Jan 28 18:36:30 crc kubenswrapper[4721]: I0128 18:36:30.619801 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/8355e616-674b-4bc2-a727-76609df63630-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-jrjx5\" (UID: \"8355e616-674b-4bc2-a727-76609df63630\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-jrjx5" Jan 28 18:36:30 crc kubenswrapper[4721]: I0128 18:36:30.619828 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/eaf91cbf-9c86-4235-89c5-3ba0c1ea7d6f-trusted-ca\") pod \"ingress-operator-5b745b69d9-96x8n\" (UID: \"eaf91cbf-9c86-4235-89c5-3ba0c1ea7d6f\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-96x8n" Jan 28 18:36:30 crc kubenswrapper[4721]: I0128 18:36:30.619844 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/db74784c-afbc-482a-8e2d-18c5bb898a9b-secret-volume\") pod \"collect-profiles-29493750-pb8r2\" (UID: \"db74784c-afbc-482a-8e2d-18c5bb898a9b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493750-pb8r2" Jan 28 18:36:30 crc kubenswrapper[4721]: I0128 18:36:30.619883 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wn9gb\" (UniqueName: \"kubernetes.io/projected/86346561-5414-4c01-a202-6964f19b52db-kube-api-access-wn9gb\") pod \"ingress-canary-hjbqw\" (UID: \"86346561-5414-4c01-a202-6964f19b52db\") " pod="openshift-ingress-canary/ingress-canary-hjbqw" Jan 28 18:36:30 crc kubenswrapper[4721]: I0128 18:36:30.620486 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ndcsj\" (UniqueName: \"kubernetes.io/projected/e6508511-52da-41f5-a939-98342be6441e-kube-api-access-ndcsj\") pod \"machine-config-server-x9nr7\" (UID: \"e6508511-52da-41f5-a939-98342be6441e\") " pod="openshift-machine-config-operator/machine-config-server-x9nr7" Jan 28 18:36:30 crc kubenswrapper[4721]: I0128 18:36:30.620512 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/f8ec1447-58ab-4c73-bc49-0da5b940c6cf-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-4qhmh\" (UID: \"f8ec1447-58ab-4c73-bc49-0da5b940c6cf\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-4qhmh" Jan 28 18:36:30 crc kubenswrapper[4721]: I0128 18:36:30.620579 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h7t5l\" (UniqueName: \"kubernetes.io/projected/6e54e2fb-d821-4c19-a076-c47b738d1a48-kube-api-access-h7t5l\") pod \"machine-config-controller-84d6567774-hw64n\" (UID: \"6e54e2fb-d821-4c19-a076-c47b738d1a48\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-hw64n" Jan 28 18:36:30 crc kubenswrapper[4721]: I0128 18:36:30.620603 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/a776531f-ebd1-491e-b6d7-378a11aad9d8-registration-dir\") pod \"csi-hostpathplugin-xtdkt\" (UID: \"a776531f-ebd1-491e-b6d7-378a11aad9d8\") " pod="hostpath-provisioner/csi-hostpathplugin-xtdkt" Jan 28 18:36:30 crc kubenswrapper[4721]: I0128 18:36:30.620657 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mbdrw\" (UniqueName: 
\"kubernetes.io/projected/5f55d8f7-6300-4d3c-8cdf-d4e2d106fa5f-kube-api-access-mbdrw\") pod \"packageserver-d55dfcdfc-blkkj\" (UID: \"5f55d8f7-6300-4d3c-8cdf-d4e2d106fa5f\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-blkkj" Jan 28 18:36:30 crc kubenswrapper[4721]: I0128 18:36:30.620726 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v2k9d\" (UniqueName: \"kubernetes.io/projected/a75256c5-8c48-43f3-9faf-15d661a26980-kube-api-access-v2k9d\") pod \"migrator-59844c95c7-gdtgs\" (UID: \"a75256c5-8c48-43f3-9faf-15d661a26980\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-gdtgs" Jan 28 18:36:30 crc kubenswrapper[4721]: I0128 18:36:30.620758 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/519e974b-b132-4b21-a47d-759e40bdbc72-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-86g2n\" (UID: \"519e974b-b132-4b21-a47d-759e40bdbc72\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-86g2n" Jan 28 18:36:30 crc kubenswrapper[4721]: I0128 18:36:30.620780 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2nch6\" (UniqueName: \"kubernetes.io/projected/8355e616-674b-4bc2-a727-76609df63630-kube-api-access-2nch6\") pod \"control-plane-machine-set-operator-78cbb6b69f-jrjx5\" (UID: \"8355e616-674b-4bc2-a727-76609df63630\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-jrjx5" Jan 28 18:36:30 crc kubenswrapper[4721]: I0128 18:36:30.620797 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/e6508511-52da-41f5-a939-98342be6441e-certs\") pod \"machine-config-server-x9nr7\" (UID: \"e6508511-52da-41f5-a939-98342be6441e\") " pod="openshift-machine-config-operator/machine-config-server-x9nr7" Jan 28 18:36:30 crc kubenswrapper[4721]: I0128 18:36:30.620818 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/4657c92e-5f11-45b4-bf64-91d04c42ace3-auth-proxy-config\") pod \"machine-config-operator-74547568cd-qqr56\" (UID: \"4657c92e-5f11-45b4-bf64-91d04c42ace3\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-qqr56" Jan 28 18:36:30 crc kubenswrapper[4721]: I0128 18:36:30.620832 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/a776531f-ebd1-491e-b6d7-378a11aad9d8-plugins-dir\") pod \"csi-hostpathplugin-xtdkt\" (UID: \"a776531f-ebd1-491e-b6d7-378a11aad9d8\") " pod="hostpath-provisioner/csi-hostpathplugin-xtdkt" Jan 28 18:36:30 crc kubenswrapper[4721]: I0128 18:36:30.620846 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/519e974b-b132-4b21-a47d-759e40bdbc72-config\") pod \"kube-apiserver-operator-766d6c64bb-86g2n\" (UID: \"519e974b-b132-4b21-a47d-759e40bdbc72\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-86g2n" Jan 28 18:36:30 crc kubenswrapper[4721]: I0128 18:36:30.620868 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/77dd4c6e-7dd3-4378-be3f-74f0c43fb371-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-s7d98\" (UID: 
\"77dd4c6e-7dd3-4378-be3f-74f0c43fb371\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-s7d98" Jan 28 18:36:30 crc kubenswrapper[4721]: I0128 18:36:30.620898 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fsnng\" (UniqueName: \"kubernetes.io/projected/0db061c0-7df0-4ca1-a388-c69dd9344b9c-kube-api-access-fsnng\") pod \"kube-storage-version-migrator-operator-b67b599dd-dkq9z\" (UID: \"0db061c0-7df0-4ca1-a388-c69dd9344b9c\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-dkq9z" Jan 28 18:36:30 crc kubenswrapper[4721]: I0128 18:36:30.620924 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7x9n7\" (UniqueName: \"kubernetes.io/projected/77dd4c6e-7dd3-4378-be3f-74f0c43fb371-kube-api-access-7x9n7\") pod \"multus-admission-controller-857f4d67dd-s7d98\" (UID: \"77dd4c6e-7dd3-4378-be3f-74f0c43fb371\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-s7d98" Jan 28 18:36:30 crc kubenswrapper[4721]: I0128 18:36:30.620942 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lvdkn\" (UniqueName: \"kubernetes.io/projected/70ba75a9-4e0e-4fb2-9986-030f8a02d39c-kube-api-access-lvdkn\") pod \"catalog-operator-68c6474976-4nmsk\" (UID: \"70ba75a9-4e0e-4fb2-9986-030f8a02d39c\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-4nmsk" Jan 28 18:36:30 crc kubenswrapper[4721]: I0128 18:36:30.620956 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a58cf121-cb7e-4eb7-9634-b72173bfa945-config-volume\") pod \"dns-default-l59vq\" (UID: \"a58cf121-cb7e-4eb7-9634-b72173bfa945\") " pod="openshift-dns/dns-default-l59vq" Jan 28 18:36:30 crc kubenswrapper[4721]: I0128 18:36:30.620985 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/d0a361ba-f31a-477c-a532-136ebf0b025b-profile-collector-cert\") pod \"olm-operator-6b444d44fb-hp7z2\" (UID: \"d0a361ba-f31a-477c-a532-136ebf0b025b\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-hp7z2" Jan 28 18:36:30 crc kubenswrapper[4721]: I0128 18:36:30.621001 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/a776531f-ebd1-491e-b6d7-378a11aad9d8-csi-data-dir\") pod \"csi-hostpathplugin-xtdkt\" (UID: \"a776531f-ebd1-491e-b6d7-378a11aad9d8\") " pod="hostpath-provisioner/csi-hostpathplugin-xtdkt" Jan 28 18:36:30 crc kubenswrapper[4721]: I0128 18:36:30.621019 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/12a4be20-2607-4502-b20d-b579c9987b57-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-qp2vg\" (UID: \"12a4be20-2607-4502-b20d-b579c9987b57\") " pod="openshift-marketplace/marketplace-operator-79b997595-qp2vg" Jan 28 18:36:30 crc kubenswrapper[4721]: I0128 18:36:30.621036 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/4f8f829a-0e7b-4ad6-9dc4-ce845d2e9d26-signing-cabundle\") pod \"service-ca-9c57cc56f-7b9dz\" (UID: \"4f8f829a-0e7b-4ad6-9dc4-ce845d2e9d26\") " pod="openshift-service-ca/service-ca-9c57cc56f-7b9dz" Jan 28 18:36:30 crc 
kubenswrapper[4721]: I0128 18:36:30.621640 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mzs64\" (UniqueName: \"kubernetes.io/projected/d8a50493-8d3e-4391-ad78-0bd93ce8157e-kube-api-access-mzs64\") pod \"service-ca-operator-777779d784-2k27q\" (UID: \"d8a50493-8d3e-4391-ad78-0bd93ce8157e\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-2k27q" Jan 28 18:36:30 crc kubenswrapper[4721]: I0128 18:36:30.621668 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/5f55d8f7-6300-4d3c-8cdf-d4e2d106fa5f-tmpfs\") pod \"packageserver-d55dfcdfc-blkkj\" (UID: \"5f55d8f7-6300-4d3c-8cdf-d4e2d106fa5f\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-blkkj" Jan 28 18:36:30 crc kubenswrapper[4721]: I0128 18:36:30.621724 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/a776531f-ebd1-491e-b6d7-378a11aad9d8-socket-dir\") pod \"csi-hostpathplugin-xtdkt\" (UID: \"a776531f-ebd1-491e-b6d7-378a11aad9d8\") " pod="hostpath-provisioner/csi-hostpathplugin-xtdkt" Jan 28 18:36:30 crc kubenswrapper[4721]: I0128 18:36:30.621747 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5d64p\" (UniqueName: \"kubernetes.io/projected/eaf91cbf-9c86-4235-89c5-3ba0c1ea7d6f-kube-api-access-5d64p\") pod \"ingress-operator-5b745b69d9-96x8n\" (UID: \"eaf91cbf-9c86-4235-89c5-3ba0c1ea7d6f\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-96x8n" Jan 28 18:36:30 crc kubenswrapper[4721]: I0128 18:36:30.621794 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/5f55d8f7-6300-4d3c-8cdf-d4e2d106fa5f-apiservice-cert\") pod \"packageserver-d55dfcdfc-blkkj\" (UID: \"5f55d8f7-6300-4d3c-8cdf-d4e2d106fa5f\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-blkkj" Jan 28 18:36:30 crc kubenswrapper[4721]: I0128 18:36:30.621813 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0db061c0-7df0-4ca1-a388-c69dd9344b9c-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-dkq9z\" (UID: \"0db061c0-7df0-4ca1-a388-c69dd9344b9c\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-dkq9z" Jan 28 18:36:30 crc kubenswrapper[4721]: I0128 18:36:30.621832 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/6e54e2fb-d821-4c19-a076-c47b738d1a48-proxy-tls\") pod \"machine-config-controller-84d6567774-hw64n\" (UID: \"6e54e2fb-d821-4c19-a076-c47b738d1a48\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-hw64n" Jan 28 18:36:30 crc kubenswrapper[4721]: I0128 18:36:30.621896 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/4657c92e-5f11-45b4-bf64-91d04c42ace3-proxy-tls\") pod \"machine-config-operator-74547568cd-qqr56\" (UID: \"4657c92e-5f11-45b4-bf64-91d04c42ace3\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-qqr56" Jan 28 18:36:30 crc kubenswrapper[4721]: I0128 18:36:30.621911 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: 
\"kubernetes.io/secret/70ba75a9-4e0e-4fb2-9986-030f8a02d39c-srv-cert\") pod \"catalog-operator-68c6474976-4nmsk\" (UID: \"70ba75a9-4e0e-4fb2-9986-030f8a02d39c\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-4nmsk" Jan 28 18:36:30 crc kubenswrapper[4721]: I0128 18:36:30.621932 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/eaf91cbf-9c86-4235-89c5-3ba0c1ea7d6f-metrics-tls\") pod \"ingress-operator-5b745b69d9-96x8n\" (UID: \"eaf91cbf-9c86-4235-89c5-3ba0c1ea7d6f\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-96x8n" Jan 28 18:36:30 crc kubenswrapper[4721]: I0128 18:36:30.621955 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8bkwq\" (UniqueName: \"kubernetes.io/projected/36392dfb-bda3-46da-b8ba-ebc27ab22e00-kube-api-access-8bkwq\") pod \"package-server-manager-789f6589d5-9lrf6\" (UID: \"36392dfb-bda3-46da-b8ba-ebc27ab22e00\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-9lrf6" Jan 28 18:36:30 crc kubenswrapper[4721]: I0128 18:36:30.621976 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/70ba75a9-4e0e-4fb2-9986-030f8a02d39c-profile-collector-cert\") pod \"catalog-operator-68c6474976-4nmsk\" (UID: \"70ba75a9-4e0e-4fb2-9986-030f8a02d39c\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-4nmsk" Jan 28 18:36:30 crc kubenswrapper[4721]: I0128 18:36:30.621997 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-b42n2\" (UID: \"6c3bbd44-e901-4e2a-b48c-f1e1ddb965dc\") " pod="openshift-image-registry/image-registry-697d97f7c8-b42n2" Jan 28 18:36:30 crc kubenswrapper[4721]: I0128 18:36:30.622012 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d8a50493-8d3e-4391-ad78-0bd93ce8157e-serving-cert\") pod \"service-ca-operator-777779d784-2k27q\" (UID: \"d8a50493-8d3e-4391-ad78-0bd93ce8157e\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-2k27q" Jan 28 18:36:30 crc kubenswrapper[4721]: I0128 18:36:30.622031 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9x5w6\" (UniqueName: \"kubernetes.io/projected/12a4be20-2607-4502-b20d-b579c9987b57-kube-api-access-9x5w6\") pod \"marketplace-operator-79b997595-qp2vg\" (UID: \"12a4be20-2607-4502-b20d-b579c9987b57\") " pod="openshift-marketplace/marketplace-operator-79b997595-qp2vg" Jan 28 18:36:30 crc kubenswrapper[4721]: I0128 18:36:30.622047 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/db74784c-afbc-482a-8e2d-18c5bb898a9b-config-volume\") pod \"collect-profiles-29493750-pb8r2\" (UID: \"db74784c-afbc-482a-8e2d-18c5bb898a9b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493750-pb8r2" Jan 28 18:36:30 crc kubenswrapper[4721]: I0128 18:36:30.622078 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a58cf121-cb7e-4eb7-9634-b72173bfa945-metrics-tls\") pod \"dns-default-l59vq\" (UID: 
\"a58cf121-cb7e-4eb7-9634-b72173bfa945\") " pod="openshift-dns/dns-default-l59vq" Jan 28 18:36:30 crc kubenswrapper[4721]: I0128 18:36:30.622098 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xqwfz\" (UniqueName: \"kubernetes.io/projected/a58cf121-cb7e-4eb7-9634-b72173bfa945-kube-api-access-xqwfz\") pod \"dns-default-l59vq\" (UID: \"a58cf121-cb7e-4eb7-9634-b72173bfa945\") " pod="openshift-dns/dns-default-l59vq" Jan 28 18:36:30 crc kubenswrapper[4721]: I0128 18:36:30.622113 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rk7q8\" (UniqueName: \"kubernetes.io/projected/4f8f829a-0e7b-4ad6-9dc4-ce845d2e9d26-kube-api-access-rk7q8\") pod \"service-ca-9c57cc56f-7b9dz\" (UID: \"4f8f829a-0e7b-4ad6-9dc4-ce845d2e9d26\") " pod="openshift-service-ca/service-ca-9c57cc56f-7b9dz" Jan 28 18:36:30 crc kubenswrapper[4721]: I0128 18:36:30.622146 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/36392dfb-bda3-46da-b8ba-ebc27ab22e00-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-9lrf6\" (UID: \"36392dfb-bda3-46da-b8ba-ebc27ab22e00\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-9lrf6" Jan 28 18:36:30 crc kubenswrapper[4721]: I0128 18:36:30.622162 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/86346561-5414-4c01-a202-6964f19b52db-cert\") pod \"ingress-canary-hjbqw\" (UID: \"86346561-5414-4c01-a202-6964f19b52db\") " pod="openshift-ingress-canary/ingress-canary-hjbqw" Jan 28 18:36:30 crc kubenswrapper[4721]: I0128 18:36:30.622192 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dcv5n\" (UniqueName: \"kubernetes.io/projected/db74784c-afbc-482a-8e2d-18c5bb898a9b-kube-api-access-dcv5n\") pod \"collect-profiles-29493750-pb8r2\" (UID: \"db74784c-afbc-482a-8e2d-18c5bb898a9b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493750-pb8r2" Jan 28 18:36:30 crc kubenswrapper[4721]: I0128 18:36:30.622218 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/12a4be20-2607-4502-b20d-b579c9987b57-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-qp2vg\" (UID: \"12a4be20-2607-4502-b20d-b579c9987b57\") " pod="openshift-marketplace/marketplace-operator-79b997595-qp2vg" Jan 28 18:36:30 crc kubenswrapper[4721]: I0128 18:36:30.622243 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/e6508511-52da-41f5-a939-98342be6441e-node-bootstrap-token\") pod \"machine-config-server-x9nr7\" (UID: \"e6508511-52da-41f5-a939-98342be6441e\") " pod="openshift-machine-config-operator/machine-config-server-x9nr7" Jan 28 18:36:30 crc kubenswrapper[4721]: I0128 18:36:30.622262 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/6e54e2fb-d821-4c19-a076-c47b738d1a48-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-hw64n\" (UID: \"6e54e2fb-d821-4c19-a076-c47b738d1a48\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-hw64n" Jan 28 18:36:30 crc kubenswrapper[4721]: I0128 18:36:30.622277 4721 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/d0a361ba-f31a-477c-a532-136ebf0b025b-srv-cert\") pod \"olm-operator-6b444d44fb-hp7z2\" (UID: \"d0a361ba-f31a-477c-a532-136ebf0b025b\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-hp7z2" Jan 28 18:36:30 crc kubenswrapper[4721]: I0128 18:36:30.622300 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/5f55d8f7-6300-4d3c-8cdf-d4e2d106fa5f-webhook-cert\") pod \"packageserver-d55dfcdfc-blkkj\" (UID: \"5f55d8f7-6300-4d3c-8cdf-d4e2d106fa5f\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-blkkj" Jan 28 18:36:30 crc kubenswrapper[4721]: I0128 18:36:30.622317 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/4f8f829a-0e7b-4ad6-9dc4-ce845d2e9d26-signing-key\") pod \"service-ca-9c57cc56f-7b9dz\" (UID: \"4f8f829a-0e7b-4ad6-9dc4-ce845d2e9d26\") " pod="openshift-service-ca/service-ca-9c57cc56f-7b9dz" Jan 28 18:36:30 crc kubenswrapper[4721]: I0128 18:36:30.622339 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/4657c92e-5f11-45b4-bf64-91d04c42ace3-images\") pod \"machine-config-operator-74547568cd-qqr56\" (UID: \"4657c92e-5f11-45b4-bf64-91d04c42ace3\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-qqr56" Jan 28 18:36:30 crc kubenswrapper[4721]: I0128 18:36:30.622355 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d8a50493-8d3e-4391-ad78-0bd93ce8157e-config\") pod \"service-ca-operator-777779d784-2k27q\" (UID: \"d8a50493-8d3e-4391-ad78-0bd93ce8157e\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-2k27q" Jan 28 18:36:30 crc kubenswrapper[4721]: I0128 18:36:30.622379 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/eaf91cbf-9c86-4235-89c5-3ba0c1ea7d6f-bound-sa-token\") pod \"ingress-operator-5b745b69d9-96x8n\" (UID: \"eaf91cbf-9c86-4235-89c5-3ba0c1ea7d6f\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-96x8n" Jan 28 18:36:30 crc kubenswrapper[4721]: I0128 18:36:30.622401 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vpvgf\" (UniqueName: \"kubernetes.io/projected/f8ec1447-58ab-4c73-bc49-0da5b940c6cf-kube-api-access-vpvgf\") pod \"cluster-samples-operator-665b6dd947-4qhmh\" (UID: \"f8ec1447-58ab-4c73-bc49-0da5b940c6cf\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-4qhmh" Jan 28 18:36:30 crc kubenswrapper[4721]: I0128 18:36:30.622440 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0db061c0-7df0-4ca1-a388-c69dd9344b9c-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-dkq9z\" (UID: \"0db061c0-7df0-4ca1-a388-c69dd9344b9c\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-dkq9z" Jan 28 18:36:30 crc kubenswrapper[4721]: I0128 18:36:30.622460 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mzfxh\" (UniqueName: 
\"kubernetes.io/projected/a776531f-ebd1-491e-b6d7-378a11aad9d8-kube-api-access-mzfxh\") pod \"csi-hostpathplugin-xtdkt\" (UID: \"a776531f-ebd1-491e-b6d7-378a11aad9d8\") " pod="hostpath-provisioner/csi-hostpathplugin-xtdkt" Jan 28 18:36:30 crc kubenswrapper[4721]: I0128 18:36:30.622479 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q9mrq\" (UniqueName: \"kubernetes.io/projected/d0a361ba-f31a-477c-a532-136ebf0b025b-kube-api-access-q9mrq\") pod \"olm-operator-6b444d44fb-hp7z2\" (UID: \"d0a361ba-f31a-477c-a532-136ebf0b025b\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-hp7z2" Jan 28 18:36:30 crc kubenswrapper[4721]: I0128 18:36:30.624212 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/eaf91cbf-9c86-4235-89c5-3ba0c1ea7d6f-trusted-ca\") pod \"ingress-operator-5b745b69d9-96x8n\" (UID: \"eaf91cbf-9c86-4235-89c5-3ba0c1ea7d6f\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-96x8n" Jan 28 18:36:30 crc kubenswrapper[4721]: I0128 18:36:30.625125 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/db74784c-afbc-482a-8e2d-18c5bb898a9b-secret-volume\") pod \"collect-profiles-29493750-pb8r2\" (UID: \"db74784c-afbc-482a-8e2d-18c5bb898a9b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493750-pb8r2" Jan 28 18:36:30 crc kubenswrapper[4721]: I0128 18:36:30.626324 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/8355e616-674b-4bc2-a727-76609df63630-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-jrjx5\" (UID: \"8355e616-674b-4bc2-a727-76609df63630\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-jrjx5" Jan 28 18:36:30 crc kubenswrapper[4721]: I0128 18:36:30.626736 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/519e974b-b132-4b21-a47d-759e40bdbc72-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-86g2n\" (UID: \"519e974b-b132-4b21-a47d-759e40bdbc72\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-86g2n" Jan 28 18:36:30 crc kubenswrapper[4721]: I0128 18:36:30.628304 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/a776531f-ebd1-491e-b6d7-378a11aad9d8-plugins-dir\") pod \"csi-hostpathplugin-xtdkt\" (UID: \"a776531f-ebd1-491e-b6d7-378a11aad9d8\") " pod="hostpath-provisioner/csi-hostpathplugin-xtdkt" Jan 28 18:36:30 crc kubenswrapper[4721]: I0128 18:36:30.619921 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2cpq2\" (UniqueName: \"kubernetes.io/projected/b30c15c2-ac57-4e56-a55b-5b9de02e097f-kube-api-access-2cpq2\") pod \"downloads-7954f5f757-cmtm6\" (UID: \"b30c15c2-ac57-4e56-a55b-5b9de02e097f\") " pod="openshift-console/downloads-7954f5f757-cmtm6" Jan 28 18:36:30 crc kubenswrapper[4721]: I0128 18:36:30.620045 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/a776531f-ebd1-491e-b6d7-378a11aad9d8-mountpoint-dir\") pod \"csi-hostpathplugin-xtdkt\" (UID: \"a776531f-ebd1-491e-b6d7-378a11aad9d8\") " pod="hostpath-provisioner/csi-hostpathplugin-xtdkt" Jan 28 18:36:30 crc 
kubenswrapper[4721]: I0128 18:36:30.628978 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0db061c0-7df0-4ca1-a388-c69dd9344b9c-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-dkq9z\" (UID: \"0db061c0-7df0-4ca1-a388-c69dd9344b9c\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-dkq9z" Jan 28 18:36:30 crc kubenswrapper[4721]: I0128 18:36:30.628994 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/5f55d8f7-6300-4d3c-8cdf-d4e2d106fa5f-apiservice-cert\") pod \"packageserver-d55dfcdfc-blkkj\" (UID: \"5f55d8f7-6300-4d3c-8cdf-d4e2d106fa5f\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-blkkj" Jan 28 18:36:30 crc kubenswrapper[4721]: I0128 18:36:30.629151 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/a776531f-ebd1-491e-b6d7-378a11aad9d8-csi-data-dir\") pod \"csi-hostpathplugin-xtdkt\" (UID: \"a776531f-ebd1-491e-b6d7-378a11aad9d8\") " pod="hostpath-provisioner/csi-hostpathplugin-xtdkt" Jan 28 18:36:30 crc kubenswrapper[4721]: I0128 18:36:30.631766 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/f8ec1447-58ab-4c73-bc49-0da5b940c6cf-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-4qhmh\" (UID: \"f8ec1447-58ab-4c73-bc49-0da5b940c6cf\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-4qhmh" Jan 28 18:36:30 crc kubenswrapper[4721]: I0128 18:36:30.631894 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/a776531f-ebd1-491e-b6d7-378a11aad9d8-registration-dir\") pod \"csi-hostpathplugin-xtdkt\" (UID: \"a776531f-ebd1-491e-b6d7-378a11aad9d8\") " pod="hostpath-provisioner/csi-hostpathplugin-xtdkt" Jan 28 18:36:30 crc kubenswrapper[4721]: I0128 18:36:30.632337 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/519e974b-b132-4b21-a47d-759e40bdbc72-config\") pod \"kube-apiserver-operator-766d6c64bb-86g2n\" (UID: \"519e974b-b132-4b21-a47d-759e40bdbc72\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-86g2n" Jan 28 18:36:30 crc kubenswrapper[4721]: I0128 18:36:30.633225 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a58cf121-cb7e-4eb7-9634-b72173bfa945-config-volume\") pod \"dns-default-l59vq\" (UID: \"a58cf121-cb7e-4eb7-9634-b72173bfa945\") " pod="openshift-dns/dns-default-l59vq" Jan 28 18:36:30 crc kubenswrapper[4721]: I0128 18:36:30.633799 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/d0a361ba-f31a-477c-a532-136ebf0b025b-profile-collector-cert\") pod \"olm-operator-6b444d44fb-hp7z2\" (UID: \"d0a361ba-f31a-477c-a532-136ebf0b025b\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-hp7z2" Jan 28 18:36:30 crc kubenswrapper[4721]: I0128 18:36:30.634301 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/6e54e2fb-d821-4c19-a076-c47b738d1a48-proxy-tls\") pod \"machine-config-controller-84d6567774-hw64n\" (UID: 
\"6e54e2fb-d821-4c19-a076-c47b738d1a48\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-hw64n" Jan 28 18:36:30 crc kubenswrapper[4721]: I0128 18:36:30.634672 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/4657c92e-5f11-45b4-bf64-91d04c42ace3-auth-proxy-config\") pod \"machine-config-operator-74547568cd-qqr56\" (UID: \"4657c92e-5f11-45b4-bf64-91d04c42ace3\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-qqr56" Jan 28 18:36:30 crc kubenswrapper[4721]: I0128 18:36:30.636109 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/5f55d8f7-6300-4d3c-8cdf-d4e2d106fa5f-tmpfs\") pod \"packageserver-d55dfcdfc-blkkj\" (UID: \"5f55d8f7-6300-4d3c-8cdf-d4e2d106fa5f\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-blkkj" Jan 28 18:36:30 crc kubenswrapper[4721]: I0128 18:36:30.636209 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/a776531f-ebd1-491e-b6d7-378a11aad9d8-socket-dir\") pod \"csi-hostpathplugin-xtdkt\" (UID: \"a776531f-ebd1-491e-b6d7-378a11aad9d8\") " pod="hostpath-provisioner/csi-hostpathplugin-xtdkt" Jan 28 18:36:30 crc kubenswrapper[4721]: I0128 18:36:30.636911 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/4f8f829a-0e7b-4ad6-9dc4-ce845d2e9d26-signing-cabundle\") pod \"service-ca-9c57cc56f-7b9dz\" (UID: \"4f8f829a-0e7b-4ad6-9dc4-ce845d2e9d26\") " pod="openshift-service-ca/service-ca-9c57cc56f-7b9dz" Jan 28 18:36:30 crc kubenswrapper[4721]: I0128 18:36:30.637501 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/4657c92e-5f11-45b4-bf64-91d04c42ace3-proxy-tls\") pod \"machine-config-operator-74547568cd-qqr56\" (UID: \"4657c92e-5f11-45b4-bf64-91d04c42ace3\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-qqr56" Jan 28 18:36:30 crc kubenswrapper[4721]: I0128 18:36:30.638694 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/70ba75a9-4e0e-4fb2-9986-030f8a02d39c-srv-cert\") pod \"catalog-operator-68c6474976-4nmsk\" (UID: \"70ba75a9-4e0e-4fb2-9986-030f8a02d39c\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-4nmsk" Jan 28 18:36:30 crc kubenswrapper[4721]: I0128 18:36:30.639032 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/12a4be20-2607-4502-b20d-b579c9987b57-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-qp2vg\" (UID: \"12a4be20-2607-4502-b20d-b579c9987b57\") " pod="openshift-marketplace/marketplace-operator-79b997595-qp2vg" Jan 28 18:36:30 crc kubenswrapper[4721]: E0128 18:36:30.639388 4721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 18:36:31.139371903 +0000 UTC m=+156.864677453 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-b42n2" (UID: "6c3bbd44-e901-4e2a-b48c-f1e1ddb965dc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 18:36:30 crc kubenswrapper[4721]: I0128 18:36:30.639991 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/4657c92e-5f11-45b4-bf64-91d04c42ace3-images\") pod \"machine-config-operator-74547568cd-qqr56\" (UID: \"4657c92e-5f11-45b4-bf64-91d04c42ace3\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-qqr56" Jan 28 18:36:30 crc kubenswrapper[4721]: I0128 18:36:30.640109 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kvq8j\" (UniqueName: \"kubernetes.io/projected/9260fa7e-9c98-4777-9625-3ac5501c883c-kube-api-access-kvq8j\") pod \"router-default-5444994796-wqwcd\" (UID: \"9260fa7e-9c98-4777-9625-3ac5501c883c\") " pod="openshift-ingress/router-default-5444994796-wqwcd" Jan 28 18:36:30 crc kubenswrapper[4721]: I0128 18:36:30.641043 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/eaf91cbf-9c86-4235-89c5-3ba0c1ea7d6f-metrics-tls\") pod \"ingress-operator-5b745b69d9-96x8n\" (UID: \"eaf91cbf-9c86-4235-89c5-3ba0c1ea7d6f\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-96x8n" Jan 28 18:36:30 crc kubenswrapper[4721]: I0128 18:36:30.641645 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0db061c0-7df0-4ca1-a388-c69dd9344b9c-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-dkq9z\" (UID: \"0db061c0-7df0-4ca1-a388-c69dd9344b9c\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-dkq9z" Jan 28 18:36:30 crc kubenswrapper[4721]: I0128 18:36:30.642307 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d8a50493-8d3e-4391-ad78-0bd93ce8157e-config\") pod \"service-ca-operator-777779d784-2k27q\" (UID: \"d8a50493-8d3e-4391-ad78-0bd93ce8157e\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-2k27q" Jan 28 18:36:30 crc kubenswrapper[4721]: I0128 18:36:30.642312 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/db74784c-afbc-482a-8e2d-18c5bb898a9b-config-volume\") pod \"collect-profiles-29493750-pb8r2\" (UID: \"db74784c-afbc-482a-8e2d-18c5bb898a9b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493750-pb8r2" Jan 28 18:36:30 crc kubenswrapper[4721]: I0128 18:36:30.642849 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/6e54e2fb-d821-4c19-a076-c47b738d1a48-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-hw64n\" (UID: \"6e54e2fb-d821-4c19-a076-c47b738d1a48\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-hw64n" Jan 28 18:36:30 crc kubenswrapper[4721]: I0128 18:36:30.643301 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" 
(UniqueName: \"kubernetes.io/secret/d8a50493-8d3e-4391-ad78-0bd93ce8157e-serving-cert\") pod \"service-ca-operator-777779d784-2k27q\" (UID: \"d8a50493-8d3e-4391-ad78-0bd93ce8157e\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-2k27q" Jan 28 18:36:30 crc kubenswrapper[4721]: I0128 18:36:30.646728 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/77dd4c6e-7dd3-4378-be3f-74f0c43fb371-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-s7d98\" (UID: \"77dd4c6e-7dd3-4378-be3f-74f0c43fb371\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-s7d98" Jan 28 18:36:30 crc kubenswrapper[4721]: I0128 18:36:30.646756 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/e6508511-52da-41f5-a939-98342be6441e-node-bootstrap-token\") pod \"machine-config-server-x9nr7\" (UID: \"e6508511-52da-41f5-a939-98342be6441e\") " pod="openshift-machine-config-operator/machine-config-server-x9nr7" Jan 28 18:36:30 crc kubenswrapper[4721]: I0128 18:36:30.647101 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a58cf121-cb7e-4eb7-9634-b72173bfa945-metrics-tls\") pod \"dns-default-l59vq\" (UID: \"a58cf121-cb7e-4eb7-9634-b72173bfa945\") " pod="openshift-dns/dns-default-l59vq" Jan 28 18:36:30 crc kubenswrapper[4721]: I0128 18:36:30.647138 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/86346561-5414-4c01-a202-6964f19b52db-cert\") pod \"ingress-canary-hjbqw\" (UID: \"86346561-5414-4c01-a202-6964f19b52db\") " pod="openshift-ingress-canary/ingress-canary-hjbqw" Jan 28 18:36:30 crc kubenswrapper[4721]: I0128 18:36:30.647258 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/70ba75a9-4e0e-4fb2-9986-030f8a02d39c-profile-collector-cert\") pod \"catalog-operator-68c6474976-4nmsk\" (UID: \"70ba75a9-4e0e-4fb2-9986-030f8a02d39c\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-4nmsk" Jan 28 18:36:30 crc kubenswrapper[4721]: I0128 18:36:30.647628 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/d0a361ba-f31a-477c-a532-136ebf0b025b-srv-cert\") pod \"olm-operator-6b444d44fb-hp7z2\" (UID: \"d0a361ba-f31a-477c-a532-136ebf0b025b\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-hp7z2" Jan 28 18:36:30 crc kubenswrapper[4721]: I0128 18:36:30.647640 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/e6508511-52da-41f5-a939-98342be6441e-certs\") pod \"machine-config-server-x9nr7\" (UID: \"e6508511-52da-41f5-a939-98342be6441e\") " pod="openshift-machine-config-operator/machine-config-server-x9nr7" Jan 28 18:36:30 crc kubenswrapper[4721]: I0128 18:36:30.647665 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/5f55d8f7-6300-4d3c-8cdf-d4e2d106fa5f-webhook-cert\") pod \"packageserver-d55dfcdfc-blkkj\" (UID: \"5f55d8f7-6300-4d3c-8cdf-d4e2d106fa5f\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-blkkj" Jan 28 18:36:30 crc kubenswrapper[4721]: I0128 18:36:30.649078 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"signing-key\" (UniqueName: \"kubernetes.io/secret/4f8f829a-0e7b-4ad6-9dc4-ce845d2e9d26-signing-key\") pod \"service-ca-9c57cc56f-7b9dz\" (UID: \"4f8f829a-0e7b-4ad6-9dc4-ce845d2e9d26\") " pod="openshift-service-ca/service-ca-9c57cc56f-7b9dz" Jan 28 18:36:30 crc kubenswrapper[4721]: I0128 18:36:30.649282 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/12a4be20-2607-4502-b20d-b579c9987b57-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-qp2vg\" (UID: \"12a4be20-2607-4502-b20d-b579c9987b57\") " pod="openshift-marketplace/marketplace-operator-79b997595-qp2vg" Jan 28 18:36:30 crc kubenswrapper[4721]: I0128 18:36:30.649884 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/36392dfb-bda3-46da-b8ba-ebc27ab22e00-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-9lrf6\" (UID: \"36392dfb-bda3-46da-b8ba-ebc27ab22e00\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-9lrf6" Jan 28 18:36:30 crc kubenswrapper[4721]: I0128 18:36:30.657409 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-48bt8\" (UniqueName: \"kubernetes.io/projected/47acc23c-4409-4e15-a231-5c095917842d-kube-api-access-48bt8\") pod \"openshift-config-operator-7777fb866f-bfsqt\" (UID: \"47acc23c-4409-4e15-a231-5c095917842d\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-bfsqt" Jan 28 18:36:30 crc kubenswrapper[4721]: I0128 18:36:30.679677 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9txr8\" (UniqueName: \"kubernetes.io/projected/592908e7-063e-4a05-8bfa-19d925c28be7-kube-api-access-9txr8\") pod \"openshift-controller-manager-operator-756b6f6bc6-tjj76\" (UID: \"592908e7-063e-4a05-8bfa-19d925c28be7\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-tjj76" Jan 28 18:36:30 crc kubenswrapper[4721]: I0128 18:36:30.723124 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 18:36:30 crc kubenswrapper[4721]: E0128 18:36:30.723298 4721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 18:36:31.223258819 +0000 UTC m=+156.948564379 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 18:36:30 crc kubenswrapper[4721]: I0128 18:36:30.723662 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-b42n2\" (UID: \"6c3bbd44-e901-4e2a-b48c-f1e1ddb965dc\") " pod="openshift-image-registry/image-registry-697d97f7c8-b42n2" Jan 28 18:36:30 crc kubenswrapper[4721]: E0128 18:36:30.723997 4721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 18:36:31.223989272 +0000 UTC m=+156.949294832 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-b42n2" (UID: "6c3bbd44-e901-4e2a-b48c-f1e1ddb965dc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 18:36:30 crc kubenswrapper[4721]: I0128 18:36:30.731185 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9h5lx\" (UniqueName: \"kubernetes.io/projected/26a0a4f9-321f-4196-88ce-888b82380eb6-kube-api-access-9h5lx\") pod \"oauth-openshift-558db77b4-jtc8t\" (UID: \"26a0a4f9-321f-4196-88ce-888b82380eb6\") " pod="openshift-authentication/oauth-openshift-558db77b4-jtc8t" Jan 28 18:36:30 crc kubenswrapper[4721]: I0128 18:36:30.738641 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-bfsqt" Jan 28 18:36:30 crc kubenswrapper[4721]: I0128 18:36:30.741025 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-tjj76" Jan 28 18:36:30 crc kubenswrapper[4721]: I0128 18:36:30.750690 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mhc9s\" (UniqueName: \"kubernetes.io/projected/fd0ecfef-29a6-474c-a266-ed16b5548797-kube-api-access-mhc9s\") pod \"etcd-operator-b45778765-qrt7r\" (UID: \"fd0ecfef-29a6-474c-a266-ed16b5548797\") " pod="openshift-etcd-operator/etcd-operator-b45778765-qrt7r" Jan 28 18:36:30 crc kubenswrapper[4721]: I0128 18:36:30.751531 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/downloads-7954f5f757-cmtm6" Jan 28 18:36:30 crc kubenswrapper[4721]: I0128 18:36:30.765048 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/995bfe33-c190-48b3-bb6c-9c6cb81d8359-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-xwt9s\" (UID: \"995bfe33-c190-48b3-bb6c-9c6cb81d8359\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-xwt9s" Jan 28 18:36:30 crc kubenswrapper[4721]: I0128 18:36:30.770482 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-r4gtf"] Jan 28 18:36:30 crc kubenswrapper[4721]: I0128 18:36:30.780466 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-76z8x" Jan 28 18:36:30 crc kubenswrapper[4721]: I0128 18:36:30.786155 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8wf27\" (UniqueName: \"kubernetes.io/projected/52b4f91f-7c7b-401a-82b0-8907f6880677-kube-api-access-8wf27\") pod \"console-f9d7485db-ct2hz\" (UID: \"52b4f91f-7c7b-401a-82b0-8907f6880677\") " pod="openshift-console/console-f9d7485db-ct2hz" Jan 28 18:36:30 crc kubenswrapper[4721]: I0128 18:36:30.800379 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7dd85c51-680b-4af2-8fac-8b9d94f7f2b6-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-dznkc\" (UID: \"7dd85c51-680b-4af2-8fac-8b9d94f7f2b6\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-dznkc" Jan 28 18:36:30 crc kubenswrapper[4721]: I0128 18:36:30.803883 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-wqwcd" Jan 28 18:36:30 crc kubenswrapper[4721]: I0128 18:36:30.820757 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vbv5f\" (UniqueName: \"kubernetes.io/projected/8508a38e-342a-4dab-956c-cc847d18e6bc-kube-api-access-vbv5f\") pod \"openshift-apiserver-operator-796bbdcf4f-p8lnf\" (UID: \"8508a38e-342a-4dab-956c-cc847d18e6bc\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-p8lnf" Jan 28 18:36:30 crc kubenswrapper[4721]: I0128 18:36:30.825151 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 18:36:30 crc kubenswrapper[4721]: E0128 18:36:30.825739 4721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 18:36:31.325723624 +0000 UTC m=+157.051029184 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 18:36:30 crc kubenswrapper[4721]: I0128 18:36:30.829695 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-jtc8t" Jan 28 18:36:30 crc kubenswrapper[4721]: I0128 18:36:30.834542 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-qrt7r" Jan 28 18:36:30 crc kubenswrapper[4721]: I0128 18:36:30.840678 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xf7g7\" (UniqueName: \"kubernetes.io/projected/4657c92e-5f11-45b4-bf64-91d04c42ace3-kube-api-access-xf7g7\") pod \"machine-config-operator-74547568cd-qqr56\" (UID: \"4657c92e-5f11-45b4-bf64-91d04c42ace3\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-qqr56" Jan 28 18:36:30 crc kubenswrapper[4721]: I0128 18:36:30.842562 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-dznkc" Jan 28 18:36:30 crc kubenswrapper[4721]: I0128 18:36:30.849554 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-xwt9s" Jan 28 18:36:30 crc kubenswrapper[4721]: I0128 18:36:30.866668 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q9mrq\" (UniqueName: \"kubernetes.io/projected/d0a361ba-f31a-477c-a532-136ebf0b025b-kube-api-access-q9mrq\") pod \"olm-operator-6b444d44fb-hp7z2\" (UID: \"d0a361ba-f31a-477c-a532-136ebf0b025b\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-hp7z2" Jan 28 18:36:30 crc kubenswrapper[4721]: I0128 18:36:30.882361 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wn9gb\" (UniqueName: \"kubernetes.io/projected/86346561-5414-4c01-a202-6964f19b52db-kube-api-access-wn9gb\") pod \"ingress-canary-hjbqw\" (UID: \"86346561-5414-4c01-a202-6964f19b52db\") " pod="openshift-ingress-canary/ingress-canary-hjbqw" Jan 28 18:36:30 crc kubenswrapper[4721]: I0128 18:36:30.900273 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-qqr56" Jan 28 18:36:30 crc kubenswrapper[4721]: I0128 18:36:30.900718 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ndcsj\" (UniqueName: \"kubernetes.io/projected/e6508511-52da-41f5-a939-98342be6441e-kube-api-access-ndcsj\") pod \"machine-config-server-x9nr7\" (UID: \"e6508511-52da-41f5-a939-98342be6441e\") " pod="openshift-machine-config-operator/machine-config-server-x9nr7" Jan 28 18:36:30 crc kubenswrapper[4721]: I0128 18:36:30.915379 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h7t5l\" (UniqueName: \"kubernetes.io/projected/6e54e2fb-d821-4c19-a076-c47b738d1a48-kube-api-access-h7t5l\") pod \"machine-config-controller-84d6567774-hw64n\" (UID: \"6e54e2fb-d821-4c19-a076-c47b738d1a48\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-hw64n" Jan 28 18:36:30 crc kubenswrapper[4721]: I0128 18:36:30.919107 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-p8lnf" Jan 28 18:36:30 crc kubenswrapper[4721]: I0128 18:36:30.935597 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-hp7z2" Jan 28 18:36:30 crc kubenswrapper[4721]: I0128 18:36:30.935997 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-ms7xm"] Jan 28 18:36:30 crc kubenswrapper[4721]: I0128 18:36:30.936538 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-b42n2\" (UID: \"6c3bbd44-e901-4e2a-b48c-f1e1ddb965dc\") " pod="openshift-image-registry/image-registry-697d97f7c8-b42n2" Jan 28 18:36:30 crc kubenswrapper[4721]: E0128 18:36:30.936969 4721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 18:36:31.436955857 +0000 UTC m=+157.162261427 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-b42n2" (UID: "6c3bbd44-e901-4e2a-b48c-f1e1ddb965dc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 18:36:30 crc kubenswrapper[4721]: I0128 18:36:30.943292 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mbdrw\" (UniqueName: \"kubernetes.io/projected/5f55d8f7-6300-4d3c-8cdf-d4e2d106fa5f-kube-api-access-mbdrw\") pod \"packageserver-d55dfcdfc-blkkj\" (UID: \"5f55d8f7-6300-4d3c-8cdf-d4e2d106fa5f\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-blkkj" Jan 28 18:36:30 crc kubenswrapper[4721]: I0128 18:36:30.956074 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v2k9d\" (UniqueName: \"kubernetes.io/projected/a75256c5-8c48-43f3-9faf-15d661a26980-kube-api-access-v2k9d\") pod \"migrator-59844c95c7-gdtgs\" (UID: \"a75256c5-8c48-43f3-9faf-15d661a26980\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-gdtgs" Jan 28 18:36:30 crc kubenswrapper[4721]: I0128 18:36:30.958409 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-blkkj" Jan 28 18:36:30 crc kubenswrapper[4721]: I0128 18:36:30.981325 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7x9n7\" (UniqueName: \"kubernetes.io/projected/77dd4c6e-7dd3-4378-be3f-74f0c43fb371-kube-api-access-7x9n7\") pod \"multus-admission-controller-857f4d67dd-s7d98\" (UID: \"77dd4c6e-7dd3-4378-be3f-74f0c43fb371\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-s7d98" Jan 28 18:36:30 crc kubenswrapper[4721]: I0128 18:36:30.990562 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-4x6c5"] Jan 28 18:36:31 crc kubenswrapper[4721]: I0128 18:36:31.003014 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-x9nr7" Jan 28 18:36:31 crc kubenswrapper[4721]: I0128 18:36:31.005294 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/519e974b-b132-4b21-a47d-759e40bdbc72-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-86g2n\" (UID: \"519e974b-b132-4b21-a47d-759e40bdbc72\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-86g2n" Jan 28 18:36:31 crc kubenswrapper[4721]: I0128 18:36:31.018443 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-hjbqw" Jan 28 18:36:31 crc kubenswrapper[4721]: I0128 18:36:31.038599 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 18:36:31 crc kubenswrapper[4721]: E0128 18:36:31.038907 4721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 18:36:31.538877865 +0000 UTC m=+157.264183435 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 18:36:31 crc kubenswrapper[4721]: I0128 18:36:31.046603 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-b42n2\" (UID: \"6c3bbd44-e901-4e2a-b48c-f1e1ddb965dc\") " pod="openshift-image-registry/image-registry-697d97f7c8-b42n2" Jan 28 18:36:31 crc kubenswrapper[4721]: E0128 18:36:31.047004 4721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 18:36:31.546992623 +0000 UTC m=+157.272298183 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-b42n2" (UID: "6c3bbd44-e901-4e2a-b48c-f1e1ddb965dc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 18:36:31 crc kubenswrapper[4721]: I0128 18:36:31.055121 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2nch6\" (UniqueName: \"kubernetes.io/projected/8355e616-674b-4bc2-a727-76609df63630-kube-api-access-2nch6\") pod \"control-plane-machine-set-operator-78cbb6b69f-jrjx5\" (UID: \"8355e616-674b-4bc2-a727-76609df63630\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-jrjx5" Jan 28 18:36:31 crc kubenswrapper[4721]: I0128 18:36:31.057602 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lvdkn\" (UniqueName: \"kubernetes.io/projected/70ba75a9-4e0e-4fb2-9986-030f8a02d39c-kube-api-access-lvdkn\") pod \"catalog-operator-68c6474976-4nmsk\" (UID: \"70ba75a9-4e0e-4fb2-9986-030f8a02d39c\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-4nmsk" Jan 28 18:36:31 crc kubenswrapper[4721]: I0128 18:36:31.082137 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fsnng\" (UniqueName: \"kubernetes.io/projected/0db061c0-7df0-4ca1-a388-c69dd9344b9c-kube-api-access-fsnng\") pod \"kube-storage-version-migrator-operator-b67b599dd-dkq9z\" (UID: \"0db061c0-7df0-4ca1-a388-c69dd9344b9c\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-dkq9z" Jan 28 18:36:31 crc kubenswrapper[4721]: I0128 18:36:31.085369 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8bkwq\" (UniqueName: \"kubernetes.io/projected/36392dfb-bda3-46da-b8ba-ebc27ab22e00-kube-api-access-8bkwq\") pod \"package-server-manager-789f6589d5-9lrf6\" (UID: \"36392dfb-bda3-46da-b8ba-ebc27ab22e00\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-9lrf6" Jan 28 18:36:31 crc kubenswrapper[4721]: I0128 18:36:31.086071 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-ct2hz" Jan 28 18:36:31 crc kubenswrapper[4721]: I0128 18:36:31.129921 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5d64p\" (UniqueName: \"kubernetes.io/projected/eaf91cbf-9c86-4235-89c5-3ba0c1ea7d6f-kube-api-access-5d64p\") pod \"ingress-operator-5b745b69d9-96x8n\" (UID: \"eaf91cbf-9c86-4235-89c5-3ba0c1ea7d6f\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-96x8n" Jan 28 18:36:31 crc kubenswrapper[4721]: I0128 18:36:31.137705 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mzs64\" (UniqueName: \"kubernetes.io/projected/d8a50493-8d3e-4391-ad78-0bd93ce8157e-kube-api-access-mzs64\") pod \"service-ca-operator-777779d784-2k27q\" (UID: \"d8a50493-8d3e-4391-ad78-0bd93ce8157e\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-2k27q" Jan 28 18:36:31 crc kubenswrapper[4721]: I0128 18:36:31.139284 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dcv5n\" (UniqueName: \"kubernetes.io/projected/db74784c-afbc-482a-8e2d-18c5bb898a9b-kube-api-access-dcv5n\") pod \"collect-profiles-29493750-pb8r2\" (UID: \"db74784c-afbc-482a-8e2d-18c5bb898a9b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493750-pb8r2" Jan 28 18:36:31 crc kubenswrapper[4721]: I0128 18:36:31.147827 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 18:36:31 crc kubenswrapper[4721]: E0128 18:36:31.148280 4721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 18:36:31.648264682 +0000 UTC m=+157.373570242 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 18:36:31 crc kubenswrapper[4721]: I0128 18:36:31.156638 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9x5w6\" (UniqueName: \"kubernetes.io/projected/12a4be20-2607-4502-b20d-b579c9987b57-kube-api-access-9x5w6\") pod \"marketplace-operator-79b997595-qp2vg\" (UID: \"12a4be20-2607-4502-b20d-b579c9987b57\") " pod="openshift-marketplace/marketplace-operator-79b997595-qp2vg" Jan 28 18:36:31 crc kubenswrapper[4721]: W0128 18:36:31.166368 4721 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5081acb0_d928_4278_8d1f_207f7c3c3289.slice/crio-ad2b6f3596ea4659a6d68d5d3f3542e8e076b951672491d551980160b5102ee4 WatchSource:0}: Error finding container ad2b6f3596ea4659a6d68d5d3f3542e8e076b951672491d551980160b5102ee4: Status 404 returned error can't find the container with id ad2b6f3596ea4659a6d68d5d3f3542e8e076b951672491d551980160b5102ee4 Jan 28 18:36:31 crc kubenswrapper[4721]: I0128 18:36:31.190178 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-dkq9z" Jan 28 18:36:31 crc kubenswrapper[4721]: I0128 18:36:31.191555 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-gdtgs" Jan 28 18:36:31 crc kubenswrapper[4721]: I0128 18:36:31.191982 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-86g2n" Jan 28 18:36:31 crc kubenswrapper[4721]: I0128 18:36:31.192300 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-tjj76"] Jan 28 18:36:31 crc kubenswrapper[4721]: I0128 18:36:31.194083 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/eaf91cbf-9c86-4235-89c5-3ba0c1ea7d6f-bound-sa-token\") pod \"ingress-operator-5b745b69d9-96x8n\" (UID: \"eaf91cbf-9c86-4235-89c5-3ba0c1ea7d6f\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-96x8n" Jan 28 18:36:31 crc kubenswrapper[4721]: I0128 18:36:31.208114 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-jrjx5" Jan 28 18:36:31 crc kubenswrapper[4721]: I0128 18:36:31.209972 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vpvgf\" (UniqueName: \"kubernetes.io/projected/f8ec1447-58ab-4c73-bc49-0da5b940c6cf-kube-api-access-vpvgf\") pod \"cluster-samples-operator-665b6dd947-4qhmh\" (UID: \"f8ec1447-58ab-4c73-bc49-0da5b940c6cf\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-4qhmh" Jan 28 18:36:31 crc kubenswrapper[4721]: I0128 18:36:31.216464 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-hw64n" Jan 28 18:36:31 crc kubenswrapper[4721]: I0128 18:36:31.231041 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rk7q8\" (UniqueName: \"kubernetes.io/projected/4f8f829a-0e7b-4ad6-9dc4-ce845d2e9d26-kube-api-access-rk7q8\") pod \"service-ca-9c57cc56f-7b9dz\" (UID: \"4f8f829a-0e7b-4ad6-9dc4-ce845d2e9d26\") " pod="openshift-service-ca/service-ca-9c57cc56f-7b9dz" Jan 28 18:36:31 crc kubenswrapper[4721]: I0128 18:36:31.231314 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-4nmsk" Jan 28 18:36:31 crc kubenswrapper[4721]: I0128 18:36:31.231344 4721 patch_prober.go:28] interesting pod/machine-config-daemon-76rx2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 18:36:31 crc kubenswrapper[4721]: I0128 18:36:31.231408 4721 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-76rx2" podUID="6e3427a4-9a03-4a08-bf7f-7a5e96290ad6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 18:36:31 crc kubenswrapper[4721]: I0128 18:36:31.239358 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-9lrf6" Jan 28 18:36:31 crc kubenswrapper[4721]: I0128 18:36:31.246312 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-s7d98" Jan 28 18:36:31 crc kubenswrapper[4721]: I0128 18:36:31.248316 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mzfxh\" (UniqueName: \"kubernetes.io/projected/a776531f-ebd1-491e-b6d7-378a11aad9d8-kube-api-access-mzfxh\") pod \"csi-hostpathplugin-xtdkt\" (UID: \"a776531f-ebd1-491e-b6d7-378a11aad9d8\") " pod="hostpath-provisioner/csi-hostpathplugin-xtdkt" Jan 28 18:36:31 crc kubenswrapper[4721]: I0128 18:36:31.249510 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-b42n2\" (UID: \"6c3bbd44-e901-4e2a-b48c-f1e1ddb965dc\") " pod="openshift-image-registry/image-registry-697d97f7c8-b42n2" Jan 28 18:36:31 crc kubenswrapper[4721]: E0128 18:36:31.250063 4721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 18:36:31.750041256 +0000 UTC m=+157.475346816 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-b42n2" (UID: "6c3bbd44-e901-4e2a-b48c-f1e1ddb965dc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 18:36:31 crc kubenswrapper[4721]: I0128 18:36:31.262370 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-7b9dz" Jan 28 18:36:31 crc kubenswrapper[4721]: I0128 18:36:31.268060 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xqwfz\" (UniqueName: \"kubernetes.io/projected/a58cf121-cb7e-4eb7-9634-b72173bfa945-kube-api-access-xqwfz\") pod \"dns-default-l59vq\" (UID: \"a58cf121-cb7e-4eb7-9634-b72173bfa945\") " pod="openshift-dns/dns-default-l59vq" Jan 28 18:36:31 crc kubenswrapper[4721]: I0128 18:36:31.274731 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29493750-pb8r2" Jan 28 18:36:31 crc kubenswrapper[4721]: I0128 18:36:31.276226 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-2k27q" Jan 28 18:36:31 crc kubenswrapper[4721]: I0128 18:36:31.278356 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-wr282" Jan 28 18:36:31 crc kubenswrapper[4721]: I0128 18:36:31.288431 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-qp2vg" Jan 28 18:36:31 crc kubenswrapper[4721]: I0128 18:36:31.293694 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-76z8x"] Jan 28 18:36:31 crc kubenswrapper[4721]: I0128 18:36:31.294410 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-l59vq" Jan 28 18:36:31 crc kubenswrapper[4721]: I0128 18:36:31.310721 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-wqwcd" event={"ID":"9260fa7e-9c98-4777-9625-3ac5501c883c","Type":"ContainerStarted","Data":"cb46b939adecd8713bf00db230ccb6bdb1df497834ce7f97b5ac4ed63c62f810"} Jan 28 18:36:31 crc kubenswrapper[4721]: I0128 18:36:31.311922 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-ms7xm" event={"ID":"6e9fcebd-ee55-462a-ab16-b16840c83b25","Type":"ContainerStarted","Data":"210a162e1e6558d9b7d5a462840ed1e6667d1dc1d79a867923e67636225a8632"} Jan 28 18:36:31 crc kubenswrapper[4721]: I0128 18:36:31.314938 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-8c299" event={"ID":"597a1c26-12f4-401b-bd2b-1842722282f2","Type":"ContainerStarted","Data":"1b3d96aec2386cbbc69203fddd4fa9dfb66a298d49665031fc6528cdc414aa21"} Jan 28 18:36:31 crc kubenswrapper[4721]: I0128 18:36:31.314966 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-8c299" event={"ID":"597a1c26-12f4-401b-bd2b-1842722282f2","Type":"ContainerStarted","Data":"36f513e6ed1631a2c1bcf518baeeb33f1571254feaf48295753e93aec9b551e8"} Jan 28 18:36:31 crc kubenswrapper[4721]: I0128 18:36:31.334053 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-xtdkt" Jan 28 18:36:31 crc kubenswrapper[4721]: I0128 18:36:31.340097 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-r4gtf" event={"ID":"a1242828-5fbb-4f54-a17b-cb26ab9dbec8","Type":"ContainerStarted","Data":"d0fed2e8792c53016e098f62a764aee70f9e8f17fa6486aa0e85457e610247ab"} Jan 28 18:36:31 crc kubenswrapper[4721]: I0128 18:36:31.355032 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 18:36:31 crc kubenswrapper[4721]: E0128 18:36:31.355262 4721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 18:36:31.855237094 +0000 UTC m=+157.580542654 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 18:36:31 crc kubenswrapper[4721]: I0128 18:36:31.355725 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-b42n2\" (UID: \"6c3bbd44-e901-4e2a-b48c-f1e1ddb965dc\") " pod="openshift-image-registry/image-registry-697d97f7c8-b42n2" Jan 28 18:36:31 crc kubenswrapper[4721]: E0128 18:36:31.357549 4721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 18:36:31.857529834 +0000 UTC m=+157.582835584 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-b42n2" (UID: "6c3bbd44-e901-4e2a-b48c-f1e1ddb965dc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 18:36:31 crc kubenswrapper[4721]: I0128 18:36:31.358037 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-4x6c5" event={"ID":"5081acb0-d928-4278-8d1f-207f7c3c3289","Type":"ContainerStarted","Data":"ad2b6f3596ea4659a6d68d5d3f3542e8e076b951672491d551980160b5102ee4"} Jan 28 18:36:31 crc kubenswrapper[4721]: I0128 18:36:31.364895 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-8kxsr" event={"ID":"b56b3f24-3ef6-4506-ad1f-9498398f474f","Type":"ContainerStarted","Data":"453d580e0a8eaf03348e7ed1dc2f0c306cbd0633cabe0c5d85b6f14a8b430c1b"} Jan 28 18:36:31 crc kubenswrapper[4721]: I0128 18:36:31.375002 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-c9dk6" Jan 28 18:36:31 crc kubenswrapper[4721]: I0128 18:36:31.380395 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-6n8x8" Jan 28 18:36:31 crc kubenswrapper[4721]: I0128 18:36:31.461277 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 18:36:31 crc kubenswrapper[4721]: E0128 18:36:31.461402 4721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-28 18:36:31.961360191 +0000 UTC m=+157.686665751 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 18:36:31 crc kubenswrapper[4721]: I0128 18:36:31.461884 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-b42n2\" (UID: \"6c3bbd44-e901-4e2a-b48c-f1e1ddb965dc\") " pod="openshift-image-registry/image-registry-697d97f7c8-b42n2" Jan 28 18:36:31 crc kubenswrapper[4721]: I0128 18:36:31.461518 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-4qhmh" Jan 28 18:36:31 crc kubenswrapper[4721]: E0128 18:36:31.463966 4721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 18:36:31.96394575 +0000 UTC m=+157.689251500 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-b42n2" (UID: "6c3bbd44-e901-4e2a-b48c-f1e1ddb965dc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 18:36:31 crc kubenswrapper[4721]: I0128 18:36:31.470952 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-96x8n" Jan 28 18:36:31 crc kubenswrapper[4721]: I0128 18:36:31.562884 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 18:36:31 crc kubenswrapper[4721]: E0128 18:36:31.563270 4721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 18:36:32.063251229 +0000 UTC m=+157.788556799 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 18:36:31 crc kubenswrapper[4721]: I0128 18:36:31.564593 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-b42n2\" (UID: \"6c3bbd44-e901-4e2a-b48c-f1e1ddb965dc\") " pod="openshift-image-registry/image-registry-697d97f7c8-b42n2" Jan 28 18:36:31 crc kubenswrapper[4721]: E0128 18:36:31.564914 4721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 18:36:32.064906079 +0000 UTC m=+157.790211639 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-b42n2" (UID: "6c3bbd44-e901-4e2a-b48c-f1e1ddb965dc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 18:36:31 crc kubenswrapper[4721]: I0128 18:36:31.630028 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-bfsqt"] Jan 28 18:36:31 crc kubenswrapper[4721]: I0128 18:36:31.667804 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 18:36:31 crc kubenswrapper[4721]: E0128 18:36:31.668099 4721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 18:36:32.168071935 +0000 UTC m=+157.893377545 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 18:36:31 crc kubenswrapper[4721]: I0128 18:36:31.668343 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-b42n2\" (UID: \"6c3bbd44-e901-4e2a-b48c-f1e1ddb965dc\") " pod="openshift-image-registry/image-registry-697d97f7c8-b42n2" Jan 28 18:36:31 crc kubenswrapper[4721]: E0128 18:36:31.668669 4721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 18:36:32.168662163 +0000 UTC m=+157.893967723 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-b42n2" (UID: "6c3bbd44-e901-4e2a-b48c-f1e1ddb965dc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 18:36:31 crc kubenswrapper[4721]: I0128 18:36:31.772438 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-cmtm6"] Jan 28 18:36:31 crc kubenswrapper[4721]: I0128 18:36:31.772759 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-qrt7r"] Jan 28 18:36:31 crc kubenswrapper[4721]: I0128 18:36:31.772709 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 18:36:31 crc kubenswrapper[4721]: E0128 18:36:31.772786 4721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 18:36:32.272767289 +0000 UTC m=+157.998072859 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 18:36:31 crc kubenswrapper[4721]: I0128 18:36:31.773234 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-b42n2\" (UID: \"6c3bbd44-e901-4e2a-b48c-f1e1ddb965dc\") " pod="openshift-image-registry/image-registry-697d97f7c8-b42n2" Jan 28 18:36:31 crc kubenswrapper[4721]: E0128 18:36:31.773543 4721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 18:36:32.273532482 +0000 UTC m=+157.998838042 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-b42n2" (UID: "6c3bbd44-e901-4e2a-b48c-f1e1ddb965dc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 18:36:31 crc kubenswrapper[4721]: I0128 18:36:31.876736 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 18:36:31 crc kubenswrapper[4721]: E0128 18:36:31.877337 4721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 18:36:32.377317438 +0000 UTC m=+158.102622998 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 18:36:31 crc kubenswrapper[4721]: I0128 18:36:31.904966 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-qqr56"] Jan 28 18:36:31 crc kubenswrapper[4721]: I0128 18:36:31.950077 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-p8lnf"] Jan 28 18:36:31 crc kubenswrapper[4721]: I0128 18:36:31.958529 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-6n8x8" podStartSLOduration=121.958512832 podStartE2EDuration="2m1.958512832s" podCreationTimestamp="2026-01-28 18:34:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:36:31.955765788 +0000 UTC m=+157.681071358" watchObservedRunningTime="2026-01-28 18:36:31.958512832 +0000 UTC m=+157.683818392" Jan 28 18:36:31 crc kubenswrapper[4721]: I0128 18:36:31.963245 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-jtc8t"] Jan 28 18:36:31 crc kubenswrapper[4721]: I0128 18:36:31.981857 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-b42n2\" (UID: \"6c3bbd44-e901-4e2a-b48c-f1e1ddb965dc\") " pod="openshift-image-registry/image-registry-697d97f7c8-b42n2" Jan 28 18:36:31 crc kubenswrapper[4721]: E0128 18:36:31.982314 4721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 18:36:32.482299899 +0000 UTC m=+158.207605459 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-b42n2" (UID: "6c3bbd44-e901-4e2a-b48c-f1e1ddb965dc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 18:36:32 crc kubenswrapper[4721]: I0128 18:36:32.083776 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 18:36:32 crc kubenswrapper[4721]: E0128 18:36:32.084145 4721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 18:36:32.584121725 +0000 UTC m=+158.309427285 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 18:36:32 crc kubenswrapper[4721]: I0128 18:36:32.094585 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-dznkc"] Jan 28 18:36:32 crc kubenswrapper[4721]: I0128 18:36:32.193637 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-b42n2\" (UID: \"6c3bbd44-e901-4e2a-b48c-f1e1ddb965dc\") " pod="openshift-image-registry/image-registry-697d97f7c8-b42n2" Jan 28 18:36:32 crc kubenswrapper[4721]: E0128 18:36:32.194394 4721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 18:36:32.694382408 +0000 UTC m=+158.419687968 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-b42n2" (UID: "6c3bbd44-e901-4e2a-b48c-f1e1ddb965dc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 18:36:32 crc kubenswrapper[4721]: I0128 18:36:32.195646 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-74cdf" podStartSLOduration=122.195626616 podStartE2EDuration="2m2.195626616s" podCreationTimestamp="2026-01-28 18:34:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:36:32.192973465 +0000 UTC m=+157.918279055" watchObservedRunningTime="2026-01-28 18:36:32.195626616 +0000 UTC m=+157.920932176" Jan 28 18:36:32 crc kubenswrapper[4721]: I0128 18:36:32.305476 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 18:36:32 crc kubenswrapper[4721]: E0128 18:36:32.306116 4721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 18:36:32.806097606 +0000 UTC m=+158.531403176 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 18:36:32 crc kubenswrapper[4721]: I0128 18:36:32.372433 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-g474w" podStartSLOduration=122.372414125 podStartE2EDuration="2m2.372414125s" podCreationTimestamp="2026-01-28 18:34:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:36:32.370452125 +0000 UTC m=+158.095757685" watchObservedRunningTime="2026-01-28 18:36:32.372414125 +0000 UTC m=+158.097719685" Jan 28 18:36:32 crc kubenswrapper[4721]: I0128 18:36:32.409512 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 18:36:32 crc kubenswrapper[4721]: I0128 18:36:32.409579 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 18:36:32 crc kubenswrapper[4721]: I0128 18:36:32.409606 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 18:36:32 crc kubenswrapper[4721]: I0128 18:36:32.409643 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-b42n2\" (UID: \"6c3bbd44-e901-4e2a-b48c-f1e1ddb965dc\") " pod="openshift-image-registry/image-registry-697d97f7c8-b42n2" Jan 28 18:36:32 crc kubenswrapper[4721]: I0128 18:36:32.409705 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 18:36:32 crc kubenswrapper[4721]: E0128 18:36:32.420541 4721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-01-28 18:36:32.920522567 +0000 UTC m=+158.645828137 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-b42n2" (UID: "6c3bbd44-e901-4e2a-b48c-f1e1ddb965dc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 18:36:32 crc kubenswrapper[4721]: I0128 18:36:32.421225 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 18:36:32 crc kubenswrapper[4721]: I0128 18:36:32.441448 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-tjj76" event={"ID":"592908e7-063e-4a05-8bfa-19d925c28be7","Type":"ContainerStarted","Data":"9a9be666355467537d4d1af027ecbe41af83a1eaff5caa4ae185899969bb2ee0"} Jan 28 18:36:32 crc kubenswrapper[4721]: I0128 18:36:32.479370 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 18:36:32 crc kubenswrapper[4721]: I0128 18:36:32.479635 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-76z8x" event={"ID":"6e7be82c-acf6-4120-8f43-221b6ef958c8","Type":"ContainerStarted","Data":"017478958b6687a19e57b01a15a9795fb64092bf6fe4cbb79aea5a93d2198bc7"} Jan 28 18:36:32 crc kubenswrapper[4721]: I0128 18:36:32.480931 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 18:36:32 crc kubenswrapper[4721]: I0128 18:36:32.484223 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-hjbqw"] Jan 28 18:36:32 crc kubenswrapper[4721]: I0128 18:36:32.484785 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 18:36:32 crc kubenswrapper[4721]: I0128 18:36:32.491200 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-jtc8t" event={"ID":"26a0a4f9-321f-4196-88ce-888b82380eb6","Type":"ContainerStarted","Data":"af11226bbc0d918756d95ad6640ed161c6108d5c056105252954bb45309bf350"} Jan 28 18:36:32 crc kubenswrapper[4721]: I0128 
18:36:32.496273 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-p8lnf" event={"ID":"8508a38e-342a-4dab-956c-cc847d18e6bc","Type":"ContainerStarted","Data":"7c5aae6aae210210c284c8934d9a21434b354570c2906f6597212b21a3436e03"} Jan 28 18:36:32 crc kubenswrapper[4721]: I0128 18:36:32.511771 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 18:36:32 crc kubenswrapper[4721]: E0128 18:36:32.512568 4721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 18:36:33.012547592 +0000 UTC m=+158.737853152 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 18:36:32 crc kubenswrapper[4721]: I0128 18:36:32.535365 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-dznkc" event={"ID":"7dd85c51-680b-4af2-8fac-8b9d94f7f2b6","Type":"ContainerStarted","Data":"da64ef9a915e8c82c1d431f501953a40f2a4ec7b4308b181d8698c982f80d633"} Jan 28 18:36:32 crc kubenswrapper[4721]: I0128 18:36:32.543969 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-ct2hz"] Jan 28 18:36:32 crc kubenswrapper[4721]: I0128 18:36:32.559673 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 18:36:32 crc kubenswrapper[4721]: I0128 18:36:32.627265 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-c9dk6" podStartSLOduration=122.624542299 podStartE2EDuration="2m2.624542299s" podCreationTimestamp="2026-01-28 18:34:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:36:32.6216449 +0000 UTC m=+158.346950460" watchObservedRunningTime="2026-01-28 18:36:32.624542299 +0000 UTC m=+158.349847859" Jan 28 18:36:32 crc kubenswrapper[4721]: I0128 18:36:32.630786 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-b42n2\" (UID: \"6c3bbd44-e901-4e2a-b48c-f1e1ddb965dc\") " pod="openshift-image-registry/image-registry-697d97f7c8-b42n2" Jan 28 18:36:32 crc kubenswrapper[4721]: E0128 18:36:32.632456 4721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 18:36:33.13243549 +0000 UTC m=+158.857741060 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-b42n2" (UID: "6c3bbd44-e901-4e2a-b48c-f1e1ddb965dc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 18:36:32 crc kubenswrapper[4721]: I0128 18:36:32.645535 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 18:36:32 crc kubenswrapper[4721]: I0128 18:36:32.656461 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 18:36:32 crc kubenswrapper[4721]: I0128 18:36:32.657618 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-qrt7r" event={"ID":"fd0ecfef-29a6-474c-a266-ed16b5548797","Type":"ContainerStarted","Data":"c387a40b300d492fab33a3cc4119e8b3719bfde288497b302b7ae3ed74b12cb4"} Jan 28 18:36:32 crc kubenswrapper[4721]: I0128 18:36:32.670141 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-x9nr7" event={"ID":"e6508511-52da-41f5-a939-98342be6441e","Type":"ContainerStarted","Data":"115dceb4cca705bc9139abb71de2eb43967f91a35efefcd15d244c591d31b3ea"} Jan 28 18:36:32 crc kubenswrapper[4721]: I0128 18:36:32.704971 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-wqwcd" event={"ID":"9260fa7e-9c98-4777-9625-3ac5501c883c","Type":"ContainerStarted","Data":"9d32d03c693799f938cc11304b24d59b40d9bbfc6da6575f2a4cf8352e941871"} Jan 28 18:36:32 crc kubenswrapper[4721]: I0128 18:36:32.712256 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-cmtm6" event={"ID":"b30c15c2-ac57-4e56-a55b-5b9de02e097f","Type":"ContainerStarted","Data":"2c62fc0746961f66c381eeb24fd0bb37f9de8df6b39a99a18713232d6f8908d1"} Jan 28 18:36:32 crc kubenswrapper[4721]: I0128 18:36:32.733726 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 18:36:32 crc kubenswrapper[4721]: E0128 18:36:32.734029 4721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 18:36:33.234014268 +0000 UTC m=+158.959319828 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 18:36:32 crc kubenswrapper[4721]: I0128 18:36:32.734149 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-bfsqt" event={"ID":"47acc23c-4409-4e15-a231-5c095917842d","Type":"ContainerStarted","Data":"a20009833b617810859bb1a178fb0e3824e0d8db091c4c3e28bdfea3db68a73d"} Jan 28 18:36:32 crc kubenswrapper[4721]: I0128 18:36:32.753573 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-r4gtf" event={"ID":"a1242828-5fbb-4f54-a17b-cb26ab9dbec8","Type":"ContainerStarted","Data":"63ff587775ede2fdf9d46650536ac7e0416fa3e23e97bf09bf708a09a8fb1475"} Jan 28 18:36:32 crc kubenswrapper[4721]: I0128 18:36:32.763486 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-qqr56" event={"ID":"4657c92e-5f11-45b4-bf64-91d04c42ace3","Type":"ContainerStarted","Data":"99e363199e3085f886cd04572d144a052ac71ec3fca983c70357c29e7db40bc2"} Jan 28 18:36:32 crc kubenswrapper[4721]: I0128 18:36:32.775889 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-8kxsr" podStartSLOduration=122.775866759 podStartE2EDuration="2m2.775866759s" podCreationTimestamp="2026-01-28 18:34:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:36:32.775558729 +0000 UTC m=+158.500864289" watchObservedRunningTime="2026-01-28 18:36:32.775866759 +0000 UTC m=+158.501172319" Jan 28 18:36:32 crc kubenswrapper[4721]: I0128 18:36:32.808346 4721 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-wqwcd" Jan 28 18:36:32 crc kubenswrapper[4721]: I0128 18:36:32.837406 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-b42n2\" (UID: \"6c3bbd44-e901-4e2a-b48c-f1e1ddb965dc\") " pod="openshift-image-registry/image-registry-697d97f7c8-b42n2" Jan 28 18:36:32 crc kubenswrapper[4721]: E0128 18:36:32.840673 4721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 18:36:33.340659861 +0000 UTC m=+159.065965421 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-b42n2" (UID: "6c3bbd44-e901-4e2a-b48c-f1e1ddb965dc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 18:36:32 crc kubenswrapper[4721]: I0128 18:36:32.939001 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 18:36:32 crc kubenswrapper[4721]: E0128 18:36:32.939501 4721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 18:36:33.439441503 +0000 UTC m=+159.164747063 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 18:36:32 crc kubenswrapper[4721]: I0128 18:36:32.939938 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-b42n2\" (UID: \"6c3bbd44-e901-4e2a-b48c-f1e1ddb965dc\") " pod="openshift-image-registry/image-registry-697d97f7c8-b42n2" Jan 28 18:36:32 crc kubenswrapper[4721]: E0128 18:36:32.941364 4721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 18:36:33.441345532 +0000 UTC m=+159.166651092 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-b42n2" (UID: "6c3bbd44-e901-4e2a-b48c-f1e1ddb965dc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 18:36:33 crc kubenswrapper[4721]: I0128 18:36:33.014487 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-r4gtf" podStartSLOduration=123.014469638 podStartE2EDuration="2m3.014469638s" podCreationTimestamp="2026-01-28 18:34:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:36:32.970147862 +0000 UTC m=+158.695453422" watchObservedRunningTime="2026-01-28 18:36:33.014469638 +0000 UTC m=+158.739775198" Jan 28 18:36:33 crc kubenswrapper[4721]: I0128 18:36:33.041733 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 18:36:33 crc kubenswrapper[4721]: E0128 18:36:33.042099 4721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 18:36:33.542082334 +0000 UTC m=+159.267387894 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 18:36:33 crc kubenswrapper[4721]: I0128 18:36:33.094985 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-wqwcd" podStartSLOduration=123.094961911 podStartE2EDuration="2m3.094961911s" podCreationTimestamp="2026-01-28 18:34:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:36:33.016105639 +0000 UTC m=+158.741411199" watchObservedRunningTime="2026-01-28 18:36:33.094961911 +0000 UTC m=+158.820267471" Jan 28 18:36:33 crc kubenswrapper[4721]: I0128 18:36:33.113764 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-ms7xm" podStartSLOduration=123.113740456 podStartE2EDuration="2m3.113740456s" podCreationTimestamp="2026-01-28 18:34:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:36:33.093108354 +0000 UTC m=+158.818413914" watchObservedRunningTime="2026-01-28 18:36:33.113740456 +0000 UTC m=+158.839046016" Jan 28 18:36:33 crc kubenswrapper[4721]: I0128 18:36:33.145093 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-b42n2\" (UID: \"6c3bbd44-e901-4e2a-b48c-f1e1ddb965dc\") " pod="openshift-image-registry/image-registry-697d97f7c8-b42n2" Jan 28 18:36:33 crc kubenswrapper[4721]: E0128 18:36:33.145431 4721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 18:36:33.645420145 +0000 UTC m=+159.370725705 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-b42n2" (UID: "6c3bbd44-e901-4e2a-b48c-f1e1ddb965dc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 18:36:33 crc kubenswrapper[4721]: I0128 18:36:33.252329 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 18:36:33 crc kubenswrapper[4721]: E0128 18:36:33.252860 4721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 18:36:33.752831051 +0000 UTC m=+159.478136611 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 18:36:33 crc kubenswrapper[4721]: I0128 18:36:33.354267 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-b42n2\" (UID: \"6c3bbd44-e901-4e2a-b48c-f1e1ddb965dc\") " pod="openshift-image-registry/image-registry-697d97f7c8-b42n2" Jan 28 18:36:33 crc kubenswrapper[4721]: E0128 18:36:33.354909 4721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 18:36:33.854897184 +0000 UTC m=+159.580202744 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-b42n2" (UID: "6c3bbd44-e901-4e2a-b48c-f1e1ddb965dc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 18:36:33 crc kubenswrapper[4721]: I0128 18:36:33.355124 4721 patch_prober.go:28] interesting pod/router-default-5444994796-wqwcd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 28 18:36:33 crc kubenswrapper[4721]: [-]has-synced failed: reason withheld Jan 28 18:36:33 crc kubenswrapper[4721]: [+]process-running ok Jan 28 18:36:33 crc kubenswrapper[4721]: healthz check failed Jan 28 18:36:33 crc kubenswrapper[4721]: I0128 18:36:33.355146 4721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-wqwcd" podUID="9260fa7e-9c98-4777-9625-3ac5501c883c" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 28 18:36:33 crc kubenswrapper[4721]: I0128 18:36:33.459776 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 18:36:33 crc kubenswrapper[4721]: E0128 18:36:33.460409 4721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 18:36:33.960389761 +0000 UTC m=+159.685695321 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 18:36:33 crc kubenswrapper[4721]: I0128 18:36:33.563449 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-b42n2\" (UID: \"6c3bbd44-e901-4e2a-b48c-f1e1ddb965dc\") " pod="openshift-image-registry/image-registry-697d97f7c8-b42n2" Jan 28 18:36:33 crc kubenswrapper[4721]: E0128 18:36:33.565241 4721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 18:36:34.065221689 +0000 UTC m=+159.790527249 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-b42n2" (UID: "6c3bbd44-e901-4e2a-b48c-f1e1ddb965dc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 18:36:33 crc kubenswrapper[4721]: I0128 18:36:33.665433 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 18:36:33 crc kubenswrapper[4721]: E0128 18:36:33.666702 4721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 18:36:34.166681703 +0000 UTC m=+159.891987263 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 18:36:33 crc kubenswrapper[4721]: I0128 18:36:33.768026 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-b42n2\" (UID: \"6c3bbd44-e901-4e2a-b48c-f1e1ddb965dc\") " pod="openshift-image-registry/image-registry-697d97f7c8-b42n2" Jan 28 18:36:33 crc kubenswrapper[4721]: E0128 18:36:33.768466 4721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 18:36:34.268448266 +0000 UTC m=+159.993753826 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-b42n2" (UID: "6c3bbd44-e901-4e2a-b48c-f1e1ddb965dc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 18:36:33 crc kubenswrapper[4721]: I0128 18:36:33.778959 4721 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-74cdf" Jan 28 18:36:33 crc kubenswrapper[4721]: I0128 18:36:33.779152 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-74cdf" Jan 28 18:36:33 crc kubenswrapper[4721]: I0128 18:36:33.802452 4721 patch_prober.go:28] interesting pod/apiserver-76f77b778f-74cdf container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Jan 28 18:36:33 crc kubenswrapper[4721]: [+]log ok Jan 28 18:36:33 crc kubenswrapper[4721]: [+]etcd ok Jan 28 18:36:33 crc kubenswrapper[4721]: [+]poststarthook/start-apiserver-admission-initializer ok Jan 28 18:36:33 crc kubenswrapper[4721]: [+]poststarthook/generic-apiserver-start-informers ok Jan 28 18:36:33 crc kubenswrapper[4721]: [+]poststarthook/max-in-flight-filter ok Jan 28 18:36:33 crc kubenswrapper[4721]: [+]poststarthook/storage-object-count-tracker-hook ok Jan 28 18:36:33 crc kubenswrapper[4721]: [+]poststarthook/image.openshift.io-apiserver-caches ok Jan 28 18:36:33 crc kubenswrapper[4721]: [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld Jan 28 18:36:33 crc kubenswrapper[4721]: [-]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa failed: reason withheld Jan 28 18:36:33 crc kubenswrapper[4721]: [+]poststarthook/project.openshift.io-projectcache ok Jan 28 18:36:33 crc kubenswrapper[4721]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Jan 28 18:36:33 crc kubenswrapper[4721]: [+]poststarthook/openshift.io-startinformers ok Jan 28 18:36:33 crc kubenswrapper[4721]: [+]poststarthook/openshift.io-restmapperupdater ok Jan 28 18:36:33 crc kubenswrapper[4721]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Jan 28 18:36:33 crc kubenswrapper[4721]: livez check failed Jan 28 18:36:33 crc kubenswrapper[4721]: I0128 18:36:33.802515 4721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-76f77b778f-74cdf" podUID="e29aa9b1-ea23-453a-a624-634bf4f8c28b" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 28 18:36:33 crc kubenswrapper[4721]: I0128 18:36:33.802925 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-4x6c5" event={"ID":"5081acb0-d928-4278-8d1f-207f7c3c3289","Type":"ContainerStarted","Data":"1dea16c5543680f77a63ff0b8a03ff406aa240ebd0e94ec5786fcd167ff325c8"} Jan 28 18:36:33 crc kubenswrapper[4721]: I0128 18:36:33.808692 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-qqr56" event={"ID":"4657c92e-5f11-45b4-bf64-91d04c42ace3","Type":"ContainerStarted","Data":"b499f076dfbec0e857964d72aa7ab103a6bed5af6cc3523d0c69d87ff41b7729"} Jan 28 18:36:33 crc kubenswrapper[4721]: I0128 
18:36:33.828967 4721 generic.go:334] "Generic (PLEG): container finished" podID="47acc23c-4409-4e15-a231-5c095917842d" containerID="aa34c7d39a880fc83808a645826d9a8b19455c344ce0e70d9ed69c82a77e8d7c" exitCode=0 Jan 28 18:36:33 crc kubenswrapper[4721]: I0128 18:36:33.829088 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-bfsqt" event={"ID":"47acc23c-4409-4e15-a231-5c095917842d","Type":"ContainerDied","Data":"aa34c7d39a880fc83808a645826d9a8b19455c344ce0e70d9ed69c82a77e8d7c"} Jan 28 18:36:33 crc kubenswrapper[4721]: I0128 18:36:33.833984 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-76z8x" event={"ID":"6e7be82c-acf6-4120-8f43-221b6ef958c8","Type":"ContainerStarted","Data":"0680007d53f9ab4bb4b4e7659fe3685e8441a598a50e318c5927e65eb7996fc9"} Jan 28 18:36:33 crc kubenswrapper[4721]: I0128 18:36:33.834807 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-76z8x" Jan 28 18:36:33 crc kubenswrapper[4721]: I0128 18:36:33.839245 4721 patch_prober.go:28] interesting pod/console-operator-58897d9998-76z8x container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.13:8443/readyz\": dial tcp 10.217.0.13:8443: connect: connection refused" start-of-body= Jan 28 18:36:33 crc kubenswrapper[4721]: I0128 18:36:33.839293 4721 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-76z8x" podUID="6e7be82c-acf6-4120-8f43-221b6ef958c8" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.13:8443/readyz\": dial tcp 10.217.0.13:8443: connect: connection refused" Jan 28 18:36:33 crc kubenswrapper[4721]: I0128 18:36:33.848066 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-hjbqw" event={"ID":"86346561-5414-4c01-a202-6964f19b52db","Type":"ContainerStarted","Data":"ad916119509011a4b192692376a3396fbca4700f9d4a3359be18f9e76dbd2046"} Jan 28 18:36:33 crc kubenswrapper[4721]: I0128 18:36:33.864316 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-jtc8t" event={"ID":"26a0a4f9-321f-4196-88ce-888b82380eb6","Type":"ContainerStarted","Data":"74a8ccdcb49c9201441154c74c4501b4322cf18e68ea4d955d27ecfd9782d76d"} Jan 28 18:36:33 crc kubenswrapper[4721]: I0128 18:36:33.867639 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-jtc8t" Jan 28 18:36:33 crc kubenswrapper[4721]: I0128 18:36:33.868788 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 18:36:33 crc kubenswrapper[4721]: I0128 18:36:33.869513 4721 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-jtc8t container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.24:6443/healthz\": dial tcp 10.217.0.24:6443: connect: connection refused" start-of-body= Jan 28 18:36:33 crc kubenswrapper[4721]: I0128 18:36:33.869576 4721 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openshift-authentication/oauth-openshift-558db77b4-jtc8t" podUID="26a0a4f9-321f-4196-88ce-888b82380eb6" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.24:6443/healthz\": dial tcp 10.217.0.24:6443: connect: connection refused" Jan 28 18:36:33 crc kubenswrapper[4721]: E0128 18:36:33.870298 4721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 18:36:34.370283262 +0000 UTC m=+160.095588822 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 18:36:33 crc kubenswrapper[4721]: I0128 18:36:33.878162 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-dznkc" event={"ID":"7dd85c51-680b-4af2-8fac-8b9d94f7f2b6","Type":"ContainerStarted","Data":"0ce4929f0e9e30f6df4cceb9ccc36c102a9634c459912bb2845e4b94f0fc3655"} Jan 28 18:36:33 crc kubenswrapper[4721]: I0128 18:36:33.888234 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-ms7xm" event={"ID":"6e9fcebd-ee55-462a-ab16-b16840c83b25","Type":"ContainerStarted","Data":"c5e052bfb12889267b5f821426a9f512f1df935fe97917e38771ae0e275923e7"} Jan 28 18:36:33 crc kubenswrapper[4721]: I0128 18:36:33.892675 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-cmtm6" event={"ID":"b30c15c2-ac57-4e56-a55b-5b9de02e097f","Type":"ContainerStarted","Data":"3105080a6941d69d02cf7d3da34cf5571a610e7344ba3397ffd6265f06201911"} Jan 28 18:36:33 crc kubenswrapper[4721]: I0128 18:36:33.893043 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-cmtm6" Jan 28 18:36:33 crc kubenswrapper[4721]: I0128 18:36:33.896499 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-tjj76" event={"ID":"592908e7-063e-4a05-8bfa-19d925c28be7","Type":"ContainerStarted","Data":"354cf7c787eef104a008e1e0ab48a48873a39d3e65c29aab7a95a73d96919796"} Jan 28 18:36:33 crc kubenswrapper[4721]: I0128 18:36:33.904451 4721 patch_prober.go:28] interesting pod/downloads-7954f5f757-cmtm6 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" start-of-body= Jan 28 18:36:33 crc kubenswrapper[4721]: I0128 18:36:33.904518 4721 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-cmtm6" podUID="b30c15c2-ac57-4e56-a55b-5b9de02e097f" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" Jan 28 18:36:33 crc kubenswrapper[4721]: I0128 18:36:33.923504 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-console/console-f9d7485db-ct2hz" event={"ID":"52b4f91f-7c7b-401a-82b0-8907f6880677","Type":"ContainerStarted","Data":"45cb2ef595adc47daf34972d7b4752a67370dc132a08c20dbe619e3365c51846"} Jan 28 18:36:33 crc kubenswrapper[4721]: I0128 18:36:33.924070 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-76z8x" podStartSLOduration=123.924053618 podStartE2EDuration="2m3.924053618s" podCreationTimestamp="2026-01-28 18:34:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:36:33.922691135 +0000 UTC m=+159.647996715" watchObservedRunningTime="2026-01-28 18:36:33.924053618 +0000 UTC m=+159.649359178" Jan 28 18:36:33 crc kubenswrapper[4721]: I0128 18:36:33.926072 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-hjbqw" podStartSLOduration=5.926060429 podStartE2EDuration="5.926060429s" podCreationTimestamp="2026-01-28 18:36:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:36:33.905539661 +0000 UTC m=+159.630845221" watchObservedRunningTime="2026-01-28 18:36:33.926060429 +0000 UTC m=+159.651365989" Jan 28 18:36:33 crc kubenswrapper[4721]: I0128 18:36:33.943199 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-8c299" event={"ID":"597a1c26-12f4-401b-bd2b-1842722282f2","Type":"ContainerStarted","Data":"1dc5c1c6e56a41fecb3ae9579b5b91f5f0eb56cbf0f9049a96755712d56944f5"} Jan 28 18:36:33 crc kubenswrapper[4721]: I0128 18:36:33.946973 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-cmtm6" podStartSLOduration=123.946957038 podStartE2EDuration="2m3.946957038s" podCreationTimestamp="2026-01-28 18:34:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:36:33.943319257 +0000 UTC m=+159.668624817" watchObservedRunningTime="2026-01-28 18:36:33.946957038 +0000 UTC m=+159.672262598" Jan 28 18:36:33 crc kubenswrapper[4721]: I0128 18:36:33.968719 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-x9nr7" event={"ID":"e6508511-52da-41f5-a939-98342be6441e","Type":"ContainerStarted","Data":"6314b933b6a594375c165c10a2fd05aaaebe50718c9e8a32cb4132992830090b"} Jan 28 18:36:33 crc kubenswrapper[4721]: I0128 18:36:33.970317 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-b42n2\" (UID: \"6c3bbd44-e901-4e2a-b48c-f1e1ddb965dc\") " pod="openshift-image-registry/image-registry-697d97f7c8-b42n2" Jan 28 18:36:33 crc kubenswrapper[4721]: E0128 18:36:33.975012 4721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 18:36:34.474990925 +0000 UTC m=+160.200296485 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-b42n2" (UID: "6c3bbd44-e901-4e2a-b48c-f1e1ddb965dc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 18:36:33 crc kubenswrapper[4721]: I0128 18:36:33.979949 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-tjj76" podStartSLOduration=123.979924577 podStartE2EDuration="2m3.979924577s" podCreationTimestamp="2026-01-28 18:34:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:36:33.97579088 +0000 UTC m=+159.701096440" watchObservedRunningTime="2026-01-28 18:36:33.979924577 +0000 UTC m=+159.705230137" Jan 28 18:36:34 crc kubenswrapper[4721]: I0128 18:36:34.008670 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-dznkc" podStartSLOduration=124.008653015 podStartE2EDuration="2m4.008653015s" podCreationTimestamp="2026-01-28 18:34:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:36:34.001826187 +0000 UTC m=+159.727131747" watchObservedRunningTime="2026-01-28 18:36:34.008653015 +0000 UTC m=+159.733958575" Jan 28 18:36:34 crc kubenswrapper[4721]: I0128 18:36:34.023029 4721 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-8kxsr" Jan 28 18:36:34 crc kubenswrapper[4721]: I0128 18:36:34.023536 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-8kxsr" Jan 28 18:36:34 crc kubenswrapper[4721]: I0128 18:36:34.034574 4721 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-8kxsr" Jan 28 18:36:34 crc kubenswrapper[4721]: I0128 18:36:34.059955 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-jtc8t" podStartSLOduration=124.059938385 podStartE2EDuration="2m4.059938385s" podCreationTimestamp="2026-01-28 18:34:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:36:34.033834236 +0000 UTC m=+159.759139816" watchObservedRunningTime="2026-01-28 18:36:34.059938385 +0000 UTC m=+159.785243945" Jan 28 18:36:34 crc kubenswrapper[4721]: I0128 18:36:34.060351 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-ct2hz" podStartSLOduration=124.060322726 podStartE2EDuration="2m4.060322726s" podCreationTimestamp="2026-01-28 18:34:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:36:34.059637735 +0000 UTC m=+159.784943295" watchObservedRunningTime="2026-01-28 18:36:34.060322726 +0000 UTC m=+159.785628306" Jan 28 18:36:34 crc kubenswrapper[4721]: I0128 18:36:34.074577 4721 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 18:36:34 crc kubenswrapper[4721]: E0128 18:36:34.074768 4721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 18:36:34.574739758 +0000 UTC m=+160.300045318 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 18:36:34 crc kubenswrapper[4721]: I0128 18:36:34.075070 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-b42n2\" (UID: \"6c3bbd44-e901-4e2a-b48c-f1e1ddb965dc\") " pod="openshift-image-registry/image-registry-697d97f7c8-b42n2" Jan 28 18:36:34 crc kubenswrapper[4721]: E0128 18:36:34.078046 4721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 18:36:34.578034249 +0000 UTC m=+160.303339809 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-b42n2" (UID: "6c3bbd44-e901-4e2a-b48c-f1e1ddb965dc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 18:36:34 crc kubenswrapper[4721]: I0128 18:36:34.081865 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-8c299" podStartSLOduration=124.081849515 podStartE2EDuration="2m4.081849515s" podCreationTimestamp="2026-01-28 18:34:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:36:34.07843115 +0000 UTC m=+159.803736710" watchObservedRunningTime="2026-01-28 18:36:34.081849515 +0000 UTC m=+159.807155085" Jan 28 18:36:34 crc kubenswrapper[4721]: I0128 18:36:34.123676 4721 patch_prober.go:28] interesting pod/router-default-5444994796-wqwcd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 28 18:36:34 crc kubenswrapper[4721]: [-]has-synced failed: reason withheld Jan 28 18:36:34 crc kubenswrapper[4721]: [+]process-running ok Jan 28 18:36:34 crc kubenswrapper[4721]: healthz check failed Jan 28 18:36:34 crc kubenswrapper[4721]: I0128 18:36:34.124243 4721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-wqwcd" podUID="9260fa7e-9c98-4777-9625-3ac5501c883c" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 28 18:36:34 crc kubenswrapper[4721]: I0128 18:36:34.128012 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-x9nr7" podStartSLOduration=6.127993977 podStartE2EDuration="6.127993977s" podCreationTimestamp="2026-01-28 18:36:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:36:34.118398313 +0000 UTC m=+159.843703883" watchObservedRunningTime="2026-01-28 18:36:34.127993977 +0000 UTC m=+159.853299527" Jan 28 18:36:34 crc kubenswrapper[4721]: I0128 18:36:34.179799 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 18:36:34 crc kubenswrapper[4721]: E0128 18:36:34.189947 4721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 18:36:34.689913482 +0000 UTC m=+160.415219042 (durationBeforeRetry 500ms). 
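The startup-probe failures in this stretch (openshift-apiserver earlier, router-default just above) return the aggregated healthz format: one [+]check ok or [-]check failed: reason withheld line per registered check, with the endpoint answering 500 until every check passes ("reason withheld" just means the caller is not authorized to see failure detail). A toy handler producing the same shape, purely illustrative and not the apiserver's implementation:

    package main

    import (
        "fmt"
        "net/http"
    )

    // check pairs a health-check name with its current result, mimicking the
    // [+]/[-] lines in the probe output logged above.
    type check struct {
        name string
        ok   func() bool
    }

    var checks = []check{
        {"backend-http", func() bool { return false }}, // still syncing, like the router above
        {"has-synced", func() bool { return false }},
        {"process-running", func() bool { return true }},
    }

    func healthz(w http.ResponseWriter, r *http.Request) {
        body, healthy := "", true
        for _, c := range checks {
            if c.ok() {
                body += fmt.Sprintf("[+]%s ok\n", c.name)
            } else {
                body += fmt.Sprintf("[-]%s failed: reason withheld\n", c.name)
                healthy = false
            }
        }
        if !healthy {
            w.WriteHeader(http.StatusInternalServerError) // the probe logs "statuscode: 500"
            body += "healthz check failed\n"
        } else {
            body += "healthz check passed\n"
        }
        fmt.Fprint(w, body)
    }

    func main() {
        http.HandleFunc("/healthz", healthz)
        http.ListenAndServe(":8080", nil)
    }
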
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 18:36:34 crc kubenswrapper[4721]: I0128 18:36:34.269421 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-hp7z2"] Jan 28 18:36:34 crc kubenswrapper[4721]: I0128 18:36:34.281895 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-b42n2\" (UID: \"6c3bbd44-e901-4e2a-b48c-f1e1ddb965dc\") " pod="openshift-image-registry/image-registry-697d97f7c8-b42n2" Jan 28 18:36:34 crc kubenswrapper[4721]: E0128 18:36:34.282238 4721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 18:36:34.782222826 +0000 UTC m=+160.507528386 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-b42n2" (UID: "6c3bbd44-e901-4e2a-b48c-f1e1ddb965dc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 18:36:34 crc kubenswrapper[4721]: I0128 18:36:34.329935 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-blkkj"] Jan 28 18:36:34 crc kubenswrapper[4721]: I0128 18:36:34.368498 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-jrjx5"] Jan 28 18:36:34 crc kubenswrapper[4721]: W0128 18:36:34.380366 4721 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8355e616_674b_4bc2_a727_76609df63630.slice/crio-7bf746b452fb698be833b2dabcd2aee37ca4777fb6ce3c885f53a153b8bb7e9c WatchSource:0}: Error finding container 7bf746b452fb698be833b2dabcd2aee37ca4777fb6ce3c885f53a153b8bb7e9c: Status 404 returned error can't find the container with id 7bf746b452fb698be833b2dabcd2aee37ca4777fb6ce3c885f53a153b8bb7e9c Jan 28 18:36:34 crc kubenswrapper[4721]: I0128 18:36:34.383919 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 18:36:34 crc kubenswrapper[4721]: E0128 18:36:34.384467 4721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-28 18:36:34.884442233 +0000 UTC m=+160.609747793 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 18:36:34 crc kubenswrapper[4721]: I0128 18:36:34.405334 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-qp2vg"] Jan 28 18:36:34 crc kubenswrapper[4721]: I0128 18:36:34.425768 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-s7d98"] Jan 28 18:36:34 crc kubenswrapper[4721]: I0128 18:36:34.439607 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-xwt9s"] Jan 28 18:36:34 crc kubenswrapper[4721]: I0128 18:36:34.487506 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-b42n2\" (UID: \"6c3bbd44-e901-4e2a-b48c-f1e1ddb965dc\") " pod="openshift-image-registry/image-registry-697d97f7c8-b42n2" Jan 28 18:36:34 crc kubenswrapper[4721]: E0128 18:36:34.488628 4721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 18:36:34.98860773 +0000 UTC m=+160.713913290 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-b42n2" (UID: "6c3bbd44-e901-4e2a-b48c-f1e1ddb965dc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 18:36:34 crc kubenswrapper[4721]: I0128 18:36:34.588878 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 18:36:34 crc kubenswrapper[4721]: E0128 18:36:34.589819 4721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 18:36:35.089803396 +0000 UTC m=+160.815108956 (durationBeforeRetry 500ms). 
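Each failed volume operation above is rescheduled by nestedpendingoperations with "No retries permitted until <now+500ms>", which is why the identical error reappears roughly twice a second; 500ms is the initial step of the kubelet's backoff for volume operations. The scheduling idea, stripped to a sketch (fixed delay and a bounded loop; mountDevice is a stand-in invented for this illustration):

    package main

    import (
        "errors"
        "fmt"
        "time"
    )

    // mountDevice stands in for the CSI MountDevice call that keeps failing
    // while the driver is unregistered; it is invented for this sketch.
    func mountDevice() error {
        return errors.New("driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers")
    }

    func main() {
        var retryAfter time.Time
        for attempt := 1; attempt <= 5; attempt++ {
            if wait := time.Until(retryAfter); wait > 0 {
                time.Sleep(wait)
            }
            if err := mountDevice(); err == nil {
                fmt.Println("mounted")
                return
            }
            // Mirror the journal: "No retries permitted until <now+500ms>".
            retryAfter = time.Now().Add(500 * time.Millisecond)
            fmt.Printf("attempt %d failed; no retries permitted until %s\n",
                attempt, retryAfter.Format("15:04:05.000"))
        }
    }
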
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 18:36:34 crc kubenswrapper[4721]: I0128 18:36:34.646885 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29493750-pb8r2"] Jan 28 18:36:34 crc kubenswrapper[4721]: I0128 18:36:34.691776 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-b42n2\" (UID: \"6c3bbd44-e901-4e2a-b48c-f1e1ddb965dc\") " pod="openshift-image-registry/image-registry-697d97f7c8-b42n2" Jan 28 18:36:34 crc kubenswrapper[4721]: E0128 18:36:34.692205 4721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 18:36:35.192157377 +0000 UTC m=+160.917462937 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-b42n2" (UID: "6c3bbd44-e901-4e2a-b48c-f1e1ddb965dc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 18:36:34 crc kubenswrapper[4721]: W0128 18:36:34.695105 4721 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddb74784c_afbc_482a_8e2d_18c5bb898a9b.slice/crio-a9bd8cd2b1af3e929113f71a1a1b6952e4cecb85dd2c646215232cbda83addcb WatchSource:0}: Error finding container a9bd8cd2b1af3e929113f71a1a1b6952e4cecb85dd2c646215232cbda83addcb: Status 404 returned error can't find the container with id a9bd8cd2b1af3e929113f71a1a1b6952e4cecb85dd2c646215232cbda83addcb Jan 28 18:36:34 crc kubenswrapper[4721]: I0128 18:36:34.779825 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-96x8n"] Jan 28 18:36:34 crc kubenswrapper[4721]: I0128 18:36:34.789067 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-4nmsk"] Jan 28 18:36:34 crc kubenswrapper[4721]: I0128 18:36:34.794911 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-7b9dz"] Jan 28 18:36:34 crc kubenswrapper[4721]: I0128 18:36:34.795823 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 18:36:34 crc kubenswrapper[4721]: E0128 18:36:34.796565 4721 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 18:36:35.296547701 +0000 UTC m=+161.021853261 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 18:36:34 crc kubenswrapper[4721]: I0128 18:36:34.807718 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-dkq9z"] Jan 28 18:36:34 crc kubenswrapper[4721]: I0128 18:36:34.814016 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-2k27q"] Jan 28 18:36:34 crc kubenswrapper[4721]: I0128 18:36:34.814161 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-9lrf6"] Jan 28 18:36:34 crc kubenswrapper[4721]: I0128 18:36:34.823956 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-4qhmh"] Jan 28 18:36:34 crc kubenswrapper[4721]: I0128 18:36:34.832457 4721 patch_prober.go:28] interesting pod/router-default-5444994796-wqwcd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 28 18:36:34 crc kubenswrapper[4721]: [-]has-synced failed: reason withheld Jan 28 18:36:34 crc kubenswrapper[4721]: [+]process-running ok Jan 28 18:36:34 crc kubenswrapper[4721]: healthz check failed Jan 28 18:36:34 crc kubenswrapper[4721]: I0128 18:36:34.832519 4721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-wqwcd" podUID="9260fa7e-9c98-4777-9625-3ac5501c883c" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 28 18:36:34 crc kubenswrapper[4721]: I0128 18:36:34.835412 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-86g2n"] Jan 28 18:36:34 crc kubenswrapper[4721]: I0128 18:36:34.871637 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-xtdkt"] Jan 28 18:36:34 crc kubenswrapper[4721]: I0128 18:36:34.877810 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-hw64n"] Jan 28 18:36:34 crc kubenswrapper[4721]: I0128 18:36:34.878943 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-l59vq"] Jan 28 18:36:34 crc kubenswrapper[4721]: I0128 18:36:34.888382 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-gdtgs"] Jan 28 18:36:34 crc kubenswrapper[4721]: I0128 18:36:34.908396 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-b42n2\" (UID: \"6c3bbd44-e901-4e2a-b48c-f1e1ddb965dc\") " pod="openshift-image-registry/image-registry-697d97f7c8-b42n2" Jan 28 18:36:34 crc kubenswrapper[4721]: E0128 18:36:34.908891 4721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 18:36:35.408872868 +0000 UTC m=+161.134178428 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-b42n2" (UID: "6c3bbd44-e901-4e2a-b48c-f1e1ddb965dc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 18:36:34 crc kubenswrapper[4721]: W0128 18:36:34.937756 4721 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0db061c0_7df0_4ca1_a388_c69dd9344b9c.slice/crio-37317b6cebfdb768a14c1a04f6e715b412127962bac64ba202ddeead0aec04b0 WatchSource:0}: Error finding container 37317b6cebfdb768a14c1a04f6e715b412127962bac64ba202ddeead0aec04b0: Status 404 returned error can't find the container with id 37317b6cebfdb768a14c1a04f6e715b412127962bac64ba202ddeead0aec04b0 Jan 28 18:36:34 crc kubenswrapper[4721]: W0128 18:36:34.947432 4721 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd8a50493_8d3e_4391_ad78_0bd93ce8157e.slice/crio-24be210747d3184e4b10765c75c3dadcc94d426f309904b064052199e2c188fe WatchSource:0}: Error finding container 24be210747d3184e4b10765c75c3dadcc94d426f309904b064052199e2c188fe: Status 404 returned error can't find the container with id 24be210747d3184e4b10765c75c3dadcc94d426f309904b064052199e2c188fe Jan 28 18:36:34 crc kubenswrapper[4721]: W0128 18:36:34.964607 4721 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda58cf121_cb7e_4eb7_9634_b72173bfa945.slice/crio-bc3ba6358b954c5837ed93e44ffb96fa3c8727915abbf34f824f2cd5b894344c WatchSource:0}: Error finding container bc3ba6358b954c5837ed93e44ffb96fa3c8727915abbf34f824f2cd5b894344c: Status 404 returned error can't find the container with id bc3ba6358b954c5837ed93e44ffb96fa3c8727915abbf34f824f2cd5b894344c Jan 28 18:36:35 crc kubenswrapper[4721]: I0128 18:36:35.005037 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-86g2n" event={"ID":"519e974b-b132-4b21-a47d-759e40bdbc72","Type":"ContainerStarted","Data":"3c51c3528fc6dde8e78f97f1425c9a2de81485f70be0fb52a12b05cc61406aa6"} Jan 28 18:36:35 crc kubenswrapper[4721]: I0128 18:36:35.009064 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 18:36:35 crc kubenswrapper[4721]: E0128 18:36:35.009797 4721 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 18:36:35.509772845 +0000 UTC m=+161.235078405 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 18:36:35 crc kubenswrapper[4721]: I0128 18:36:35.020406 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-s7d98" event={"ID":"77dd4c6e-7dd3-4378-be3f-74f0c43fb371","Type":"ContainerStarted","Data":"ebb9b39ff2be977a892ad9d78dcc43168de62fa4cbf96eba78a7a19e31cddf97"} Jan 28 18:36:35 crc kubenswrapper[4721]: I0128 18:36:35.023027 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-hp7z2" event={"ID":"d0a361ba-f31a-477c-a532-136ebf0b025b","Type":"ContainerStarted","Data":"9a7f22a98fb11eccb1360246c146bb5748fa9ffd408c8a5b1dd9b36ab998c06b"} Jan 28 18:36:35 crc kubenswrapper[4721]: I0128 18:36:35.023085 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-hp7z2" event={"ID":"d0a361ba-f31a-477c-a532-136ebf0b025b","Type":"ContainerStarted","Data":"36ffbc6ebf2e4b3fbd9804f65f42e94508f454c22fa671c6cad76c29eae28709"} Jan 28 18:36:35 crc kubenswrapper[4721]: I0128 18:36:35.024248 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-hp7z2" Jan 28 18:36:35 crc kubenswrapper[4721]: I0128 18:36:35.028384 4721 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-hp7z2 container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.25:8443/healthz\": dial tcp 10.217.0.25:8443: connect: connection refused" start-of-body= Jan 28 18:36:35 crc kubenswrapper[4721]: I0128 18:36:35.028463 4721 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-hp7z2" podUID="d0a361ba-f31a-477c-a532-136ebf0b025b" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.25:8443/healthz\": dial tcp 10.217.0.25:8443: connect: connection refused" Jan 28 18:36:35 crc kubenswrapper[4721]: I0128 18:36:35.033312 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-qp2vg" event={"ID":"12a4be20-2607-4502-b20d-b579c9987b57","Type":"ContainerStarted","Data":"87c7141690dd93f2f02e025283721b8565fe912c08eceadb291e678f52c51b2a"} Jan 28 18:36:35 crc kubenswrapper[4721]: I0128 18:36:35.033361 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-qp2vg" event={"ID":"12a4be20-2607-4502-b20d-b579c9987b57","Type":"ContainerStarted","Data":"6bd932bc1b2a4628c85b6263fbcc02011e0361a67427e01a134b17e5b1dd21e6"} Jan 28 18:36:35 crc kubenswrapper[4721]: I0128 18:36:35.034255 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-marketplace/marketplace-operator-79b997595-qp2vg" Jan 28 18:36:35 crc kubenswrapper[4721]: I0128 18:36:35.041300 4721 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-qp2vg container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.35:8080/healthz\": dial tcp 10.217.0.35:8080: connect: connection refused" start-of-body= Jan 28 18:36:35 crc kubenswrapper[4721]: I0128 18:36:35.041345 4721 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-qp2vg" podUID="12a4be20-2607-4502-b20d-b579c9987b57" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.35:8080/healthz\": dial tcp 10.217.0.35:8080: connect: connection refused" Jan 28 18:36:35 crc kubenswrapper[4721]: I0128 18:36:35.070807 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-hp7z2" podStartSLOduration=125.070788452 podStartE2EDuration="2m5.070788452s" podCreationTimestamp="2026-01-28 18:34:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:36:35.068518772 +0000 UTC m=+160.793824322" watchObservedRunningTime="2026-01-28 18:36:35.070788452 +0000 UTC m=+160.796094012" Jan 28 18:36:35 crc kubenswrapper[4721]: I0128 18:36:35.082777 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-7b9dz" event={"ID":"4f8f829a-0e7b-4ad6-9dc4-ce845d2e9d26","Type":"ContainerStarted","Data":"cfa4267e05d47da53e7607f4168ad02754184f6a45603a64a864001cf3076d06"} Jan 28 18:36:35 crc kubenswrapper[4721]: I0128 18:36:35.090892 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-9lrf6" event={"ID":"36392dfb-bda3-46da-b8ba-ebc27ab22e00","Type":"ContainerStarted","Data":"9ef975af857317d94ec0c9383ea6c252cc70d5776dbd9191c029b49f02f7c07d"} Jan 28 18:36:35 crc kubenswrapper[4721]: I0128 18:36:35.095062 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-96x8n" event={"ID":"eaf91cbf-9c86-4235-89c5-3ba0c1ea7d6f","Type":"ContainerStarted","Data":"079d10a344c1c79b440909738d573a41575fb67635b3bd40b2cdaa364887aed9"} Jan 28 18:36:35 crc kubenswrapper[4721]: I0128 18:36:35.110864 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-b42n2\" (UID: \"6c3bbd44-e901-4e2a-b48c-f1e1ddb965dc\") " pod="openshift-image-registry/image-registry-697d97f7c8-b42n2" Jan 28 18:36:35 crc kubenswrapper[4721]: E0128 18:36:35.112681 4721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 18:36:35.612669443 +0000 UTC m=+161.337975003 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-b42n2" (UID: "6c3bbd44-e901-4e2a-b48c-f1e1ddb965dc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 18:36:35 crc kubenswrapper[4721]: I0128 18:36:35.162381 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-hjbqw" event={"ID":"86346561-5414-4c01-a202-6964f19b52db","Type":"ContainerStarted","Data":"ae3c9311dadeb3414a66ed48d97bbfbd1757a77ad5bb13a6f205b0247c129fda"} Jan 28 18:36:35 crc kubenswrapper[4721]: I0128 18:36:35.189100 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-4x6c5" event={"ID":"5081acb0-d928-4278-8d1f-207f7c3c3289","Type":"ContainerStarted","Data":"7d556838fde6739092b5f35b1e21ee0c3a2ccc512b4d5f52bf490b39d5d1c669"} Jan 28 18:36:35 crc kubenswrapper[4721]: I0128 18:36:35.212442 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-qrt7r" event={"ID":"fd0ecfef-29a6-474c-a266-ed16b5548797","Type":"ContainerStarted","Data":"738f89558056dad270f6bc7bc6627cdcf0df976bc94b4a92af021dfed4613be2"} Jan 28 18:36:35 crc kubenswrapper[4721]: I0128 18:36:35.214720 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 18:36:35 crc kubenswrapper[4721]: E0128 18:36:35.215287 4721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 18:36:35.715264951 +0000 UTC m=+161.440570511 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 18:36:35 crc kubenswrapper[4721]: I0128 18:36:35.215783 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-xtdkt" event={"ID":"a776531f-ebd1-491e-b6d7-378a11aad9d8","Type":"ContainerStarted","Data":"9e948ee5427c5c0f117f91ace7ca481e04e58630c993d40b46d64148c5708ce8"} Jan 28 18:36:35 crc kubenswrapper[4721]: I0128 18:36:35.243427 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-qp2vg" podStartSLOduration=125.243400022 podStartE2EDuration="2m5.243400022s" podCreationTimestamp="2026-01-28 18:34:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:36:35.146049024 +0000 UTC m=+160.871354584" watchObservedRunningTime="2026-01-28 18:36:35.243400022 +0000 UTC m=+160.968705582" Jan 28 18:36:35 crc kubenswrapper[4721]: I0128 18:36:35.244305 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-4x6c5" podStartSLOduration=125.24428721 podStartE2EDuration="2m5.24428721s" podCreationTimestamp="2026-01-28 18:34:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:36:35.232090056 +0000 UTC m=+160.957395626" watchObservedRunningTime="2026-01-28 18:36:35.24428721 +0000 UTC m=+160.969592770" Jan 28 18:36:35 crc kubenswrapper[4721]: I0128 18:36:35.248029 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-2k27q" event={"ID":"d8a50493-8d3e-4391-ad78-0bd93ce8157e","Type":"ContainerStarted","Data":"24be210747d3184e4b10765c75c3dadcc94d426f309904b064052199e2c188fe"} Jan 28 18:36:35 crc kubenswrapper[4721]: I0128 18:36:35.265703 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-dkq9z" event={"ID":"0db061c0-7df0-4ca1-a388-c69dd9344b9c","Type":"ContainerStarted","Data":"37317b6cebfdb768a14c1a04f6e715b412127962bac64ba202ddeead0aec04b0"} Jan 28 18:36:35 crc kubenswrapper[4721]: I0128 18:36:35.268473 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-jrjx5" event={"ID":"8355e616-674b-4bc2-a727-76609df63630","Type":"ContainerStarted","Data":"dcb6c6fe0046f7e1f13ee32dfb52fde87b22ac6222ef25b60589964521466cf5"} Jan 28 18:36:35 crc kubenswrapper[4721]: I0128 18:36:35.268529 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-jrjx5" event={"ID":"8355e616-674b-4bc2-a727-76609df63630","Type":"ContainerStarted","Data":"7bf746b452fb698be833b2dabcd2aee37ca4777fb6ce3c885f53a153b8bb7e9c"} Jan 28 18:36:35 crc kubenswrapper[4721]: I0128 18:36:35.283614 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-p8lnf" event={"ID":"8508a38e-342a-4dab-956c-cc847d18e6bc","Type":"ContainerStarted","Data":"dc138228300521fcdf160f6e9c389eb39ef8b5be9ed7f5da734d3277f6f29201"} Jan 28 18:36:35 crc kubenswrapper[4721]: I0128 18:36:35.296486 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-blkkj" event={"ID":"5f55d8f7-6300-4d3c-8cdf-d4e2d106fa5f","Type":"ContainerStarted","Data":"62a530ec6bf9e7fa012b00bc3c555db127b7c438e74de1b5b8cd0b07c5bc5b40"} Jan 28 18:36:35 crc kubenswrapper[4721]: I0128 18:36:35.296592 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-blkkj" event={"ID":"5f55d8f7-6300-4d3c-8cdf-d4e2d106fa5f","Type":"ContainerStarted","Data":"b55fce7571f5cd355d80d60a2fa271d1dcd5853bee3e9faa43efa256029f5688"} Jan 28 18:36:35 crc kubenswrapper[4721]: I0128 18:36:35.299233 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-blkkj" Jan 28 18:36:35 crc kubenswrapper[4721]: I0128 18:36:35.335356 4721 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-blkkj container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.33:5443/healthz\": dial tcp 10.217.0.33:5443: connect: connection refused" start-of-body= Jan 28 18:36:35 crc kubenswrapper[4721]: I0128 18:36:35.335432 4721 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-blkkj" podUID="5f55d8f7-6300-4d3c-8cdf-d4e2d106fa5f" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.33:5443/healthz\": dial tcp 10.217.0.33:5443: connect: connection refused" Jan 28 18:36:35 crc kubenswrapper[4721]: I0128 18:36:35.340378 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-b42n2\" (UID: \"6c3bbd44-e901-4e2a-b48c-f1e1ddb965dc\") " pod="openshift-image-registry/image-registry-697d97f7c8-b42n2" Jan 28 18:36:35 crc kubenswrapper[4721]: E0128 18:36:35.341871 4721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 18:36:35.841854595 +0000 UTC m=+161.567160155 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-b42n2" (UID: "6c3bbd44-e901-4e2a-b48c-f1e1ddb965dc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 18:36:35 crc kubenswrapper[4721]: I0128 18:36:35.355092 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-hw64n" event={"ID":"6e54e2fb-d821-4c19-a076-c47b738d1a48","Type":"ContainerStarted","Data":"6cf782917091cb4529d41928adc6e1faf18e0f70d43db7a9fb12d8939fd16c26"} Jan 28 18:36:35 crc kubenswrapper[4721]: I0128 18:36:35.404949 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-qrt7r" podStartSLOduration=125.404919985 podStartE2EDuration="2m5.404919985s" podCreationTimestamp="2026-01-28 18:34:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:36:35.375666539 +0000 UTC m=+161.100972099" watchObservedRunningTime="2026-01-28 18:36:35.404919985 +0000 UTC m=+161.130225545" Jan 28 18:36:35 crc kubenswrapper[4721]: I0128 18:36:35.405462 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-jrjx5" podStartSLOduration=125.40545668 podStartE2EDuration="2m5.40545668s" podCreationTimestamp="2026-01-28 18:34:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:36:35.40445892 +0000 UTC m=+161.129764510" watchObservedRunningTime="2026-01-28 18:36:35.40545668 +0000 UTC m=+161.130762230" Jan 28 18:36:35 crc kubenswrapper[4721]: I0128 18:36:35.409577 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29493750-pb8r2" event={"ID":"db74784c-afbc-482a-8e2d-18c5bb898a9b","Type":"ContainerStarted","Data":"a9bd8cd2b1af3e929113f71a1a1b6952e4cecb85dd2c646215232cbda83addcb"} Jan 28 18:36:35 crc kubenswrapper[4721]: I0128 18:36:35.440113 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-blkkj" podStartSLOduration=125.44008656 podStartE2EDuration="2m5.44008656s" podCreationTimestamp="2026-01-28 18:34:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:36:35.438273755 +0000 UTC m=+161.163579325" watchObservedRunningTime="2026-01-28 18:36:35.44008656 +0000 UTC m=+161.165392120" Jan 28 18:36:35 crc kubenswrapper[4721]: I0128 18:36:35.442799 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 18:36:35 crc kubenswrapper[4721]: E0128 18:36:35.444139 4721 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 18:36:35.944116094 +0000 UTC m=+161.669421654 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 18:36:35 crc kubenswrapper[4721]: I0128 18:36:35.468137 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"a11d239da81ba6cf6c2e74d2c1a217430dc468cc2fbadfb40b8af6111a6b2692"} Jan 28 18:36:35 crc kubenswrapper[4721]: I0128 18:36:35.471642 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-p8lnf" podStartSLOduration=125.471618165 podStartE2EDuration="2m5.471618165s" podCreationTimestamp="2026-01-28 18:34:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:36:35.46590813 +0000 UTC m=+161.191213700" watchObservedRunningTime="2026-01-28 18:36:35.471618165 +0000 UTC m=+161.196923725" Jan 28 18:36:35 crc kubenswrapper[4721]: I0128 18:36:35.515983 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29493750-pb8r2" podStartSLOduration=125.515965802 podStartE2EDuration="2m5.515965802s" podCreationTimestamp="2026-01-28 18:34:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:36:35.51557129 +0000 UTC m=+161.240876860" watchObservedRunningTime="2026-01-28 18:36:35.515965802 +0000 UTC m=+161.241271362" Jan 28 18:36:35 crc kubenswrapper[4721]: I0128 18:36:35.546978 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-b42n2\" (UID: \"6c3bbd44-e901-4e2a-b48c-f1e1ddb965dc\") " pod="openshift-image-registry/image-registry-697d97f7c8-b42n2" Jan 28 18:36:35 crc kubenswrapper[4721]: E0128 18:36:35.548051 4721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 18:36:36.048039463 +0000 UTC m=+161.773345023 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-b42n2" (UID: "6c3bbd44-e901-4e2a-b48c-f1e1ddb965dc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 18:36:35 crc kubenswrapper[4721]: I0128 18:36:35.553592 4721 csr.go:261] certificate signing request csr-8t2j4 is approved, waiting to be issued Jan 28 18:36:35 crc kubenswrapper[4721]: I0128 18:36:35.565393 4721 csr.go:257] certificate signing request csr-8t2j4 is issued Jan 28 18:36:35 crc kubenswrapper[4721]: I0128 18:36:35.599597 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-qqr56" event={"ID":"4657c92e-5f11-45b4-bf64-91d04c42ace3","Type":"ContainerStarted","Data":"71ed92e54160919e6a1d2c7e99e1cb2e9499a64d3581011c6f03aa163da4f3d4"} Jan 28 18:36:35 crc kubenswrapper[4721]: I0128 18:36:35.607761 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-bfsqt" event={"ID":"47acc23c-4409-4e15-a231-5c095917842d","Type":"ContainerStarted","Data":"73c535d5e688f2179adbd035b2ca4dfe63387a7a424d0b2dfbedfcdacc4b4bdf"} Jan 28 18:36:35 crc kubenswrapper[4721]: I0128 18:36:35.608445 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-bfsqt" Jan 28 18:36:35 crc kubenswrapper[4721]: I0128 18:36:35.614689 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-l59vq" event={"ID":"a58cf121-cb7e-4eb7-9634-b72173bfa945","Type":"ContainerStarted","Data":"bc3ba6358b954c5837ed93e44ffb96fa3c8727915abbf34f824f2cd5b894344c"} Jan 28 18:36:35 crc kubenswrapper[4721]: I0128 18:36:35.650066 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 18:36:35 crc kubenswrapper[4721]: E0128 18:36:35.650437 4721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 18:36:36.150420476 +0000 UTC m=+161.875726036 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 18:36:35 crc kubenswrapper[4721]: I0128 18:36:35.651543 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-4nmsk" event={"ID":"70ba75a9-4e0e-4fb2-9986-030f8a02d39c","Type":"ContainerStarted","Data":"1a6260e595744b8d171be9bb6ffb353479edd213d9516347c8f91116ab6a1e16"} Jan 28 18:36:35 crc kubenswrapper[4721]: I0128 18:36:35.651940 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-4nmsk" Jan 28 18:36:35 crc kubenswrapper[4721]: I0128 18:36:35.657637 4721 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-4nmsk container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.39:8443/healthz\": dial tcp 10.217.0.39:8443: connect: connection refused" start-of-body= Jan 28 18:36:35 crc kubenswrapper[4721]: I0128 18:36:35.657711 4721 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-4nmsk" podUID="70ba75a9-4e0e-4fb2-9986-030f8a02d39c" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.39:8443/healthz\": dial tcp 10.217.0.39:8443: connect: connection refused" Jan 28 18:36:35 crc kubenswrapper[4721]: I0128 18:36:35.674114 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-ct2hz" event={"ID":"52b4f91f-7c7b-401a-82b0-8907f6880677","Type":"ContainerStarted","Data":"bd95f1b18fd86907975a8dfb48da6dd4616b684110232787612d240fd73a2050"} Jan 28 18:36:35 crc kubenswrapper[4721]: I0128 18:36:35.687937 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-xwt9s" event={"ID":"995bfe33-c190-48b3-bb6c-9c6cb81d8359","Type":"ContainerStarted","Data":"d7168d2355adf36ff1e8a4d6ee6134ec86ddfb1cd63bdb21d75c1f6df767ee26"} Jan 28 18:36:35 crc kubenswrapper[4721]: I0128 18:36:35.700550 4721 patch_prober.go:28] interesting pod/downloads-7954f5f757-cmtm6 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" start-of-body= Jan 28 18:36:35 crc kubenswrapper[4721]: I0128 18:36:35.700599 4721 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-cmtm6" podUID="b30c15c2-ac57-4e56-a55b-5b9de02e097f" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" Jan 28 18:36:35 crc kubenswrapper[4721]: I0128 18:36:35.714973 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-8kxsr" Jan 28 18:36:35 crc kubenswrapper[4721]: I0128 18:36:35.769015 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" 
(UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-b42n2\" (UID: \"6c3bbd44-e901-4e2a-b48c-f1e1ddb965dc\") " pod="openshift-image-registry/image-registry-697d97f7c8-b42n2" Jan 28 18:36:35 crc kubenswrapper[4721]: E0128 18:36:35.770989 4721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 18:36:36.270975234 +0000 UTC m=+161.996280814 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-b42n2" (UID: "6c3bbd44-e901-4e2a-b48c-f1e1ddb965dc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 18:36:35 crc kubenswrapper[4721]: I0128 18:36:35.786599 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-76z8x" Jan 28 18:36:35 crc kubenswrapper[4721]: I0128 18:36:35.829091 4721 patch_prober.go:28] interesting pod/router-default-5444994796-wqwcd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 28 18:36:35 crc kubenswrapper[4721]: [-]has-synced failed: reason withheld Jan 28 18:36:35 crc kubenswrapper[4721]: [+]process-running ok Jan 28 18:36:35 crc kubenswrapper[4721]: healthz check failed Jan 28 18:36:35 crc kubenswrapper[4721]: I0128 18:36:35.829137 4721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-wqwcd" podUID="9260fa7e-9c98-4777-9625-3ac5501c883c" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 28 18:36:35 crc kubenswrapper[4721]: I0128 18:36:35.891438 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 18:36:35 crc kubenswrapper[4721]: E0128 18:36:35.892399 4721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 18:36:36.392374478 +0000 UTC m=+162.117680038 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 18:36:35 crc kubenswrapper[4721]: I0128 18:36:35.894035 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-b42n2\" (UID: \"6c3bbd44-e901-4e2a-b48c-f1e1ddb965dc\") " pod="openshift-image-registry/image-registry-697d97f7c8-b42n2" Jan 28 18:36:35 crc kubenswrapper[4721]: E0128 18:36:35.899140 4721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 18:36:36.399126175 +0000 UTC m=+162.124431735 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-b42n2" (UID: "6c3bbd44-e901-4e2a-b48c-f1e1ddb965dc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 18:36:36 crc kubenswrapper[4721]: I0128 18:36:36.006465 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 18:36:36 crc kubenswrapper[4721]: E0128 18:36:36.006872 4721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 18:36:36.506855681 +0000 UTC m=+162.232161241 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 18:36:36 crc kubenswrapper[4721]: I0128 18:36:36.107845 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-b42n2\" (UID: \"6c3bbd44-e901-4e2a-b48c-f1e1ddb965dc\") " pod="openshift-image-registry/image-registry-697d97f7c8-b42n2" Jan 28 18:36:36 crc kubenswrapper[4721]: E0128 18:36:36.108123 4721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 18:36:36.608110968 +0000 UTC m=+162.333416528 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-b42n2" (UID: "6c3bbd44-e901-4e2a-b48c-f1e1ddb965dc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 18:36:36 crc kubenswrapper[4721]: I0128 18:36:36.189604 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-xwt9s" podStartSLOduration=126.189585711 podStartE2EDuration="2m6.189585711s" podCreationTimestamp="2026-01-28 18:34:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:36:36.156558321 +0000 UTC m=+161.881863891" watchObservedRunningTime="2026-01-28 18:36:36.189585711 +0000 UTC m=+161.914891261" Jan 28 18:36:36 crc kubenswrapper[4721]: I0128 18:36:36.210288 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 18:36:36 crc kubenswrapper[4721]: E0128 18:36:36.210702 4721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 18:36:36.710686977 +0000 UTC m=+162.435992537 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 18:36:36 crc kubenswrapper[4721]: I0128 18:36:36.267981 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-bfsqt" podStartSLOduration=126.267960669 podStartE2EDuration="2m6.267960669s" podCreationTimestamp="2026-01-28 18:34:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:36:36.265519384 +0000 UTC m=+161.990824944" watchObservedRunningTime="2026-01-28 18:36:36.267960669 +0000 UTC m=+161.993266239" Jan 28 18:36:36 crc kubenswrapper[4721]: I0128 18:36:36.312000 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-b42n2\" (UID: \"6c3bbd44-e901-4e2a-b48c-f1e1ddb965dc\") " pod="openshift-image-registry/image-registry-697d97f7c8-b42n2" Jan 28 18:36:36 crc kubenswrapper[4721]: E0128 18:36:36.312851 4721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 18:36:36.812838272 +0000 UTC m=+162.538143822 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-b42n2" (UID: "6c3bbd44-e901-4e2a-b48c-f1e1ddb965dc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 18:36:36 crc kubenswrapper[4721]: I0128 18:36:36.322881 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-4nmsk" podStartSLOduration=126.322862339 podStartE2EDuration="2m6.322862339s" podCreationTimestamp="2026-01-28 18:34:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:36:36.302355571 +0000 UTC m=+162.027661141" watchObservedRunningTime="2026-01-28 18:36:36.322862339 +0000 UTC m=+162.048167899" Jan 28 18:36:36 crc kubenswrapper[4721]: I0128 18:36:36.324943 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-qqr56" podStartSLOduration=126.324936892 podStartE2EDuration="2m6.324936892s" podCreationTimestamp="2026-01-28 18:34:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:36:36.322283661 +0000 UTC m=+162.047589221" watchObservedRunningTime="2026-01-28 18:36:36.324936892 +0000 UTC m=+162.050242452" Jan 28 18:36:36 crc kubenswrapper[4721]: I0128 18:36:36.413290 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 18:36:36 crc kubenswrapper[4721]: E0128 18:36:36.413690 4721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 18:36:36.913670087 +0000 UTC m=+162.638975657 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 18:36:36 crc kubenswrapper[4721]: I0128 18:36:36.431744 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-jtc8t" Jan 28 18:36:36 crc kubenswrapper[4721]: I0128 18:36:36.516960 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-b42n2\" (UID: \"6c3bbd44-e901-4e2a-b48c-f1e1ddb965dc\") " pod="openshift-image-registry/image-registry-697d97f7c8-b42n2" Jan 28 18:36:36 crc kubenswrapper[4721]: E0128 18:36:36.517500 4721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 18:36:37.017484713 +0000 UTC m=+162.742790273 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-b42n2" (UID: "6c3bbd44-e901-4e2a-b48c-f1e1ddb965dc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 18:36:36 crc kubenswrapper[4721]: I0128 18:36:36.566363 4721 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2027-01-28 18:31:35 +0000 UTC, rotation deadline is 2026-11-11 06:25:17.082520022 +0000 UTC Jan 28 18:36:36 crc kubenswrapper[4721]: I0128 18:36:36.566409 4721 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 6875h48m40.516113842s for next certificate rotation Jan 28 18:36:36 crc kubenswrapper[4721]: I0128 18:36:36.618981 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 18:36:36 crc kubenswrapper[4721]: E0128 18:36:36.619392 4721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 18:36:37.119369301 +0000 UTC m=+162.844674871 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 18:36:36 crc kubenswrapper[4721]: I0128 18:36:36.728308 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-b42n2\" (UID: \"6c3bbd44-e901-4e2a-b48c-f1e1ddb965dc\") " pod="openshift-image-registry/image-registry-697d97f7c8-b42n2" Jan 28 18:36:36 crc kubenswrapper[4721]: E0128 18:36:36.728663 4721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 18:36:37.228648074 +0000 UTC m=+162.953953634 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-b42n2" (UID: "6c3bbd44-e901-4e2a-b48c-f1e1ddb965dc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 18:36:36 crc kubenswrapper[4721]: I0128 18:36:36.738303 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-4qhmh" event={"ID":"f8ec1447-58ab-4c73-bc49-0da5b940c6cf","Type":"ContainerStarted","Data":"cea1a3467e1cf5ca8cd565d69919f83ba404d3afbd08d6f74ee5b47b4025f41a"} Jan 28 18:36:36 crc kubenswrapper[4721]: I0128 18:36:36.738357 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-4qhmh" event={"ID":"f8ec1447-58ab-4c73-bc49-0da5b940c6cf","Type":"ContainerStarted","Data":"0cd10ca352a48b41219260b905d14ef232e906c3706d52623cad87dd6535c777"} Jan 28 18:36:36 crc kubenswrapper[4721]: I0128 18:36:36.738371 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-4qhmh" event={"ID":"f8ec1447-58ab-4c73-bc49-0da5b940c6cf","Type":"ContainerStarted","Data":"bae1cae4bf189e610104168960208262d802dc047a17cd29edad43b6e6db2ba7"} Jan 28 18:36:36 crc kubenswrapper[4721]: I0128 18:36:36.756414 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-dkq9z" event={"ID":"0db061c0-7df0-4ca1-a388-c69dd9344b9c","Type":"ContainerStarted","Data":"a133017d53511e3650caece6062ed418eb8ec64f60fc85593f4093c261fbdedf"} Jan 28 18:36:36 crc kubenswrapper[4721]: I0128 18:36:36.762105 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-gdtgs" event={"ID":"a75256c5-8c48-43f3-9faf-15d661a26980","Type":"ContainerStarted","Data":"77ff9b2c207de1f82d984a0b3629befc1500cdd57076151857d3135779160e05"} Jan 28 18:36:36 crc kubenswrapper[4721]: I0128 
18:36:36.762152 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-gdtgs" event={"ID":"a75256c5-8c48-43f3-9faf-15d661a26980","Type":"ContainerStarted","Data":"80bfc583a5e3df1260dde5ff97ab2f838800db3d66f1e477402372161b21135e"} Jan 28 18:36:36 crc kubenswrapper[4721]: I0128 18:36:36.762163 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-gdtgs" event={"ID":"a75256c5-8c48-43f3-9faf-15d661a26980","Type":"ContainerStarted","Data":"2a1bcbc28918878e1faa47393ee4ebd576e14b31acb28e001b6b26bd6318a57f"} Jan 28 18:36:36 crc kubenswrapper[4721]: I0128 18:36:36.764900 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-96x8n" event={"ID":"eaf91cbf-9c86-4235-89c5-3ba0c1ea7d6f","Type":"ContainerStarted","Data":"88262b49be1aa327cb9bb0bfe2187f1c219ef2007ff3ae4e4fb03f4cc3e2564f"} Jan 28 18:36:36 crc kubenswrapper[4721]: I0128 18:36:36.764932 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-96x8n" event={"ID":"eaf91cbf-9c86-4235-89c5-3ba0c1ea7d6f","Type":"ContainerStarted","Data":"57f18425bd0d171675274c8d62f86c12dab2c53df6ecc14a622e6937319b6981"} Jan 28 18:36:36 crc kubenswrapper[4721]: I0128 18:36:36.771705 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29493750-pb8r2" event={"ID":"db74784c-afbc-482a-8e2d-18c5bb898a9b","Type":"ContainerStarted","Data":"b7dc2c4ad7e11b8d1201093374a25307c75a3b135d8dfa9b07bbafd2a30f0fed"} Jan 28 18:36:36 crc kubenswrapper[4721]: I0128 18:36:36.776661 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-xwt9s" event={"ID":"995bfe33-c190-48b3-bb6c-9c6cb81d8359","Type":"ContainerStarted","Data":"89f23f519ea2505ec496269d5fe3ee9910ccaf571b1747a6f3389a2ced4b99e3"} Jan 28 18:36:36 crc kubenswrapper[4721]: I0128 18:36:36.778781 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"0717b28e62f0ea4f123eb69a96ef374efe71631eeadc81870d8b98f995adbcd2"} Jan 28 18:36:36 crc kubenswrapper[4721]: I0128 18:36:36.779913 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-2k27q" event={"ID":"d8a50493-8d3e-4391-ad78-0bd93ce8157e","Type":"ContainerStarted","Data":"c905735c72a03210858e92b159ecd3a18bfd7dcdb6d32f48a1aa9e7623b2fab3"} Jan 28 18:36:36 crc kubenswrapper[4721]: I0128 18:36:36.807047 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-9lrf6" Jan 28 18:36:36 crc kubenswrapper[4721]: I0128 18:36:36.824478 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-9lrf6" event={"ID":"36392dfb-bda3-46da-b8ba-ebc27ab22e00","Type":"ContainerStarted","Data":"9cf474db913fd9ded746581d1d83ae037b9e099b5138eb8205ceafc0249a5ace"} Jan 28 18:36:36 crc kubenswrapper[4721]: I0128 18:36:36.825417 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-9lrf6" 
event={"ID":"36392dfb-bda3-46da-b8ba-ebc27ab22e00","Type":"ContainerStarted","Data":"b8a5295e67639ab3e4b29d797dd13457704ad73d4571871bce9a9410e8d26a0b"} Jan 28 18:36:36 crc kubenswrapper[4721]: I0128 18:36:36.830794 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 18:36:36 crc kubenswrapper[4721]: E0128 18:36:36.831147 4721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 18:36:37.331107618 +0000 UTC m=+163.056413178 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 18:36:36 crc kubenswrapper[4721]: I0128 18:36:36.836450 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-b42n2\" (UID: \"6c3bbd44-e901-4e2a-b48c-f1e1ddb965dc\") " pod="openshift-image-registry/image-registry-697d97f7c8-b42n2" Jan 28 18:36:36 crc kubenswrapper[4721]: E0128 18:36:36.837231 4721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 18:36:37.337210725 +0000 UTC m=+163.062516305 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-b42n2" (UID: "6c3bbd44-e901-4e2a-b48c-f1e1ddb965dc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 18:36:36 crc kubenswrapper[4721]: I0128 18:36:36.843952 4721 patch_prober.go:28] interesting pod/router-default-5444994796-wqwcd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 28 18:36:36 crc kubenswrapper[4721]: [-]has-synced failed: reason withheld Jan 28 18:36:36 crc kubenswrapper[4721]: [+]process-running ok Jan 28 18:36:36 crc kubenswrapper[4721]: healthz check failed Jan 28 18:36:36 crc kubenswrapper[4721]: I0128 18:36:36.844011 4721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-wqwcd" podUID="9260fa7e-9c98-4777-9625-3ac5501c883c" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 28 18:36:36 crc kubenswrapper[4721]: I0128 18:36:36.845063 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-4qhmh" podStartSLOduration=126.845050505 podStartE2EDuration="2m6.845050505s" podCreationTimestamp="2026-01-28 18:34:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:36:36.808953961 +0000 UTC m=+162.534259521" watchObservedRunningTime="2026-01-28 18:36:36.845050505 +0000 UTC m=+162.570356065" Jan 28 18:36:36 crc kubenswrapper[4721]: I0128 18:36:36.853132 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-hw64n" event={"ID":"6e54e2fb-d821-4c19-a076-c47b738d1a48","Type":"ContainerStarted","Data":"1bbd042e7a6ded968c5fb97d13583f8f5f86b03f1b5fcc656a23fe2ce12d1bf2"} Jan 28 18:36:36 crc kubenswrapper[4721]: I0128 18:36:36.853188 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-hw64n" event={"ID":"6e54e2fb-d821-4c19-a076-c47b738d1a48","Type":"ContainerStarted","Data":"a1301e39ec25190e17422efaeef73c444c791f6aae292bbd6216a13c5d6af449"} Jan 28 18:36:36 crc kubenswrapper[4721]: I0128 18:36:36.879496 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-dkq9z" podStartSLOduration=126.879474928 podStartE2EDuration="2m6.879474928s" podCreationTimestamp="2026-01-28 18:34:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:36:36.878134857 +0000 UTC m=+162.603440417" watchObservedRunningTime="2026-01-28 18:36:36.879474928 +0000 UTC m=+162.604780488" Jan 28 18:36:36 crc kubenswrapper[4721]: I0128 18:36:36.892200 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" 
event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"7b4a77a43abe653fbe7f23ecac655994e659a6c34f5118f542432b9df53ebced"} Jan 28 18:36:36 crc kubenswrapper[4721]: I0128 18:36:36.892963 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"7d1622cba4f45ca7103020355b79d0281550e1b05ca4c3307816e63c299c04c7"} Jan 28 18:36:36 crc kubenswrapper[4721]: I0128 18:36:36.893263 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 18:36:36 crc kubenswrapper[4721]: I0128 18:36:36.910680 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-86g2n" event={"ID":"519e974b-b132-4b21-a47d-759e40bdbc72","Type":"ContainerStarted","Data":"daabea0e2a848cc0d9d91a138dba148e02581d8f56abac3ff58ad4623aad2d5e"} Jan 28 18:36:36 crc kubenswrapper[4721]: I0128 18:36:36.913311 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-4nmsk" event={"ID":"70ba75a9-4e0e-4fb2-9986-030f8a02d39c","Type":"ContainerStarted","Data":"812a23cc0632f147c02b0bf4e3f96c7a3fde1e1f0b5a1698f1f04a4debe8c1dc"} Jan 28 18:36:36 crc kubenswrapper[4721]: I0128 18:36:36.914632 4721 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-4nmsk container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.39:8443/healthz\": dial tcp 10.217.0.39:8443: connect: connection refused" start-of-body= Jan 28 18:36:36 crc kubenswrapper[4721]: I0128 18:36:36.914677 4721 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-4nmsk" podUID="70ba75a9-4e0e-4fb2-9986-030f8a02d39c" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.39:8443/healthz\": dial tcp 10.217.0.39:8443: connect: connection refused" Jan 28 18:36:36 crc kubenswrapper[4721]: I0128 18:36:36.925850 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-s7d98" event={"ID":"77dd4c6e-7dd3-4378-be3f-74f0c43fb371","Type":"ContainerStarted","Data":"73c9ba628afd97972d1c888aaaea38d45624bd713314ff95e00bf20093ba76c7"} Jan 28 18:36:36 crc kubenswrapper[4721]: I0128 18:36:36.926220 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-s7d98" event={"ID":"77dd4c6e-7dd3-4378-be3f-74f0c43fb371","Type":"ContainerStarted","Data":"db8eaca217d9c53e7fc18570686c0cb41add36f1bba1ba22fcb5a8fd0e6ea54a"} Jan 28 18:36:36 crc kubenswrapper[4721]: I0128 18:36:36.942187 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 18:36:36 crc kubenswrapper[4721]: E0128 18:36:36.944082 4721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-28 18:36:37.444061805 +0000 UTC m=+163.169367365 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 18:36:36 crc kubenswrapper[4721]: I0128 18:36:36.946388 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-7b9dz" event={"ID":"4f8f829a-0e7b-4ad6-9dc4-ce845d2e9d26","Type":"ContainerStarted","Data":"45de4a8d41fe37a0f7f7a9079d2ae5987dc994d19d2a434d5fd85e528d6ebaaa"} Jan 28 18:36:36 crc kubenswrapper[4721]: I0128 18:36:36.951894 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-l59vq" event={"ID":"a58cf121-cb7e-4eb7-9634-b72173bfa945","Type":"ContainerStarted","Data":"18c19d820d0858d9c56759118a67b591a8e84b58f948c8970dd3127080d13c06"} Jan 28 18:36:36 crc kubenswrapper[4721]: I0128 18:36:36.952255 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-l59vq" Jan 28 18:36:36 crc kubenswrapper[4721]: I0128 18:36:36.971446 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"15ef29f35e0075e694c44a390e7d92798470b179d0db4febd0f86cad488d9934"} Jan 28 18:36:36 crc kubenswrapper[4721]: I0128 18:36:36.971489 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"713af9285862184775841985c9a4a47fc5e0801e211db8013a677eb0e72a879c"} Jan 28 18:36:36 crc kubenswrapper[4721]: I0128 18:36:36.977638 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-9lrf6" podStartSLOduration=126.9775917 podStartE2EDuration="2m6.9775917s" podCreationTimestamp="2026-01-28 18:34:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:36:36.967031157 +0000 UTC m=+162.692336717" watchObservedRunningTime="2026-01-28 18:36:36.9775917 +0000 UTC m=+162.702897260" Jan 28 18:36:36 crc kubenswrapper[4721]: I0128 18:36:36.981716 4721 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-qp2vg container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.35:8080/healthz\": dial tcp 10.217.0.35:8080: connect: connection refused" start-of-body= Jan 28 18:36:36 crc kubenswrapper[4721]: I0128 18:36:36.981775 4721 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-qp2vg" podUID="12a4be20-2607-4502-b20d-b579c9987b57" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.35:8080/healthz\": dial tcp 10.217.0.35:8080: connect: connection refused" Jan 28 18:36:36 crc kubenswrapper[4721]: I0128 18:36:36.992828 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-hp7z2" Jan 28 18:36:37 crc kubenswrapper[4721]: I0128 18:36:37.041894 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-96x8n" podStartSLOduration=127.041876417 podStartE2EDuration="2m7.041876417s" podCreationTimestamp="2026-01-28 18:34:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:36:37.035223883 +0000 UTC m=+162.760529453" watchObservedRunningTime="2026-01-28 18:36:37.041876417 +0000 UTC m=+162.767181977" Jan 28 18:36:37 crc kubenswrapper[4721]: I0128 18:36:37.044898 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-b42n2\" (UID: \"6c3bbd44-e901-4e2a-b48c-f1e1ddb965dc\") " pod="openshift-image-registry/image-registry-697d97f7c8-b42n2" Jan 28 18:36:37 crc kubenswrapper[4721]: E0128 18:36:37.047057 4721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 18:36:37.547040605 +0000 UTC m=+163.272346165 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-b42n2" (UID: "6c3bbd44-e901-4e2a-b48c-f1e1ddb965dc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 18:36:37 crc kubenswrapper[4721]: I0128 18:36:37.090052 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-gdtgs" podStartSLOduration=127.090037901 podStartE2EDuration="2m7.090037901s" podCreationTimestamp="2026-01-28 18:34:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:36:37.087012718 +0000 UTC m=+162.812318278" watchObservedRunningTime="2026-01-28 18:36:37.090037901 +0000 UTC m=+162.815343461" Jan 28 18:36:37 crc kubenswrapper[4721]: I0128 18:36:37.119233 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-2k27q" podStartSLOduration=127.119217913 podStartE2EDuration="2m7.119217913s" podCreationTimestamp="2026-01-28 18:34:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:36:37.114503649 +0000 UTC m=+162.839809209" watchObservedRunningTime="2026-01-28 18:36:37.119217913 +0000 UTC m=+162.844523473" Jan 28 18:36:37 crc kubenswrapper[4721]: I0128 18:36:37.146151 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 18:36:37 crc 
kubenswrapper[4721]: E0128 18:36:37.146331 4721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 18:36:37.646304582 +0000 UTC m=+163.371610142 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 18:36:37 crc kubenswrapper[4721]: I0128 18:36:37.146440 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-b42n2\" (UID: \"6c3bbd44-e901-4e2a-b48c-f1e1ddb965dc\") " pod="openshift-image-registry/image-registry-697d97f7c8-b42n2" Jan 28 18:36:37 crc kubenswrapper[4721]: E0128 18:36:37.148096 4721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 18:36:37.648074536 +0000 UTC m=+163.373380096 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-b42n2" (UID: "6c3bbd44-e901-4e2a-b48c-f1e1ddb965dc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 18:36:37 crc kubenswrapper[4721]: I0128 18:36:37.191567 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-86g2n" podStartSLOduration=127.191544946 podStartE2EDuration="2m7.191544946s" podCreationTimestamp="2026-01-28 18:34:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:36:37.176357012 +0000 UTC m=+162.901662592" watchObservedRunningTime="2026-01-28 18:36:37.191544946 +0000 UTC m=+162.916850506" Jan 28 18:36:37 crc kubenswrapper[4721]: I0128 18:36:37.191819 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-s7d98" podStartSLOduration=127.191810724 podStartE2EDuration="2m7.191810724s" podCreationTimestamp="2026-01-28 18:34:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:36:37.189986678 +0000 UTC m=+162.915292238" watchObservedRunningTime="2026-01-28 18:36:37.191810724 +0000 UTC m=+162.917116284" Jan 28 18:36:37 crc kubenswrapper[4721]: I0128 18:36:37.249760 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 18:36:37 crc kubenswrapper[4721]: E0128 18:36:37.250378 4721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 18:36:37.750346425 +0000 UTC m=+163.475651995 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 18:36:37 crc kubenswrapper[4721]: I0128 18:36:37.262775 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-l59vq" podStartSLOduration=9.262755885 podStartE2EDuration="9.262755885s" podCreationTimestamp="2026-01-28 18:36:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:36:37.258935188 +0000 UTC m=+162.984240748" watchObservedRunningTime="2026-01-28 18:36:37.262755885 +0000 UTC m=+162.988061445" Jan 28 18:36:37 crc kubenswrapper[4721]: I0128 18:36:37.282836 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-7b9dz" podStartSLOduration=127.282819869 podStartE2EDuration="2m7.282819869s" podCreationTimestamp="2026-01-28 18:34:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:36:37.282368925 +0000 UTC m=+163.007674485" watchObservedRunningTime="2026-01-28 18:36:37.282819869 +0000 UTC m=+163.008125429" Jan 28 18:36:37 crc kubenswrapper[4721]: I0128 18:36:37.337567 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-hw64n" podStartSLOduration=127.337548363 podStartE2EDuration="2m7.337548363s" podCreationTimestamp="2026-01-28 18:34:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:36:37.316544681 +0000 UTC m=+163.041850251" watchObservedRunningTime="2026-01-28 18:36:37.337548363 +0000 UTC m=+163.062853923" Jan 28 18:36:37 crc kubenswrapper[4721]: I0128 18:36:37.351904 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-b42n2\" (UID: \"6c3bbd44-e901-4e2a-b48c-f1e1ddb965dc\") " pod="openshift-image-registry/image-registry-697d97f7c8-b42n2" Jan 28 18:36:37 crc kubenswrapper[4721]: E0128 18:36:37.352294 4721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-01-28 18:36:37.852279924 +0000 UTC m=+163.577585484 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-b42n2" (UID: "6c3bbd44-e901-4e2a-b48c-f1e1ddb965dc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 28 18:36:37 crc kubenswrapper[4721]: I0128 18:36:37.453386 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 28 18:36:37 crc kubenswrapper[4721]: E0128 18:36:37.453722 4721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 18:36:37.953699257 +0000 UTC m=+163.679004827 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 28 18:36:37 crc kubenswrapper[4721]: I0128 18:36:37.454040 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-b42n2\" (UID: \"6c3bbd44-e901-4e2a-b48c-f1e1ddb965dc\") " pod="openshift-image-registry/image-registry-697d97f7c8-b42n2"
Jan 28 18:36:37 crc kubenswrapper[4721]: E0128 18:36:37.454467 4721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 18:36:37.95445626 +0000 UTC m=+163.679761870 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-b42n2" (UID: "6c3bbd44-e901-4e2a-b48c-f1e1ddb965dc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
[identical retry pairs for this volume, "operationExecutor.UnmountVolume started" for pod 8f668bae-612b-4b75-9490-919e737c6a3b followed by "operationExecutor.MountVolume started" for pod image-registry-697d97f7c8-b42n2, each failing exactly as above, recurred at roughly 100 ms intervals from 18:36:37.556 through 18:36:39.698, interleaved with the events below]
Jan 28 18:36:37 crc kubenswrapper[4721]: I0128 18:36:37.811009 4721 patch_prober.go:28] interesting pod/router-default-5444994796-wqwcd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 28 18:36:37 crc kubenswrapper[4721]: [-]has-synced failed: reason withheld
Jan 28 18:36:37 crc kubenswrapper[4721]: [+]process-running ok
Jan 28 18:36:37 crc kubenswrapper[4721]: healthz check failed
Jan 28 18:36:37 crc kubenswrapper[4721]: I0128 18:36:37.811078 4721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-wqwcd" podUID="9260fa7e-9c98-4777-9625-3ac5501c883c" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 28 18:36:37 crc kubenswrapper[4721]: I0128 18:36:37.973041 4721 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-blkkj container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.33:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 28 18:36:37 crc kubenswrapper[4721]: I0128 18:36:37.973163 4721 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-blkkj" podUID="5f55d8f7-6300-4d3c-8cdf-d4e2d106fa5f" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.33:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Jan 28 18:36:37 crc kubenswrapper[4721]: I0128 18:36:37.980594 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-xtdkt" event={"ID":"a776531f-ebd1-491e-b6d7-378a11aad9d8","Type":"ContainerStarted","Data":"97dadc7467a77ae8004224d004aa673cd95605f24ac480f5807f52090d9f1a04"}
Jan 28 18:36:37 crc kubenswrapper[4721]: I0128 18:36:37.983475 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-l59vq" event={"ID":"a58cf121-cb7e-4eb7-9634-b72173bfa945","Type":"ContainerStarted","Data":"b69031aa73cf7926669a859e9606d062a8e08a237d902fe33e3284444d695274"}
Jan 28 18:36:37 crc kubenswrapper[4721]: I0128 18:36:37.985458 4721 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-qp2vg container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.35:8080/healthz\": dial tcp 10.217.0.35:8080: connect: connection refused" start-of-body=
Jan 28 18:36:37 crc kubenswrapper[4721]: I0128 18:36:37.985512 4721 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-qp2vg" podUID="12a4be20-2607-4502-b20d-b579c9987b57" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.35:8080/healthz\": dial tcp 10.217.0.35:8080: connect: connection refused"
Jan 28 18:36:37 crc kubenswrapper[4721]: I0128 18:36:37.992464 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-4nmsk"
Jan 28 18:36:37 crc kubenswrapper[4721]: I0128 18:36:37.995572 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-bfsqt"
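The loop condensed above is the kubelet's volume manager looking up a CSI driver that has not yet announced itself: both attacher.MountDevice and Unmounter.TearDownAt need a CSI client, and the lookup fails until kubevirt.io.hostpath-provisioner registers over the kubelet's plugin-registration socket (the csi-hostpathplugin-xtdkt ContainerStarted events show the driver pod only just coming up). A minimal diagnostic sketch, assuming direct node access and the default registration directory /var/lib/kubelet/plugins_registry, where the node-driver-registrar sidecar conventionally drops a <driver-name>-reg.sock socket; the exact socket name is an assumption here:

```go
package main

import (
	"fmt"
	"os"
	"strings"
)

// Lists the kubelet plugin-registration sockets to show which CSI drivers
// have announced themselves. Assumes the default registration directory
// used by node-driver-registrar on this node.
func main() {
	const registry = "/var/lib/kubelet/plugins_registry"
	entries, err := os.ReadDir(registry)
	if err != nil {
		fmt.Fprintf(os.Stderr, "cannot read %s: %v\n", registry, err)
		os.Exit(1)
	}
	found := false
	for _, e := range entries {
		name := e.Name()
		fmt.Println(name) // e.g. "kubevirt.io.hostpath-provisioner-reg.sock" once registered
		if strings.HasPrefix(name, "kubevirt.io.hostpath-provisioner") {
			found = true
		}
	}
	if !found {
		fmt.Println("kubevirt.io.hostpath-provisioner not registered yet; mount retries will keep failing")
	}
}
```

Once the registration socket appears, the kubelet re-runs the gated operations on the next retry window, which is consistent with the mounts further down starting to succeed as the hostpath plugin containers come up.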
Jan 28 18:36:38 crc kubenswrapper[4721]: I0128 18:36:38.783993 4721 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-74cdf"
Jan 28 18:36:38 crc kubenswrapper[4721]: I0128 18:36:38.796663 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-74cdf"
Jan 28 18:36:38 crc kubenswrapper[4721]: I0128 18:36:38.808188 4721 patch_prober.go:28] interesting pod/router-default-5444994796-wqwcd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 28 18:36:38 crc kubenswrapper[4721]: [-]has-synced failed: reason withheld
Jan 28 18:36:38 crc kubenswrapper[4721]: [+]process-running ok
Jan 28 18:36:38 crc kubenswrapper[4721]: healthz check failed
Jan 28 18:36:38 crc kubenswrapper[4721]: I0128 18:36:38.808245 4721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-wqwcd" podUID="9260fa7e-9c98-4777-9625-3ac5501c883c" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 28 18:36:38 crc kubenswrapper[4721]: I0128 18:36:38.846567 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-6k9rr"]
Jan 28 18:36:38 crc kubenswrapper[4721]: I0128 18:36:38.847899 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-6k9rr"
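The probe records in this stretch show three distinct failure modes of the same mechanism: the router's aggregated healthz returns HTTP 500 while its sub-checks ([-]backend-http, [-]has-synced) still fail, the packageserver probe times out awaiting headers, and the marketplace-operator probe gets connection refused. A rough sketch in the spirit of the kubelet's HTTP prober (not its actual code; the URL and timeout are placeholders) showing how all three surface from one client call:

```go
package main

import (
	"fmt"
	"net/http"
	"time"
)

// probe performs a single HTTP health check: any transport error within
// the timeout, or any status outside 2xx/3xx, counts as a failure.
func probe(url string, timeout time.Duration) error {
	client := &http.Client{Timeout: timeout}
	resp, err := client.Get(url)
	if err != nil {
		// Covers both "connect: connection refused" and
		// "Client.Timeout exceeded while awaiting headers".
		return err
	}
	defer resp.Body.Close()
	if resp.StatusCode < 200 || resp.StatusCode >= 400 {
		// The router's healthz answers 500 while its aggregated
		// sub-checks are failing, producing this variant.
		return fmt.Errorf("HTTP probe failed with statuscode: %d", resp.StatusCode)
	}
	return nil
}

func main() {
	if err := probe("http://127.0.0.1:8080/healthz", 1*time.Second); err != nil {
		fmt.Println("Probe failed:", err)
	}
}
```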
Jan 28 18:36:38 crc kubenswrapper[4721]: I0128 18:36:38.852733 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g"
Jan 28 18:36:38 crc kubenswrapper[4721]: I0128 18:36:38.877704 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-6k9rr"]
Jan 28 18:36:38 crc kubenswrapper[4721]: I0128 18:36:38.979478 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rxjs9\" (UniqueName: \"kubernetes.io/projected/e1764268-02a2-46af-a94d-b9f32dabcab8-kube-api-access-rxjs9\") pod \"certified-operators-6k9rr\" (UID: \"e1764268-02a2-46af-a94d-b9f32dabcab8\") " pod="openshift-marketplace/certified-operators-6k9rr"
Jan 28 18:36:38 crc kubenswrapper[4721]: I0128 18:36:38.979853 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e1764268-02a2-46af-a94d-b9f32dabcab8-catalog-content\") pod \"certified-operators-6k9rr\" (UID: \"e1764268-02a2-46af-a94d-b9f32dabcab8\") " pod="openshift-marketplace/certified-operators-6k9rr"
Jan 28 18:36:38 crc kubenswrapper[4721]: I0128 18:36:38.980000 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e1764268-02a2-46af-a94d-b9f32dabcab8-utilities\") pod \"certified-operators-6k9rr\" (UID: \"e1764268-02a2-46af-a94d-b9f32dabcab8\") " pod="openshift-marketplace/certified-operators-6k9rr"
Jan 28 18:36:39 crc kubenswrapper[4721]: I0128 18:36:39.057457 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-f9khl"]
Jan 28 18:36:39 crc kubenswrapper[4721]: I0128 18:36:39.058462 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-f9khl"
Jan 28 18:36:39 crc kubenswrapper[4721]: I0128 18:36:39.061594 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl"
Jan 28 18:36:39 crc kubenswrapper[4721]: I0128 18:36:39.061842 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-xtdkt" event={"ID":"a776531f-ebd1-491e-b6d7-378a11aad9d8","Type":"ContainerStarted","Data":"9aac1b03c51ec34e3237375adc128fd74149fcd83cc84ada7a67d66704a8b2a9"}
Jan 28 18:36:39 crc kubenswrapper[4721]: I0128 18:36:39.079246 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-f9khl"]
Jan 28 18:36:39 crc kubenswrapper[4721]: I0128 18:36:39.080895 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e1764268-02a2-46af-a94d-b9f32dabcab8-catalog-content\") pod \"certified-operators-6k9rr\" (UID: \"e1764268-02a2-46af-a94d-b9f32dabcab8\") " pod="openshift-marketplace/certified-operators-6k9rr"
Jan 28 18:36:39 crc kubenswrapper[4721]: I0128 18:36:39.080936 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e1764268-02a2-46af-a94d-b9f32dabcab8-utilities\") pod \"certified-operators-6k9rr\" (UID: \"e1764268-02a2-46af-a94d-b9f32dabcab8\") " pod="openshift-marketplace/certified-operators-6k9rr"
Jan 28 18:36:39 crc kubenswrapper[4721]: I0128 18:36:39.080986 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rxjs9\" (UniqueName: \"kubernetes.io/projected/e1764268-02a2-46af-a94d-b9f32dabcab8-kube-api-access-rxjs9\") pod \"certified-operators-6k9rr\" (UID: \"e1764268-02a2-46af-a94d-b9f32dabcab8\") " pod="openshift-marketplace/certified-operators-6k9rr"
Jan 28 18:36:39 crc kubenswrapper[4721]: I0128 18:36:39.081553 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e1764268-02a2-46af-a94d-b9f32dabcab8-catalog-content\") pod \"certified-operators-6k9rr\" (UID: \"e1764268-02a2-46af-a94d-b9f32dabcab8\") " pod="openshift-marketplace/certified-operators-6k9rr"
Jan 28 18:36:39 crc kubenswrapper[4721]: I0128 18:36:39.082387 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e1764268-02a2-46af-a94d-b9f32dabcab8-utilities\") pod \"certified-operators-6k9rr\" (UID: \"e1764268-02a2-46af-a94d-b9f32dabcab8\") " pod="openshift-marketplace/certified-operators-6k9rr"
Jan 28 18:36:39 crc kubenswrapper[4721]: I0128 18:36:39.119444 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rxjs9\" (UniqueName: \"kubernetes.io/projected/e1764268-02a2-46af-a94d-b9f32dabcab8-kube-api-access-rxjs9\") pod \"certified-operators-6k9rr\" (UID: \"e1764268-02a2-46af-a94d-b9f32dabcab8\") " pod="openshift-marketplace/certified-operators-6k9rr"
Jan 28 18:36:39 crc kubenswrapper[4721]: I0128 18:36:39.182640 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/384e21cc-b8a7-4a62-b817-d985bde07d66-utilities\") pod \"community-operators-f9khl\" (UID: \"384e21cc-b8a7-4a62-b817-d985bde07d66\") " pod="openshift-marketplace/community-operators-f9khl"
Jan 28 18:36:39 crc kubenswrapper[4721]: I0128 18:36:39.182671 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/384e21cc-b8a7-4a62-b817-d985bde07d66-catalog-content\") pod \"community-operators-f9khl\" (UID: \"384e21cc-b8a7-4a62-b817-d985bde07d66\") " pod="openshift-marketplace/community-operators-f9khl"
Jan 28 18:36:39 crc kubenswrapper[4721]: I0128 18:36:39.182766 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fnzk5\" (UniqueName: \"kubernetes.io/projected/384e21cc-b8a7-4a62-b817-d985bde07d66-kube-api-access-fnzk5\") pod \"community-operators-f9khl\" (UID: \"384e21cc-b8a7-4a62-b817-d985bde07d66\") " pod="openshift-marketplace/community-operators-f9khl"
Jan 28 18:36:39 crc kubenswrapper[4721]: I0128 18:36:39.233543 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-ktm7m"]
Jan 28 18:36:39 crc kubenswrapper[4721]: I0128 18:36:39.234464 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-ktm7m"
Jan 28 18:36:39 crc kubenswrapper[4721]: I0128 18:36:39.245757 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-6k9rr"
Jan 28 18:36:39 crc kubenswrapper[4721]: I0128 18:36:39.249109 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-ktm7m"]
Jan 28 18:36:39 crc kubenswrapper[4721]: I0128 18:36:39.284344 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fnzk5\" (UniqueName: \"kubernetes.io/projected/384e21cc-b8a7-4a62-b817-d985bde07d66-kube-api-access-fnzk5\") pod \"community-operators-f9khl\" (UID: \"384e21cc-b8a7-4a62-b817-d985bde07d66\") " pod="openshift-marketplace/community-operators-f9khl"
Jan 28 18:36:39 crc kubenswrapper[4721]: I0128 18:36:39.284426 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/384e21cc-b8a7-4a62-b817-d985bde07d66-utilities\") pod \"community-operators-f9khl\" (UID: \"384e21cc-b8a7-4a62-b817-d985bde07d66\") " pod="openshift-marketplace/community-operators-f9khl"
Jan 28 18:36:39 crc kubenswrapper[4721]: I0128 18:36:39.284448 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/384e21cc-b8a7-4a62-b817-d985bde07d66-catalog-content\") pod \"community-operators-f9khl\" (UID: \"384e21cc-b8a7-4a62-b817-d985bde07d66\") " pod="openshift-marketplace/community-operators-f9khl"
Jan 28 18:36:39 crc kubenswrapper[4721]: I0128 18:36:39.285266 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/384e21cc-b8a7-4a62-b817-d985bde07d66-catalog-content\") pod \"community-operators-f9khl\" (UID: \"384e21cc-b8a7-4a62-b817-d985bde07d66\") " pod="openshift-marketplace/community-operators-f9khl"
Jan 28 18:36:39 crc kubenswrapper[4721]: I0128 18:36:39.285762 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/384e21cc-b8a7-4a62-b817-d985bde07d66-utilities\") pod \"community-operators-f9khl\" (UID: \"384e21cc-b8a7-4a62-b817-d985bde07d66\") " pod="openshift-marketplace/community-operators-f9khl"
Jan 28 18:36:39 crc kubenswrapper[4721]: I0128 18:36:39.303589 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fnzk5\" (UniqueName: \"kubernetes.io/projected/384e21cc-b8a7-4a62-b817-d985bde07d66-kube-api-access-fnzk5\") pod \"community-operators-f9khl\" (UID: \"384e21cc-b8a7-4a62-b817-d985bde07d66\") " pod="openshift-marketplace/community-operators-f9khl"
Jan 28 18:36:39 crc kubenswrapper[4721]: I0128 18:36:39.385874 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dw86b\" (UniqueName: \"kubernetes.io/projected/d093e4ed-b49f-4abb-9cab-67d8072aea98-kube-api-access-dw86b\") pod \"certified-operators-ktm7m\" (UID: \"d093e4ed-b49f-4abb-9cab-67d8072aea98\") " pod="openshift-marketplace/certified-operators-ktm7m"
Jan 28 18:36:39 crc kubenswrapper[4721]: I0128 18:36:39.385907 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d093e4ed-b49f-4abb-9cab-67d8072aea98-catalog-content\") pod \"certified-operators-ktm7m\" (UID: \"d093e4ed-b49f-4abb-9cab-67d8072aea98\") " pod="openshift-marketplace/certified-operators-ktm7m"
Jan 28 18:36:39 crc kubenswrapper[4721]: I0128 18:36:39.385978 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d093e4ed-b49f-4abb-9cab-67d8072aea98-utilities\") pod \"certified-operators-ktm7m\" (UID: \"d093e4ed-b49f-4abb-9cab-67d8072aea98\") " pod="openshift-marketplace/certified-operators-ktm7m"
Jan 28 18:36:39 crc kubenswrapper[4721]: I0128 18:36:39.388251 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-f9khl"
Jan 28 18:36:39 crc kubenswrapper[4721]: I0128 18:36:39.434014 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-7nqgw"]
Jan 28 18:36:39 crc kubenswrapper[4721]: I0128 18:36:39.435183 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-7nqgw"
Jan 28 18:36:39 crc kubenswrapper[4721]: I0128 18:36:39.489086 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d093e4ed-b49f-4abb-9cab-67d8072aea98-catalog-content\") pod \"certified-operators-ktm7m\" (UID: \"d093e4ed-b49f-4abb-9cab-67d8072aea98\") " pod="openshift-marketplace/certified-operators-ktm7m"
Jan 28 18:36:39 crc kubenswrapper[4721]: I0128 18:36:39.489207 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d093e4ed-b49f-4abb-9cab-67d8072aea98-utilities\") pod \"certified-operators-ktm7m\" (UID: \"d093e4ed-b49f-4abb-9cab-67d8072aea98\") " pod="openshift-marketplace/certified-operators-ktm7m"
Jan 28 18:36:39 crc kubenswrapper[4721]: I0128 18:36:39.489390 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dw86b\" (UniqueName: \"kubernetes.io/projected/d093e4ed-b49f-4abb-9cab-67d8072aea98-kube-api-access-dw86b\") pod \"certified-operators-ktm7m\" (UID: \"d093e4ed-b49f-4abb-9cab-67d8072aea98\") " pod="openshift-marketplace/certified-operators-ktm7m"
Jan 28 18:36:39 crc kubenswrapper[4721]: I0128 18:36:39.500104 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d093e4ed-b49f-4abb-9cab-67d8072aea98-utilities\") pod \"certified-operators-ktm7m\" (UID: \"d093e4ed-b49f-4abb-9cab-67d8072aea98\") " pod="openshift-marketplace/certified-operators-ktm7m"
Jan 28 18:36:39 crc kubenswrapper[4721]: I0128 18:36:39.500412 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d093e4ed-b49f-4abb-9cab-67d8072aea98-catalog-content\") pod \"certified-operators-ktm7m\" (UID: \"d093e4ed-b49f-4abb-9cab-67d8072aea98\") " pod="openshift-marketplace/certified-operators-ktm7m"
Jan 28 18:36:39 crc kubenswrapper[4721]: I0128 18:36:39.510205 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-7nqgw"]
Jan 28 18:36:39 crc kubenswrapper[4721]: I0128 18:36:39.519118 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dw86b\" (UniqueName: \"kubernetes.io/projected/d093e4ed-b49f-4abb-9cab-67d8072aea98-kube-api-access-dw86b\") pod \"certified-operators-ktm7m\" (UID: \"d093e4ed-b49f-4abb-9cab-67d8072aea98\") " pod="openshift-marketplace/certified-operators-ktm7m"
Jan 28 18:36:39 crc kubenswrapper[4721]: I0128 18:36:39.552460 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-ktm7m"
Jan 28 18:36:39 crc kubenswrapper[4721]: I0128 18:36:39.591712 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rmsbc\" (UniqueName: \"kubernetes.io/projected/791d827f-b809-4f3d-94d0-02a6722550e0-kube-api-access-rmsbc\") pod \"community-operators-7nqgw\" (UID: \"791d827f-b809-4f3d-94d0-02a6722550e0\") " pod="openshift-marketplace/community-operators-7nqgw"
Jan 28 18:36:39 crc kubenswrapper[4721]: I0128 18:36:39.591753 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/791d827f-b809-4f3d-94d0-02a6722550e0-utilities\") pod \"community-operators-7nqgw\" (UID: \"791d827f-b809-4f3d-94d0-02a6722550e0\") " pod="openshift-marketplace/community-operators-7nqgw"
Jan 28 18:36:39 crc kubenswrapper[4721]: I0128 18:36:39.592397 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/791d827f-b809-4f3d-94d0-02a6722550e0-catalog-content\") pod \"community-operators-7nqgw\" (UID: \"791d827f-b809-4f3d-94d0-02a6722550e0\") " pod="openshift-marketplace/community-operators-7nqgw"
Jan 28 18:36:39 crc kubenswrapper[4721]: I0128 18:36:39.698542 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rmsbc\" (UniqueName: \"kubernetes.io/projected/791d827f-b809-4f3d-94d0-02a6722550e0-kube-api-access-rmsbc\") pod \"community-operators-7nqgw\" (UID: \"791d827f-b809-4f3d-94d0-02a6722550e0\") " pod="openshift-marketplace/community-operators-7nqgw"
Jan 28 18:36:39 crc kubenswrapper[4721]: I0128 18:36:39.698592 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/791d827f-b809-4f3d-94d0-02a6722550e0-utilities\") pod \"community-operators-7nqgw\" (UID: \"791d827f-b809-4f3d-94d0-02a6722550e0\") " pod="openshift-marketplace/community-operators-7nqgw"
Jan 28 18:36:39 crc kubenswrapper[4721]: I0128 18:36:39.698668 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/791d827f-b809-4f3d-94d0-02a6722550e0-catalog-content\") pod \"community-operators-7nqgw\" (UID: \"791d827f-b809-4f3d-94d0-02a6722550e0\") " pod="openshift-marketplace/community-operators-7nqgw"
Jan 28 18:36:39 crc kubenswrapper[4721]: I0128 18:36:39.699129 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName:
\"kubernetes.io/empty-dir/791d827f-b809-4f3d-94d0-02a6722550e0-catalog-content\") pod \"community-operators-7nqgw\" (UID: \"791d827f-b809-4f3d-94d0-02a6722550e0\") " pod="openshift-marketplace/community-operators-7nqgw" Jan 28 18:36:39 crc kubenswrapper[4721]: I0128 18:36:39.699609 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/791d827f-b809-4f3d-94d0-02a6722550e0-utilities\") pod \"community-operators-7nqgw\" (UID: \"791d827f-b809-4f3d-94d0-02a6722550e0\") " pod="openshift-marketplace/community-operators-7nqgw" Jan 28 18:36:39 crc kubenswrapper[4721]: E0128 18:36:39.699674 4721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 18:36:40.199661632 +0000 UTC m=+165.924967292 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-b42n2" (UID: "6c3bbd44-e901-4e2a-b48c-f1e1ddb965dc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 18:36:39 crc kubenswrapper[4721]: I0128 18:36:39.753997 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rmsbc\" (UniqueName: \"kubernetes.io/projected/791d827f-b809-4f3d-94d0-02a6722550e0-kube-api-access-rmsbc\") pod \"community-operators-7nqgw\" (UID: \"791d827f-b809-4f3d-94d0-02a6722550e0\") " pod="openshift-marketplace/community-operators-7nqgw" Jan 28 18:36:39 crc kubenswrapper[4721]: I0128 18:36:39.764788 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-6k9rr"] Jan 28 18:36:39 crc kubenswrapper[4721]: I0128 18:36:39.776458 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-7nqgw" Jan 28 18:36:39 crc kubenswrapper[4721]: I0128 18:36:39.804031 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 18:36:39 crc kubenswrapper[4721]: E0128 18:36:39.804449 4721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 18:36:40.304432497 +0000 UTC m=+166.029738057 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 18:36:39 crc kubenswrapper[4721]: I0128 18:36:39.839374 4721 patch_prober.go:28] interesting pod/router-default-5444994796-wqwcd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 28 18:36:39 crc kubenswrapper[4721]: [-]has-synced failed: reason withheld Jan 28 18:36:39 crc kubenswrapper[4721]: [+]process-running ok Jan 28 18:36:39 crc kubenswrapper[4721]: healthz check failed Jan 28 18:36:39 crc kubenswrapper[4721]: I0128 18:36:39.839430 4721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-wqwcd" podUID="9260fa7e-9c98-4777-9625-3ac5501c883c" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 28 18:36:39 crc kubenswrapper[4721]: I0128 18:36:39.905632 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-b42n2\" (UID: \"6c3bbd44-e901-4e2a-b48c-f1e1ddb965dc\") " pod="openshift-image-registry/image-registry-697d97f7c8-b42n2" Jan 28 18:36:39 crc kubenswrapper[4721]: E0128 18:36:39.905994 4721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 18:36:40.405978204 +0000 UTC m=+166.131283764 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-b42n2" (UID: "6c3bbd44-e901-4e2a-b48c-f1e1ddb965dc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 18:36:39 crc kubenswrapper[4721]: I0128 18:36:39.937827 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-f9khl"] Jan 28 18:36:40 crc kubenswrapper[4721]: I0128 18:36:40.007996 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 18:36:40 crc kubenswrapper[4721]: E0128 18:36:40.008763 4721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 18:36:40.508745058 +0000 UTC m=+166.234050618 (durationBeforeRetry 500ms). 
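[Editor's note] Every nestedpendingoperations.go:348 line above is the volume manager refusing to retry a failed mount/unmount until a backoff deadline passes ("No retries permitted until ... durationBeforeRetry 500ms"). Below is a minimal sketch of such an exponential-backoff gate; it is illustrative only, not kubelet's actual implementation, and all names are invented:

```go
package main

import (
	"fmt"
	"time"
)

// backoffGate mimics the behavior in the log: an operation that keeps
// failing may only be retried after a deadline, and the wait escalates
// (up to a cap) on consecutive failures.
type backoffGate struct {
	lastErrorTime time.Time
	duration      time.Duration
}

const (
	initialBackoff = 500 * time.Millisecond // matches "durationBeforeRetry 500ms"
	maxBackoff     = 2 * time.Minute        // illustrative cap
)

// operationPermitted reports whether a new attempt may start now.
func (g *backoffGate) operationPermitted(now time.Time) bool {
	return now.After(g.lastErrorTime.Add(g.duration))
}

// recordFailure notes a failed attempt and widens the backoff window.
func (g *backoffGate) recordFailure(now time.Time) {
	g.lastErrorTime = now
	if g.duration == 0 {
		g.duration = initialBackoff
		return
	}
	if g.duration *= 2; g.duration > maxBackoff {
		g.duration = maxBackoff
	}
}

func main() {
	g := &backoffGate{}
	now := time.Now()
	for i := 0; i < 3; i++ {
		g.recordFailure(now) // pretend MountDevice failed again
		fmt.Printf("no retries permitted until %v\n", g.lastErrorTime.Add(g.duration))
		now = now.Add(g.duration)
	}
}
```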
Jan 28 18:36:40 crc kubenswrapper[4721]: I0128 18:36:40.058848 4721 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock"
Jan 28 18:36:40 crc kubenswrapper[4721]: I0128 18:36:40.099373 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-xtdkt" event={"ID":"a776531f-ebd1-491e-b6d7-378a11aad9d8","Type":"ContainerStarted","Data":"7047fde97e3bd00bc0b57f4ba9b69907d96e864dd437d04370a2a9ac3da82442"}
Jan 28 18:36:40 crc kubenswrapper[4721]: I0128 18:36:40.108081 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-f9khl" event={"ID":"384e21cc-b8a7-4a62-b817-d985bde07d66","Type":"ContainerStarted","Data":"a46eb6affac7cac086b159218e6230201fd728ca8f70ccc6fc00dad3fe8b7832"}
Jan 28 18:36:40 crc kubenswrapper[4721]: I0128 18:36:40.111282 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-b42n2\" (UID: \"6c3bbd44-e901-4e2a-b48c-f1e1ddb965dc\") " pod="openshift-image-registry/image-registry-697d97f7c8-b42n2"
Jan 28 18:36:40 crc kubenswrapper[4721]: E0128 18:36:40.111627 4721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 18:36:40.611614775 +0000 UTC m=+166.336920335 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-b42n2" (UID: "6c3bbd44-e901-4e2a-b48c-f1e1ddb965dc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
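[Editor's note] The plugin_watcher.go:194 line is the turning point: the hostpath CSI driver's registration socket has appeared under /var/lib/kubelet/plugins_registry, so the mount failures above are about to resolve. A rough sketch of socket discovery using fsnotify follows; this is an assumption about how a standalone tool could watch the directory, while kubelet's real watcher adds a desired-state cache, retries, and a gRPC GetInfo handshake with the plugin:

```go
package main

import (
	"log"
	"strings"

	"github.com/fsnotify/fsnotify"
)

func main() {
	const registryDir = "/var/lib/kubelet/plugins_registry"

	w, err := fsnotify.NewWatcher()
	if err != nil {
		log.Fatal(err)
	}
	defer w.Close()
	if err := w.Add(registryDir); err != nil {
		log.Fatal(err)
	}

	// Every *.sock created here is a plugin (CSI driver, device plugin)
	// announcing itself; the kubelet then dials the socket to learn the
	// driver name before registering it.
	for {
		select {
		case ev := <-w.Events:
			if ev.Op&fsnotify.Create != 0 && strings.HasSuffix(ev.Name, ".sock") {
				log.Printf("Adding socket path to desired state cache: %s", ev.Name)
			}
		case err := <-w.Errors:
			log.Printf("watch error: %v", err)
		}
	}
}
```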
Jan 28 18:36:40 crc kubenswrapper[4721]: I0128 18:36:40.118321 4721 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2026-01-28T18:36:40.058880632Z","Handler":null,"Name":""}
Jan 28 18:36:40 crc kubenswrapper[4721]: I0128 18:36:40.124521 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6k9rr" event={"ID":"e1764268-02a2-46af-a94d-b9f32dabcab8","Type":"ContainerStarted","Data":"d86539b904cf398da206fa5711a8e91c8bacb97f91c5f41384b2d81a9aa658ff"}
Jan 28 18:36:40 crc kubenswrapper[4721]: I0128 18:36:40.138258 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-ktm7m"]
Jan 28 18:36:40 crc kubenswrapper[4721]: I0128 18:36:40.140458 4721 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0
Jan 28 18:36:40 crc kubenswrapper[4721]: I0128 18:36:40.140499 4721 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock
Jan 28 18:36:40 crc kubenswrapper[4721]: I0128 18:36:40.224994 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 28 18:36:40 crc kubenswrapper[4721]: I0128 18:36:40.236578 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue ""
Jan 28 18:36:40 crc kubenswrapper[4721]: I0128 18:36:40.313094 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-7nqgw"]
Jan 28 18:36:40 crc kubenswrapper[4721]: I0128 18:36:40.326820 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-b42n2\" (UID: \"6c3bbd44-e901-4e2a-b48c-f1e1ddb965dc\") " pod="openshift-image-registry/image-registry-697d97f7c8-b42n2"
Jan 28 18:36:40 crc kubenswrapper[4721]: I0128 18:36:40.358766 4721 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
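[Editor's note] csi_attacher.go:380 skips the device-staging step because the newly registered driver does not advertise the STAGE_UNSTAGE_VOLUME node capability. A sketch of how a caller can probe that capability over the CSI node service, using the container-storage-interface Go bindings (the socket path comes from the registration lines above; whether it is reachable from where this runs is an assumption):

```go
package main

import (
	"context"
	"fmt"
	"log"

	csi "github.com/container-storage-interface/spec/lib/go/csi"
	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
)

func main() {
	conn, err := grpc.Dial("unix:///var/lib/kubelet/plugins/csi-hostpath/csi.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	node := csi.NewNodeClient(conn)
	resp, err := node.NodeGetCapabilities(context.Background(), &csi.NodeGetCapabilitiesRequest{})
	if err != nil {
		log.Fatal(err)
	}

	// If STAGE_UNSTAGE_VOLUME is absent, NodeStageVolume (MountDevice) is
	// skipped and only NodePublishVolume (SetUp) runs -- exactly what the
	// kubelet logs here.
	staged := false
	for _, c := range resp.GetCapabilities() {
		if c.GetRpc().GetType() == csi.NodeServiceCapability_RPC_STAGE_UNSTAGE_VOLUME {
			staged = true
		}
	}
	fmt.Println("STAGE_UNSTAGE_VOLUME supported:", staged)
}
```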
Jan 28 18:36:40 crc kubenswrapper[4721]: I0128 18:36:40.358838 4721 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-b42n2\" (UID: \"6c3bbd44-e901-4e2a-b48c-f1e1ddb965dc\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-b42n2"
Jan 28 18:36:40 crc kubenswrapper[4721]: I0128 18:36:40.409728 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-b42n2\" (UID: \"6c3bbd44-e901-4e2a-b48c-f1e1ddb965dc\") " pod="openshift-image-registry/image-registry-697d97f7c8-b42n2"
Jan 28 18:36:40 crc kubenswrapper[4721]: I0128 18:36:40.694626 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-b42n2"
Jan 28 18:36:40 crc kubenswrapper[4721]: I0128 18:36:40.753407 4721 patch_prober.go:28] interesting pod/downloads-7954f5f757-cmtm6 container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" start-of-body=
Jan 28 18:36:40 crc kubenswrapper[4721]: I0128 18:36:40.753461 4721 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-cmtm6" podUID="b30c15c2-ac57-4e56-a55b-5b9de02e097f" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused"
Jan 28 18:36:40 crc kubenswrapper[4721]: I0128 18:36:40.754441 4721 patch_prober.go:28] interesting pod/downloads-7954f5f757-cmtm6 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" start-of-body=
Jan 28 18:36:40 crc kubenswrapper[4721]: I0128 18:36:40.754532 4721 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-cmtm6" podUID="b30c15c2-ac57-4e56-a55b-5b9de02e097f" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused"
Jan 28 18:36:40 crc kubenswrapper[4721]: I0128 18:36:40.805039 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-wqwcd"
Jan 28 18:36:40 crc kubenswrapper[4721]: I0128 18:36:40.817923 4721 patch_prober.go:28] interesting pod/router-default-5444994796-wqwcd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 28 18:36:40 crc kubenswrapper[4721]: [-]has-synced failed: reason withheld
Jan 28 18:36:40 crc kubenswrapper[4721]: [+]process-running ok
Jan 28 18:36:40 crc kubenswrapper[4721]: healthz check failed
Jan 28 18:36:40 crc kubenswrapper[4721]: I0128 18:36:40.817990 4721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-wqwcd" podUID="9260fa7e-9c98-4777-9625-3ac5501c883c" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
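[Editor's note] The router's startup-probe body ("[-]backend-http failed ... [+]process-running ok / healthz check failed") is the standard aggregated-healthz format: each registered check contributes one [-]/[+] line, and any failure turns the endpoint into an HTTP 500, which is what the kubelet prober reports. A minimal handler in that style (illustrative; real components typically use a shared healthz package rather than hand-rolling this):

```go
package main

import (
	"fmt"
	"net/http"
)

// check is one named health check; failure reasons are withheld from the
// probe body, mirroring "reason withheld" in the log.
type check struct {
	name string
	fn   func() error
}

func healthz(checks []check) http.HandlerFunc {
	return func(w http.ResponseWriter, r *http.Request) {
		body, failed := "", false
		for _, c := range checks {
			if err := c.fn(); err != nil {
				failed = true
				body += fmt.Sprintf("[-]%s failed: reason withheld\n", c.name)
			} else {
				body += fmt.Sprintf("[+]%s ok\n", c.name)
			}
		}
		if failed {
			// The kubelet probe then logs "HTTP probe failed with statuscode: 500".
			w.WriteHeader(http.StatusInternalServerError)
			fmt.Fprint(w, body+"healthz check failed")
			return
		}
		fmt.Fprint(w, body+"ok")
	}
}

func main() {
	checks := []check{
		{"backend-http", func() error { return fmt.Errorf("backends not ready") }},
		{"has-synced", func() error { return fmt.Errorf("initial sync pending") }},
		{"process-running", func() error { return nil }},
	}
	http.ListenAndServe(":1936", healthz(checks)) // port is an arbitrary example
}
```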
podUID="9260fa7e-9c98-4777-9625-3ac5501c883c" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 28 18:36:40 crc kubenswrapper[4721]: I0128 18:36:40.830246 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-cql6x"] Jan 28 18:36:40 crc kubenswrapper[4721]: I0128 18:36:40.833752 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-cql6x" Jan 28 18:36:40 crc kubenswrapper[4721]: I0128 18:36:40.839464 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 28 18:36:40 crc kubenswrapper[4721]: I0128 18:36:40.842496 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-cql6x"] Jan 28 18:36:40 crc kubenswrapper[4721]: I0128 18:36:40.889324 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 28 18:36:40 crc kubenswrapper[4721]: I0128 18:36:40.891456 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 28 18:36:40 crc kubenswrapper[4721]: I0128 18:36:40.899067 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Jan 28 18:36:40 crc kubenswrapper[4721]: I0128 18:36:40.899276 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n" Jan 28 18:36:40 crc kubenswrapper[4721]: I0128 18:36:40.907059 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 28 18:36:40 crc kubenswrapper[4721]: I0128 18:36:40.966565 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-blkkj" Jan 28 18:36:41 crc kubenswrapper[4721]: I0128 18:36:41.036880 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4cebab97-f0a5-4073-837b-4d985864ad73-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"4cebab97-f0a5-4073-837b-4d985864ad73\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 28 18:36:41 crc kubenswrapper[4721]: I0128 18:36:41.036955 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4cebab97-f0a5-4073-837b-4d985864ad73-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"4cebab97-f0a5-4073-837b-4d985864ad73\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 28 18:36:41 crc kubenswrapper[4721]: I0128 18:36:41.036991 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/36456b90-3e11-4480-b235-5909103844ba-catalog-content\") pod \"redhat-marketplace-cql6x\" (UID: \"36456b90-3e11-4480-b235-5909103844ba\") " pod="openshift-marketplace/redhat-marketplace-cql6x" Jan 28 18:36:41 crc kubenswrapper[4721]: I0128 18:36:41.037032 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4n8s9\" (UniqueName: \"kubernetes.io/projected/36456b90-3e11-4480-b235-5909103844ba-kube-api-access-4n8s9\") 
pod \"redhat-marketplace-cql6x\" (UID: \"36456b90-3e11-4480-b235-5909103844ba\") " pod="openshift-marketplace/redhat-marketplace-cql6x" Jan 28 18:36:41 crc kubenswrapper[4721]: I0128 18:36:41.037084 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/36456b90-3e11-4480-b235-5909103844ba-utilities\") pod \"redhat-marketplace-cql6x\" (UID: \"36456b90-3e11-4480-b235-5909103844ba\") " pod="openshift-marketplace/redhat-marketplace-cql6x" Jan 28 18:36:41 crc kubenswrapper[4721]: I0128 18:36:41.053977 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-b42n2"] Jan 28 18:36:41 crc kubenswrapper[4721]: I0128 18:36:41.087905 4721 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-ct2hz" Jan 28 18:36:41 crc kubenswrapper[4721]: I0128 18:36:41.087942 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-ct2hz" Jan 28 18:36:41 crc kubenswrapper[4721]: I0128 18:36:41.092628 4721 patch_prober.go:28] interesting pod/console-f9d7485db-ct2hz container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.14:8443/health\": dial tcp 10.217.0.14:8443: connect: connection refused" start-of-body= Jan 28 18:36:41 crc kubenswrapper[4721]: I0128 18:36:41.092672 4721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-ct2hz" podUID="52b4f91f-7c7b-401a-82b0-8907f6880677" containerName="console" probeResult="failure" output="Get \"https://10.217.0.14:8443/health\": dial tcp 10.217.0.14:8443: connect: connection refused" Jan 28 18:36:41 crc kubenswrapper[4721]: I0128 18:36:41.138367 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4cebab97-f0a5-4073-837b-4d985864ad73-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"4cebab97-f0a5-4073-837b-4d985864ad73\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 28 18:36:41 crc kubenswrapper[4721]: I0128 18:36:41.138406 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4cebab97-f0a5-4073-837b-4d985864ad73-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"4cebab97-f0a5-4073-837b-4d985864ad73\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 28 18:36:41 crc kubenswrapper[4721]: I0128 18:36:41.138427 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/36456b90-3e11-4480-b235-5909103844ba-catalog-content\") pod \"redhat-marketplace-cql6x\" (UID: \"36456b90-3e11-4480-b235-5909103844ba\") " pod="openshift-marketplace/redhat-marketplace-cql6x" Jan 28 18:36:41 crc kubenswrapper[4721]: I0128 18:36:41.138454 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4n8s9\" (UniqueName: \"kubernetes.io/projected/36456b90-3e11-4480-b235-5909103844ba-kube-api-access-4n8s9\") pod \"redhat-marketplace-cql6x\" (UID: \"36456b90-3e11-4480-b235-5909103844ba\") " pod="openshift-marketplace/redhat-marketplace-cql6x" Jan 28 18:36:41 crc kubenswrapper[4721]: I0128 18:36:41.138496 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/36456b90-3e11-4480-b235-5909103844ba-utilities\") pod \"redhat-marketplace-cql6x\" (UID: \"36456b90-3e11-4480-b235-5909103844ba\") " pod="openshift-marketplace/redhat-marketplace-cql6x" Jan 28 18:36:41 crc kubenswrapper[4721]: I0128 18:36:41.138486 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4cebab97-f0a5-4073-837b-4d985864ad73-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"4cebab97-f0a5-4073-837b-4d985864ad73\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 28 18:36:41 crc kubenswrapper[4721]: I0128 18:36:41.139371 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/36456b90-3e11-4480-b235-5909103844ba-utilities\") pod \"redhat-marketplace-cql6x\" (UID: \"36456b90-3e11-4480-b235-5909103844ba\") " pod="openshift-marketplace/redhat-marketplace-cql6x" Jan 28 18:36:41 crc kubenswrapper[4721]: I0128 18:36:41.139495 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/36456b90-3e11-4480-b235-5909103844ba-catalog-content\") pod \"redhat-marketplace-cql6x\" (UID: \"36456b90-3e11-4480-b235-5909103844ba\") " pod="openshift-marketplace/redhat-marketplace-cql6x" Jan 28 18:36:41 crc kubenswrapper[4721]: I0128 18:36:41.157263 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-xtdkt" event={"ID":"a776531f-ebd1-491e-b6d7-378a11aad9d8","Type":"ContainerStarted","Data":"73ce4370369522a3f2e3158f2c066fa2b707ed9a2f13686495410f8800929670"} Jan 28 18:36:41 crc kubenswrapper[4721]: I0128 18:36:41.168382 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4n8s9\" (UniqueName: \"kubernetes.io/projected/36456b90-3e11-4480-b235-5909103844ba-kube-api-access-4n8s9\") pod \"redhat-marketplace-cql6x\" (UID: \"36456b90-3e11-4480-b235-5909103844ba\") " pod="openshift-marketplace/redhat-marketplace-cql6x" Jan 28 18:36:41 crc kubenswrapper[4721]: I0128 18:36:41.170567 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4cebab97-f0a5-4073-837b-4d985864ad73-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"4cebab97-f0a5-4073-837b-4d985864ad73\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 28 18:36:41 crc kubenswrapper[4721]: I0128 18:36:41.174361 4721 generic.go:334] "Generic (PLEG): container finished" podID="d093e4ed-b49f-4abb-9cab-67d8072aea98" containerID="7c29e5de8642a087902608baf1b1982f8df96f8583fc280f2e626f81ee441ca0" exitCode=0 Jan 28 18:36:41 crc kubenswrapper[4721]: I0128 18:36:41.175155 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ktm7m" event={"ID":"d093e4ed-b49f-4abb-9cab-67d8072aea98","Type":"ContainerDied","Data":"7c29e5de8642a087902608baf1b1982f8df96f8583fc280f2e626f81ee441ca0"} Jan 28 18:36:41 crc kubenswrapper[4721]: I0128 18:36:41.175229 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ktm7m" event={"ID":"d093e4ed-b49f-4abb-9cab-67d8072aea98","Type":"ContainerStarted","Data":"d15c110d736d7c554b8835f215f55a3d17e2a585c85959fcbd2be0da6f8ad4b0"} Jan 28 18:36:41 crc kubenswrapper[4721]: I0128 18:36:41.179710 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="hostpath-provisioner/csi-hostpathplugin-xtdkt" podStartSLOduration=13.179693924 podStartE2EDuration="13.179693924s" podCreationTimestamp="2026-01-28 18:36:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:36:41.175454584 +0000 UTC m=+166.900760164" watchObservedRunningTime="2026-01-28 18:36:41.179693924 +0000 UTC m=+166.904999484" Jan 28 18:36:41 crc kubenswrapper[4721]: I0128 18:36:41.185351 4721 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 28 18:36:41 crc kubenswrapper[4721]: I0128 18:36:41.189427 4721 generic.go:334] "Generic (PLEG): container finished" podID="791d827f-b809-4f3d-94d0-02a6722550e0" containerID="5b23939c5dc277d8860bb7edcae76a48217d083118f737a26f18f4edca437e10" exitCode=0 Jan 28 18:36:41 crc kubenswrapper[4721]: I0128 18:36:41.189515 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7nqgw" event={"ID":"791d827f-b809-4f3d-94d0-02a6722550e0","Type":"ContainerDied","Data":"5b23939c5dc277d8860bb7edcae76a48217d083118f737a26f18f4edca437e10"} Jan 28 18:36:41 crc kubenswrapper[4721]: I0128 18:36:41.189549 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7nqgw" event={"ID":"791d827f-b809-4f3d-94d0-02a6722550e0","Type":"ContainerStarted","Data":"cc8a6281477ea8b7bf7199271938a8a825203834209f3d73e4a4f3f62388eee6"} Jan 28 18:36:41 crc kubenswrapper[4721]: I0128 18:36:41.195792 4721 generic.go:334] "Generic (PLEG): container finished" podID="384e21cc-b8a7-4a62-b817-d985bde07d66" containerID="ef13e6465f8ab87fa5b7eea95f62cd51bb557f9f662b65b7376fd62e7d10fa5d" exitCode=0 Jan 28 18:36:41 crc kubenswrapper[4721]: I0128 18:36:41.195881 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-f9khl" event={"ID":"384e21cc-b8a7-4a62-b817-d985bde07d66","Type":"ContainerDied","Data":"ef13e6465f8ab87fa5b7eea95f62cd51bb557f9f662b65b7376fd62e7d10fa5d"} Jan 28 18:36:41 crc kubenswrapper[4721]: I0128 18:36:41.198272 4721 generic.go:334] "Generic (PLEG): container finished" podID="e1764268-02a2-46af-a94d-b9f32dabcab8" containerID="8fb48a3b018e2a8308eafd384f4c56af40ce9007e976e65123da3dccd3b29cb4" exitCode=0 Jan 28 18:36:41 crc kubenswrapper[4721]: I0128 18:36:41.198321 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6k9rr" event={"ID":"e1764268-02a2-46af-a94d-b9f32dabcab8","Type":"ContainerDied","Data":"8fb48a3b018e2a8308eafd384f4c56af40ce9007e976e65123da3dccd3b29cb4"} Jan 28 18:36:41 crc kubenswrapper[4721]: I0128 18:36:41.200824 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-b42n2" event={"ID":"6c3bbd44-e901-4e2a-b48c-f1e1ddb965dc","Type":"ContainerStarted","Data":"1e5191bf91b999db32044005770d0297159cb8d6ad09dc038d9377e841fc49d0"} Jan 28 18:36:41 crc kubenswrapper[4721]: I0128 18:36:41.224127 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 28 18:36:41 crc kubenswrapper[4721]: I0128 18:36:41.252222 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-vlsl5"] Jan 28 18:36:41 crc kubenswrapper[4721]: I0128 18:36:41.261927 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-vlsl5"] Jan 28 18:36:41 crc kubenswrapper[4721]: I0128 18:36:41.262499 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vlsl5" Jan 28 18:36:41 crc kubenswrapper[4721]: I0128 18:36:41.303003 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-qp2vg" Jan 28 18:36:41 crc kubenswrapper[4721]: I0128 18:36:41.447286 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cfea609e-20ae-449d-8952-ac4691aaec30-utilities\") pod \"redhat-marketplace-vlsl5\" (UID: \"cfea609e-20ae-449d-8952-ac4691aaec30\") " pod="openshift-marketplace/redhat-marketplace-vlsl5" Jan 28 18:36:41 crc kubenswrapper[4721]: I0128 18:36:41.447331 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cfea609e-20ae-449d-8952-ac4691aaec30-catalog-content\") pod \"redhat-marketplace-vlsl5\" (UID: \"cfea609e-20ae-449d-8952-ac4691aaec30\") " pod="openshift-marketplace/redhat-marketplace-vlsl5" Jan 28 18:36:41 crc kubenswrapper[4721]: I0128 18:36:41.447360 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lgpqd\" (UniqueName: \"kubernetes.io/projected/cfea609e-20ae-449d-8952-ac4691aaec30-kube-api-access-lgpqd\") pod \"redhat-marketplace-vlsl5\" (UID: \"cfea609e-20ae-449d-8952-ac4691aaec30\") " pod="openshift-marketplace/redhat-marketplace-vlsl5" Jan 28 18:36:41 crc kubenswrapper[4721]: I0128 18:36:41.469776 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-cql6x" Jan 28 18:36:41 crc kubenswrapper[4721]: I0128 18:36:41.539043 4721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes" Jan 28 18:36:41 crc kubenswrapper[4721]: I0128 18:36:41.548334 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cfea609e-20ae-449d-8952-ac4691aaec30-utilities\") pod \"redhat-marketplace-vlsl5\" (UID: \"cfea609e-20ae-449d-8952-ac4691aaec30\") " pod="openshift-marketplace/redhat-marketplace-vlsl5" Jan 28 18:36:41 crc kubenswrapper[4721]: I0128 18:36:41.548390 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cfea609e-20ae-449d-8952-ac4691aaec30-catalog-content\") pod \"redhat-marketplace-vlsl5\" (UID: \"cfea609e-20ae-449d-8952-ac4691aaec30\") " pod="openshift-marketplace/redhat-marketplace-vlsl5" Jan 28 18:36:41 crc kubenswrapper[4721]: I0128 18:36:41.548421 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lgpqd\" (UniqueName: \"kubernetes.io/projected/cfea609e-20ae-449d-8952-ac4691aaec30-kube-api-access-lgpqd\") pod \"redhat-marketplace-vlsl5\" (UID: \"cfea609e-20ae-449d-8952-ac4691aaec30\") " pod="openshift-marketplace/redhat-marketplace-vlsl5" Jan 28 18:36:41 crc kubenswrapper[4721]: I0128 18:36:41.549329 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cfea609e-20ae-449d-8952-ac4691aaec30-utilities\") pod \"redhat-marketplace-vlsl5\" (UID: \"cfea609e-20ae-449d-8952-ac4691aaec30\") " pod="openshift-marketplace/redhat-marketplace-vlsl5" Jan 28 18:36:41 crc kubenswrapper[4721]: I0128 18:36:41.549595 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cfea609e-20ae-449d-8952-ac4691aaec30-catalog-content\") pod \"redhat-marketplace-vlsl5\" (UID: \"cfea609e-20ae-449d-8952-ac4691aaec30\") " pod="openshift-marketplace/redhat-marketplace-vlsl5" Jan 28 18:36:41 crc kubenswrapper[4721]: I0128 18:36:41.571451 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lgpqd\" (UniqueName: \"kubernetes.io/projected/cfea609e-20ae-449d-8952-ac4691aaec30-kube-api-access-lgpqd\") pod \"redhat-marketplace-vlsl5\" (UID: \"cfea609e-20ae-449d-8952-ac4691aaec30\") " pod="openshift-marketplace/redhat-marketplace-vlsl5" Jan 28 18:36:41 crc kubenswrapper[4721]: I0128 18:36:41.621647 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vlsl5" Jan 28 18:36:41 crc kubenswrapper[4721]: I0128 18:36:41.622879 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 28 18:36:41 crc kubenswrapper[4721]: I0128 18:36:41.762017 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-cql6x"] Jan 28 18:36:41 crc kubenswrapper[4721]: W0128 18:36:41.803834 4721 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod36456b90_3e11_4480_b235_5909103844ba.slice/crio-d1be8c0b94e5d18eaf97f1ee63331444ce8027842fd6e03ff7d77403e38f464c WatchSource:0}: Error finding container d1be8c0b94e5d18eaf97f1ee63331444ce8027842fd6e03ff7d77403e38f464c: Status 404 returned error can't find the container with id d1be8c0b94e5d18eaf97f1ee63331444ce8027842fd6e03ff7d77403e38f464c Jan 28 18:36:41 crc kubenswrapper[4721]: I0128 18:36:41.810235 4721 patch_prober.go:28] interesting pod/router-default-5444994796-wqwcd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 28 18:36:41 crc kubenswrapper[4721]: [-]has-synced failed: reason withheld Jan 28 18:36:41 crc kubenswrapper[4721]: [+]process-running ok Jan 28 18:36:41 crc kubenswrapper[4721]: healthz check failed Jan 28 18:36:41 crc kubenswrapper[4721]: I0128 18:36:41.810287 4721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-wqwcd" podUID="9260fa7e-9c98-4777-9625-3ac5501c883c" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 28 18:36:42 crc kubenswrapper[4721]: I0128 18:36:42.007370 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-vlsl5"] Jan 28 18:36:42 crc kubenswrapper[4721]: W0128 18:36:42.082419 4721 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcfea609e_20ae_449d_8952_ac4691aaec30.slice/crio-a0afc2cf54480fc4811aac793213838b21acbe684163f4397ace0068f5baccb1 WatchSource:0}: Error finding container a0afc2cf54480fc4811aac793213838b21acbe684163f4397ace0068f5baccb1: Status 404 returned error can't find the container with id a0afc2cf54480fc4811aac793213838b21acbe684163f4397ace0068f5baccb1 Jan 28 18:36:42 crc kubenswrapper[4721]: I0128 18:36:42.228558 4721 generic.go:334] "Generic (PLEG): container finished" podID="db74784c-afbc-482a-8e2d-18c5bb898a9b" containerID="b7dc2c4ad7e11b8d1201093374a25307c75a3b135d8dfa9b07bbafd2a30f0fed" exitCode=0 Jan 28 18:36:42 crc kubenswrapper[4721]: I0128 18:36:42.228905 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29493750-pb8r2" event={"ID":"db74784c-afbc-482a-8e2d-18c5bb898a9b","Type":"ContainerDied","Data":"b7dc2c4ad7e11b8d1201093374a25307c75a3b135d8dfa9b07bbafd2a30f0fed"} Jan 28 18:36:42 crc kubenswrapper[4721]: I0128 18:36:42.233361 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vlsl5" event={"ID":"cfea609e-20ae-449d-8952-ac4691aaec30","Type":"ContainerStarted","Data":"a0afc2cf54480fc4811aac793213838b21acbe684163f4397ace0068f5baccb1"} Jan 28 18:36:42 crc kubenswrapper[4721]: I0128 18:36:42.240159 4721 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-marketplace/redhat-operators-d4q59"] Jan 28 18:36:42 crc kubenswrapper[4721]: I0128 18:36:42.241229 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-d4q59" Jan 28 18:36:42 crc kubenswrapper[4721]: I0128 18:36:42.243269 4721 generic.go:334] "Generic (PLEG): container finished" podID="36456b90-3e11-4480-b235-5909103844ba" containerID="8b1368d82d594e2e8c675381a8cc7164c78bb1519332270fa656997eeae34e93" exitCode=0 Jan 28 18:36:42 crc kubenswrapper[4721]: I0128 18:36:42.243316 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 28 18:36:42 crc kubenswrapper[4721]: I0128 18:36:42.243400 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cql6x" event={"ID":"36456b90-3e11-4480-b235-5909103844ba","Type":"ContainerDied","Data":"8b1368d82d594e2e8c675381a8cc7164c78bb1519332270fa656997eeae34e93"} Jan 28 18:36:42 crc kubenswrapper[4721]: I0128 18:36:42.243435 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cql6x" event={"ID":"36456b90-3e11-4480-b235-5909103844ba","Type":"ContainerStarted","Data":"d1be8c0b94e5d18eaf97f1ee63331444ce8027842fd6e03ff7d77403e38f464c"} Jan 28 18:36:42 crc kubenswrapper[4721]: I0128 18:36:42.246309 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-d4q59"] Jan 28 18:36:42 crc kubenswrapper[4721]: I0128 18:36:42.249289 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"4cebab97-f0a5-4073-837b-4d985864ad73","Type":"ContainerStarted","Data":"0c705e247b088e9642472fae79bd623f8ca704a270b73763abf461208aa81aff"} Jan 28 18:36:42 crc kubenswrapper[4721]: I0128 18:36:42.249328 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"4cebab97-f0a5-4073-837b-4d985864ad73","Type":"ContainerStarted","Data":"b2913bbb76bce0e5cd2da3e787df4c1d00d3a7132aa6dee4e4fc6d227671700c"} Jan 28 18:36:42 crc kubenswrapper[4721]: I0128 18:36:42.252284 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-b42n2" event={"ID":"6c3bbd44-e901-4e2a-b48c-f1e1ddb965dc","Type":"ContainerStarted","Data":"1033192df353e832de0c4ee8fdcdffd87f44695d410cae5349bf010ba6768cff"} Jan 28 18:36:42 crc kubenswrapper[4721]: I0128 18:36:42.300834 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0d6d2129-7840-4dc5-941b-541507dfd482-catalog-content\") pod \"redhat-operators-d4q59\" (UID: \"0d6d2129-7840-4dc5-941b-541507dfd482\") " pod="openshift-marketplace/redhat-operators-d4q59" Jan 28 18:36:42 crc kubenswrapper[4721]: I0128 18:36:42.300961 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0d6d2129-7840-4dc5-941b-541507dfd482-utilities\") pod \"redhat-operators-d4q59\" (UID: \"0d6d2129-7840-4dc5-941b-541507dfd482\") " pod="openshift-marketplace/redhat-operators-d4q59" Jan 28 18:36:42 crc kubenswrapper[4721]: I0128 18:36:42.300995 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dcl56\" (UniqueName: 
\"kubernetes.io/projected/0d6d2129-7840-4dc5-941b-541507dfd482-kube-api-access-dcl56\") pod \"redhat-operators-d4q59\" (UID: \"0d6d2129-7840-4dc5-941b-541507dfd482\") " pod="openshift-marketplace/redhat-operators-d4q59" Jan 28 18:36:42 crc kubenswrapper[4721]: I0128 18:36:42.394983 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-b42n2" podStartSLOduration=132.394951795 podStartE2EDuration="2m12.394951795s" podCreationTimestamp="2026-01-28 18:34:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:36:42.375896212 +0000 UTC m=+168.101201792" watchObservedRunningTime="2026-01-28 18:36:42.394951795 +0000 UTC m=+168.120257355" Jan 28 18:36:42 crc kubenswrapper[4721]: I0128 18:36:42.401796 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0d6d2129-7840-4dc5-941b-541507dfd482-utilities\") pod \"redhat-operators-d4q59\" (UID: \"0d6d2129-7840-4dc5-941b-541507dfd482\") " pod="openshift-marketplace/redhat-operators-d4q59" Jan 28 18:36:42 crc kubenswrapper[4721]: I0128 18:36:42.401891 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dcl56\" (UniqueName: \"kubernetes.io/projected/0d6d2129-7840-4dc5-941b-541507dfd482-kube-api-access-dcl56\") pod \"redhat-operators-d4q59\" (UID: \"0d6d2129-7840-4dc5-941b-541507dfd482\") " pod="openshift-marketplace/redhat-operators-d4q59" Jan 28 18:36:42 crc kubenswrapper[4721]: I0128 18:36:42.401927 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0d6d2129-7840-4dc5-941b-541507dfd482-catalog-content\") pod \"redhat-operators-d4q59\" (UID: \"0d6d2129-7840-4dc5-941b-541507dfd482\") " pod="openshift-marketplace/redhat-operators-d4q59" Jan 28 18:36:42 crc kubenswrapper[4721]: I0128 18:36:42.403249 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0d6d2129-7840-4dc5-941b-541507dfd482-catalog-content\") pod \"redhat-operators-d4q59\" (UID: \"0d6d2129-7840-4dc5-941b-541507dfd482\") " pod="openshift-marketplace/redhat-operators-d4q59" Jan 28 18:36:42 crc kubenswrapper[4721]: I0128 18:36:42.403498 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0d6d2129-7840-4dc5-941b-541507dfd482-utilities\") pod \"redhat-operators-d4q59\" (UID: \"0d6d2129-7840-4dc5-941b-541507dfd482\") " pod="openshift-marketplace/redhat-operators-d4q59" Jan 28 18:36:42 crc kubenswrapper[4721]: I0128 18:36:42.452835 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dcl56\" (UniqueName: \"kubernetes.io/projected/0d6d2129-7840-4dc5-941b-541507dfd482-kube-api-access-dcl56\") pod \"redhat-operators-d4q59\" (UID: \"0d6d2129-7840-4dc5-941b-541507dfd482\") " pod="openshift-marketplace/redhat-operators-d4q59" Jan 28 18:36:42 crc kubenswrapper[4721]: I0128 18:36:42.647809 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/revision-pruner-9-crc" podStartSLOduration=2.64779184 podStartE2EDuration="2.64779184s" podCreationTimestamp="2026-01-28 18:36:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-01-28 18:36:42.436107834 +0000 UTC m=+168.161413394" watchObservedRunningTime="2026-01-28 18:36:42.64779184 +0000 UTC m=+168.373097400" Jan 28 18:36:42 crc kubenswrapper[4721]: I0128 18:36:42.647946 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-nx6vw"] Jan 28 18:36:42 crc kubenswrapper[4721]: I0128 18:36:42.649203 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-nx6vw" Jan 28 18:36:42 crc kubenswrapper[4721]: I0128 18:36:42.683401 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-nx6vw"] Jan 28 18:36:42 crc kubenswrapper[4721]: I0128 18:36:42.687834 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-d4q59" Jan 28 18:36:42 crc kubenswrapper[4721]: I0128 18:36:42.808497 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7ac4d9d7-c104-455a-b162-75b3bbf2a879-catalog-content\") pod \"redhat-operators-nx6vw\" (UID: \"7ac4d9d7-c104-455a-b162-75b3bbf2a879\") " pod="openshift-marketplace/redhat-operators-nx6vw" Jan 28 18:36:42 crc kubenswrapper[4721]: I0128 18:36:42.808753 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7ac4d9d7-c104-455a-b162-75b3bbf2a879-utilities\") pod \"redhat-operators-nx6vw\" (UID: \"7ac4d9d7-c104-455a-b162-75b3bbf2a879\") " pod="openshift-marketplace/redhat-operators-nx6vw" Jan 28 18:36:42 crc kubenswrapper[4721]: I0128 18:36:42.808778 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ghb65\" (UniqueName: \"kubernetes.io/projected/7ac4d9d7-c104-455a-b162-75b3bbf2a879-kube-api-access-ghb65\") pod \"redhat-operators-nx6vw\" (UID: \"7ac4d9d7-c104-455a-b162-75b3bbf2a879\") " pod="openshift-marketplace/redhat-operators-nx6vw" Jan 28 18:36:42 crc kubenswrapper[4721]: I0128 18:36:42.810329 4721 patch_prober.go:28] interesting pod/router-default-5444994796-wqwcd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 28 18:36:42 crc kubenswrapper[4721]: [-]has-synced failed: reason withheld Jan 28 18:36:42 crc kubenswrapper[4721]: [+]process-running ok Jan 28 18:36:42 crc kubenswrapper[4721]: healthz check failed Jan 28 18:36:42 crc kubenswrapper[4721]: I0128 18:36:42.810406 4721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-wqwcd" podUID="9260fa7e-9c98-4777-9625-3ac5501c883c" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 28 18:36:42 crc kubenswrapper[4721]: I0128 18:36:42.910689 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ghb65\" (UniqueName: \"kubernetes.io/projected/7ac4d9d7-c104-455a-b162-75b3bbf2a879-kube-api-access-ghb65\") pod \"redhat-operators-nx6vw\" (UID: \"7ac4d9d7-c104-455a-b162-75b3bbf2a879\") " pod="openshift-marketplace/redhat-operators-nx6vw" Jan 28 18:36:42 crc kubenswrapper[4721]: I0128 18:36:42.910747 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/7ac4d9d7-c104-455a-b162-75b3bbf2a879-utilities\") pod \"redhat-operators-nx6vw\" (UID: \"7ac4d9d7-c104-455a-b162-75b3bbf2a879\") " pod="openshift-marketplace/redhat-operators-nx6vw" Jan 28 18:36:42 crc kubenswrapper[4721]: I0128 18:36:42.910804 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7ac4d9d7-c104-455a-b162-75b3bbf2a879-catalog-content\") pod \"redhat-operators-nx6vw\" (UID: \"7ac4d9d7-c104-455a-b162-75b3bbf2a879\") " pod="openshift-marketplace/redhat-operators-nx6vw" Jan 28 18:36:42 crc kubenswrapper[4721]: I0128 18:36:42.911416 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7ac4d9d7-c104-455a-b162-75b3bbf2a879-utilities\") pod \"redhat-operators-nx6vw\" (UID: \"7ac4d9d7-c104-455a-b162-75b3bbf2a879\") " pod="openshift-marketplace/redhat-operators-nx6vw" Jan 28 18:36:42 crc kubenswrapper[4721]: I0128 18:36:42.911547 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7ac4d9d7-c104-455a-b162-75b3bbf2a879-catalog-content\") pod \"redhat-operators-nx6vw\" (UID: \"7ac4d9d7-c104-455a-b162-75b3bbf2a879\") " pod="openshift-marketplace/redhat-operators-nx6vw" Jan 28 18:36:42 crc kubenswrapper[4721]: I0128 18:36:42.933414 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ghb65\" (UniqueName: \"kubernetes.io/projected/7ac4d9d7-c104-455a-b162-75b3bbf2a879-kube-api-access-ghb65\") pod \"redhat-operators-nx6vw\" (UID: \"7ac4d9d7-c104-455a-b162-75b3bbf2a879\") " pod="openshift-marketplace/redhat-operators-nx6vw" Jan 28 18:36:42 crc kubenswrapper[4721]: I0128 18:36:42.968765 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-nx6vw" Jan 28 18:36:43 crc kubenswrapper[4721]: I0128 18:36:43.273028 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-d4q59"] Jan 28 18:36:43 crc kubenswrapper[4721]: I0128 18:36:43.312402 4721 generic.go:334] "Generic (PLEG): container finished" podID="cfea609e-20ae-449d-8952-ac4691aaec30" containerID="95987c7c4701d7ff0a1916d29e9cef6d36db9e5f00148cea49cc66cebbea6883" exitCode=0 Jan 28 18:36:43 crc kubenswrapper[4721]: I0128 18:36:43.312533 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vlsl5" event={"ID":"cfea609e-20ae-449d-8952-ac4691aaec30","Type":"ContainerDied","Data":"95987c7c4701d7ff0a1916d29e9cef6d36db9e5f00148cea49cc66cebbea6883"} Jan 28 18:36:43 crc kubenswrapper[4721]: I0128 18:36:43.328334 4721 generic.go:334] "Generic (PLEG): container finished" podID="4cebab97-f0a5-4073-837b-4d985864ad73" containerID="0c705e247b088e9642472fae79bd623f8ca704a270b73763abf461208aa81aff" exitCode=0 Jan 28 18:36:43 crc kubenswrapper[4721]: I0128 18:36:43.328703 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"4cebab97-f0a5-4073-837b-4d985864ad73","Type":"ContainerDied","Data":"0c705e247b088e9642472fae79bd623f8ca704a270b73763abf461208aa81aff"} Jan 28 18:36:43 crc kubenswrapper[4721]: I0128 18:36:43.329036 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-b42n2" Jan 28 18:36:43 crc kubenswrapper[4721]: I0128 18:36:43.748642 4721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29493750-pb8r2" Jan 28 18:36:43 crc kubenswrapper[4721]: I0128 18:36:43.809368 4721 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-wqwcd" Jan 28 18:36:43 crc kubenswrapper[4721]: I0128 18:36:43.809861 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-nx6vw"] Jan 28 18:36:43 crc kubenswrapper[4721]: I0128 18:36:43.814115 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-wqwcd" Jan 28 18:36:43 crc kubenswrapper[4721]: I0128 18:36:43.934603 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/db74784c-afbc-482a-8e2d-18c5bb898a9b-config-volume\") pod \"db74784c-afbc-482a-8e2d-18c5bb898a9b\" (UID: \"db74784c-afbc-482a-8e2d-18c5bb898a9b\") " Jan 28 18:36:43 crc kubenswrapper[4721]: I0128 18:36:43.934682 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/db74784c-afbc-482a-8e2d-18c5bb898a9b-secret-volume\") pod \"db74784c-afbc-482a-8e2d-18c5bb898a9b\" (UID: \"db74784c-afbc-482a-8e2d-18c5bb898a9b\") " Jan 28 18:36:43 crc kubenswrapper[4721]: I0128 18:36:43.934727 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dcv5n\" (UniqueName: \"kubernetes.io/projected/db74784c-afbc-482a-8e2d-18c5bb898a9b-kube-api-access-dcv5n\") pod \"db74784c-afbc-482a-8e2d-18c5bb898a9b\" (UID: \"db74784c-afbc-482a-8e2d-18c5bb898a9b\") " Jan 28 18:36:43 crc kubenswrapper[4721]: I0128 18:36:43.935941 4721 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/db74784c-afbc-482a-8e2d-18c5bb898a9b-config-volume" (OuterVolumeSpecName: "config-volume") pod "db74784c-afbc-482a-8e2d-18c5bb898a9b" (UID: "db74784c-afbc-482a-8e2d-18c5bb898a9b"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:36:43 crc kubenswrapper[4721]: I0128 18:36:43.972214 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/db74784c-afbc-482a-8e2d-18c5bb898a9b-kube-api-access-dcv5n" (OuterVolumeSpecName: "kube-api-access-dcv5n") pod "db74784c-afbc-482a-8e2d-18c5bb898a9b" (UID: "db74784c-afbc-482a-8e2d-18c5bb898a9b"). InnerVolumeSpecName "kube-api-access-dcv5n". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:36:43 crc kubenswrapper[4721]: I0128 18:36:43.972323 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/db74784c-afbc-482a-8e2d-18c5bb898a9b-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "db74784c-afbc-482a-8e2d-18c5bb898a9b" (UID: "db74784c-afbc-482a-8e2d-18c5bb898a9b"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:36:44 crc kubenswrapper[4721]: I0128 18:36:44.037987 4721 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/db74784c-afbc-482a-8e2d-18c5bb898a9b-config-volume\") on node \"crc\" DevicePath \"\"" Jan 28 18:36:44 crc kubenswrapper[4721]: I0128 18:36:44.038027 4721 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/db74784c-afbc-482a-8e2d-18c5bb898a9b-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 28 18:36:44 crc kubenswrapper[4721]: I0128 18:36:44.038039 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dcv5n\" (UniqueName: \"kubernetes.io/projected/db74784c-afbc-482a-8e2d-18c5bb898a9b-kube-api-access-dcv5n\") on node \"crc\" DevicePath \"\"" Jan 28 18:36:44 crc kubenswrapper[4721]: I0128 18:36:44.373221 4721 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29493750-pb8r2" Jan 28 18:36:44 crc kubenswrapper[4721]: I0128 18:36:44.373210 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29493750-pb8r2" event={"ID":"db74784c-afbc-482a-8e2d-18c5bb898a9b","Type":"ContainerDied","Data":"a9bd8cd2b1af3e929113f71a1a1b6952e4cecb85dd2c646215232cbda83addcb"} Jan 28 18:36:44 crc kubenswrapper[4721]: I0128 18:36:44.373781 4721 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a9bd8cd2b1af3e929113f71a1a1b6952e4cecb85dd2c646215232cbda83addcb" Jan 28 18:36:44 crc kubenswrapper[4721]: I0128 18:36:44.380208 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nx6vw" event={"ID":"7ac4d9d7-c104-455a-b162-75b3bbf2a879","Type":"ContainerStarted","Data":"e8b3caca6e984df9986a3e5e71f37acefc7d817be54ae2a9bb6331d18260198b"} Jan 28 18:36:44 crc kubenswrapper[4721]: I0128 18:36:44.380266 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nx6vw" event={"ID":"7ac4d9d7-c104-455a-b162-75b3bbf2a879","Type":"ContainerStarted","Data":"0be87c44f1e74be508ed9039f19911c12808dc6414e2af6b6b99d43e0068057d"} Jan 28 18:36:44 crc kubenswrapper[4721]: I0128 18:36:44.382913 4721 generic.go:334] "Generic (PLEG): container finished" podID="0d6d2129-7840-4dc5-941b-541507dfd482" containerID="a6f32e661da9b97ce626db3ca86097f8fa8ac7d805b8d350ccc96832ab330a95" exitCode=0 Jan 28 18:36:44 crc kubenswrapper[4721]: I0128 18:36:44.383955 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-d4q59" event={"ID":"0d6d2129-7840-4dc5-941b-541507dfd482","Type":"ContainerDied","Data":"a6f32e661da9b97ce626db3ca86097f8fa8ac7d805b8d350ccc96832ab330a95"} Jan 28 18:36:44 crc kubenswrapper[4721]: I0128 18:36:44.383980 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-d4q59" event={"ID":"0d6d2129-7840-4dc5-941b-541507dfd482","Type":"ContainerStarted","Data":"3927c821a13cd79edba1fd9b2f31e1d62752c4e94fd99e7904439dec8005ef1e"} Jan 28 18:36:45 crc kubenswrapper[4721]: I0128 18:36:45.021553 4721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 28 18:36:45 crc kubenswrapper[4721]: I0128 18:36:45.058607 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4cebab97-f0a5-4073-837b-4d985864ad73-kubelet-dir\") pod \"4cebab97-f0a5-4073-837b-4d985864ad73\" (UID: \"4cebab97-f0a5-4073-837b-4d985864ad73\") " Jan 28 18:36:45 crc kubenswrapper[4721]: I0128 18:36:45.058694 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4cebab97-f0a5-4073-837b-4d985864ad73-kube-api-access\") pod \"4cebab97-f0a5-4073-837b-4d985864ad73\" (UID: \"4cebab97-f0a5-4073-837b-4d985864ad73\") " Jan 28 18:36:45 crc kubenswrapper[4721]: I0128 18:36:45.096482 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4cebab97-f0a5-4073-837b-4d985864ad73-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "4cebab97-f0a5-4073-837b-4d985864ad73" (UID: "4cebab97-f0a5-4073-837b-4d985864ad73"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:36:45 crc kubenswrapper[4721]: I0128 18:36:45.106580 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 28 18:36:45 crc kubenswrapper[4721]: E0128 18:36:45.108939 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="db74784c-afbc-482a-8e2d-18c5bb898a9b" containerName="collect-profiles" Jan 28 18:36:45 crc kubenswrapper[4721]: I0128 18:36:45.109188 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="db74784c-afbc-482a-8e2d-18c5bb898a9b" containerName="collect-profiles" Jan 28 18:36:45 crc kubenswrapper[4721]: E0128 18:36:45.109315 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4cebab97-f0a5-4073-837b-4d985864ad73" containerName="pruner" Jan 28 18:36:45 crc kubenswrapper[4721]: I0128 18:36:45.109390 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="4cebab97-f0a5-4073-837b-4d985864ad73" containerName="pruner" Jan 28 18:36:45 crc kubenswrapper[4721]: I0128 18:36:45.109714 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="4cebab97-f0a5-4073-837b-4d985864ad73" containerName="pruner" Jan 28 18:36:45 crc kubenswrapper[4721]: I0128 18:36:45.109820 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="db74784c-afbc-482a-8e2d-18c5bb898a9b" containerName="collect-profiles" Jan 28 18:36:45 crc kubenswrapper[4721]: I0128 18:36:45.110773 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 28 18:36:45 crc kubenswrapper[4721]: I0128 18:36:45.112737 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4cebab97-f0a5-4073-837b-4d985864ad73-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "4cebab97-f0a5-4073-837b-4d985864ad73" (UID: "4cebab97-f0a5-4073-837b-4d985864ad73"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 18:36:45 crc kubenswrapper[4721]: I0128 18:36:45.114420 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Jan 28 18:36:45 crc kubenswrapper[4721]: I0128 18:36:45.117024 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Jan 28 18:36:45 crc kubenswrapper[4721]: I0128 18:36:45.128798 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 28 18:36:45 crc kubenswrapper[4721]: I0128 18:36:45.160621 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a07c36e8-9b10-47c2-a7b7-92a7eaeda153-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"a07c36e8-9b10-47c2-a7b7-92a7eaeda153\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 28 18:36:45 crc kubenswrapper[4721]: I0128 18:36:45.160753 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a07c36e8-9b10-47c2-a7b7-92a7eaeda153-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"a07c36e8-9b10-47c2-a7b7-92a7eaeda153\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 28 18:36:45 crc kubenswrapper[4721]: I0128 18:36:45.160825 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4cebab97-f0a5-4073-837b-4d985864ad73-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 28 18:36:45 crc kubenswrapper[4721]: I0128 18:36:45.160842 4721 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4cebab97-f0a5-4073-837b-4d985864ad73-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 28 18:36:45 crc kubenswrapper[4721]: I0128 18:36:45.261702 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a07c36e8-9b10-47c2-a7b7-92a7eaeda153-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"a07c36e8-9b10-47c2-a7b7-92a7eaeda153\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 28 18:36:45 crc kubenswrapper[4721]: I0128 18:36:45.261814 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a07c36e8-9b10-47c2-a7b7-92a7eaeda153-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"a07c36e8-9b10-47c2-a7b7-92a7eaeda153\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 28 18:36:45 crc kubenswrapper[4721]: I0128 18:36:45.261932 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a07c36e8-9b10-47c2-a7b7-92a7eaeda153-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"a07c36e8-9b10-47c2-a7b7-92a7eaeda153\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 28 18:36:45 crc kubenswrapper[4721]: I0128 18:36:45.283343 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a07c36e8-9b10-47c2-a7b7-92a7eaeda153-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"a07c36e8-9b10-47c2-a7b7-92a7eaeda153\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 28 18:36:45 crc kubenswrapper[4721]: I0128 18:36:45.424746 4721 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"4cebab97-f0a5-4073-837b-4d985864ad73","Type":"ContainerDied","Data":"b2913bbb76bce0e5cd2da3e787df4c1d00d3a7132aa6dee4e4fc6d227671700c"} Jan 28 18:36:45 crc kubenswrapper[4721]: I0128 18:36:45.424792 4721 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b2913bbb76bce0e5cd2da3e787df4c1d00d3a7132aa6dee4e4fc6d227671700c" Jan 28 18:36:45 crc kubenswrapper[4721]: I0128 18:36:45.424815 4721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 28 18:36:45 crc kubenswrapper[4721]: I0128 18:36:45.433352 4721 generic.go:334] "Generic (PLEG): container finished" podID="7ac4d9d7-c104-455a-b162-75b3bbf2a879" containerID="e8b3caca6e984df9986a3e5e71f37acefc7d817be54ae2a9bb6331d18260198b" exitCode=0 Jan 28 18:36:45 crc kubenswrapper[4721]: I0128 18:36:45.433421 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nx6vw" event={"ID":"7ac4d9d7-c104-455a-b162-75b3bbf2a879","Type":"ContainerDied","Data":"e8b3caca6e984df9986a3e5e71f37acefc7d817be54ae2a9bb6331d18260198b"} Jan 28 18:36:45 crc kubenswrapper[4721]: I0128 18:36:45.468388 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 28 18:36:46 crc kubenswrapper[4721]: I0128 18:36:46.149948 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 28 18:36:46 crc kubenswrapper[4721]: W0128 18:36:46.216759 4721 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-poda07c36e8_9b10_47c2_a7b7_92a7eaeda153.slice/crio-381cfa0c82480d51beec3bddeb935c5de8c091197c9a89a263d83402bcee8e49 WatchSource:0}: Error finding container 381cfa0c82480d51beec3bddeb935c5de8c091197c9a89a263d83402bcee8e49: Status 404 returned error can't find the container with id 381cfa0c82480d51beec3bddeb935c5de8c091197c9a89a263d83402bcee8e49 Jan 28 18:36:46 crc kubenswrapper[4721]: I0128 18:36:46.305808 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-l59vq" Jan 28 18:36:46 crc kubenswrapper[4721]: I0128 18:36:46.455896 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"a07c36e8-9b10-47c2-a7b7-92a7eaeda153","Type":"ContainerStarted","Data":"381cfa0c82480d51beec3bddeb935c5de8c091197c9a89a263d83402bcee8e49"} Jan 28 18:36:48 crc kubenswrapper[4721]: I0128 18:36:48.546055 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"a07c36e8-9b10-47c2-a7b7-92a7eaeda153","Type":"ContainerStarted","Data":"163d0c61d077e4c66cf57d305144117ec12bbc88f34070530829fdc543746081"} Jan 28 18:36:49 crc kubenswrapper[4721]: I0128 18:36:49.559692 4721 generic.go:334] "Generic (PLEG): container finished" podID="a07c36e8-9b10-47c2-a7b7-92a7eaeda153" containerID="163d0c61d077e4c66cf57d305144117ec12bbc88f34070530829fdc543746081" exitCode=0 Jan 28 18:36:49 crc kubenswrapper[4721]: I0128 18:36:49.559737 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"a07c36e8-9b10-47c2-a7b7-92a7eaeda153","Type":"ContainerDied","Data":"163d0c61d077e4c66cf57d305144117ec12bbc88f34070530829fdc543746081"} Jan 28 18:36:50 crc 
kubenswrapper[4721]: I0128 18:36:50.753635 4721 patch_prober.go:28] interesting pod/downloads-7954f5f757-cmtm6 container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" start-of-body= Jan 28 18:36:50 crc kubenswrapper[4721]: I0128 18:36:50.754108 4721 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-cmtm6" podUID="b30c15c2-ac57-4e56-a55b-5b9de02e097f" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" Jan 28 18:36:50 crc kubenswrapper[4721]: I0128 18:36:50.753648 4721 patch_prober.go:28] interesting pod/downloads-7954f5f757-cmtm6 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" start-of-body= Jan 28 18:36:50 crc kubenswrapper[4721]: I0128 18:36:50.754237 4721 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-cmtm6" podUID="b30c15c2-ac57-4e56-a55b-5b9de02e097f" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" Jan 28 18:36:51 crc kubenswrapper[4721]: I0128 18:36:51.106761 4721 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-ct2hz" Jan 28 18:36:51 crc kubenswrapper[4721]: I0128 18:36:51.111413 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-ct2hz" Jan 28 18:36:52 crc kubenswrapper[4721]: I0128 18:36:52.576849 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f3440038-c980-4fb4-be99-235515ec221c-metrics-certs\") pod \"network-metrics-daemon-jqvck\" (UID: \"f3440038-c980-4fb4-be99-235515ec221c\") " pod="openshift-multus/network-metrics-daemon-jqvck" Jan 28 18:36:52 crc kubenswrapper[4721]: I0128 18:36:52.608556 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f3440038-c980-4fb4-be99-235515ec221c-metrics-certs\") pod \"network-metrics-daemon-jqvck\" (UID: \"f3440038-c980-4fb4-be99-235515ec221c\") " pod="openshift-multus/network-metrics-daemon-jqvck" Jan 28 18:36:52 crc kubenswrapper[4721]: I0128 18:36:52.755553 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jqvck" Jan 28 18:36:57 crc kubenswrapper[4721]: I0128 18:36:57.054321 4721 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 28 18:36:57 crc kubenswrapper[4721]: I0128 18:36:57.147784 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a07c36e8-9b10-47c2-a7b7-92a7eaeda153-kubelet-dir\") pod \"a07c36e8-9b10-47c2-a7b7-92a7eaeda153\" (UID: \"a07c36e8-9b10-47c2-a7b7-92a7eaeda153\") " Jan 28 18:36:57 crc kubenswrapper[4721]: I0128 18:36:57.147904 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a07c36e8-9b10-47c2-a7b7-92a7eaeda153-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "a07c36e8-9b10-47c2-a7b7-92a7eaeda153" (UID: "a07c36e8-9b10-47c2-a7b7-92a7eaeda153"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 18:36:57 crc kubenswrapper[4721]: I0128 18:36:57.148265 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a07c36e8-9b10-47c2-a7b7-92a7eaeda153-kube-api-access\") pod \"a07c36e8-9b10-47c2-a7b7-92a7eaeda153\" (UID: \"a07c36e8-9b10-47c2-a7b7-92a7eaeda153\") " Jan 28 18:36:57 crc kubenswrapper[4721]: I0128 18:36:57.148525 4721 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a07c36e8-9b10-47c2-a7b7-92a7eaeda153-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 28 18:36:57 crc kubenswrapper[4721]: I0128 18:36:57.160025 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a07c36e8-9b10-47c2-a7b7-92a7eaeda153-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "a07c36e8-9b10-47c2-a7b7-92a7eaeda153" (UID: "a07c36e8-9b10-47c2-a7b7-92a7eaeda153"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:36:57 crc kubenswrapper[4721]: I0128 18:36:57.250003 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a07c36e8-9b10-47c2-a7b7-92a7eaeda153-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 28 18:36:57 crc kubenswrapper[4721]: I0128 18:36:57.634818 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"a07c36e8-9b10-47c2-a7b7-92a7eaeda153","Type":"ContainerDied","Data":"381cfa0c82480d51beec3bddeb935c5de8c091197c9a89a263d83402bcee8e49"} Jan 28 18:36:57 crc kubenswrapper[4721]: I0128 18:36:57.634943 4721 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="381cfa0c82480d51beec3bddeb935c5de8c091197c9a89a263d83402bcee8e49" Jan 28 18:36:57 crc kubenswrapper[4721]: I0128 18:36:57.634917 4721 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 28 18:37:00 crc kubenswrapper[4721]: I0128 18:37:00.701454 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-b42n2" Jan 28 18:37:00 crc kubenswrapper[4721]: I0128 18:37:00.752893 4721 patch_prober.go:28] interesting pod/downloads-7954f5f757-cmtm6 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" start-of-body= Jan 28 18:37:00 crc kubenswrapper[4721]: I0128 18:37:00.752951 4721 patch_prober.go:28] interesting pod/downloads-7954f5f757-cmtm6 container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" start-of-body= Jan 28 18:37:00 crc kubenswrapper[4721]: I0128 18:37:00.753387 4721 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-cmtm6" podUID="b30c15c2-ac57-4e56-a55b-5b9de02e097f" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" Jan 28 18:37:00 crc kubenswrapper[4721]: I0128 18:37:00.753341 4721 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-cmtm6" podUID="b30c15c2-ac57-4e56-a55b-5b9de02e097f" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" Jan 28 18:37:00 crc kubenswrapper[4721]: I0128 18:37:00.753440 4721 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-console/downloads-7954f5f757-cmtm6" Jan 28 18:37:00 crc kubenswrapper[4721]: I0128 18:37:00.754092 4721 patch_prober.go:28] interesting pod/downloads-7954f5f757-cmtm6 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" start-of-body= Jan 28 18:37:00 crc kubenswrapper[4721]: I0128 18:37:00.754146 4721 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-cmtm6" podUID="b30c15c2-ac57-4e56-a55b-5b9de02e097f" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" Jan 28 18:37:00 crc kubenswrapper[4721]: I0128 18:37:00.754308 4721 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="download-server" containerStatusID={"Type":"cri-o","ID":"3105080a6941d69d02cf7d3da34cf5571a610e7344ba3397ffd6265f06201911"} pod="openshift-console/downloads-7954f5f757-cmtm6" containerMessage="Container download-server failed liveness probe, will be restarted" Jan 28 18:37:00 crc kubenswrapper[4721]: I0128 18:37:00.754439 4721 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/downloads-7954f5f757-cmtm6" podUID="b30c15c2-ac57-4e56-a55b-5b9de02e097f" containerName="download-server" containerID="cri-o://3105080a6941d69d02cf7d3da34cf5571a610e7344ba3397ffd6265f06201911" gracePeriod=2 Jan 28 18:37:01 crc kubenswrapper[4721]: I0128 18:37:01.225240 4721 patch_prober.go:28] interesting pod/machine-config-daemon-76rx2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness 
probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 18:37:01 crc kubenswrapper[4721]: I0128 18:37:01.225316 4721 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-76rx2" podUID="6e3427a4-9a03-4a08-bf7f-7a5e96290ad6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 18:37:01 crc kubenswrapper[4721]: I0128 18:37:01.676038 4721 generic.go:334] "Generic (PLEG): container finished" podID="b30c15c2-ac57-4e56-a55b-5b9de02e097f" containerID="3105080a6941d69d02cf7d3da34cf5571a610e7344ba3397ffd6265f06201911" exitCode=0 Jan 28 18:37:01 crc kubenswrapper[4721]: I0128 18:37:01.676092 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-cmtm6" event={"ID":"b30c15c2-ac57-4e56-a55b-5b9de02e097f","Type":"ContainerDied","Data":"3105080a6941d69d02cf7d3da34cf5571a610e7344ba3397ffd6265f06201911"} Jan 28 18:37:10 crc kubenswrapper[4721]: I0128 18:37:10.753584 4721 patch_prober.go:28] interesting pod/downloads-7954f5f757-cmtm6 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" start-of-body= Jan 28 18:37:10 crc kubenswrapper[4721]: I0128 18:37:10.754382 4721 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-cmtm6" podUID="b30c15c2-ac57-4e56-a55b-5b9de02e097f" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" Jan 28 18:37:11 crc kubenswrapper[4721]: I0128 18:37:11.245551 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-9lrf6" Jan 28 18:37:12 crc kubenswrapper[4721]: I0128 18:37:12.568803 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 18:37:14 crc kubenswrapper[4721]: E0128 18:37:14.884556 4721 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Jan 28 18:37:14 crc kubenswrapper[4721]: E0128 18:37:14.885135 4721 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dcl56,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-d4q59_openshift-marketplace(0d6d2129-7840-4dc5-941b-541507dfd482): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 28 18:37:14 crc kubenswrapper[4721]: E0128 18:37:14.886304 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-d4q59" podUID="0d6d2129-7840-4dc5-941b-541507dfd482" Jan 28 18:37:20 crc kubenswrapper[4721]: I0128 18:37:20.754364 4721 patch_prober.go:28] interesting pod/downloads-7954f5f757-cmtm6 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" start-of-body= Jan 28 18:37:20 crc kubenswrapper[4721]: I0128 18:37:20.754923 4721 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-cmtm6" podUID="b30c15c2-ac57-4e56-a55b-5b9de02e097f" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" Jan 28 18:37:21 crc kubenswrapper[4721]: I0128 18:37:21.461987 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 28 18:37:21 crc kubenswrapper[4721]: E0128 18:37:21.462471 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a07c36e8-9b10-47c2-a7b7-92a7eaeda153" containerName="pruner" Jan 28 18:37:21 crc kubenswrapper[4721]: I0128 18:37:21.462491 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="a07c36e8-9b10-47c2-a7b7-92a7eaeda153" containerName="pruner" Jan 28 18:37:21 crc kubenswrapper[4721]: I0128 18:37:21.462637 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="a07c36e8-9b10-47c2-a7b7-92a7eaeda153" containerName="pruner" Jan 28 18:37:21 crc kubenswrapper[4721]: I0128 18:37:21.464718 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 28 18:37:21 crc kubenswrapper[4721]: I0128 18:37:21.468461 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Jan 28 18:37:21 crc kubenswrapper[4721]: I0128 18:37:21.469180 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 28 18:37:21 crc kubenswrapper[4721]: I0128 18:37:21.469344 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Jan 28 18:37:21 crc kubenswrapper[4721]: I0128 18:37:21.570372 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/876dbd4f-9cb5-4695-8c96-10e935387cf2-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"876dbd4f-9cb5-4695-8c96-10e935387cf2\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 28 18:37:21 crc kubenswrapper[4721]: I0128 18:37:21.570495 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/876dbd4f-9cb5-4695-8c96-10e935387cf2-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"876dbd4f-9cb5-4695-8c96-10e935387cf2\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 28 18:37:21 crc kubenswrapper[4721]: I0128 18:37:21.671763 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/876dbd4f-9cb5-4695-8c96-10e935387cf2-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"876dbd4f-9cb5-4695-8c96-10e935387cf2\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 28 18:37:21 crc kubenswrapper[4721]: I0128 18:37:21.671840 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/876dbd4f-9cb5-4695-8c96-10e935387cf2-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"876dbd4f-9cb5-4695-8c96-10e935387cf2\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 28 18:37:21 crc kubenswrapper[4721]: I0128 18:37:21.671938 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/876dbd4f-9cb5-4695-8c96-10e935387cf2-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"876dbd4f-9cb5-4695-8c96-10e935387cf2\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 28 18:37:21 crc kubenswrapper[4721]: I0128 18:37:21.692500 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/876dbd4f-9cb5-4695-8c96-10e935387cf2-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"876dbd4f-9cb5-4695-8c96-10e935387cf2\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 28 18:37:21 crc kubenswrapper[4721]: I0128 18:37:21.786373 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 28 18:37:23 crc kubenswrapper[4721]: E0128 18:37:23.413828 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-d4q59" podUID="0d6d2129-7840-4dc5-941b-541507dfd482" Jan 28 18:37:25 crc kubenswrapper[4721]: E0128 18:37:25.325552 4721 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Jan 28 18:37:25 crc kubenswrapper[4721]: E0128 18:37:25.325756 4721 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rmsbc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-7nqgw_openshift-marketplace(791d827f-b809-4f3d-94d0-02a6722550e0): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 28 18:37:25 crc kubenswrapper[4721]: E0128 18:37:25.327106 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-7nqgw" podUID="791d827f-b809-4f3d-94d0-02a6722550e0" Jan 28 18:37:26 crc kubenswrapper[4721]: E0128 18:37:26.783987 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-7nqgw" podUID="791d827f-b809-4f3d-94d0-02a6722550e0" Jan 28 18:37:26 crc kubenswrapper[4721]: I0128 18:37:26.862095 4721 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 28 18:37:26 crc kubenswrapper[4721]: I0128 18:37:26.863936 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 28 18:37:26 crc kubenswrapper[4721]: I0128 18:37:26.879358 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 28 18:37:26 crc kubenswrapper[4721]: E0128 18:37:26.898987 4721 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Jan 28 18:37:26 crc kubenswrapper[4721]: E0128 18:37:26.899598 4721 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dw86b,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-ktm7m_openshift-marketplace(d093e4ed-b49f-4abb-9cab-67d8072aea98): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 28 18:37:26 crc kubenswrapper[4721]: E0128 18:37:26.900821 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-ktm7m" podUID="d093e4ed-b49f-4abb-9cab-67d8072aea98" Jan 28 18:37:26 crc kubenswrapper[4721]: E0128 18:37:26.928125 4721 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Jan 28 18:37:26 crc kubenswrapper[4721]: E0128 18:37:26.928290 4721 kuberuntime_manager.go:1274] "Unhandled Error" err="init container 
&Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fnzk5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-f9khl_openshift-marketplace(384e21cc-b8a7-4a62-b817-d985bde07d66): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 28 18:37:26 crc kubenswrapper[4721]: E0128 18:37:26.928828 4721 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Jan 28 18:37:26 crc kubenswrapper[4721]: E0128 18:37:26.928993 4721 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rxjs9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-6k9rr_openshift-marketplace(e1764268-02a2-46af-a94d-b9f32dabcab8): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 28 18:37:26 crc kubenswrapper[4721]: E0128 18:37:26.929798 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-f9khl" podUID="384e21cc-b8a7-4a62-b817-d985bde07d66" Jan 28 18:37:26 crc kubenswrapper[4721]: E0128 18:37:26.931053 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-6k9rr" podUID="e1764268-02a2-46af-a94d-b9f32dabcab8" Jan 28 18:37:26 crc kubenswrapper[4721]: I0128 18:37:26.943203 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3bab193a-eb38-435d-8a0e-c3199e0abc80-kubelet-dir\") pod \"installer-9-crc\" (UID: \"3bab193a-eb38-435d-8a0e-c3199e0abc80\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 28 18:37:26 crc kubenswrapper[4721]: I0128 18:37:26.943249 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/3bab193a-eb38-435d-8a0e-c3199e0abc80-var-lock\") pod \"installer-9-crc\" (UID: \"3bab193a-eb38-435d-8a0e-c3199e0abc80\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 28 18:37:26 crc kubenswrapper[4721]: I0128 18:37:26.943301 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3bab193a-eb38-435d-8a0e-c3199e0abc80-kube-api-access\") pod \"installer-9-crc\" (UID: \"3bab193a-eb38-435d-8a0e-c3199e0abc80\") " 
pod="openshift-kube-apiserver/installer-9-crc" Jan 28 18:37:27 crc kubenswrapper[4721]: I0128 18:37:27.044403 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3bab193a-eb38-435d-8a0e-c3199e0abc80-kubelet-dir\") pod \"installer-9-crc\" (UID: \"3bab193a-eb38-435d-8a0e-c3199e0abc80\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 28 18:37:27 crc kubenswrapper[4721]: I0128 18:37:27.044465 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/3bab193a-eb38-435d-8a0e-c3199e0abc80-var-lock\") pod \"installer-9-crc\" (UID: \"3bab193a-eb38-435d-8a0e-c3199e0abc80\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 28 18:37:27 crc kubenswrapper[4721]: I0128 18:37:27.044517 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3bab193a-eb38-435d-8a0e-c3199e0abc80-kube-api-access\") pod \"installer-9-crc\" (UID: \"3bab193a-eb38-435d-8a0e-c3199e0abc80\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 28 18:37:27 crc kubenswrapper[4721]: I0128 18:37:27.044939 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/3bab193a-eb38-435d-8a0e-c3199e0abc80-var-lock\") pod \"installer-9-crc\" (UID: \"3bab193a-eb38-435d-8a0e-c3199e0abc80\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 28 18:37:27 crc kubenswrapper[4721]: I0128 18:37:27.044984 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3bab193a-eb38-435d-8a0e-c3199e0abc80-kubelet-dir\") pod \"installer-9-crc\" (UID: \"3bab193a-eb38-435d-8a0e-c3199e0abc80\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 28 18:37:27 crc kubenswrapper[4721]: I0128 18:37:27.069473 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3bab193a-eb38-435d-8a0e-c3199e0abc80-kube-api-access\") pod \"installer-9-crc\" (UID: \"3bab193a-eb38-435d-8a0e-c3199e0abc80\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 28 18:37:27 crc kubenswrapper[4721]: I0128 18:37:27.193969 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 28 18:37:28 crc kubenswrapper[4721]: E0128 18:37:28.337032 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-f9khl" podUID="384e21cc-b8a7-4a62-b817-d985bde07d66" Jan 28 18:37:28 crc kubenswrapper[4721]: E0128 18:37:28.337481 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-ktm7m" podUID="d093e4ed-b49f-4abb-9cab-67d8072aea98" Jan 28 18:37:28 crc kubenswrapper[4721]: E0128 18:37:28.337553 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-6k9rr" podUID="e1764268-02a2-46af-a94d-b9f32dabcab8" Jan 28 18:37:28 crc kubenswrapper[4721]: E0128 18:37:28.351350 4721 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Jan 28 18:37:28 crc kubenswrapper[4721]: E0128 18:37:28.351505 4721 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lgpqd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-vlsl5_openshift-marketplace(cfea609e-20ae-449d-8952-ac4691aaec30): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 28 18:37:28 crc kubenswrapper[4721]: E0128 18:37:28.352697 4721 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-vlsl5" podUID="cfea609e-20ae-449d-8952-ac4691aaec30" Jan 28 18:37:28 crc kubenswrapper[4721]: E0128 18:37:28.360932 4721 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Jan 28 18:37:28 crc kubenswrapper[4721]: E0128 18:37:28.361088 4721 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ghb65,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-nx6vw_openshift-marketplace(7ac4d9d7-c104-455a-b162-75b3bbf2a879): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 28 18:37:28 crc kubenswrapper[4721]: E0128 18:37:28.362300 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-nx6vw" podUID="7ac4d9d7-c104-455a-b162-75b3bbf2a879" Jan 28 18:37:28 crc kubenswrapper[4721]: E0128 18:37:28.424934 4721 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Jan 28 18:37:28 crc kubenswrapper[4721]: E0128 18:37:28.425470 4721 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs 
--catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4n8s9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-cql6x_openshift-marketplace(36456b90-3e11-4480-b235-5909103844ba): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 28 18:37:28 crc kubenswrapper[4721]: E0128 18:37:28.426667 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-cql6x" podUID="36456b90-3e11-4480-b235-5909103844ba" Jan 28 18:37:28 crc kubenswrapper[4721]: I0128 18:37:28.719001 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 28 18:37:28 crc kubenswrapper[4721]: I0128 18:37:28.834792 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 28 18:37:28 crc kubenswrapper[4721]: I0128 18:37:28.847101 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-jqvck"] Jan 28 18:37:28 crc kubenswrapper[4721]: W0128 18:37:28.852316 4721 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod876dbd4f_9cb5_4695_8c96_10e935387cf2.slice/crio-6bc88ce8efcbbc28242713601c463e63b715255712acbfcde7737583e0eeacec WatchSource:0}: Error finding container 6bc88ce8efcbbc28242713601c463e63b715255712acbfcde7737583e0eeacec: Status 404 returned error can't find the container with id 6bc88ce8efcbbc28242713601c463e63b715255712acbfcde7737583e0eeacec Jan 28 18:37:28 crc kubenswrapper[4721]: W0128 18:37:28.854269 4721 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf3440038_c980_4fb4_be99_235515ec221c.slice/crio-d147f530005a4e3c20822601127cb36b1b401f3a8f488e25e00f419ea99bc3fe WatchSource:0}: Error finding container d147f530005a4e3c20822601127cb36b1b401f3a8f488e25e00f419ea99bc3fe: Status 404 returned error can't find the container with id 
d147f530005a4e3c20822601127cb36b1b401f3a8f488e25e00f419ea99bc3fe Jan 28 18:37:28 crc kubenswrapper[4721]: I0128 18:37:28.873409 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"876dbd4f-9cb5-4695-8c96-10e935387cf2","Type":"ContainerStarted","Data":"6bc88ce8efcbbc28242713601c463e63b715255712acbfcde7737583e0eeacec"} Jan 28 18:37:28 crc kubenswrapper[4721]: I0128 18:37:28.882668 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-cmtm6" event={"ID":"b30c15c2-ac57-4e56-a55b-5b9de02e097f","Type":"ContainerStarted","Data":"5da90d39029cb44b84a2137b28b08901d9ac26d4f16b200f2e19cd7b5ee79b49"} Jan 28 18:37:28 crc kubenswrapper[4721]: I0128 18:37:28.883155 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-cmtm6" Jan 28 18:37:28 crc kubenswrapper[4721]: I0128 18:37:28.883568 4721 patch_prober.go:28] interesting pod/downloads-7954f5f757-cmtm6 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" start-of-body= Jan 28 18:37:28 crc kubenswrapper[4721]: I0128 18:37:28.883619 4721 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-cmtm6" podUID="b30c15c2-ac57-4e56-a55b-5b9de02e097f" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" Jan 28 18:37:28 crc kubenswrapper[4721]: I0128 18:37:28.891846 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-jqvck" event={"ID":"f3440038-c980-4fb4-be99-235515ec221c","Type":"ContainerStarted","Data":"d147f530005a4e3c20822601127cb36b1b401f3a8f488e25e00f419ea99bc3fe"} Jan 28 18:37:28 crc kubenswrapper[4721]: I0128 18:37:28.895286 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"3bab193a-eb38-435d-8a0e-c3199e0abc80","Type":"ContainerStarted","Data":"b8f55f0d6d722e95cb2827897a45e505ad4d09c409f64ba08deed1355b479006"} Jan 28 18:37:28 crc kubenswrapper[4721]: E0128 18:37:28.898625 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-vlsl5" podUID="cfea609e-20ae-449d-8952-ac4691aaec30" Jan 28 18:37:28 crc kubenswrapper[4721]: E0128 18:37:28.899907 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-nx6vw" podUID="7ac4d9d7-c104-455a-b162-75b3bbf2a879" Jan 28 18:37:28 crc kubenswrapper[4721]: E0128 18:37:28.925198 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-cql6x" podUID="36456b90-3e11-4480-b235-5909103844ba" Jan 28 18:37:29 crc kubenswrapper[4721]: I0128 18:37:29.905559 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-multus/network-metrics-daemon-jqvck" event={"ID":"f3440038-c980-4fb4-be99-235515ec221c","Type":"ContainerStarted","Data":"42f96da5aa0dc1d61625eaacef79f392062b0fe7c0fef9f2f1725a01a4983d1c"} Jan 28 18:37:29 crc kubenswrapper[4721]: I0128 18:37:29.906278 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-jqvck" event={"ID":"f3440038-c980-4fb4-be99-235515ec221c","Type":"ContainerStarted","Data":"ce604ce94cdcd8cf4eb32303174ad58b4258eeebcd18f3a82b6fbe8df50a9459"} Jan 28 18:37:29 crc kubenswrapper[4721]: I0128 18:37:29.908858 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"3bab193a-eb38-435d-8a0e-c3199e0abc80","Type":"ContainerStarted","Data":"61138e139d78055b92ccc8e5b4e2461482677d4aa337929a6b5f20c3093be023"} Jan 28 18:37:29 crc kubenswrapper[4721]: I0128 18:37:29.910641 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"876dbd4f-9cb5-4695-8c96-10e935387cf2","Type":"ContainerStarted","Data":"e2672c0013aef137e11fc35aec1ed240e456128f952a90e94bd819481bd1f75f"} Jan 28 18:37:29 crc kubenswrapper[4721]: I0128 18:37:29.911300 4721 patch_prober.go:28] interesting pod/downloads-7954f5f757-cmtm6 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" start-of-body= Jan 28 18:37:29 crc kubenswrapper[4721]: I0128 18:37:29.911346 4721 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-cmtm6" podUID="b30c15c2-ac57-4e56-a55b-5b9de02e097f" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" Jan 28 18:37:29 crc kubenswrapper[4721]: I0128 18:37:29.926835 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-jqvck" podStartSLOduration=179.926808657 podStartE2EDuration="2m59.926808657s" podCreationTimestamp="2026-01-28 18:34:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:37:29.922067692 +0000 UTC m=+215.647373272" watchObservedRunningTime="2026-01-28 18:37:29.926808657 +0000 UTC m=+215.652114217" Jan 28 18:37:29 crc kubenswrapper[4721]: I0128 18:37:29.957588 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-9-crc" podStartSLOduration=3.957567974 podStartE2EDuration="3.957567974s" podCreationTimestamp="2026-01-28 18:37:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:37:29.957381778 +0000 UTC m=+215.682687338" watchObservedRunningTime="2026-01-28 18:37:29.957567974 +0000 UTC m=+215.682873534" Jan 28 18:37:29 crc kubenswrapper[4721]: I0128 18:37:29.958388 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-9-crc" podStartSLOduration=8.958381378 podStartE2EDuration="8.958381378s" podCreationTimestamp="2026-01-28 18:37:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:37:29.943990209 +0000 UTC m=+215.669295769" watchObservedRunningTime="2026-01-28 
18:37:29.958381378 +0000 UTC m=+215.683686938" Jan 28 18:37:30 crc kubenswrapper[4721]: I0128 18:37:30.753035 4721 patch_prober.go:28] interesting pod/downloads-7954f5f757-cmtm6 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" start-of-body= Jan 28 18:37:30 crc kubenswrapper[4721]: I0128 18:37:30.753436 4721 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-cmtm6" podUID="b30c15c2-ac57-4e56-a55b-5b9de02e097f" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" Jan 28 18:37:30 crc kubenswrapper[4721]: I0128 18:37:30.753208 4721 patch_prober.go:28] interesting pod/downloads-7954f5f757-cmtm6 container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" start-of-body= Jan 28 18:37:30 crc kubenswrapper[4721]: I0128 18:37:30.753875 4721 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-cmtm6" podUID="b30c15c2-ac57-4e56-a55b-5b9de02e097f" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" Jan 28 18:37:30 crc kubenswrapper[4721]: I0128 18:37:30.921062 4721 generic.go:334] "Generic (PLEG): container finished" podID="876dbd4f-9cb5-4695-8c96-10e935387cf2" containerID="e2672c0013aef137e11fc35aec1ed240e456128f952a90e94bd819481bd1f75f" exitCode=0 Jan 28 18:37:30 crc kubenswrapper[4721]: I0128 18:37:30.921162 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"876dbd4f-9cb5-4695-8c96-10e935387cf2","Type":"ContainerDied","Data":"e2672c0013aef137e11fc35aec1ed240e456128f952a90e94bd819481bd1f75f"} Jan 28 18:37:31 crc kubenswrapper[4721]: I0128 18:37:31.225309 4721 patch_prober.go:28] interesting pod/machine-config-daemon-76rx2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 18:37:31 crc kubenswrapper[4721]: I0128 18:37:31.225554 4721 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-76rx2" podUID="6e3427a4-9a03-4a08-bf7f-7a5e96290ad6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 18:37:31 crc kubenswrapper[4721]: I0128 18:37:31.225729 4721 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-76rx2" Jan 28 18:37:31 crc kubenswrapper[4721]: I0128 18:37:31.226602 4721 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"bbd26672afb8ed608228f3f2101f430b1eaa5ccea0e8074fad597dec347c4522"} pod="openshift-machine-config-operator/machine-config-daemon-76rx2" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 28 18:37:31 crc kubenswrapper[4721]: I0128 18:37:31.226748 4721 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-machine-config-operator/machine-config-daemon-76rx2" podUID="6e3427a4-9a03-4a08-bf7f-7a5e96290ad6" containerName="machine-config-daemon" containerID="cri-o://bbd26672afb8ed608228f3f2101f430b1eaa5ccea0e8074fad597dec347c4522" gracePeriod=600 Jan 28 18:37:31 crc kubenswrapper[4721]: I0128 18:37:31.931807 4721 generic.go:334] "Generic (PLEG): container finished" podID="6e3427a4-9a03-4a08-bf7f-7a5e96290ad6" containerID="bbd26672afb8ed608228f3f2101f430b1eaa5ccea0e8074fad597dec347c4522" exitCode=0 Jan 28 18:37:31 crc kubenswrapper[4721]: I0128 18:37:31.931963 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-76rx2" event={"ID":"6e3427a4-9a03-4a08-bf7f-7a5e96290ad6","Type":"ContainerDied","Data":"bbd26672afb8ed608228f3f2101f430b1eaa5ccea0e8074fad597dec347c4522"} Jan 28 18:37:31 crc kubenswrapper[4721]: I0128 18:37:31.932594 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-76rx2" event={"ID":"6e3427a4-9a03-4a08-bf7f-7a5e96290ad6","Type":"ContainerStarted","Data":"10860096ec91e5eac0dde1e9c86fd3c5c5e845b25209bb97d51e42151804a191"} Jan 28 18:37:32 crc kubenswrapper[4721]: I0128 18:37:32.197635 4721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 28 18:37:32 crc kubenswrapper[4721]: I0128 18:37:32.220338 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/876dbd4f-9cb5-4695-8c96-10e935387cf2-kubelet-dir\") pod \"876dbd4f-9cb5-4695-8c96-10e935387cf2\" (UID: \"876dbd4f-9cb5-4695-8c96-10e935387cf2\") " Jan 28 18:37:32 crc kubenswrapper[4721]: I0128 18:37:32.220572 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/876dbd4f-9cb5-4695-8c96-10e935387cf2-kube-api-access\") pod \"876dbd4f-9cb5-4695-8c96-10e935387cf2\" (UID: \"876dbd4f-9cb5-4695-8c96-10e935387cf2\") " Jan 28 18:37:32 crc kubenswrapper[4721]: I0128 18:37:32.222235 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/876dbd4f-9cb5-4695-8c96-10e935387cf2-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "876dbd4f-9cb5-4695-8c96-10e935387cf2" (UID: "876dbd4f-9cb5-4695-8c96-10e935387cf2"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 18:37:32 crc kubenswrapper[4721]: I0128 18:37:32.229038 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/876dbd4f-9cb5-4695-8c96-10e935387cf2-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "876dbd4f-9cb5-4695-8c96-10e935387cf2" (UID: "876dbd4f-9cb5-4695-8c96-10e935387cf2"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:37:32 crc kubenswrapper[4721]: I0128 18:37:32.321478 4721 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/876dbd4f-9cb5-4695-8c96-10e935387cf2-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 28 18:37:32 crc kubenswrapper[4721]: I0128 18:37:32.321513 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/876dbd4f-9cb5-4695-8c96-10e935387cf2-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 28 18:37:32 crc kubenswrapper[4721]: I0128 18:37:32.940528 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"876dbd4f-9cb5-4695-8c96-10e935387cf2","Type":"ContainerDied","Data":"6bc88ce8efcbbc28242713601c463e63b715255712acbfcde7737583e0eeacec"} Jan 28 18:37:32 crc kubenswrapper[4721]: I0128 18:37:32.940848 4721 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6bc88ce8efcbbc28242713601c463e63b715255712acbfcde7737583e0eeacec" Jan 28 18:37:32 crc kubenswrapper[4721]: I0128 18:37:32.940604 4721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 28 18:37:39 crc kubenswrapper[4721]: I0128 18:37:39.991374 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-d4q59" event={"ID":"0d6d2129-7840-4dc5-941b-541507dfd482","Type":"ContainerStarted","Data":"c8b821f0856f747c84b73bb7c7765244da2938b90db349c03f3d858d7d176847"} Jan 28 18:37:39 crc kubenswrapper[4721]: I0128 18:37:39.993432 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7nqgw" event={"ID":"791d827f-b809-4f3d-94d0-02a6722550e0","Type":"ContainerStarted","Data":"a69e4b2e7a7f3ddc6c97a2e3b0016f5a26b20ea44de3463ca8d5b3887443d77c"} Jan 28 18:37:40 crc kubenswrapper[4721]: I0128 18:37:40.774704 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-cmtm6" Jan 28 18:37:41 crc kubenswrapper[4721]: I0128 18:37:41.007056 4721 generic.go:334] "Generic (PLEG): container finished" podID="0d6d2129-7840-4dc5-941b-541507dfd482" containerID="c8b821f0856f747c84b73bb7c7765244da2938b90db349c03f3d858d7d176847" exitCode=0 Jan 28 18:37:41 crc kubenswrapper[4721]: I0128 18:37:41.007136 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-d4q59" event={"ID":"0d6d2129-7840-4dc5-941b-541507dfd482","Type":"ContainerDied","Data":"c8b821f0856f747c84b73bb7c7765244da2938b90db349c03f3d858d7d176847"} Jan 28 18:37:41 crc kubenswrapper[4721]: I0128 18:37:41.012609 4721 generic.go:334] "Generic (PLEG): container finished" podID="791d827f-b809-4f3d-94d0-02a6722550e0" containerID="a69e4b2e7a7f3ddc6c97a2e3b0016f5a26b20ea44de3463ca8d5b3887443d77c" exitCode=0 Jan 28 18:37:41 crc kubenswrapper[4721]: I0128 18:37:41.012692 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7nqgw" event={"ID":"791d827f-b809-4f3d-94d0-02a6722550e0","Type":"ContainerDied","Data":"a69e4b2e7a7f3ddc6c97a2e3b0016f5a26b20ea44de3463ca8d5b3887443d77c"} Jan 28 18:37:42 crc kubenswrapper[4721]: I0128 18:37:42.034337 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-f9khl" 
event={"ID":"384e21cc-b8a7-4a62-b817-d985bde07d66","Type":"ContainerStarted","Data":"d0fa5da46bab78be9c5e6a72ab825e5975154d6f7e47f2846f271acd649cabd5"} Jan 28 18:37:44 crc kubenswrapper[4721]: I0128 18:37:44.047451 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-d4q59" event={"ID":"0d6d2129-7840-4dc5-941b-541507dfd482","Type":"ContainerStarted","Data":"2080634d0a8cd61ebf7da4bc9efe14cbada2488bf22a9018492716f920e1dad5"} Jan 28 18:37:44 crc kubenswrapper[4721]: I0128 18:37:44.049897 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7nqgw" event={"ID":"791d827f-b809-4f3d-94d0-02a6722550e0","Type":"ContainerStarted","Data":"2051836d7a63b705802f4f651051457be77620f7b83a3dee9537be1d671559a8"} Jan 28 18:37:44 crc kubenswrapper[4721]: I0128 18:37:44.053953 4721 generic.go:334] "Generic (PLEG): container finished" podID="384e21cc-b8a7-4a62-b817-d985bde07d66" containerID="d0fa5da46bab78be9c5e6a72ab825e5975154d6f7e47f2846f271acd649cabd5" exitCode=0 Jan 28 18:37:44 crc kubenswrapper[4721]: I0128 18:37:44.053989 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-f9khl" event={"ID":"384e21cc-b8a7-4a62-b817-d985bde07d66","Type":"ContainerDied","Data":"d0fa5da46bab78be9c5e6a72ab825e5975154d6f7e47f2846f271acd649cabd5"} Jan 28 18:37:44 crc kubenswrapper[4721]: I0128 18:37:44.071387 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-7nqgw" podStartSLOduration=4.321982309 podStartE2EDuration="1m5.071362889s" podCreationTimestamp="2026-01-28 18:36:39 +0000 UTC" firstStartedPulling="2026-01-28 18:36:41.195987302 +0000 UTC m=+166.921292862" lastFinishedPulling="2026-01-28 18:37:41.945367882 +0000 UTC m=+227.670673442" observedRunningTime="2026-01-28 18:37:44.067289286 +0000 UTC m=+229.792594856" watchObservedRunningTime="2026-01-28 18:37:44.071362889 +0000 UTC m=+229.796668449" Jan 28 18:37:49 crc kubenswrapper[4721]: I0128 18:37:49.778114 4721 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-7nqgw" Jan 28 18:37:49 crc kubenswrapper[4721]: I0128 18:37:49.778688 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-7nqgw" Jan 28 18:37:50 crc kubenswrapper[4721]: I0128 18:37:50.570818 4721 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-7nqgw" Jan 28 18:37:50 crc kubenswrapper[4721]: I0128 18:37:50.595712 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-d4q59" podStartSLOduration=10.910710852 podStartE2EDuration="1m8.595693011s" podCreationTimestamp="2026-01-28 18:36:42 +0000 UTC" firstStartedPulling="2026-01-28 18:36:44.385464743 +0000 UTC m=+170.110770303" lastFinishedPulling="2026-01-28 18:37:42.070446902 +0000 UTC m=+227.795752462" observedRunningTime="2026-01-28 18:37:45.099231118 +0000 UTC m=+230.824536688" watchObservedRunningTime="2026-01-28 18:37:50.595693011 +0000 UTC m=+236.320998571" Jan 28 18:37:50 crc kubenswrapper[4721]: I0128 18:37:50.613842 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-7nqgw" Jan 28 18:37:51 crc kubenswrapper[4721]: I0128 18:37:51.755048 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/community-operators-7nqgw"] Jan 28 18:37:52 crc kubenswrapper[4721]: I0128 18:37:52.118068 4721 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-7nqgw" podUID="791d827f-b809-4f3d-94d0-02a6722550e0" containerName="registry-server" containerID="cri-o://2051836d7a63b705802f4f651051457be77620f7b83a3dee9537be1d671559a8" gracePeriod=2 Jan 28 18:37:52 crc kubenswrapper[4721]: I0128 18:37:52.688979 4721 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-d4q59" Jan 28 18:37:52 crc kubenswrapper[4721]: I0128 18:37:52.689027 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-d4q59" Jan 28 18:37:52 crc kubenswrapper[4721]: I0128 18:37:52.727060 4721 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-d4q59" Jan 28 18:37:53 crc kubenswrapper[4721]: I0128 18:37:53.162036 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-d4q59" Jan 28 18:37:54 crc kubenswrapper[4721]: I0128 18:37:54.031760 4721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-7nqgw" Jan 28 18:37:54 crc kubenswrapper[4721]: I0128 18:37:54.130721 4721 generic.go:334] "Generic (PLEG): container finished" podID="791d827f-b809-4f3d-94d0-02a6722550e0" containerID="2051836d7a63b705802f4f651051457be77620f7b83a3dee9537be1d671559a8" exitCode=0 Jan 28 18:37:54 crc kubenswrapper[4721]: I0128 18:37:54.130788 4721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-7nqgw" Jan 28 18:37:54 crc kubenswrapper[4721]: I0128 18:37:54.130828 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7nqgw" event={"ID":"791d827f-b809-4f3d-94d0-02a6722550e0","Type":"ContainerDied","Data":"2051836d7a63b705802f4f651051457be77620f7b83a3dee9537be1d671559a8"} Jan 28 18:37:54 crc kubenswrapper[4721]: I0128 18:37:54.130883 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7nqgw" event={"ID":"791d827f-b809-4f3d-94d0-02a6722550e0","Type":"ContainerDied","Data":"cc8a6281477ea8b7bf7199271938a8a825203834209f3d73e4a4f3f62388eee6"} Jan 28 18:37:54 crc kubenswrapper[4721]: I0128 18:37:54.130911 4721 scope.go:117] "RemoveContainer" containerID="2051836d7a63b705802f4f651051457be77620f7b83a3dee9537be1d671559a8" Jan 28 18:37:54 crc kubenswrapper[4721]: I0128 18:37:54.142337 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rmsbc\" (UniqueName: \"kubernetes.io/projected/791d827f-b809-4f3d-94d0-02a6722550e0-kube-api-access-rmsbc\") pod \"791d827f-b809-4f3d-94d0-02a6722550e0\" (UID: \"791d827f-b809-4f3d-94d0-02a6722550e0\") " Jan 28 18:37:54 crc kubenswrapper[4721]: I0128 18:37:54.142425 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/791d827f-b809-4f3d-94d0-02a6722550e0-catalog-content\") pod \"791d827f-b809-4f3d-94d0-02a6722550e0\" (UID: \"791d827f-b809-4f3d-94d0-02a6722550e0\") " Jan 28 18:37:54 crc kubenswrapper[4721]: I0128 18:37:54.142454 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/791d827f-b809-4f3d-94d0-02a6722550e0-utilities\") pod \"791d827f-b809-4f3d-94d0-02a6722550e0\" (UID: \"791d827f-b809-4f3d-94d0-02a6722550e0\") " Jan 28 18:37:54 crc kubenswrapper[4721]: I0128 18:37:54.143598 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/791d827f-b809-4f3d-94d0-02a6722550e0-utilities" (OuterVolumeSpecName: "utilities") pod "791d827f-b809-4f3d-94d0-02a6722550e0" (UID: "791d827f-b809-4f3d-94d0-02a6722550e0"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 18:37:54 crc kubenswrapper[4721]: I0128 18:37:54.149662 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/791d827f-b809-4f3d-94d0-02a6722550e0-kube-api-access-rmsbc" (OuterVolumeSpecName: "kube-api-access-rmsbc") pod "791d827f-b809-4f3d-94d0-02a6722550e0" (UID: "791d827f-b809-4f3d-94d0-02a6722550e0"). InnerVolumeSpecName "kube-api-access-rmsbc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:37:54 crc kubenswrapper[4721]: I0128 18:37:54.189443 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/791d827f-b809-4f3d-94d0-02a6722550e0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "791d827f-b809-4f3d-94d0-02a6722550e0" (UID: "791d827f-b809-4f3d-94d0-02a6722550e0"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 18:37:54 crc kubenswrapper[4721]: I0128 18:37:54.243678 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rmsbc\" (UniqueName: \"kubernetes.io/projected/791d827f-b809-4f3d-94d0-02a6722550e0-kube-api-access-rmsbc\") on node \"crc\" DevicePath \"\"" Jan 28 18:37:54 crc kubenswrapper[4721]: I0128 18:37:54.243725 4721 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/791d827f-b809-4f3d-94d0-02a6722550e0-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 18:37:54 crc kubenswrapper[4721]: I0128 18:37:54.243739 4721 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/791d827f-b809-4f3d-94d0-02a6722550e0-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 18:37:54 crc kubenswrapper[4721]: I0128 18:37:54.459524 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-7nqgw"] Jan 28 18:37:54 crc kubenswrapper[4721]: I0128 18:37:54.465827 4721 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-7nqgw"] Jan 28 18:37:55 crc kubenswrapper[4721]: I0128 18:37:55.393817 4721 scope.go:117] "RemoveContainer" containerID="a69e4b2e7a7f3ddc6c97a2e3b0016f5a26b20ea44de3463ca8d5b3887443d77c" Jan 28 18:37:55 crc kubenswrapper[4721]: I0128 18:37:55.505063 4721 scope.go:117] "RemoveContainer" containerID="5b23939c5dc277d8860bb7edcae76a48217d083118f737a26f18f4edca437e10" Jan 28 18:37:55 crc kubenswrapper[4721]: I0128 18:37:55.538819 4721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="791d827f-b809-4f3d-94d0-02a6722550e0" path="/var/lib/kubelet/pods/791d827f-b809-4f3d-94d0-02a6722550e0/volumes" Jan 28 18:37:55 crc kubenswrapper[4721]: I0128 18:37:55.557115 4721 scope.go:117] "RemoveContainer" containerID="2051836d7a63b705802f4f651051457be77620f7b83a3dee9537be1d671559a8" Jan 28 18:37:55 crc kubenswrapper[4721]: E0128 18:37:55.557843 4721 log.go:32] "ContainerStatus from 
runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2051836d7a63b705802f4f651051457be77620f7b83a3dee9537be1d671559a8\": container with ID starting with 2051836d7a63b705802f4f651051457be77620f7b83a3dee9537be1d671559a8 not found: ID does not exist" containerID="2051836d7a63b705802f4f651051457be77620f7b83a3dee9537be1d671559a8" Jan 28 18:37:55 crc kubenswrapper[4721]: I0128 18:37:55.557896 4721 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2051836d7a63b705802f4f651051457be77620f7b83a3dee9537be1d671559a8"} err="failed to get container status \"2051836d7a63b705802f4f651051457be77620f7b83a3dee9537be1d671559a8\": rpc error: code = NotFound desc = could not find container \"2051836d7a63b705802f4f651051457be77620f7b83a3dee9537be1d671559a8\": container with ID starting with 2051836d7a63b705802f4f651051457be77620f7b83a3dee9537be1d671559a8 not found: ID does not exist" Jan 28 18:37:55 crc kubenswrapper[4721]: I0128 18:37:55.557929 4721 scope.go:117] "RemoveContainer" containerID="a69e4b2e7a7f3ddc6c97a2e3b0016f5a26b20ea44de3463ca8d5b3887443d77c" Jan 28 18:37:55 crc kubenswrapper[4721]: E0128 18:37:55.558334 4721 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a69e4b2e7a7f3ddc6c97a2e3b0016f5a26b20ea44de3463ca8d5b3887443d77c\": container with ID starting with a69e4b2e7a7f3ddc6c97a2e3b0016f5a26b20ea44de3463ca8d5b3887443d77c not found: ID does not exist" containerID="a69e4b2e7a7f3ddc6c97a2e3b0016f5a26b20ea44de3463ca8d5b3887443d77c" Jan 28 18:37:55 crc kubenswrapper[4721]: I0128 18:37:55.558386 4721 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a69e4b2e7a7f3ddc6c97a2e3b0016f5a26b20ea44de3463ca8d5b3887443d77c"} err="failed to get container status \"a69e4b2e7a7f3ddc6c97a2e3b0016f5a26b20ea44de3463ca8d5b3887443d77c\": rpc error: code = NotFound desc = could not find container \"a69e4b2e7a7f3ddc6c97a2e3b0016f5a26b20ea44de3463ca8d5b3887443d77c\": container with ID starting with a69e4b2e7a7f3ddc6c97a2e3b0016f5a26b20ea44de3463ca8d5b3887443d77c not found: ID does not exist" Jan 28 18:37:55 crc kubenswrapper[4721]: I0128 18:37:55.558465 4721 scope.go:117] "RemoveContainer" containerID="5b23939c5dc277d8860bb7edcae76a48217d083118f737a26f18f4edca437e10" Jan 28 18:37:55 crc kubenswrapper[4721]: E0128 18:37:55.559488 4721 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5b23939c5dc277d8860bb7edcae76a48217d083118f737a26f18f4edca437e10\": container with ID starting with 5b23939c5dc277d8860bb7edcae76a48217d083118f737a26f18f4edca437e10 not found: ID does not exist" containerID="5b23939c5dc277d8860bb7edcae76a48217d083118f737a26f18f4edca437e10" Jan 28 18:37:55 crc kubenswrapper[4721]: I0128 18:37:55.559535 4721 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5b23939c5dc277d8860bb7edcae76a48217d083118f737a26f18f4edca437e10"} err="failed to get container status \"5b23939c5dc277d8860bb7edcae76a48217d083118f737a26f18f4edca437e10\": rpc error: code = NotFound desc = could not find container \"5b23939c5dc277d8860bb7edcae76a48217d083118f737a26f18f4edca437e10\": container with ID starting with 5b23939c5dc277d8860bb7edcae76a48217d083118f737a26f18f4edca437e10 not found: ID does not exist" Jan 28 18:37:56 crc kubenswrapper[4721]: I0128 18:37:56.152290 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/certified-operators-6k9rr" event={"ID":"e1764268-02a2-46af-a94d-b9f32dabcab8","Type":"ContainerStarted","Data":"8400105d8ec8b30cb1e4583431a4b7b606cd879d7ee50f10f30f3e00ae655c58"} Jan 28 18:37:56 crc kubenswrapper[4721]: I0128 18:37:56.157920 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ktm7m" event={"ID":"d093e4ed-b49f-4abb-9cab-67d8072aea98","Type":"ContainerStarted","Data":"1b8faab341a50286ffc43ecd78aa35068eb08e18b993ec22b7a447b0d5d71806"} Jan 28 18:37:56 crc kubenswrapper[4721]: I0128 18:37:56.160116 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vlsl5" event={"ID":"cfea609e-20ae-449d-8952-ac4691aaec30","Type":"ContainerStarted","Data":"10f5d74f926c5aa6cba02ab9059a6124312d032177274fdfd5f0d2d8f32bda41"} Jan 28 18:37:56 crc kubenswrapper[4721]: I0128 18:37:56.162541 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nx6vw" event={"ID":"7ac4d9d7-c104-455a-b162-75b3bbf2a879","Type":"ContainerStarted","Data":"d24051cf5d6bd62982863fc9b7a15142560e14c6ed128af69bc8f9bbf7279dba"} Jan 28 18:37:56 crc kubenswrapper[4721]: I0128 18:37:56.165981 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cql6x" event={"ID":"36456b90-3e11-4480-b235-5909103844ba","Type":"ContainerStarted","Data":"84984f7a09c278bbcbda1504ebba2e4b03e177c8aeb88f856330944b79632fb5"} Jan 28 18:37:56 crc kubenswrapper[4721]: I0128 18:37:56.168071 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-f9khl" event={"ID":"384e21cc-b8a7-4a62-b817-d985bde07d66","Type":"ContainerStarted","Data":"66ba89a46c578985bb84484866316c8770c3c3271b0854aa7d679a816d32eea8"} Jan 28 18:37:56 crc kubenswrapper[4721]: I0128 18:37:56.271921 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-f9khl" podStartSLOduration=3.651406145 podStartE2EDuration="1m17.271901448s" podCreationTimestamp="2026-01-28 18:36:39 +0000 UTC" firstStartedPulling="2026-01-28 18:36:41.197299393 +0000 UTC m=+166.922604953" lastFinishedPulling="2026-01-28 18:37:54.817794706 +0000 UTC m=+240.543100256" observedRunningTime="2026-01-28 18:37:56.268740251 +0000 UTC m=+241.994045831" watchObservedRunningTime="2026-01-28 18:37:56.271901448 +0000 UTC m=+241.997207008" Jan 28 18:37:57 crc kubenswrapper[4721]: I0128 18:37:57.175831 4721 generic.go:334] "Generic (PLEG): container finished" podID="d093e4ed-b49f-4abb-9cab-67d8072aea98" containerID="1b8faab341a50286ffc43ecd78aa35068eb08e18b993ec22b7a447b0d5d71806" exitCode=0 Jan 28 18:37:57 crc kubenswrapper[4721]: I0128 18:37:57.175920 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ktm7m" event={"ID":"d093e4ed-b49f-4abb-9cab-67d8072aea98","Type":"ContainerDied","Data":"1b8faab341a50286ffc43ecd78aa35068eb08e18b993ec22b7a447b0d5d71806"} Jan 28 18:37:57 crc kubenswrapper[4721]: I0128 18:37:57.178848 4721 generic.go:334] "Generic (PLEG): container finished" podID="cfea609e-20ae-449d-8952-ac4691aaec30" containerID="10f5d74f926c5aa6cba02ab9059a6124312d032177274fdfd5f0d2d8f32bda41" exitCode=0 Jan 28 18:37:57 crc kubenswrapper[4721]: I0128 18:37:57.178913 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vlsl5" 
event={"ID":"cfea609e-20ae-449d-8952-ac4691aaec30","Type":"ContainerDied","Data":"10f5d74f926c5aa6cba02ab9059a6124312d032177274fdfd5f0d2d8f32bda41"} Jan 28 18:37:57 crc kubenswrapper[4721]: I0128 18:37:57.183388 4721 generic.go:334] "Generic (PLEG): container finished" podID="36456b90-3e11-4480-b235-5909103844ba" containerID="84984f7a09c278bbcbda1504ebba2e4b03e177c8aeb88f856330944b79632fb5" exitCode=0 Jan 28 18:37:57 crc kubenswrapper[4721]: I0128 18:37:57.183459 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cql6x" event={"ID":"36456b90-3e11-4480-b235-5909103844ba","Type":"ContainerDied","Data":"84984f7a09c278bbcbda1504ebba2e4b03e177c8aeb88f856330944b79632fb5"} Jan 28 18:37:57 crc kubenswrapper[4721]: I0128 18:37:57.185604 4721 generic.go:334] "Generic (PLEG): container finished" podID="e1764268-02a2-46af-a94d-b9f32dabcab8" containerID="8400105d8ec8b30cb1e4583431a4b7b606cd879d7ee50f10f30f3e00ae655c58" exitCode=0 Jan 28 18:37:57 crc kubenswrapper[4721]: I0128 18:37:57.185626 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6k9rr" event={"ID":"e1764268-02a2-46af-a94d-b9f32dabcab8","Type":"ContainerDied","Data":"8400105d8ec8b30cb1e4583431a4b7b606cd879d7ee50f10f30f3e00ae655c58"} Jan 28 18:37:58 crc kubenswrapper[4721]: E0128 18:37:58.009663 4721 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7ac4d9d7_c104_455a_b162_75b3bbf2a879.slice/crio-d24051cf5d6bd62982863fc9b7a15142560e14c6ed128af69bc8f9bbf7279dba.scope\": RecentStats: unable to find data in memory cache]" Jan 28 18:37:58 crc kubenswrapper[4721]: I0128 18:37:58.193205 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ktm7m" event={"ID":"d093e4ed-b49f-4abb-9cab-67d8072aea98","Type":"ContainerStarted","Data":"a2dc7fd9af99eda303fd6ee0e72336c97ec71e21abbffe20cb06bc78beb50dca"} Jan 28 18:37:58 crc kubenswrapper[4721]: I0128 18:37:58.196082 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vlsl5" event={"ID":"cfea609e-20ae-449d-8952-ac4691aaec30","Type":"ContainerStarted","Data":"931173fa635918c0e0326d64d9cd64e7f9a014acbc036a66e594390c9bea4107"} Jan 28 18:37:58 crc kubenswrapper[4721]: I0128 18:37:58.198799 4721 generic.go:334] "Generic (PLEG): container finished" podID="7ac4d9d7-c104-455a-b162-75b3bbf2a879" containerID="d24051cf5d6bd62982863fc9b7a15142560e14c6ed128af69bc8f9bbf7279dba" exitCode=0 Jan 28 18:37:58 crc kubenswrapper[4721]: I0128 18:37:58.198871 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nx6vw" event={"ID":"7ac4d9d7-c104-455a-b162-75b3bbf2a879","Type":"ContainerDied","Data":"d24051cf5d6bd62982863fc9b7a15142560e14c6ed128af69bc8f9bbf7279dba"} Jan 28 18:37:58 crc kubenswrapper[4721]: I0128 18:37:58.202502 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cql6x" event={"ID":"36456b90-3e11-4480-b235-5909103844ba","Type":"ContainerStarted","Data":"b1c084b5ea1ca4855f89bf6c6c16d4e8214f0fa646e47650dfc5757bb8d21fc0"} Jan 28 18:37:58 crc kubenswrapper[4721]: I0128 18:37:58.210106 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6k9rr" 
event={"ID":"e1764268-02a2-46af-a94d-b9f32dabcab8","Type":"ContainerStarted","Data":"1566a298ab1642d61738f72e169d14efdfcd24f7361a3d0673531b65b4cf01ee"} Jan 28 18:37:58 crc kubenswrapper[4721]: I0128 18:37:58.218775 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-ktm7m" podStartSLOduration=2.808482198 podStartE2EDuration="1m19.218724609s" podCreationTimestamp="2026-01-28 18:36:39 +0000 UTC" firstStartedPulling="2026-01-28 18:36:41.185085949 +0000 UTC m=+166.910391509" lastFinishedPulling="2026-01-28 18:37:57.59532836 +0000 UTC m=+243.320633920" observedRunningTime="2026-01-28 18:37:58.215918124 +0000 UTC m=+243.941223694" watchObservedRunningTime="2026-01-28 18:37:58.218724609 +0000 UTC m=+243.944030169" Jan 28 18:37:58 crc kubenswrapper[4721]: I0128 18:37:58.253505 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-cql6x" podStartSLOduration=2.86820392 podStartE2EDuration="1m18.253480477s" podCreationTimestamp="2026-01-28 18:36:40 +0000 UTC" firstStartedPulling="2026-01-28 18:36:42.244337757 +0000 UTC m=+167.969643317" lastFinishedPulling="2026-01-28 18:37:57.629614314 +0000 UTC m=+243.354919874" observedRunningTime="2026-01-28 18:37:58.248728993 +0000 UTC m=+243.974034553" watchObservedRunningTime="2026-01-28 18:37:58.253480477 +0000 UTC m=+243.978786037" Jan 28 18:37:58 crc kubenswrapper[4721]: I0128 18:37:58.268132 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-6k9rr" podStartSLOduration=3.720807359 podStartE2EDuration="1m20.268104973s" podCreationTimestamp="2026-01-28 18:36:38 +0000 UTC" firstStartedPulling="2026-01-28 18:36:41.199710926 +0000 UTC m=+166.925016486" lastFinishedPulling="2026-01-28 18:37:57.74700854 +0000 UTC m=+243.472314100" observedRunningTime="2026-01-28 18:37:58.266584747 +0000 UTC m=+243.991890307" watchObservedRunningTime="2026-01-28 18:37:58.268104973 +0000 UTC m=+243.993410533" Jan 28 18:37:58 crc kubenswrapper[4721]: I0128 18:37:58.290323 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-vlsl5" podStartSLOduration=2.869026772 podStartE2EDuration="1m17.290286589s" podCreationTimestamp="2026-01-28 18:36:41 +0000 UTC" firstStartedPulling="2026-01-28 18:36:43.325055472 +0000 UTC m=+169.050361022" lastFinishedPulling="2026-01-28 18:37:57.746315279 +0000 UTC m=+243.471620839" observedRunningTime="2026-01-28 18:37:58.287135872 +0000 UTC m=+244.012441442" watchObservedRunningTime="2026-01-28 18:37:58.290286589 +0000 UTC m=+244.015592149" Jan 28 18:37:59 crc kubenswrapper[4721]: I0128 18:37:59.217609 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nx6vw" event={"ID":"7ac4d9d7-c104-455a-b162-75b3bbf2a879","Type":"ContainerStarted","Data":"2612160059c4e82d7ea37c966930f8e2f5dc64664f4182c5ee54a9707750349d"} Jan 28 18:37:59 crc kubenswrapper[4721]: I0128 18:37:59.241586 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-nx6vw" podStartSLOduration=4.025271765 podStartE2EDuration="1m17.241562224s" podCreationTimestamp="2026-01-28 18:36:42 +0000 UTC" firstStartedPulling="2026-01-28 18:36:45.457260224 +0000 UTC m=+171.182565784" lastFinishedPulling="2026-01-28 18:37:58.673550683 +0000 UTC m=+244.398856243" observedRunningTime="2026-01-28 18:37:59.237017496 +0000 UTC m=+244.962323056" 
watchObservedRunningTime="2026-01-28 18:37:59.241562224 +0000 UTC m=+244.966867774" Jan 28 18:37:59 crc kubenswrapper[4721]: I0128 18:37:59.246546 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-6k9rr" Jan 28 18:37:59 crc kubenswrapper[4721]: I0128 18:37:59.246891 4721 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-6k9rr" Jan 28 18:37:59 crc kubenswrapper[4721]: I0128 18:37:59.388635 4721 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-f9khl" Jan 28 18:37:59 crc kubenswrapper[4721]: I0128 18:37:59.388717 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-f9khl" Jan 28 18:37:59 crc kubenswrapper[4721]: I0128 18:37:59.432381 4721 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-f9khl" Jan 28 18:37:59 crc kubenswrapper[4721]: I0128 18:37:59.553265 4721 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-ktm7m" Jan 28 18:37:59 crc kubenswrapper[4721]: I0128 18:37:59.553376 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-ktm7m" Jan 28 18:38:00 crc kubenswrapper[4721]: I0128 18:38:00.264417 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-f9khl" Jan 28 18:38:00 crc kubenswrapper[4721]: I0128 18:38:00.313499 4721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-6k9rr" podUID="e1764268-02a2-46af-a94d-b9f32dabcab8" containerName="registry-server" probeResult="failure" output=< Jan 28 18:38:00 crc kubenswrapper[4721]: timeout: failed to connect service ":50051" within 1s Jan 28 18:38:00 crc kubenswrapper[4721]: > Jan 28 18:38:00 crc kubenswrapper[4721]: I0128 18:38:00.386550 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-jtc8t"] Jan 28 18:38:00 crc kubenswrapper[4721]: I0128 18:38:00.592074 4721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-ktm7m" podUID="d093e4ed-b49f-4abb-9cab-67d8072aea98" containerName="registry-server" probeResult="failure" output=< Jan 28 18:38:00 crc kubenswrapper[4721]: timeout: failed to connect service ":50051" within 1s Jan 28 18:38:00 crc kubenswrapper[4721]: > Jan 28 18:38:01 crc kubenswrapper[4721]: I0128 18:38:01.470310 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-cql6x" Jan 28 18:38:01 crc kubenswrapper[4721]: I0128 18:38:01.470614 4721 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-cql6x" Jan 28 18:38:01 crc kubenswrapper[4721]: I0128 18:38:01.514424 4721 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-cql6x" Jan 28 18:38:01 crc kubenswrapper[4721]: I0128 18:38:01.622615 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-vlsl5" Jan 28 18:38:01 crc kubenswrapper[4721]: I0128 18:38:01.622695 4721 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-marketplace/redhat-marketplace-vlsl5" Jan 28 18:38:01 crc kubenswrapper[4721]: I0128 18:38:01.661788 4721 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-vlsl5" Jan 28 18:38:02 crc kubenswrapper[4721]: I0128 18:38:02.274920 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-vlsl5" Jan 28 18:38:02 crc kubenswrapper[4721]: I0128 18:38:02.275680 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-cql6x" Jan 28 18:38:02 crc kubenswrapper[4721]: I0128 18:38:02.970190 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-nx6vw" Jan 28 18:38:02 crc kubenswrapper[4721]: I0128 18:38:02.970264 4721 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-nx6vw" Jan 28 18:38:03 crc kubenswrapper[4721]: I0128 18:38:03.957901 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-vlsl5"] Jan 28 18:38:04 crc kubenswrapper[4721]: I0128 18:38:04.003736 4721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-nx6vw" podUID="7ac4d9d7-c104-455a-b162-75b3bbf2a879" containerName="registry-server" probeResult="failure" output=< Jan 28 18:38:04 crc kubenswrapper[4721]: timeout: failed to connect service ":50051" within 1s Jan 28 18:38:04 crc kubenswrapper[4721]: > Jan 28 18:38:04 crc kubenswrapper[4721]: I0128 18:38:04.247185 4721 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-vlsl5" podUID="cfea609e-20ae-449d-8952-ac4691aaec30" containerName="registry-server" containerID="cri-o://931173fa635918c0e0326d64d9cd64e7f9a014acbc036a66e594390c9bea4107" gracePeriod=2 Jan 28 18:38:05 crc kubenswrapper[4721]: I0128 18:38:05.218478 4721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vlsl5" Jan 28 18:38:05 crc kubenswrapper[4721]: I0128 18:38:05.256113 4721 generic.go:334] "Generic (PLEG): container finished" podID="cfea609e-20ae-449d-8952-ac4691aaec30" containerID="931173fa635918c0e0326d64d9cd64e7f9a014acbc036a66e594390c9bea4107" exitCode=0 Jan 28 18:38:05 crc kubenswrapper[4721]: I0128 18:38:05.256154 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vlsl5" event={"ID":"cfea609e-20ae-449d-8952-ac4691aaec30","Type":"ContainerDied","Data":"931173fa635918c0e0326d64d9cd64e7f9a014acbc036a66e594390c9bea4107"} Jan 28 18:38:05 crc kubenswrapper[4721]: I0128 18:38:05.256223 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vlsl5" event={"ID":"cfea609e-20ae-449d-8952-ac4691aaec30","Type":"ContainerDied","Data":"a0afc2cf54480fc4811aac793213838b21acbe684163f4397ace0068f5baccb1"} Jan 28 18:38:05 crc kubenswrapper[4721]: I0128 18:38:05.256235 4721 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vlsl5" Jan 28 18:38:05 crc kubenswrapper[4721]: I0128 18:38:05.256245 4721 scope.go:117] "RemoveContainer" containerID="931173fa635918c0e0326d64d9cd64e7f9a014acbc036a66e594390c9bea4107" Jan 28 18:38:05 crc kubenswrapper[4721]: I0128 18:38:05.271796 4721 scope.go:117] "RemoveContainer" containerID="10f5d74f926c5aa6cba02ab9059a6124312d032177274fdfd5f0d2d8f32bda41" Jan 28 18:38:05 crc kubenswrapper[4721]: I0128 18:38:05.285882 4721 scope.go:117] "RemoveContainer" containerID="95987c7c4701d7ff0a1916d29e9cef6d36db9e5f00148cea49cc66cebbea6883" Jan 28 18:38:05 crc kubenswrapper[4721]: I0128 18:38:05.303023 4721 scope.go:117] "RemoveContainer" containerID="931173fa635918c0e0326d64d9cd64e7f9a014acbc036a66e594390c9bea4107" Jan 28 18:38:05 crc kubenswrapper[4721]: E0128 18:38:05.303429 4721 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"931173fa635918c0e0326d64d9cd64e7f9a014acbc036a66e594390c9bea4107\": container with ID starting with 931173fa635918c0e0326d64d9cd64e7f9a014acbc036a66e594390c9bea4107 not found: ID does not exist" containerID="931173fa635918c0e0326d64d9cd64e7f9a014acbc036a66e594390c9bea4107" Jan 28 18:38:05 crc kubenswrapper[4721]: I0128 18:38:05.303469 4721 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"931173fa635918c0e0326d64d9cd64e7f9a014acbc036a66e594390c9bea4107"} err="failed to get container status \"931173fa635918c0e0326d64d9cd64e7f9a014acbc036a66e594390c9bea4107\": rpc error: code = NotFound desc = could not find container \"931173fa635918c0e0326d64d9cd64e7f9a014acbc036a66e594390c9bea4107\": container with ID starting with 931173fa635918c0e0326d64d9cd64e7f9a014acbc036a66e594390c9bea4107 not found: ID does not exist" Jan 28 18:38:05 crc kubenswrapper[4721]: I0128 18:38:05.303503 4721 scope.go:117] "RemoveContainer" containerID="10f5d74f926c5aa6cba02ab9059a6124312d032177274fdfd5f0d2d8f32bda41" Jan 28 18:38:05 crc kubenswrapper[4721]: E0128 18:38:05.303898 4721 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"10f5d74f926c5aa6cba02ab9059a6124312d032177274fdfd5f0d2d8f32bda41\": container with ID starting with 10f5d74f926c5aa6cba02ab9059a6124312d032177274fdfd5f0d2d8f32bda41 not found: ID does not exist" containerID="10f5d74f926c5aa6cba02ab9059a6124312d032177274fdfd5f0d2d8f32bda41" Jan 28 18:38:05 crc kubenswrapper[4721]: I0128 18:38:05.303928 4721 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"10f5d74f926c5aa6cba02ab9059a6124312d032177274fdfd5f0d2d8f32bda41"} err="failed to get container status \"10f5d74f926c5aa6cba02ab9059a6124312d032177274fdfd5f0d2d8f32bda41\": rpc error: code = NotFound desc = could not find container \"10f5d74f926c5aa6cba02ab9059a6124312d032177274fdfd5f0d2d8f32bda41\": container with ID starting with 10f5d74f926c5aa6cba02ab9059a6124312d032177274fdfd5f0d2d8f32bda41 not found: ID does not exist" Jan 28 18:38:05 crc kubenswrapper[4721]: I0128 18:38:05.303946 4721 scope.go:117] "RemoveContainer" containerID="95987c7c4701d7ff0a1916d29e9cef6d36db9e5f00148cea49cc66cebbea6883" Jan 28 18:38:05 crc kubenswrapper[4721]: E0128 18:38:05.304378 4721 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"95987c7c4701d7ff0a1916d29e9cef6d36db9e5f00148cea49cc66cebbea6883\": container with ID starting 
with 95987c7c4701d7ff0a1916d29e9cef6d36db9e5f00148cea49cc66cebbea6883 not found: ID does not exist" containerID="95987c7c4701d7ff0a1916d29e9cef6d36db9e5f00148cea49cc66cebbea6883" Jan 28 18:38:05 crc kubenswrapper[4721]: I0128 18:38:05.304477 4721 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"95987c7c4701d7ff0a1916d29e9cef6d36db9e5f00148cea49cc66cebbea6883"} err="failed to get container status \"95987c7c4701d7ff0a1916d29e9cef6d36db9e5f00148cea49cc66cebbea6883\": rpc error: code = NotFound desc = could not find container \"95987c7c4701d7ff0a1916d29e9cef6d36db9e5f00148cea49cc66cebbea6883\": container with ID starting with 95987c7c4701d7ff0a1916d29e9cef6d36db9e5f00148cea49cc66cebbea6883 not found: ID does not exist" Jan 28 18:38:05 crc kubenswrapper[4721]: I0128 18:38:05.307857 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lgpqd\" (UniqueName: \"kubernetes.io/projected/cfea609e-20ae-449d-8952-ac4691aaec30-kube-api-access-lgpqd\") pod \"cfea609e-20ae-449d-8952-ac4691aaec30\" (UID: \"cfea609e-20ae-449d-8952-ac4691aaec30\") " Jan 28 18:38:05 crc kubenswrapper[4721]: I0128 18:38:05.307903 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cfea609e-20ae-449d-8952-ac4691aaec30-utilities\") pod \"cfea609e-20ae-449d-8952-ac4691aaec30\" (UID: \"cfea609e-20ae-449d-8952-ac4691aaec30\") " Jan 28 18:38:05 crc kubenswrapper[4721]: I0128 18:38:05.307958 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cfea609e-20ae-449d-8952-ac4691aaec30-catalog-content\") pod \"cfea609e-20ae-449d-8952-ac4691aaec30\" (UID: \"cfea609e-20ae-449d-8952-ac4691aaec30\") " Jan 28 18:38:05 crc kubenswrapper[4721]: I0128 18:38:05.309182 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cfea609e-20ae-449d-8952-ac4691aaec30-utilities" (OuterVolumeSpecName: "utilities") pod "cfea609e-20ae-449d-8952-ac4691aaec30" (UID: "cfea609e-20ae-449d-8952-ac4691aaec30"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 18:38:05 crc kubenswrapper[4721]: I0128 18:38:05.313593 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cfea609e-20ae-449d-8952-ac4691aaec30-kube-api-access-lgpqd" (OuterVolumeSpecName: "kube-api-access-lgpqd") pod "cfea609e-20ae-449d-8952-ac4691aaec30" (UID: "cfea609e-20ae-449d-8952-ac4691aaec30"). InnerVolumeSpecName "kube-api-access-lgpqd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:38:05 crc kubenswrapper[4721]: I0128 18:38:05.332479 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cfea609e-20ae-449d-8952-ac4691aaec30-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "cfea609e-20ae-449d-8952-ac4691aaec30" (UID: "cfea609e-20ae-449d-8952-ac4691aaec30"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 18:38:05 crc kubenswrapper[4721]: I0128 18:38:05.408753 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lgpqd\" (UniqueName: \"kubernetes.io/projected/cfea609e-20ae-449d-8952-ac4691aaec30-kube-api-access-lgpqd\") on node \"crc\" DevicePath \"\"" Jan 28 18:38:05 crc kubenswrapper[4721]: I0128 18:38:05.408796 4721 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cfea609e-20ae-449d-8952-ac4691aaec30-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 18:38:05 crc kubenswrapper[4721]: I0128 18:38:05.408816 4721 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cfea609e-20ae-449d-8952-ac4691aaec30-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 18:38:05 crc kubenswrapper[4721]: I0128 18:38:05.579846 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-vlsl5"] Jan 28 18:38:05 crc kubenswrapper[4721]: I0128 18:38:05.582406 4721 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-vlsl5"] Jan 28 18:38:07 crc kubenswrapper[4721]: I0128 18:38:07.204104 4721 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 28 18:38:07 crc kubenswrapper[4721]: E0128 18:38:07.204660 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cfea609e-20ae-449d-8952-ac4691aaec30" containerName="extract-content" Jan 28 18:38:07 crc kubenswrapper[4721]: I0128 18:38:07.204676 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="cfea609e-20ae-449d-8952-ac4691aaec30" containerName="extract-content" Jan 28 18:38:07 crc kubenswrapper[4721]: E0128 18:38:07.204686 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cfea609e-20ae-449d-8952-ac4691aaec30" containerName="extract-utilities" Jan 28 18:38:07 crc kubenswrapper[4721]: I0128 18:38:07.204692 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="cfea609e-20ae-449d-8952-ac4691aaec30" containerName="extract-utilities" Jan 28 18:38:07 crc kubenswrapper[4721]: E0128 18:38:07.204699 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="791d827f-b809-4f3d-94d0-02a6722550e0" containerName="extract-content" Jan 28 18:38:07 crc kubenswrapper[4721]: I0128 18:38:07.204705 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="791d827f-b809-4f3d-94d0-02a6722550e0" containerName="extract-content" Jan 28 18:38:07 crc kubenswrapper[4721]: E0128 18:38:07.204718 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cfea609e-20ae-449d-8952-ac4691aaec30" containerName="registry-server" Jan 28 18:38:07 crc kubenswrapper[4721]: I0128 18:38:07.204723 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="cfea609e-20ae-449d-8952-ac4691aaec30" containerName="registry-server" Jan 28 18:38:07 crc kubenswrapper[4721]: E0128 18:38:07.204735 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="791d827f-b809-4f3d-94d0-02a6722550e0" containerName="extract-utilities" Jan 28 18:38:07 crc kubenswrapper[4721]: I0128 18:38:07.204740 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="791d827f-b809-4f3d-94d0-02a6722550e0" containerName="extract-utilities" Jan 28 18:38:07 crc kubenswrapper[4721]: E0128 18:38:07.204750 4721 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="876dbd4f-9cb5-4695-8c96-10e935387cf2" containerName="pruner" Jan 28 18:38:07 crc kubenswrapper[4721]: I0128 18:38:07.204758 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="876dbd4f-9cb5-4695-8c96-10e935387cf2" containerName="pruner" Jan 28 18:38:07 crc kubenswrapper[4721]: E0128 18:38:07.204766 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="791d827f-b809-4f3d-94d0-02a6722550e0" containerName="registry-server" Jan 28 18:38:07 crc kubenswrapper[4721]: I0128 18:38:07.204772 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="791d827f-b809-4f3d-94d0-02a6722550e0" containerName="registry-server" Jan 28 18:38:07 crc kubenswrapper[4721]: I0128 18:38:07.204879 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="cfea609e-20ae-449d-8952-ac4691aaec30" containerName="registry-server" Jan 28 18:38:07 crc kubenswrapper[4721]: I0128 18:38:07.204897 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="876dbd4f-9cb5-4695-8c96-10e935387cf2" containerName="pruner" Jan 28 18:38:07 crc kubenswrapper[4721]: I0128 18:38:07.204905 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="791d827f-b809-4f3d-94d0-02a6722550e0" containerName="registry-server" Jan 28 18:38:07 crc kubenswrapper[4721]: I0128 18:38:07.205239 4721 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 28 18:38:07 crc kubenswrapper[4721]: I0128 18:38:07.205352 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 28 18:38:07 crc kubenswrapper[4721]: I0128 18:38:07.205607 4721 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" containerID="cri-o://4a345c3bb49064600db61f16588229b6877293710e6c21c15d91746c2f1d50b7" gracePeriod=15 Jan 28 18:38:07 crc kubenswrapper[4721]: I0128 18:38:07.205630 4721 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" containerID="cri-o://0339ccd2bd809dc715d0e334b596c5abdebca1af6ac5f7afefa81aa8baa470ed" gracePeriod=15 Jan 28 18:38:07 crc kubenswrapper[4721]: I0128 18:38:07.205651 4721 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://93950542dfa7bda5686ef6d8ff927d14baafd149893c732658e9aa3916be64ae" gracePeriod=15 Jan 28 18:38:07 crc kubenswrapper[4721]: I0128 18:38:07.205688 4721 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" containerID="cri-o://486a6284a1e3930aac13368062b8796b4c271214dd08cb60c7ae2ce7b4a45a05" gracePeriod=15 Jan 28 18:38:07 crc kubenswrapper[4721]: I0128 18:38:07.205694 4721 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://90f1279d8262e042527207dbe56a1d08ba554650510bebfc53bd24cd84532026" gracePeriod=15 Jan 28 18:38:07 crc kubenswrapper[4721]: 
Jan 28 18:38:07 crc kubenswrapper[4721]: E0128 18:38:07.206685 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 28 18:38:07 crc kubenswrapper[4721]: I0128 18:38:07.206695 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 28 18:38:07 crc kubenswrapper[4721]: E0128 18:38:07.206706 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 28 18:38:07 crc kubenswrapper[4721]: I0128 18:38:07.206714 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 28 18:38:07 crc kubenswrapper[4721]: E0128 18:38:07.206724 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 28 18:38:07 crc kubenswrapper[4721]: I0128 18:38:07.206731 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 28 18:38:07 crc kubenswrapper[4721]: E0128 18:38:07.206742 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 28 18:38:07 crc kubenswrapper[4721]: I0128 18:38:07.206749 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 28 18:38:07 crc kubenswrapper[4721]: E0128 18:38:07.206759 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 28 18:38:07 crc kubenswrapper[4721]: I0128 18:38:07.206764 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 28 18:38:07 crc kubenswrapper[4721]: E0128 18:38:07.206775 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 28 18:38:07 crc kubenswrapper[4721]: I0128 18:38:07.206781 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 28 18:38:07 crc kubenswrapper[4721]: E0128 18:38:07.206791 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 28 18:38:07 crc kubenswrapper[4721]: I0128 18:38:07.206797 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 28 18:38:07 crc kubenswrapper[4721]: E0128 18:38:07.206808 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Jan 28 18:38:07 crc kubenswrapper[4721]: I0128 18:38:07.206817 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Jan 28 18:38:07 crc kubenswrapper[4721]: I0128 18:38:07.206932 4721 memory_manager.go:354] "RemoveStaleState removing state"
podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 28 18:38:07 crc kubenswrapper[4721]: I0128 18:38:07.206946 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 28 18:38:07 crc kubenswrapper[4721]: I0128 18:38:07.206958 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 28 18:38:07 crc kubenswrapper[4721]: I0128 18:38:07.206967 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 28 18:38:07 crc kubenswrapper[4721]: I0128 18:38:07.206977 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 28 18:38:07 crc kubenswrapper[4721]: I0128 18:38:07.206987 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 28 18:38:07 crc kubenswrapper[4721]: I0128 18:38:07.206999 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 28 18:38:07 crc kubenswrapper[4721]: I0128 18:38:07.233496 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 28 18:38:07 crc kubenswrapper[4721]: I0128 18:38:07.233566 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 28 18:38:07 crc kubenswrapper[4721]: I0128 18:38:07.233606 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 28 18:38:07 crc kubenswrapper[4721]: I0128 18:38:07.233635 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 28 18:38:07 crc kubenswrapper[4721]: I0128 18:38:07.233658 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 28 18:38:07 crc kubenswrapper[4721]: I0128 18:38:07.233703 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: 
\"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 28 18:38:07 crc kubenswrapper[4721]: I0128 18:38:07.233732 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 28 18:38:07 crc kubenswrapper[4721]: I0128 18:38:07.233784 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 28 18:38:07 crc kubenswrapper[4721]: I0128 18:38:07.335128 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 28 18:38:07 crc kubenswrapper[4721]: I0128 18:38:07.335759 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 28 18:38:07 crc kubenswrapper[4721]: I0128 18:38:07.335883 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 28 18:38:07 crc kubenswrapper[4721]: I0128 18:38:07.336019 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 28 18:38:07 crc kubenswrapper[4721]: I0128 18:38:07.336036 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 28 18:38:07 crc kubenswrapper[4721]: I0128 18:38:07.336186 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 28 18:38:07 crc kubenswrapper[4721]: I0128 18:38:07.336234 4721 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 28 18:38:07 crc kubenswrapper[4721]: I0128 18:38:07.336298 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 28 18:38:07 crc kubenswrapper[4721]: I0128 18:38:07.336319 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 28 18:38:07 crc kubenswrapper[4721]: I0128 18:38:07.336349 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 28 18:38:07 crc kubenswrapper[4721]: I0128 18:38:07.336474 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 28 18:38:07 crc kubenswrapper[4721]: I0128 18:38:07.336508 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 28 18:38:07 crc kubenswrapper[4721]: I0128 18:38:07.336537 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 28 18:38:07 crc kubenswrapper[4721]: I0128 18:38:07.336567 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 28 18:38:07 crc kubenswrapper[4721]: I0128 18:38:07.336596 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 28 18:38:07 crc kubenswrapper[4721]: I0128 18:38:07.337117 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod 
\"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 28 18:38:07 crc kubenswrapper[4721]: I0128 18:38:07.537640 4721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cfea609e-20ae-449d-8952-ac4691aaec30" path="/var/lib/kubelet/pods/cfea609e-20ae-449d-8952-ac4691aaec30/volumes" Jan 28 18:38:08 crc kubenswrapper[4721]: I0128 18:38:08.276329 4721 generic.go:334] "Generic (PLEG): container finished" podID="3bab193a-eb38-435d-8a0e-c3199e0abc80" containerID="61138e139d78055b92ccc8e5b4e2461482677d4aa337929a6b5f20c3093be023" exitCode=0 Jan 28 18:38:08 crc kubenswrapper[4721]: I0128 18:38:08.276400 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"3bab193a-eb38-435d-8a0e-c3199e0abc80","Type":"ContainerDied","Data":"61138e139d78055b92ccc8e5b4e2461482677d4aa337929a6b5f20c3093be023"} Jan 28 18:38:08 crc kubenswrapper[4721]: I0128 18:38:08.277643 4721 status_manager.go:851] "Failed to get status for pod" podUID="3bab193a-eb38-435d-8a0e-c3199e0abc80" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.66:6443: connect: connection refused" Jan 28 18:38:08 crc kubenswrapper[4721]: I0128 18:38:08.280048 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Jan 28 18:38:08 crc kubenswrapper[4721]: I0128 18:38:08.281325 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 28 18:38:08 crc kubenswrapper[4721]: I0128 18:38:08.282067 4721 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="0339ccd2bd809dc715d0e334b596c5abdebca1af6ac5f7afefa81aa8baa470ed" exitCode=0 Jan 28 18:38:08 crc kubenswrapper[4721]: I0128 18:38:08.282092 4721 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="90f1279d8262e042527207dbe56a1d08ba554650510bebfc53bd24cd84532026" exitCode=0 Jan 28 18:38:08 crc kubenswrapper[4721]: I0128 18:38:08.282101 4721 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="93950542dfa7bda5686ef6d8ff927d14baafd149893c732658e9aa3916be64ae" exitCode=0 Jan 28 18:38:08 crc kubenswrapper[4721]: I0128 18:38:08.282110 4721 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="486a6284a1e3930aac13368062b8796b4c271214dd08cb60c7ae2ce7b4a45a05" exitCode=2 Jan 28 18:38:08 crc kubenswrapper[4721]: I0128 18:38:08.282180 4721 scope.go:117] "RemoveContainer" containerID="ca3243d75b5ce4aff178d82c45c7d1f765013233b5736600151adb4801aa462b" Jan 28 18:38:08 crc kubenswrapper[4721]: E0128 18:38:08.571724 4721 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.66:6443: connect: connection refused" Jan 28 18:38:08 crc kubenswrapper[4721]: E0128 18:38:08.572163 4721 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": 
dial tcp 38.102.83.66:6443: connect: connection refused" Jan 28 18:38:08 crc kubenswrapper[4721]: E0128 18:38:08.572745 4721 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.66:6443: connect: connection refused" Jan 28 18:38:08 crc kubenswrapper[4721]: E0128 18:38:08.573040 4721 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.66:6443: connect: connection refused" Jan 28 18:38:08 crc kubenswrapper[4721]: E0128 18:38:08.573376 4721 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.66:6443: connect: connection refused" Jan 28 18:38:08 crc kubenswrapper[4721]: I0128 18:38:08.573409 4721 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Jan 28 18:38:08 crc kubenswrapper[4721]: E0128 18:38:08.573658 4721 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.66:6443: connect: connection refused" interval="200ms" Jan 28 18:38:08 crc kubenswrapper[4721]: E0128 18:38:08.774549 4721 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.66:6443: connect: connection refused" interval="400ms" Jan 28 18:38:09 crc kubenswrapper[4721]: E0128 18:38:09.175859 4721 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.66:6443: connect: connection refused" interval="800ms" Jan 28 18:38:09 crc kubenswrapper[4721]: I0128 18:38:09.320841 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 28 18:38:09 crc kubenswrapper[4721]: I0128 18:38:09.346110 4721 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-6k9rr" Jan 28 18:38:09 crc kubenswrapper[4721]: I0128 18:38:09.348266 4721 status_manager.go:851] "Failed to get status for pod" podUID="3bab193a-eb38-435d-8a0e-c3199e0abc80" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.66:6443: connect: connection refused" Jan 28 18:38:09 crc kubenswrapper[4721]: I0128 18:38:09.350587 4721 status_manager.go:851] "Failed to get status for pod" podUID="e1764268-02a2-46af-a94d-b9f32dabcab8" pod="openshift-marketplace/certified-operators-6k9rr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-6k9rr\": dial tcp 38.102.83.66:6443: connect: connection refused" Jan 28 18:38:09 crc kubenswrapper[4721]: I0128 18:38:09.418210 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-6k9rr"
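Nothing is listening on api-int.crc.testing:6443 while the apiserver restarts, so the node-lease controller above falls back from update to ensure and retries with a doubling interval: 200ms, 400ms, and 800ms here, then 1.6s, 3.2s, and 6.4s further down. A minimal sketch of that doubling backoff; the 7s ceiling is an assumption for illustration, since this log never shows the controller's actual cap:

```go
// Sketch of the doubling retry interval visible in the
// "Failed to ensure lease exists, will retry ... interval=" entries.
package main

import (
	"errors"
	"fmt"
	"time"
)

// retryWithBackoff retries op, doubling the wait after each failure up to
// limit -- the 200ms, 400ms, 800ms, ... progression logged above.
func retryWithBackoff(op func() error, start, limit time.Duration) {
	interval := start
	for op() != nil {
		fmt.Printf("will retry, interval=%v\n", interval)
		time.Sleep(interval)
		if interval *= 2; interval > limit {
			interval = limit
		}
	}
}

func main() {
	attempts := 0
	flaky := func() error {
		if attempts++; attempts < 6 {
			return errors.New("connect: connection refused")
		}
		return nil
	}
	// Prints interval=200ms, 400ms, 800ms, 1.6s, 3.2s before succeeding.
	retryWithBackoff(flaky, 200*time.Millisecond, 7*time.Second)
}
```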
Jan 28 18:38:09 crc kubenswrapper[4721]: I0128 18:38:09.418791 4721 status_manager.go:851] "Failed to get status for pod" podUID="e1764268-02a2-46af-a94d-b9f32dabcab8" pod="openshift-marketplace/certified-operators-6k9rr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-6k9rr\": dial tcp 38.102.83.66:6443: connect: connection refused" Jan 28 18:38:09 crc kubenswrapper[4721]: I0128 18:38:09.419028 4721 status_manager.go:851] "Failed to get status for pod" podUID="3bab193a-eb38-435d-8a0e-c3199e0abc80" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.66:6443: connect: connection refused" Jan 28 18:38:09 crc kubenswrapper[4721]: I0128 18:38:09.584829 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 28 18:38:09 crc kubenswrapper[4721]: I0128 18:38:09.586634 4721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 28 18:38:09 crc kubenswrapper[4721]: I0128 18:38:09.587504 4721 status_manager.go:851] "Failed to get status for pod" podUID="3bab193a-eb38-435d-8a0e-c3199e0abc80" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.66:6443: connect: connection refused" Jan 28 18:38:09 crc kubenswrapper[4721]: I0128 18:38:09.587967 4721 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.66:6443: connect: connection refused" Jan 28 18:38:09 crc kubenswrapper[4721]: I0128 18:38:09.588498 4721 status_manager.go:851] "Failed to get status for pod" podUID="e1764268-02a2-46af-a94d-b9f32dabcab8" pod="openshift-marketplace/certified-operators-6k9rr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-6k9rr\": dial tcp 38.102.83.66:6443: connect: connection refused" Jan 28 18:38:09 crc kubenswrapper[4721]: I0128 18:38:09.589942 4721 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 28 18:38:09 crc kubenswrapper[4721]: I0128 18:38:09.590351 4721 status_manager.go:851] "Failed to get status for pod" podUID="3bab193a-eb38-435d-8a0e-c3199e0abc80" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.66:6443: connect: connection refused" Jan 28 18:38:09 crc kubenswrapper[4721]: I0128 18:38:09.590714 4721 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.66:6443: connect: connection refused" Jan 28 18:38:09 crc kubenswrapper[4721]: I0128 18:38:09.591386 4721 status_manager.go:851] "Failed to get status for pod" podUID="e1764268-02a2-46af-a94d-b9f32dabcab8" pod="openshift-marketplace/certified-operators-6k9rr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-6k9rr\": dial tcp 38.102.83.66:6443: connect: connection refused" Jan 28 18:38:09 crc kubenswrapper[4721]: I0128 18:38:09.595232 4721 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-ktm7m" Jan 28 18:38:09 crc kubenswrapper[4721]: I0128 18:38:09.596201 4721 status_manager.go:851] "Failed to get status for pod" podUID="d093e4ed-b49f-4abb-9cab-67d8072aea98" pod="openshift-marketplace/certified-operators-ktm7m" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-ktm7m\": dial tcp 38.102.83.66:6443: connect: connection refused" Jan 28 18:38:09 crc kubenswrapper[4721]: I0128 18:38:09.596679 4721 status_manager.go:851] "Failed to get status for pod" podUID="e1764268-02a2-46af-a94d-b9f32dabcab8" pod="openshift-marketplace/certified-operators-6k9rr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-6k9rr\": dial tcp 38.102.83.66:6443: connect: connection refused" Jan 28 18:38:09 crc kubenswrapper[4721]: I0128 18:38:09.597304 4721 status_manager.go:851] "Failed to get status for pod" podUID="3bab193a-eb38-435d-8a0e-c3199e0abc80" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.66:6443: connect: connection refused" Jan 28 18:38:09 crc kubenswrapper[4721]: I0128 18:38:09.597830 4721 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.66:6443: connect: connection refused" Jan 28 18:38:09 crc kubenswrapper[4721]: I0128 18:38:09.634459 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-ktm7m" Jan 28 18:38:09 crc kubenswrapper[4721]: I0128 18:38:09.635037 4721 status_manager.go:851] "Failed to get status for pod" podUID="3bab193a-eb38-435d-8a0e-c3199e0abc80" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.66:6443: connect: connection 
refused" Jan 28 18:38:09 crc kubenswrapper[4721]: I0128 18:38:09.635320 4721 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.66:6443: connect: connection refused" Jan 28 18:38:09 crc kubenswrapper[4721]: I0128 18:38:09.635579 4721 status_manager.go:851] "Failed to get status for pod" podUID="d093e4ed-b49f-4abb-9cab-67d8072aea98" pod="openshift-marketplace/certified-operators-ktm7m" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-ktm7m\": dial tcp 38.102.83.66:6443: connect: connection refused" Jan 28 18:38:09 crc kubenswrapper[4721]: I0128 18:38:09.635839 4721 status_manager.go:851] "Failed to get status for pod" podUID="e1764268-02a2-46af-a94d-b9f32dabcab8" pod="openshift-marketplace/certified-operators-6k9rr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-6k9rr\": dial tcp 38.102.83.66:6443: connect: connection refused" Jan 28 18:38:09 crc kubenswrapper[4721]: I0128 18:38:09.666559 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 28 18:38:09 crc kubenswrapper[4721]: I0128 18:38:09.666658 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/3bab193a-eb38-435d-8a0e-c3199e0abc80-var-lock\") pod \"3bab193a-eb38-435d-8a0e-c3199e0abc80\" (UID: \"3bab193a-eb38-435d-8a0e-c3199e0abc80\") " Jan 28 18:38:09 crc kubenswrapper[4721]: I0128 18:38:09.666682 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3bab193a-eb38-435d-8a0e-c3199e0abc80-kubelet-dir\") pod \"3bab193a-eb38-435d-8a0e-c3199e0abc80\" (UID: \"3bab193a-eb38-435d-8a0e-c3199e0abc80\") " Jan 28 18:38:09 crc kubenswrapper[4721]: I0128 18:38:09.666689 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 18:38:09 crc kubenswrapper[4721]: I0128 18:38:09.666709 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 28 18:38:09 crc kubenswrapper[4721]: I0128 18:38:09.666722 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3bab193a-eb38-435d-8a0e-c3199e0abc80-var-lock" (OuterVolumeSpecName: "var-lock") pod "3bab193a-eb38-435d-8a0e-c3199e0abc80" (UID: "3bab193a-eb38-435d-8a0e-c3199e0abc80"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 18:38:09 crc kubenswrapper[4721]: I0128 18:38:09.666730 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 18:38:09 crc kubenswrapper[4721]: I0128 18:38:09.666771 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3bab193a-eb38-435d-8a0e-c3199e0abc80-kube-api-access\") pod \"3bab193a-eb38-435d-8a0e-c3199e0abc80\" (UID: \"3bab193a-eb38-435d-8a0e-c3199e0abc80\") " Jan 28 18:38:09 crc kubenswrapper[4721]: I0128 18:38:09.666818 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 28 18:38:09 crc kubenswrapper[4721]: I0128 18:38:09.666816 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3bab193a-eb38-435d-8a0e-c3199e0abc80-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "3bab193a-eb38-435d-8a0e-c3199e0abc80" (UID: "3bab193a-eb38-435d-8a0e-c3199e0abc80"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 18:38:09 crc kubenswrapper[4721]: I0128 18:38:09.666921 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "audit-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 18:38:09 crc kubenswrapper[4721]: I0128 18:38:09.667033 4721 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") on node \"crc\" DevicePath \"\"" Jan 28 18:38:09 crc kubenswrapper[4721]: I0128 18:38:09.667049 4721 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 28 18:38:09 crc kubenswrapper[4721]: I0128 18:38:09.667064 4721 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/3bab193a-eb38-435d-8a0e-c3199e0abc80-var-lock\") on node \"crc\" DevicePath \"\"" Jan 28 18:38:09 crc kubenswrapper[4721]: I0128 18:38:09.667075 4721 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3bab193a-eb38-435d-8a0e-c3199e0abc80-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 28 18:38:09 crc kubenswrapper[4721]: I0128 18:38:09.667086 4721 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") on node \"crc\" DevicePath \"\"" Jan 28 18:38:09 crc kubenswrapper[4721]: I0128 18:38:09.671956 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3bab193a-eb38-435d-8a0e-c3199e0abc80-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "3bab193a-eb38-435d-8a0e-c3199e0abc80" (UID: "3bab193a-eb38-435d-8a0e-c3199e0abc80"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:38:09 crc kubenswrapper[4721]: I0128 18:38:09.767739 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3bab193a-eb38-435d-8a0e-c3199e0abc80-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 28 18:38:09 crc kubenswrapper[4721]: E0128 18:38:09.977399 4721 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.66:6443: connect: connection refused" interval="1.6s" Jan 28 18:38:10 crc kubenswrapper[4721]: I0128 18:38:10.334502 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"3bab193a-eb38-435d-8a0e-c3199e0abc80","Type":"ContainerDied","Data":"b8f55f0d6d722e95cb2827897a45e505ad4d09c409f64ba08deed1355b479006"} Jan 28 18:38:10 crc kubenswrapper[4721]: I0128 18:38:10.334897 4721 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b8f55f0d6d722e95cb2827897a45e505ad4d09c409f64ba08deed1355b479006" Jan 28 18:38:10 crc kubenswrapper[4721]: I0128 18:38:10.334525 4721 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 28 18:38:10 crc kubenswrapper[4721]: I0128 18:38:10.337814 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 28 18:38:10 crc kubenswrapper[4721]: I0128 18:38:10.338814 4721 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="4a345c3bb49064600db61f16588229b6877293710e6c21c15d91746c2f1d50b7" exitCode=0 Jan 28 18:38:10 crc kubenswrapper[4721]: I0128 18:38:10.338872 4721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 28 18:38:10 crc kubenswrapper[4721]: I0128 18:38:10.339093 4721 scope.go:117] "RemoveContainer" containerID="0339ccd2bd809dc715d0e334b596c5abdebca1af6ac5f7afefa81aa8baa470ed" Jan 28 18:38:10 crc kubenswrapper[4721]: I0128 18:38:10.349495 4721 status_manager.go:851] "Failed to get status for pod" podUID="3bab193a-eb38-435d-8a0e-c3199e0abc80" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.66:6443: connect: connection refused" Jan 28 18:38:10 crc kubenswrapper[4721]: I0128 18:38:10.350007 4721 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.66:6443: connect: connection refused" Jan 28 18:38:10 crc kubenswrapper[4721]: I0128 18:38:10.350297 4721 status_manager.go:851] "Failed to get status for pod" podUID="d093e4ed-b49f-4abb-9cab-67d8072aea98" pod="openshift-marketplace/certified-operators-ktm7m" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-ktm7m\": dial tcp 38.102.83.66:6443: connect: connection refused" Jan 28 18:38:10 crc kubenswrapper[4721]: I0128 18:38:10.350510 4721 status_manager.go:851] "Failed to get status for pod" podUID="e1764268-02a2-46af-a94d-b9f32dabcab8" pod="openshift-marketplace/certified-operators-6k9rr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-6k9rr\": dial tcp 38.102.83.66:6443: connect: connection refused" Jan 28 18:38:10 crc kubenswrapper[4721]: I0128 18:38:10.353766 4721 status_manager.go:851] "Failed to get status for pod" podUID="3bab193a-eb38-435d-8a0e-c3199e0abc80" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.66:6443: connect: connection refused" Jan 28 18:38:10 crc kubenswrapper[4721]: I0128 18:38:10.359423 4721 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.66:6443: connect: connection refused" Jan 28 18:38:10 crc kubenswrapper[4721]: I0128 18:38:10.359853 4721 status_manager.go:851] "Failed to get status for pod" podUID="d093e4ed-b49f-4abb-9cab-67d8072aea98" pod="openshift-marketplace/certified-operators-ktm7m" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-ktm7m\": dial tcp 38.102.83.66:6443: connect: connection refused" Jan 28 18:38:10 crc kubenswrapper[4721]: I0128 18:38:10.360164 4721 status_manager.go:851] "Failed to get status for pod" podUID="e1764268-02a2-46af-a94d-b9f32dabcab8" pod="openshift-marketplace/certified-operators-6k9rr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-6k9rr\": dial tcp 38.102.83.66:6443: connect: connection refused" Jan 28 18:38:10 crc kubenswrapper[4721]: I0128 18:38:10.361221 4721 scope.go:117] "RemoveContainer" containerID="90f1279d8262e042527207dbe56a1d08ba554650510bebfc53bd24cd84532026" Jan 28 18:38:10 crc kubenswrapper[4721]: I0128 18:38:10.375941 4721 scope.go:117] "RemoveContainer" containerID="93950542dfa7bda5686ef6d8ff927d14baafd149893c732658e9aa3916be64ae" Jan 28 18:38:10 crc kubenswrapper[4721]: I0128 18:38:10.395546 4721 scope.go:117] "RemoveContainer" containerID="486a6284a1e3930aac13368062b8796b4c271214dd08cb60c7ae2ce7b4a45a05" Jan 28 18:38:10 crc kubenswrapper[4721]: I0128 18:38:10.409489 4721 scope.go:117] "RemoveContainer" containerID="4a345c3bb49064600db61f16588229b6877293710e6c21c15d91746c2f1d50b7" Jan 28 18:38:10 crc kubenswrapper[4721]: I0128 18:38:10.423638 4721 scope.go:117] "RemoveContainer" containerID="1499c7b625ec721799e4cf54e3de39ab75a752bbb0450a8eeffcdb6259f8a585" Jan 28 18:38:10 crc kubenswrapper[4721]: I0128 18:38:10.440994 4721 scope.go:117] "RemoveContainer" containerID="0339ccd2bd809dc715d0e334b596c5abdebca1af6ac5f7afefa81aa8baa470ed" Jan 28 18:38:10 crc kubenswrapper[4721]: E0128 18:38:10.443877 4721 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0339ccd2bd809dc715d0e334b596c5abdebca1af6ac5f7afefa81aa8baa470ed\": container with ID starting with 0339ccd2bd809dc715d0e334b596c5abdebca1af6ac5f7afefa81aa8baa470ed not found: ID does not exist" containerID="0339ccd2bd809dc715d0e334b596c5abdebca1af6ac5f7afefa81aa8baa470ed" Jan 28 18:38:10 crc kubenswrapper[4721]: I0128 18:38:10.443955 4721 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0339ccd2bd809dc715d0e334b596c5abdebca1af6ac5f7afefa81aa8baa470ed"} err="failed to get container status \"0339ccd2bd809dc715d0e334b596c5abdebca1af6ac5f7afefa81aa8baa470ed\": rpc error: code = NotFound desc = could not find container \"0339ccd2bd809dc715d0e334b596c5abdebca1af6ac5f7afefa81aa8baa470ed\": container with ID starting with 0339ccd2bd809dc715d0e334b596c5abdebca1af6ac5f7afefa81aa8baa470ed not found: ID does not exist" Jan 28 18:38:10 crc kubenswrapper[4721]: I0128 18:38:10.443986 4721 scope.go:117] "RemoveContainer" containerID="90f1279d8262e042527207dbe56a1d08ba554650510bebfc53bd24cd84532026" Jan 28 18:38:10 crc kubenswrapper[4721]: E0128 18:38:10.444609 4721 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"90f1279d8262e042527207dbe56a1d08ba554650510bebfc53bd24cd84532026\": container with ID starting with 90f1279d8262e042527207dbe56a1d08ba554650510bebfc53bd24cd84532026 not found: ID does not exist" containerID="90f1279d8262e042527207dbe56a1d08ba554650510bebfc53bd24cd84532026" Jan 28 18:38:10 crc kubenswrapper[4721]: I0128 18:38:10.444674 4721 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"90f1279d8262e042527207dbe56a1d08ba554650510bebfc53bd24cd84532026"} err="failed to get container status \"90f1279d8262e042527207dbe56a1d08ba554650510bebfc53bd24cd84532026\": rpc error: code = NotFound desc = could not find container \"90f1279d8262e042527207dbe56a1d08ba554650510bebfc53bd24cd84532026\": container with ID starting with 90f1279d8262e042527207dbe56a1d08ba554650510bebfc53bd24cd84532026 not found: ID does not exist" Jan 28 18:38:10 crc kubenswrapper[4721]: I0128 18:38:10.444689 4721 scope.go:117] "RemoveContainer" containerID="93950542dfa7bda5686ef6d8ff927d14baafd149893c732658e9aa3916be64ae" Jan 28 18:38:10 crc kubenswrapper[4721]: E0128 18:38:10.445008 4721 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"93950542dfa7bda5686ef6d8ff927d14baafd149893c732658e9aa3916be64ae\": container with ID starting with 93950542dfa7bda5686ef6d8ff927d14baafd149893c732658e9aa3916be64ae not found: ID does not exist" containerID="93950542dfa7bda5686ef6d8ff927d14baafd149893c732658e9aa3916be64ae" Jan 28 18:38:10 crc kubenswrapper[4721]: I0128 18:38:10.445044 4721 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"93950542dfa7bda5686ef6d8ff927d14baafd149893c732658e9aa3916be64ae"} err="failed to get container status \"93950542dfa7bda5686ef6d8ff927d14baafd149893c732658e9aa3916be64ae\": rpc error: code = NotFound desc = could not find container \"93950542dfa7bda5686ef6d8ff927d14baafd149893c732658e9aa3916be64ae\": container with ID starting with 93950542dfa7bda5686ef6d8ff927d14baafd149893c732658e9aa3916be64ae not found: ID does not exist" Jan 28 18:38:10 crc kubenswrapper[4721]: I0128 18:38:10.445071 4721 scope.go:117] "RemoveContainer" containerID="486a6284a1e3930aac13368062b8796b4c271214dd08cb60c7ae2ce7b4a45a05" Jan 28 18:38:10 crc kubenswrapper[4721]: E0128 18:38:10.446505 4721 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"486a6284a1e3930aac13368062b8796b4c271214dd08cb60c7ae2ce7b4a45a05\": container with ID starting with 486a6284a1e3930aac13368062b8796b4c271214dd08cb60c7ae2ce7b4a45a05 not found: ID does not exist" containerID="486a6284a1e3930aac13368062b8796b4c271214dd08cb60c7ae2ce7b4a45a05" Jan 28 18:38:10 crc kubenswrapper[4721]: I0128 18:38:10.446566 4721 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"486a6284a1e3930aac13368062b8796b4c271214dd08cb60c7ae2ce7b4a45a05"} err="failed to get container status \"486a6284a1e3930aac13368062b8796b4c271214dd08cb60c7ae2ce7b4a45a05\": rpc error: code = NotFound desc = could not find container \"486a6284a1e3930aac13368062b8796b4c271214dd08cb60c7ae2ce7b4a45a05\": container with ID starting with 486a6284a1e3930aac13368062b8796b4c271214dd08cb60c7ae2ce7b4a45a05 not found: ID does not exist" Jan 28 18:38:10 crc kubenswrapper[4721]: I0128 18:38:10.446606 4721 scope.go:117] "RemoveContainer" containerID="4a345c3bb49064600db61f16588229b6877293710e6c21c15d91746c2f1d50b7" Jan 28 18:38:10 crc kubenswrapper[4721]: E0128 18:38:10.446909 4721 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4a345c3bb49064600db61f16588229b6877293710e6c21c15d91746c2f1d50b7\": container with ID starting with 4a345c3bb49064600db61f16588229b6877293710e6c21c15d91746c2f1d50b7 not found: ID does not exist" 
containerID="4a345c3bb49064600db61f16588229b6877293710e6c21c15d91746c2f1d50b7" Jan 28 18:38:10 crc kubenswrapper[4721]: I0128 18:38:10.446937 4721 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4a345c3bb49064600db61f16588229b6877293710e6c21c15d91746c2f1d50b7"} err="failed to get container status \"4a345c3bb49064600db61f16588229b6877293710e6c21c15d91746c2f1d50b7\": rpc error: code = NotFound desc = could not find container \"4a345c3bb49064600db61f16588229b6877293710e6c21c15d91746c2f1d50b7\": container with ID starting with 4a345c3bb49064600db61f16588229b6877293710e6c21c15d91746c2f1d50b7 not found: ID does not exist" Jan 28 18:38:10 crc kubenswrapper[4721]: I0128 18:38:10.446956 4721 scope.go:117] "RemoveContainer" containerID="1499c7b625ec721799e4cf54e3de39ab75a752bbb0450a8eeffcdb6259f8a585" Jan 28 18:38:10 crc kubenswrapper[4721]: E0128 18:38:10.447625 4721 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1499c7b625ec721799e4cf54e3de39ab75a752bbb0450a8eeffcdb6259f8a585\": container with ID starting with 1499c7b625ec721799e4cf54e3de39ab75a752bbb0450a8eeffcdb6259f8a585 not found: ID does not exist" containerID="1499c7b625ec721799e4cf54e3de39ab75a752bbb0450a8eeffcdb6259f8a585" Jan 28 18:38:10 crc kubenswrapper[4721]: I0128 18:38:10.447658 4721 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1499c7b625ec721799e4cf54e3de39ab75a752bbb0450a8eeffcdb6259f8a585"} err="failed to get container status \"1499c7b625ec721799e4cf54e3de39ab75a752bbb0450a8eeffcdb6259f8a585\": rpc error: code = NotFound desc = could not find container \"1499c7b625ec721799e4cf54e3de39ab75a752bbb0450a8eeffcdb6259f8a585\": container with ID starting with 1499c7b625ec721799e4cf54e3de39ab75a752bbb0450a8eeffcdb6259f8a585 not found: ID does not exist" Jan 28 18:38:11 crc kubenswrapper[4721]: I0128 18:38:11.536130 4721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4b27818a5e8e43d0dc095d08835c792" path="/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/volumes" Jan 28 18:38:11 crc kubenswrapper[4721]: E0128 18:38:11.579023 4721 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.66:6443: connect: connection refused" interval="3.2s" Jan 28 18:38:12 crc kubenswrapper[4721]: E0128 18:38:12.237829 4721 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.66:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 28 18:38:12 crc kubenswrapper[4721]: I0128 18:38:12.238307 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 28 18:38:12 crc kubenswrapper[4721]: E0128 18:38:12.303550 4721 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.66:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.188ef902663264c1 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-28 18:38:12.303013057 +0000 UTC m=+258.028318617,LastTimestamp:2026-01-28 18:38:12.303013057 +0000 UTC m=+258.028318617,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 28 18:38:12 crc kubenswrapper[4721]: I0128 18:38:12.353571 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"8aa8e1579b6ec6e129f53a65e839848e929afdb22a7b296755158c435b361229"} Jan 28 18:38:13 crc kubenswrapper[4721]: I0128 18:38:13.009039 4721 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-nx6vw" Jan 28 18:38:13 crc kubenswrapper[4721]: I0128 18:38:13.009473 4721 status_manager.go:851] "Failed to get status for pod" podUID="3bab193a-eb38-435d-8a0e-c3199e0abc80" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.66:6443: connect: connection refused" Jan 28 18:38:13 crc kubenswrapper[4721]: I0128 18:38:13.009850 4721 status_manager.go:851] "Failed to get status for pod" podUID="d093e4ed-b49f-4abb-9cab-67d8072aea98" pod="openshift-marketplace/certified-operators-ktm7m" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-ktm7m\": dial tcp 38.102.83.66:6443: connect: connection refused" Jan 28 18:38:13 crc kubenswrapper[4721]: I0128 18:38:13.010382 4721 status_manager.go:851] "Failed to get status for pod" podUID="e1764268-02a2-46af-a94d-b9f32dabcab8" pod="openshift-marketplace/certified-operators-6k9rr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-6k9rr\": dial tcp 38.102.83.66:6443: connect: connection refused" Jan 28 18:38:13 crc kubenswrapper[4721]: I0128 18:38:13.010718 4721 status_manager.go:851] "Failed to get status for pod" podUID="7ac4d9d7-c104-455a-b162-75b3bbf2a879" pod="openshift-marketplace/redhat-operators-nx6vw" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-nx6vw\": dial tcp 38.102.83.66:6443: connect: connection refused" Jan 28 18:38:13 crc kubenswrapper[4721]: I0128 18:38:13.045746 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-marketplace/redhat-operators-nx6vw" Jan 28 18:38:13 crc kubenswrapper[4721]: I0128 18:38:13.046488 4721 status_manager.go:851] "Failed to get status for pod" podUID="7ac4d9d7-c104-455a-b162-75b3bbf2a879" pod="openshift-marketplace/redhat-operators-nx6vw" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-nx6vw\": dial tcp 38.102.83.66:6443: connect: connection refused" Jan 28 18:38:13 crc kubenswrapper[4721]: I0128 18:38:13.046771 4721 status_manager.go:851] "Failed to get status for pod" podUID="3bab193a-eb38-435d-8a0e-c3199e0abc80" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.66:6443: connect: connection refused" Jan 28 18:38:13 crc kubenswrapper[4721]: I0128 18:38:13.046959 4721 status_manager.go:851] "Failed to get status for pod" podUID="d093e4ed-b49f-4abb-9cab-67d8072aea98" pod="openshift-marketplace/certified-operators-ktm7m" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-ktm7m\": dial tcp 38.102.83.66:6443: connect: connection refused" Jan 28 18:38:13 crc kubenswrapper[4721]: I0128 18:38:13.047126 4721 status_manager.go:851] "Failed to get status for pod" podUID="e1764268-02a2-46af-a94d-b9f32dabcab8" pod="openshift-marketplace/certified-operators-6k9rr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-6k9rr\": dial tcp 38.102.83.66:6443: connect: connection refused" Jan 28 18:38:13 crc kubenswrapper[4721]: I0128 18:38:13.359474 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"2d74681de970c7e709fa9d17c39e5cf88f883f6fa6c7b831d6db8760e700649f"} Jan 28 18:38:13 crc kubenswrapper[4721]: E0128 18:38:13.359892 4721 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.66:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 28 18:38:13 crc kubenswrapper[4721]: I0128 18:38:13.360386 4721 status_manager.go:851] "Failed to get status for pod" podUID="3bab193a-eb38-435d-8a0e-c3199e0abc80" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.66:6443: connect: connection refused" Jan 28 18:38:13 crc kubenswrapper[4721]: I0128 18:38:13.360741 4721 status_manager.go:851] "Failed to get status for pod" podUID="d093e4ed-b49f-4abb-9cab-67d8072aea98" pod="openshift-marketplace/certified-operators-ktm7m" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-ktm7m\": dial tcp 38.102.83.66:6443: connect: connection refused" Jan 28 18:38:13 crc kubenswrapper[4721]: I0128 18:38:13.361007 4721 status_manager.go:851] "Failed to get status for pod" podUID="e1764268-02a2-46af-a94d-b9f32dabcab8" pod="openshift-marketplace/certified-operators-6k9rr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-6k9rr\": dial tcp 38.102.83.66:6443: connect: connection refused" Jan 28 18:38:13 crc kubenswrapper[4721]: I0128 18:38:13.361295 4721 
status_manager.go:851] "Failed to get status for pod" podUID="7ac4d9d7-c104-455a-b162-75b3bbf2a879" pod="openshift-marketplace/redhat-operators-nx6vw" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-nx6vw\": dial tcp 38.102.83.66:6443: connect: connection refused" Jan 28 18:38:13 crc kubenswrapper[4721]: E0128 18:38:13.595370 4721 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.66:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.188ef902663264c1 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-28 18:38:12.303013057 +0000 UTC m=+258.028318617,LastTimestamp:2026-01-28 18:38:12.303013057 +0000 UTC m=+258.028318617,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 28 18:38:14 crc kubenswrapper[4721]: E0128 18:38:14.366076 4721 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.66:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 28 18:38:14 crc kubenswrapper[4721]: E0128 18:38:14.780229 4721 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.66:6443: connect: connection refused" interval="6.4s" Jan 28 18:38:15 crc kubenswrapper[4721]: I0128 18:38:15.540057 4721 status_manager.go:851] "Failed to get status for pod" podUID="7ac4d9d7-c104-455a-b162-75b3bbf2a879" pod="openshift-marketplace/redhat-operators-nx6vw" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-nx6vw\": dial tcp 38.102.83.66:6443: connect: connection refused" Jan 28 18:38:15 crc kubenswrapper[4721]: I0128 18:38:15.540461 4721 status_manager.go:851] "Failed to get status for pod" podUID="3bab193a-eb38-435d-8a0e-c3199e0abc80" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.66:6443: connect: connection refused" Jan 28 18:38:15 crc kubenswrapper[4721]: I0128 18:38:15.540770 4721 status_manager.go:851] "Failed to get status for pod" podUID="d093e4ed-b49f-4abb-9cab-67d8072aea98" pod="openshift-marketplace/certified-operators-ktm7m" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-ktm7m\": dial tcp 38.102.83.66:6443: connect: connection refused" Jan 28 18:38:15 crc kubenswrapper[4721]: I0128 18:38:15.541301 4721 status_manager.go:851] "Failed to get status for pod" podUID="e1764268-02a2-46af-a94d-b9f32dabcab8" 
pod="openshift-marketplace/certified-operators-6k9rr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-6k9rr\": dial tcp 38.102.83.66:6443: connect: connection refused" Jan 28 18:38:20 crc kubenswrapper[4721]: E0128 18:38:20.391592 4721 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:38:20Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:38:20Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:38:20Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:38:20Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Patch \"https://api-int.crc.testing:6443/api/v1/nodes/crc/status?timeout=10s\": dial tcp 38.102.83.66:6443: connect: connection refused" Jan 28 18:38:20 crc kubenswrapper[4721]: E0128 18:38:20.393964 4721 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.66:6443: connect: connection refused" Jan 28 18:38:20 crc kubenswrapper[4721]: E0128 18:38:20.394507 4721 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.66:6443: connect: connection refused" Jan 28 18:38:20 crc kubenswrapper[4721]: E0128 18:38:20.394713 4721 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.66:6443: connect: connection refused" Jan 28 18:38:20 crc kubenswrapper[4721]: E0128 18:38:20.394885 4721 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.66:6443: connect: connection refused" Jan 28 18:38:20 crc kubenswrapper[4721]: E0128 18:38:20.394905 4721 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 28 18:38:21 crc kubenswrapper[4721]: E0128 18:38:21.181340 4721 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.66:6443: connect: connection refused" interval="7s" Jan 28 18:38:21 crc kubenswrapper[4721]: I0128 18:38:21.528983 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 28 18:38:21 crc kubenswrapper[4721]: I0128 18:38:21.530107 4721 status_manager.go:851] "Failed to get status for pod" podUID="3bab193a-eb38-435d-8a0e-c3199e0abc80" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.66:6443: connect: connection refused" Jan 28 18:38:21 crc kubenswrapper[4721]: I0128 18:38:21.530871 4721 status_manager.go:851] "Failed to get status for pod" podUID="d093e4ed-b49f-4abb-9cab-67d8072aea98" pod="openshift-marketplace/certified-operators-ktm7m" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-ktm7m\": dial tcp 38.102.83.66:6443: connect: connection refused" Jan 28 18:38:21 crc kubenswrapper[4721]: I0128 18:38:21.531253 4721 status_manager.go:851] "Failed to get status for pod" podUID="e1764268-02a2-46af-a94d-b9f32dabcab8" pod="openshift-marketplace/certified-operators-6k9rr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-6k9rr\": dial tcp 38.102.83.66:6443: connect: connection refused" Jan 28 18:38:21 crc kubenswrapper[4721]: I0128 18:38:21.531511 4721 status_manager.go:851] "Failed to get status for pod" podUID="7ac4d9d7-c104-455a-b162-75b3bbf2a879" pod="openshift-marketplace/redhat-operators-nx6vw" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-nx6vw\": dial tcp 38.102.83.66:6443: connect: connection refused" Jan 28 18:38:21 crc kubenswrapper[4721]: I0128 18:38:21.548989 4721 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="94f18835-9a2d-4427-bc71-e4cd48b94c19" Jan 28 18:38:21 crc kubenswrapper[4721]: I0128 18:38:21.549043 4721 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="94f18835-9a2d-4427-bc71-e4cd48b94c19" Jan 28 18:38:21 crc kubenswrapper[4721]: E0128 18:38:21.549769 4721 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.66:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 28 18:38:21 crc kubenswrapper[4721]: I0128 18:38:21.550343 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 28 18:38:22 crc kubenswrapper[4721]: I0128 18:38:22.406504 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Jan 28 18:38:22 crc kubenswrapper[4721]: I0128 18:38:22.406770 4721 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="3f712f75bf0603b358c696f1a62f2363254ab926615ab377291c7083308e3f2d" exitCode=1 Jan 28 18:38:22 crc kubenswrapper[4721]: I0128 18:38:22.406829 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"3f712f75bf0603b358c696f1a62f2363254ab926615ab377291c7083308e3f2d"} Jan 28 18:38:22 crc kubenswrapper[4721]: I0128 18:38:22.407383 4721 scope.go:117] "RemoveContainer" containerID="3f712f75bf0603b358c696f1a62f2363254ab926615ab377291c7083308e3f2d" Jan 28 18:38:22 crc kubenswrapper[4721]: I0128 18:38:22.407601 4721 status_manager.go:851] "Failed to get status for pod" podUID="7ac4d9d7-c104-455a-b162-75b3bbf2a879" pod="openshift-marketplace/redhat-operators-nx6vw" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-nx6vw\": dial tcp 38.102.83.66:6443: connect: connection refused" Jan 28 18:38:22 crc kubenswrapper[4721]: I0128 18:38:22.407840 4721 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.66:6443: connect: connection refused" Jan 28 18:38:22 crc kubenswrapper[4721]: I0128 18:38:22.408264 4721 generic.go:334] "Generic (PLEG): container finished" podID="71bb4a3aecc4ba5b26c4b7318770ce13" containerID="f340a714eb24b4f46f701f8f7a81d5c51fb7609410a83c86fddbf7c7ea0489a4" exitCode=0 Jan 28 18:38:22 crc kubenswrapper[4721]: I0128 18:38:22.408293 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerDied","Data":"f340a714eb24b4f46f701f8f7a81d5c51fb7609410a83c86fddbf7c7ea0489a4"} Jan 28 18:38:22 crc kubenswrapper[4721]: I0128 18:38:22.408332 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"3f3865a127ec3f45d569984b2a77d7feb73483434c36be08e2ec2dc367d35231"} Jan 28 18:38:22 crc kubenswrapper[4721]: I0128 18:38:22.408311 4721 status_manager.go:851] "Failed to get status for pod" podUID="3bab193a-eb38-435d-8a0e-c3199e0abc80" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.66:6443: connect: connection refused" Jan 28 18:38:22 crc kubenswrapper[4721]: I0128 18:38:22.408585 4721 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="94f18835-9a2d-4427-bc71-e4cd48b94c19" Jan 28 18:38:22 crc kubenswrapper[4721]: I0128 18:38:22.408606 4721 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
podUID="94f18835-9a2d-4427-bc71-e4cd48b94c19" Jan 28 18:38:22 crc kubenswrapper[4721]: I0128 18:38:22.408947 4721 status_manager.go:851] "Failed to get status for pod" podUID="d093e4ed-b49f-4abb-9cab-67d8072aea98" pod="openshift-marketplace/certified-operators-ktm7m" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-ktm7m\": dial tcp 38.102.83.66:6443: connect: connection refused" Jan 28 18:38:22 crc kubenswrapper[4721]: E0128 18:38:22.409069 4721 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.66:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 28 18:38:22 crc kubenswrapper[4721]: I0128 18:38:22.409414 4721 status_manager.go:851] "Failed to get status for pod" podUID="e1764268-02a2-46af-a94d-b9f32dabcab8" pod="openshift-marketplace/certified-operators-6k9rr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-6k9rr\": dial tcp 38.102.83.66:6443: connect: connection refused" Jan 28 18:38:22 crc kubenswrapper[4721]: I0128 18:38:22.409655 4721 status_manager.go:851] "Failed to get status for pod" podUID="d093e4ed-b49f-4abb-9cab-67d8072aea98" pod="openshift-marketplace/certified-operators-ktm7m" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-ktm7m\": dial tcp 38.102.83.66:6443: connect: connection refused" Jan 28 18:38:22 crc kubenswrapper[4721]: I0128 18:38:22.409844 4721 status_manager.go:851] "Failed to get status for pod" podUID="e1764268-02a2-46af-a94d-b9f32dabcab8" pod="openshift-marketplace/certified-operators-6k9rr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-6k9rr\": dial tcp 38.102.83.66:6443: connect: connection refused" Jan 28 18:38:22 crc kubenswrapper[4721]: I0128 18:38:22.410082 4721 status_manager.go:851] "Failed to get status for pod" podUID="7ac4d9d7-c104-455a-b162-75b3bbf2a879" pod="openshift-marketplace/redhat-operators-nx6vw" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-nx6vw\": dial tcp 38.102.83.66:6443: connect: connection refused" Jan 28 18:38:22 crc kubenswrapper[4721]: I0128 18:38:22.410324 4721 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.66:6443: connect: connection refused" Jan 28 18:38:22 crc kubenswrapper[4721]: I0128 18:38:22.410541 4721 status_manager.go:851] "Failed to get status for pod" podUID="3bab193a-eb38-435d-8a0e-c3199e0abc80" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.66:6443: connect: connection refused" Jan 28 18:38:23 crc kubenswrapper[4721]: I0128 18:38:23.435589 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"dfb604c2c5c81ebe569da2c7d1d9de1789e97994f5ab3b9d0fa1c3f3d75ca1d1"} Jan 28 18:38:23 crc kubenswrapper[4721]: I0128 18:38:23.435978 
4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"ccf36d91a1d0897b00421b7b13fb50e424f033f87b3860d7fe51209146d4a9eb"} Jan 28 18:38:23 crc kubenswrapper[4721]: I0128 18:38:23.439789 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Jan 28 18:38:23 crc kubenswrapper[4721]: I0128 18:38:23.439838 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"cc002f895e45cf45d69fe433bcd7cd682d650727637ac217b294c0d3fa29c895"} Jan 28 18:38:24 crc kubenswrapper[4721]: I0128 18:38:24.449754 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"244696c21373be7bc88b1fffa820c1c2ee00e96c7ad287ff60abf0aa0f423153"} Jan 28 18:38:24 crc kubenswrapper[4721]: I0128 18:38:24.450062 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"63eeda5eccba43b87745b83f88925d6268763df71837a4529d055b3d6a2da502"} Jan 28 18:38:24 crc kubenswrapper[4721]: I0128 18:38:24.450081 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"a5f8ded43947f0f4c6f037b4d1e05ae9c743d8f17b6bf354c04fd703f8026ed3"} Jan 28 18:38:24 crc kubenswrapper[4721]: I0128 18:38:24.450095 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 28 18:38:24 crc kubenswrapper[4721]: I0128 18:38:24.450056 4721 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="94f18835-9a2d-4427-bc71-e4cd48b94c19" Jan 28 18:38:24 crc kubenswrapper[4721]: I0128 18:38:24.450114 4721 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="94f18835-9a2d-4427-bc71-e4cd48b94c19" Jan 28 18:38:24 crc kubenswrapper[4721]: I0128 18:38:24.578897 4721 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 28 18:38:24 crc kubenswrapper[4721]: I0128 18:38:24.579124 4721 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Jan 28 18:38:24 crc kubenswrapper[4721]: I0128 18:38:24.579214 4721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Jan 28 18:38:25 crc kubenswrapper[4721]: I0128 18:38:25.416960 4721 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-authentication/oauth-openshift-558db77b4-jtc8t" podUID="26a0a4f9-321f-4196-88ce-888b82380eb6" containerName="oauth-openshift" containerID="cri-o://74a8ccdcb49c9201441154c74c4501b4322cf18e68ea4d955d27ecfd9782d76d" gracePeriod=15 Jan 28 18:38:25 crc kubenswrapper[4721]: I0128 18:38:25.816459 4721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-jtc8t" Jan 28 18:38:25 crc kubenswrapper[4721]: I0128 18:38:25.954104 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/26a0a4f9-321f-4196-88ce-888b82380eb6-audit-policies\") pod \"26a0a4f9-321f-4196-88ce-888b82380eb6\" (UID: \"26a0a4f9-321f-4196-88ce-888b82380eb6\") " Jan 28 18:38:25 crc kubenswrapper[4721]: I0128 18:38:25.954149 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/26a0a4f9-321f-4196-88ce-888b82380eb6-v4-0-config-system-trusted-ca-bundle\") pod \"26a0a4f9-321f-4196-88ce-888b82380eb6\" (UID: \"26a0a4f9-321f-4196-88ce-888b82380eb6\") " Jan 28 18:38:25 crc kubenswrapper[4721]: I0128 18:38:25.954204 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/26a0a4f9-321f-4196-88ce-888b82380eb6-v4-0-config-system-session\") pod \"26a0a4f9-321f-4196-88ce-888b82380eb6\" (UID: \"26a0a4f9-321f-4196-88ce-888b82380eb6\") " Jan 28 18:38:25 crc kubenswrapper[4721]: I0128 18:38:25.954262 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/26a0a4f9-321f-4196-88ce-888b82380eb6-v4-0-config-system-router-certs\") pod \"26a0a4f9-321f-4196-88ce-888b82380eb6\" (UID: \"26a0a4f9-321f-4196-88ce-888b82380eb6\") " Jan 28 18:38:25 crc kubenswrapper[4721]: I0128 18:38:25.954294 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/26a0a4f9-321f-4196-88ce-888b82380eb6-v4-0-config-user-template-error\") pod \"26a0a4f9-321f-4196-88ce-888b82380eb6\" (UID: \"26a0a4f9-321f-4196-88ce-888b82380eb6\") " Jan 28 18:38:25 crc kubenswrapper[4721]: I0128 18:38:25.954323 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/26a0a4f9-321f-4196-88ce-888b82380eb6-v4-0-config-user-idp-0-file-data\") pod \"26a0a4f9-321f-4196-88ce-888b82380eb6\" (UID: \"26a0a4f9-321f-4196-88ce-888b82380eb6\") " Jan 28 18:38:25 crc kubenswrapper[4721]: I0128 18:38:25.954355 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/26a0a4f9-321f-4196-88ce-888b82380eb6-v4-0-config-system-ocp-branding-template\") pod \"26a0a4f9-321f-4196-88ce-888b82380eb6\" (UID: \"26a0a4f9-321f-4196-88ce-888b82380eb6\") " Jan 28 18:38:25 crc kubenswrapper[4721]: I0128 18:38:25.954421 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/26a0a4f9-321f-4196-88ce-888b82380eb6-v4-0-config-user-template-provider-selection\") pod \"26a0a4f9-321f-4196-88ce-888b82380eb6\" (UID: \"26a0a4f9-321f-4196-88ce-888b82380eb6\") " Jan 
28 18:38:25 crc kubenswrapper[4721]: I0128 18:38:25.954444 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/26a0a4f9-321f-4196-88ce-888b82380eb6-v4-0-config-system-serving-cert\") pod \"26a0a4f9-321f-4196-88ce-888b82380eb6\" (UID: \"26a0a4f9-321f-4196-88ce-888b82380eb6\") " Jan 28 18:38:25 crc kubenswrapper[4721]: I0128 18:38:25.954489 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/26a0a4f9-321f-4196-88ce-888b82380eb6-v4-0-config-user-template-login\") pod \"26a0a4f9-321f-4196-88ce-888b82380eb6\" (UID: \"26a0a4f9-321f-4196-88ce-888b82380eb6\") " Jan 28 18:38:25 crc kubenswrapper[4721]: I0128 18:38:25.954514 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9h5lx\" (UniqueName: \"kubernetes.io/projected/26a0a4f9-321f-4196-88ce-888b82380eb6-kube-api-access-9h5lx\") pod \"26a0a4f9-321f-4196-88ce-888b82380eb6\" (UID: \"26a0a4f9-321f-4196-88ce-888b82380eb6\") " Jan 28 18:38:25 crc kubenswrapper[4721]: I0128 18:38:25.954547 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/26a0a4f9-321f-4196-88ce-888b82380eb6-v4-0-config-system-service-ca\") pod \"26a0a4f9-321f-4196-88ce-888b82380eb6\" (UID: \"26a0a4f9-321f-4196-88ce-888b82380eb6\") " Jan 28 18:38:25 crc kubenswrapper[4721]: I0128 18:38:25.954568 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/26a0a4f9-321f-4196-88ce-888b82380eb6-audit-dir\") pod \"26a0a4f9-321f-4196-88ce-888b82380eb6\" (UID: \"26a0a4f9-321f-4196-88ce-888b82380eb6\") " Jan 28 18:38:25 crc kubenswrapper[4721]: I0128 18:38:25.954590 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/26a0a4f9-321f-4196-88ce-888b82380eb6-v4-0-config-system-cliconfig\") pod \"26a0a4f9-321f-4196-88ce-888b82380eb6\" (UID: \"26a0a4f9-321f-4196-88ce-888b82380eb6\") " Jan 28 18:38:25 crc kubenswrapper[4721]: I0128 18:38:25.955543 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/26a0a4f9-321f-4196-88ce-888b82380eb6-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "26a0a4f9-321f-4196-88ce-888b82380eb6" (UID: "26a0a4f9-321f-4196-88ce-888b82380eb6"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:38:25 crc kubenswrapper[4721]: I0128 18:38:25.955554 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/26a0a4f9-321f-4196-88ce-888b82380eb6-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "26a0a4f9-321f-4196-88ce-888b82380eb6" (UID: "26a0a4f9-321f-4196-88ce-888b82380eb6"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:38:25 crc kubenswrapper[4721]: I0128 18:38:25.955596 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/26a0a4f9-321f-4196-88ce-888b82380eb6-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "26a0a4f9-321f-4196-88ce-888b82380eb6" (UID: "26a0a4f9-321f-4196-88ce-888b82380eb6"). InnerVolumeSpecName "audit-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 18:38:25 crc kubenswrapper[4721]: I0128 18:38:25.955618 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/26a0a4f9-321f-4196-88ce-888b82380eb6-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "26a0a4f9-321f-4196-88ce-888b82380eb6" (UID: "26a0a4f9-321f-4196-88ce-888b82380eb6"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:38:25 crc kubenswrapper[4721]: I0128 18:38:25.956111 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/26a0a4f9-321f-4196-88ce-888b82380eb6-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "26a0a4f9-321f-4196-88ce-888b82380eb6" (UID: "26a0a4f9-321f-4196-88ce-888b82380eb6"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:38:25 crc kubenswrapper[4721]: I0128 18:38:25.961569 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/26a0a4f9-321f-4196-88ce-888b82380eb6-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "26a0a4f9-321f-4196-88ce-888b82380eb6" (UID: "26a0a4f9-321f-4196-88ce-888b82380eb6"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:38:25 crc kubenswrapper[4721]: I0128 18:38:25.961617 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/26a0a4f9-321f-4196-88ce-888b82380eb6-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "26a0a4f9-321f-4196-88ce-888b82380eb6" (UID: "26a0a4f9-321f-4196-88ce-888b82380eb6"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:38:25 crc kubenswrapper[4721]: I0128 18:38:25.961626 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/26a0a4f9-321f-4196-88ce-888b82380eb6-kube-api-access-9h5lx" (OuterVolumeSpecName: "kube-api-access-9h5lx") pod "26a0a4f9-321f-4196-88ce-888b82380eb6" (UID: "26a0a4f9-321f-4196-88ce-888b82380eb6"). InnerVolumeSpecName "kube-api-access-9h5lx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:38:25 crc kubenswrapper[4721]: I0128 18:38:25.961641 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/26a0a4f9-321f-4196-88ce-888b82380eb6-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "26a0a4f9-321f-4196-88ce-888b82380eb6" (UID: "26a0a4f9-321f-4196-88ce-888b82380eb6"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:38:25 crc kubenswrapper[4721]: I0128 18:38:25.961680 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/26a0a4f9-321f-4196-88ce-888b82380eb6-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "26a0a4f9-321f-4196-88ce-888b82380eb6" (UID: "26a0a4f9-321f-4196-88ce-888b82380eb6"). InnerVolumeSpecName "v4-0-config-system-router-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:38:25 crc kubenswrapper[4721]: I0128 18:38:25.961698 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/26a0a4f9-321f-4196-88ce-888b82380eb6-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "26a0a4f9-321f-4196-88ce-888b82380eb6" (UID: "26a0a4f9-321f-4196-88ce-888b82380eb6"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:38:25 crc kubenswrapper[4721]: I0128 18:38:25.961714 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/26a0a4f9-321f-4196-88ce-888b82380eb6-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "26a0a4f9-321f-4196-88ce-888b82380eb6" (UID: "26a0a4f9-321f-4196-88ce-888b82380eb6"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:38:25 crc kubenswrapper[4721]: I0128 18:38:25.961738 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/26a0a4f9-321f-4196-88ce-888b82380eb6-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "26a0a4f9-321f-4196-88ce-888b82380eb6" (UID: "26a0a4f9-321f-4196-88ce-888b82380eb6"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:38:25 crc kubenswrapper[4721]: I0128 18:38:25.962126 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/26a0a4f9-321f-4196-88ce-888b82380eb6-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "26a0a4f9-321f-4196-88ce-888b82380eb6" (UID: "26a0a4f9-321f-4196-88ce-888b82380eb6"). InnerVolumeSpecName "v4-0-config-user-template-error". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:38:26 crc kubenswrapper[4721]: I0128 18:38:26.055688 4721 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/26a0a4f9-321f-4196-88ce-888b82380eb6-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Jan 28 18:38:26 crc kubenswrapper[4721]: I0128 18:38:26.055802 4721 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/26a0a4f9-321f-4196-88ce-888b82380eb6-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 28 18:38:26 crc kubenswrapper[4721]: I0128 18:38:26.055823 4721 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/26a0a4f9-321f-4196-88ce-888b82380eb6-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Jan 28 18:38:26 crc kubenswrapper[4721]: I0128 18:38:26.055846 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9h5lx\" (UniqueName: \"kubernetes.io/projected/26a0a4f9-321f-4196-88ce-888b82380eb6-kube-api-access-9h5lx\") on node \"crc\" DevicePath \"\"" Jan 28 18:38:26 crc kubenswrapper[4721]: I0128 18:38:26.055867 4721 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/26a0a4f9-321f-4196-88ce-888b82380eb6-audit-dir\") on node \"crc\" DevicePath \"\"" Jan 28 18:38:26 crc kubenswrapper[4721]: I0128 18:38:26.055885 4721 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/26a0a4f9-321f-4196-88ce-888b82380eb6-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Jan 28 18:38:26 crc kubenswrapper[4721]: I0128 18:38:26.055905 4721 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/26a0a4f9-321f-4196-88ce-888b82380eb6-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Jan 28 18:38:26 crc kubenswrapper[4721]: I0128 18:38:26.056517 4721 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/26a0a4f9-321f-4196-88ce-888b82380eb6-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 28 18:38:26 crc kubenswrapper[4721]: I0128 18:38:26.056532 4721 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/26a0a4f9-321f-4196-88ce-888b82380eb6-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 18:38:26 crc kubenswrapper[4721]: I0128 18:38:26.056544 4721 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/26a0a4f9-321f-4196-88ce-888b82380eb6-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Jan 28 18:38:26 crc kubenswrapper[4721]: I0128 18:38:26.056560 4721 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/26a0a4f9-321f-4196-88ce-888b82380eb6-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Jan 28 18:38:26 crc kubenswrapper[4721]: I0128 18:38:26.056580 4721 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/26a0a4f9-321f-4196-88ce-888b82380eb6-v4-0-config-user-template-error\") on node 
\"crc\" DevicePath \"\"" Jan 28 18:38:26 crc kubenswrapper[4721]: I0128 18:38:26.056595 4721 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/26a0a4f9-321f-4196-88ce-888b82380eb6-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Jan 28 18:38:26 crc kubenswrapper[4721]: I0128 18:38:26.056608 4721 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/26a0a4f9-321f-4196-88ce-888b82380eb6-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Jan 28 18:38:26 crc kubenswrapper[4721]: I0128 18:38:26.461366 4721 generic.go:334] "Generic (PLEG): container finished" podID="26a0a4f9-321f-4196-88ce-888b82380eb6" containerID="74a8ccdcb49c9201441154c74c4501b4322cf18e68ea4d955d27ecfd9782d76d" exitCode=0 Jan 28 18:38:26 crc kubenswrapper[4721]: I0128 18:38:26.461432 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-jtc8t" event={"ID":"26a0a4f9-321f-4196-88ce-888b82380eb6","Type":"ContainerDied","Data":"74a8ccdcb49c9201441154c74c4501b4322cf18e68ea4d955d27ecfd9782d76d"} Jan 28 18:38:26 crc kubenswrapper[4721]: I0128 18:38:26.461438 4721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-jtc8t" Jan 28 18:38:26 crc kubenswrapper[4721]: I0128 18:38:26.461470 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-jtc8t" event={"ID":"26a0a4f9-321f-4196-88ce-888b82380eb6","Type":"ContainerDied","Data":"af11226bbc0d918756d95ad6640ed161c6108d5c056105252954bb45309bf350"} Jan 28 18:38:26 crc kubenswrapper[4721]: I0128 18:38:26.461492 4721 scope.go:117] "RemoveContainer" containerID="74a8ccdcb49c9201441154c74c4501b4322cf18e68ea4d955d27ecfd9782d76d" Jan 28 18:38:26 crc kubenswrapper[4721]: I0128 18:38:26.487988 4721 scope.go:117] "RemoveContainer" containerID="74a8ccdcb49c9201441154c74c4501b4322cf18e68ea4d955d27ecfd9782d76d" Jan 28 18:38:26 crc kubenswrapper[4721]: E0128 18:38:26.488532 4721 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"74a8ccdcb49c9201441154c74c4501b4322cf18e68ea4d955d27ecfd9782d76d\": container with ID starting with 74a8ccdcb49c9201441154c74c4501b4322cf18e68ea4d955d27ecfd9782d76d not found: ID does not exist" containerID="74a8ccdcb49c9201441154c74c4501b4322cf18e68ea4d955d27ecfd9782d76d" Jan 28 18:38:26 crc kubenswrapper[4721]: I0128 18:38:26.488577 4721 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"74a8ccdcb49c9201441154c74c4501b4322cf18e68ea4d955d27ecfd9782d76d"} err="failed to get container status \"74a8ccdcb49c9201441154c74c4501b4322cf18e68ea4d955d27ecfd9782d76d\": rpc error: code = NotFound desc = could not find container \"74a8ccdcb49c9201441154c74c4501b4322cf18e68ea4d955d27ecfd9782d76d\": container with ID starting with 74a8ccdcb49c9201441154c74c4501b4322cf18e68ea4d955d27ecfd9782d76d not found: ID does not exist" Jan 28 18:38:26 crc kubenswrapper[4721]: I0128 18:38:26.550781 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 28 18:38:26 crc kubenswrapper[4721]: I0128 18:38:26.550931 4721 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" 
Jan 28 18:38:26 crc kubenswrapper[4721]: I0128 18:38:26.556942 4721 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 28 18:38:27 crc kubenswrapper[4721]: I0128 18:38:27.861645 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 28 18:38:29 crc kubenswrapper[4721]: I0128 18:38:29.459141 4721 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 28 18:38:29 crc kubenswrapper[4721]: I0128 18:38:29.477253 4721 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="94f18835-9a2d-4427-bc71-e4cd48b94c19" Jan 28 18:38:29 crc kubenswrapper[4721]: I0128 18:38:29.477280 4721 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="94f18835-9a2d-4427-bc71-e4cd48b94c19" Jan 28 18:38:29 crc kubenswrapper[4721]: I0128 18:38:29.481651 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 28 18:38:29 crc kubenswrapper[4721]: I0128 18:38:29.483741 4721 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="ef639830-5953-4e32-9da4-c06cede30417" Jan 28 18:38:29 crc kubenswrapper[4721]: E0128 18:38:29.934554 4721 reflector.go:158] "Unhandled Error" err="object-\"openshift-authentication\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: unknown (get configmaps)" logger="UnhandledError" Jan 28 18:38:29 crc kubenswrapper[4721]: E0128 18:38:29.968749 4721 reflector.go:158] "Unhandled Error" err="object-\"openshift-authentication\"/\"v4-0-config-system-router-certs\": Failed to watch *v1.Secret: unknown (get secrets)" logger="UnhandledError" Jan 28 18:38:30 crc kubenswrapper[4721]: E0128 18:38:30.091819 4721 reflector.go:158] "Unhandled Error" err="object-\"openshift-authentication\"/\"v4-0-config-user-template-error\": Failed to watch *v1.Secret: unknown (get secrets)" logger="UnhandledError" Jan 28 18:38:30 crc kubenswrapper[4721]: E0128 18:38:30.472455 4721 reflector.go:158] "Unhandled Error" err="object-\"openshift-authentication\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: unknown (get configmaps)" logger="UnhandledError" Jan 28 18:38:30 crc kubenswrapper[4721]: I0128 18:38:30.481790 4721 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="94f18835-9a2d-4427-bc71-e4cd48b94c19" Jan 28 18:38:30 crc kubenswrapper[4721]: I0128 18:38:30.481993 4721 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="94f18835-9a2d-4427-bc71-e4cd48b94c19" Jan 28 18:38:34 crc kubenswrapper[4721]: I0128 18:38:34.579264 4721 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Jan 28 18:38:34 crc kubenswrapper[4721]: I0128 18:38:34.579609 4721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" 
output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Jan 28 18:38:35 crc kubenswrapper[4721]: I0128 18:38:35.546764 4721 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="ef639830-5953-4e32-9da4-c06cede30417" Jan 28 18:38:39 crc kubenswrapper[4721]: I0128 18:38:39.156630 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Jan 28 18:38:39 crc kubenswrapper[4721]: I0128 18:38:39.942200 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Jan 28 18:38:40 crc kubenswrapper[4721]: I0128 18:38:40.419900 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Jan 28 18:38:40 crc kubenswrapper[4721]: I0128 18:38:40.433339 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Jan 28 18:38:40 crc kubenswrapper[4721]: I0128 18:38:40.561482 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Jan 28 18:38:40 crc kubenswrapper[4721]: I0128 18:38:40.587830 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Jan 28 18:38:40 crc kubenswrapper[4721]: I0128 18:38:40.904719 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Jan 28 18:38:41 crc kubenswrapper[4721]: I0128 18:38:41.044000 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Jan 28 18:38:41 crc kubenswrapper[4721]: I0128 18:38:41.112372 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Jan 28 18:38:41 crc kubenswrapper[4721]: I0128 18:38:41.267227 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Jan 28 18:38:41 crc kubenswrapper[4721]: I0128 18:38:41.347879 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Jan 28 18:38:41 crc kubenswrapper[4721]: I0128 18:38:41.572910 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Jan 28 18:38:41 crc kubenswrapper[4721]: I0128 18:38:41.905610 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 28 18:38:41 crc kubenswrapper[4721]: I0128 18:38:41.959627 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 28 18:38:41 crc kubenswrapper[4721]: I0128 18:38:41.972036 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Jan 28 18:38:42 crc kubenswrapper[4721]: I0128 18:38:42.005580 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Jan 28 18:38:42 crc kubenswrapper[4721]: I0128 18:38:42.031078 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Jan 
28 18:38:42 crc kubenswrapper[4721]: I0128 18:38:42.184517 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Jan 28 18:38:42 crc kubenswrapper[4721]: I0128 18:38:42.198015 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Jan 28 18:38:42 crc kubenswrapper[4721]: I0128 18:38:42.257450 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Jan 28 18:38:42 crc kubenswrapper[4721]: I0128 18:38:42.274398 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Jan 28 18:38:42 crc kubenswrapper[4721]: I0128 18:38:42.334072 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Jan 28 18:38:42 crc kubenswrapper[4721]: I0128 18:38:42.378932 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Jan 28 18:38:42 crc kubenswrapper[4721]: I0128 18:38:42.431458 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Jan 28 18:38:42 crc kubenswrapper[4721]: I0128 18:38:42.592152 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Jan 28 18:38:42 crc kubenswrapper[4721]: I0128 18:38:42.601889 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Jan 28 18:38:42 crc kubenswrapper[4721]: I0128 18:38:42.624584 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Jan 28 18:38:42 crc kubenswrapper[4721]: I0128 18:38:42.651529 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Jan 28 18:38:42 crc kubenswrapper[4721]: I0128 18:38:42.691472 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Jan 28 18:38:42 crc kubenswrapper[4721]: I0128 18:38:42.781003 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 28 18:38:42 crc kubenswrapper[4721]: I0128 18:38:42.830265 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Jan 28 18:38:42 crc kubenswrapper[4721]: I0128 18:38:42.860781 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Jan 28 18:38:42 crc kubenswrapper[4721]: I0128 18:38:42.976219 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Jan 28 18:38:43 crc kubenswrapper[4721]: I0128 18:38:43.042762 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Jan 28 18:38:43 crc kubenswrapper[4721]: I0128 18:38:43.167225 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Jan 28 18:38:43 crc kubenswrapper[4721]: I0128 18:38:43.205557 4721 reflector.go:368] Caches populated for *v1.RuntimeClass from 
k8s.io/client-go/informers/factory.go:160 Jan 28 18:38:43 crc kubenswrapper[4721]: I0128 18:38:43.205598 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Jan 28 18:38:43 crc kubenswrapper[4721]: I0128 18:38:43.234212 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Jan 28 18:38:43 crc kubenswrapper[4721]: I0128 18:38:43.247487 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Jan 28 18:38:43 crc kubenswrapper[4721]: I0128 18:38:43.437984 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Jan 28 18:38:43 crc kubenswrapper[4721]: I0128 18:38:43.533956 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Jan 28 18:38:43 crc kubenswrapper[4721]: I0128 18:38:43.562857 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Jan 28 18:38:43 crc kubenswrapper[4721]: I0128 18:38:43.564755 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Jan 28 18:38:43 crc kubenswrapper[4721]: I0128 18:38:43.632272 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Jan 28 18:38:43 crc kubenswrapper[4721]: I0128 18:38:43.650151 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Jan 28 18:38:43 crc kubenswrapper[4721]: I0128 18:38:43.709423 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Jan 28 18:38:43 crc kubenswrapper[4721]: I0128 18:38:43.736659 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Jan 28 18:38:43 crc kubenswrapper[4721]: I0128 18:38:43.745334 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Jan 28 18:38:43 crc kubenswrapper[4721]: I0128 18:38:43.802339 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Jan 28 18:38:43 crc kubenswrapper[4721]: I0128 18:38:43.827839 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Jan 28 18:38:43 crc kubenswrapper[4721]: I0128 18:38:43.910474 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Jan 28 18:38:43 crc kubenswrapper[4721]: I0128 18:38:43.950276 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Jan 28 18:38:43 crc kubenswrapper[4721]: I0128 18:38:43.997071 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Jan 28 18:38:44 crc kubenswrapper[4721]: I0128 18:38:44.037211 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Jan 28 18:38:44 crc kubenswrapper[4721]: I0128 18:38:44.094557 4721 reflector.go:368] 
Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 28 18:38:44 crc kubenswrapper[4721]: I0128 18:38:44.229742 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Jan 28 18:38:44 crc kubenswrapper[4721]: I0128 18:38:44.279983 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Jan 28 18:38:44 crc kubenswrapper[4721]: I0128 18:38:44.460570 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Jan 28 18:38:44 crc kubenswrapper[4721]: I0128 18:38:44.544193 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 28 18:38:44 crc kubenswrapper[4721]: I0128 18:38:44.553259 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Jan 28 18:38:44 crc kubenswrapper[4721]: I0128 18:38:44.579311 4721 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Jan 28 18:38:44 crc kubenswrapper[4721]: I0128 18:38:44.579372 4721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Jan 28 18:38:44 crc kubenswrapper[4721]: I0128 18:38:44.579424 4721 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 28 18:38:44 crc kubenswrapper[4721]: I0128 18:38:44.580057 4721 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="kube-controller-manager" containerStatusID={"Type":"cri-o","ID":"cc002f895e45cf45d69fe433bcd7cd682d650727637ac217b294c0d3fa29c895"} pod="openshift-kube-controller-manager/kube-controller-manager-crc" containerMessage="Container kube-controller-manager failed startup probe, will be restarted" Jan 28 18:38:44 crc kubenswrapper[4721]: I0128 18:38:44.580161 4721 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" containerID="cri-o://cc002f895e45cf45d69fe433bcd7cd682d650727637ac217b294c0d3fa29c895" gracePeriod=30 Jan 28 18:38:44 crc kubenswrapper[4721]: I0128 18:38:44.608020 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 28 18:38:44 crc kubenswrapper[4721]: I0128 18:38:44.755813 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 28 18:38:44 crc kubenswrapper[4721]: I0128 18:38:44.856968 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Jan 28 18:38:44 crc kubenswrapper[4721]: I0128 18:38:44.885312 4721 reflector.go:368] Caches 
populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Jan 28 18:38:44 crc kubenswrapper[4721]: I0128 18:38:44.988304 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Jan 28 18:38:44 crc kubenswrapper[4721]: I0128 18:38:44.996954 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Jan 28 18:38:45 crc kubenswrapper[4721]: I0128 18:38:45.142546 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Jan 28 18:38:45 crc kubenswrapper[4721]: I0128 18:38:45.181755 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Jan 28 18:38:45 crc kubenswrapper[4721]: I0128 18:38:45.189339 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Jan 28 18:38:45 crc kubenswrapper[4721]: I0128 18:38:45.213483 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 28 18:38:45 crc kubenswrapper[4721]: I0128 18:38:45.316535 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Jan 28 18:38:45 crc kubenswrapper[4721]: I0128 18:38:45.384039 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 28 18:38:45 crc kubenswrapper[4721]: I0128 18:38:45.413466 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Jan 28 18:38:45 crc kubenswrapper[4721]: I0128 18:38:45.443684 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Jan 28 18:38:45 crc kubenswrapper[4721]: I0128 18:38:45.447613 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Jan 28 18:38:45 crc kubenswrapper[4721]: I0128 18:38:45.540182 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Jan 28 18:38:45 crc kubenswrapper[4721]: I0128 18:38:45.541811 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Jan 28 18:38:45 crc kubenswrapper[4721]: I0128 18:38:45.542717 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Jan 28 18:38:45 crc kubenswrapper[4721]: I0128 18:38:45.576958 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Jan 28 18:38:45 crc kubenswrapper[4721]: I0128 18:38:45.600551 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Jan 28 18:38:45 crc kubenswrapper[4721]: I0128 18:38:45.671610 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Jan 28 18:38:45 crc kubenswrapper[4721]: I0128 18:38:45.702534 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Jan 28 18:38:45 crc kubenswrapper[4721]: I0128 18:38:45.719157 4721 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Jan 28 18:38:45 crc kubenswrapper[4721]: I0128 18:38:45.787915 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Jan 28 18:38:45 crc kubenswrapper[4721]: I0128 18:38:45.800886 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Jan 28 18:38:45 crc kubenswrapper[4721]: I0128 18:38:45.807133 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Jan 28 18:38:45 crc kubenswrapper[4721]: I0128 18:38:45.807798 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Jan 28 18:38:45 crc kubenswrapper[4721]: I0128 18:38:45.827463 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Jan 28 18:38:45 crc kubenswrapper[4721]: I0128 18:38:45.828297 4721 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Jan 28 18:38:45 crc kubenswrapper[4721]: I0128 18:38:45.876553 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Jan 28 18:38:45 crc kubenswrapper[4721]: I0128 18:38:45.876901 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Jan 28 18:38:45 crc kubenswrapper[4721]: I0128 18:38:45.913715 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 28 18:38:45 crc kubenswrapper[4721]: I0128 18:38:45.951782 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Jan 28 18:38:45 crc kubenswrapper[4721]: I0128 18:38:45.980703 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Jan 28 18:38:46 crc kubenswrapper[4721]: I0128 18:38:46.126577 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Jan 28 18:38:46 crc kubenswrapper[4721]: I0128 18:38:46.131662 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Jan 28 18:38:46 crc kubenswrapper[4721]: I0128 18:38:46.171489 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Jan 28 18:38:46 crc kubenswrapper[4721]: I0128 18:38:46.191154 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 28 18:38:46 crc kubenswrapper[4721]: I0128 18:38:46.221915 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Jan 28 18:38:46 crc kubenswrapper[4721]: I0128 18:38:46.280009 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Jan 28 18:38:46 crc kubenswrapper[4721]: I0128 18:38:46.305632 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Jan 28 18:38:46 crc kubenswrapper[4721]: I0128 18:38:46.351195 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Jan 28 18:38:46 crc kubenswrapper[4721]: I0128 
18:38:46.353203 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Jan 28 18:38:46 crc kubenswrapper[4721]: I0128 18:38:46.433312 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Jan 28 18:38:46 crc kubenswrapper[4721]: I0128 18:38:46.456387 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Jan 28 18:38:46 crc kubenswrapper[4721]: I0128 18:38:46.526747 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Jan 28 18:38:46 crc kubenswrapper[4721]: I0128 18:38:46.562990 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Jan 28 18:38:46 crc kubenswrapper[4721]: I0128 18:38:46.604428 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Jan 28 18:38:46 crc kubenswrapper[4721]: I0128 18:38:46.639137 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Jan 28 18:38:46 crc kubenswrapper[4721]: I0128 18:38:46.683832 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Jan 28 18:38:46 crc kubenswrapper[4721]: I0128 18:38:46.698383 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Jan 28 18:38:46 crc kubenswrapper[4721]: I0128 18:38:46.727132 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Jan 28 18:38:46 crc kubenswrapper[4721]: I0128 18:38:46.741644 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Jan 28 18:38:46 crc kubenswrapper[4721]: I0128 18:38:46.777610 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Jan 28 18:38:46 crc kubenswrapper[4721]: I0128 18:38:46.804991 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Jan 28 18:38:47 crc kubenswrapper[4721]: I0128 18:38:47.107961 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 28 18:38:47 crc kubenswrapper[4721]: I0128 18:38:47.141942 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Jan 28 18:38:47 crc kubenswrapper[4721]: I0128 18:38:47.273407 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Jan 28 18:38:47 crc kubenswrapper[4721]: I0128 18:38:47.342545 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Jan 28 18:38:47 crc kubenswrapper[4721]: I0128 18:38:47.354442 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Jan 28 18:38:47 crc kubenswrapper[4721]: I0128 18:38:47.371998 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Jan 28 18:38:47 crc 
kubenswrapper[4721]: I0128 18:38:47.402896 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Jan 28 18:38:47 crc kubenswrapper[4721]: I0128 18:38:47.535664 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Jan 28 18:38:47 crc kubenswrapper[4721]: I0128 18:38:47.599369 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Jan 28 18:38:47 crc kubenswrapper[4721]: I0128 18:38:47.622181 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Jan 28 18:38:47 crc kubenswrapper[4721]: I0128 18:38:47.679326 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Jan 28 18:38:47 crc kubenswrapper[4721]: I0128 18:38:47.682293 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Jan 28 18:38:47 crc kubenswrapper[4721]: I0128 18:38:47.822549 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Jan 28 18:38:47 crc kubenswrapper[4721]: I0128 18:38:47.861756 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Jan 28 18:38:47 crc kubenswrapper[4721]: I0128 18:38:47.956121 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Jan 28 18:38:48 crc kubenswrapper[4721]: I0128 18:38:48.030286 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Jan 28 18:38:48 crc kubenswrapper[4721]: I0128 18:38:48.085244 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Jan 28 18:38:48 crc kubenswrapper[4721]: I0128 18:38:48.101811 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Jan 28 18:38:48 crc kubenswrapper[4721]: I0128 18:38:48.162428 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Jan 28 18:38:48 crc kubenswrapper[4721]: I0128 18:38:48.277059 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Jan 28 18:38:48 crc kubenswrapper[4721]: I0128 18:38:48.392423 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Jan 28 18:38:48 crc kubenswrapper[4721]: I0128 18:38:48.408349 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Jan 28 18:38:48 crc kubenswrapper[4721]: I0128 18:38:48.538017 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Jan 28 18:38:48 crc kubenswrapper[4721]: I0128 18:38:48.547610 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Jan 28 18:38:48 crc kubenswrapper[4721]: I0128 18:38:48.681711 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Jan 28 18:38:48 crc 
kubenswrapper[4721]: I0128 18:38:48.690123 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Jan 28 18:38:48 crc kubenswrapper[4721]: I0128 18:38:48.731219 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Jan 28 18:38:48 crc kubenswrapper[4721]: I0128 18:38:48.823280 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Jan 28 18:38:48 crc kubenswrapper[4721]: I0128 18:38:48.967696 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Jan 28 18:38:49 crc kubenswrapper[4721]: I0128 18:38:49.337875 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Jan 28 18:38:49 crc kubenswrapper[4721]: I0128 18:38:49.363700 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Jan 28 18:38:49 crc kubenswrapper[4721]: I0128 18:38:49.455870 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Jan 28 18:38:49 crc kubenswrapper[4721]: I0128 18:38:49.477645 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Jan 28 18:38:49 crc kubenswrapper[4721]: I0128 18:38:49.482866 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Jan 28 18:38:49 crc kubenswrapper[4721]: I0128 18:38:49.575151 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Jan 28 18:38:49 crc kubenswrapper[4721]: I0128 18:38:49.674061 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Jan 28 18:38:49 crc kubenswrapper[4721]: I0128 18:38:49.794582 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 28 18:38:49 crc kubenswrapper[4721]: I0128 18:38:49.794798 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Jan 28 18:38:49 crc kubenswrapper[4721]: I0128 18:38:49.808968 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 28 18:38:49 crc kubenswrapper[4721]: I0128 18:38:49.838259 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Jan 28 18:38:49 crc kubenswrapper[4721]: I0128 18:38:49.882526 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Jan 28 18:38:49 crc kubenswrapper[4721]: I0128 18:38:49.928198 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Jan 28 18:38:49 crc kubenswrapper[4721]: I0128 18:38:49.954759 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Jan 28 18:38:49 crc kubenswrapper[4721]: I0128 18:38:49.958630 4721 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Jan 28 18:38:49 crc kubenswrapper[4721]: I0128 18:38:49.963062 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Jan 28 18:38:49 crc kubenswrapper[4721]: I0128 18:38:49.967416 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Jan 28 18:38:50 crc kubenswrapper[4721]: I0128 18:38:50.392291 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Jan 28 18:38:50 crc kubenswrapper[4721]: I0128 18:38:50.405303 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Jan 28 18:38:50 crc kubenswrapper[4721]: I0128 18:38:50.436761 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Jan 28 18:38:50 crc kubenswrapper[4721]: I0128 18:38:50.446113 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Jan 28 18:38:50 crc kubenswrapper[4721]: I0128 18:38:50.446694 4721 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Jan 28 18:38:50 crc kubenswrapper[4721]: I0128 18:38:50.575671 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Jan 28 18:38:50 crc kubenswrapper[4721]: I0128 18:38:50.818028 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Jan 28 18:38:50 crc kubenswrapper[4721]: I0128 18:38:50.826248 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Jan 28 18:38:50 crc kubenswrapper[4721]: I0128 18:38:50.961558 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Jan 28 18:38:50 crc kubenswrapper[4721]: I0128 18:38:50.985565 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Jan 28 18:38:51 crc kubenswrapper[4721]: I0128 18:38:51.073313 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Jan 28 18:38:51 crc kubenswrapper[4721]: I0128 18:38:51.272958 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Jan 28 18:38:51 crc kubenswrapper[4721]: I0128 18:38:51.302297 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Jan 28 18:38:51 crc kubenswrapper[4721]: I0128 18:38:51.314552 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Jan 28 18:38:51 crc kubenswrapper[4721]: I0128 18:38:51.339561 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Jan 28 18:38:51 crc kubenswrapper[4721]: I0128 18:38:51.412595 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Jan 28 18:38:51 crc kubenswrapper[4721]: I0128 18:38:51.446156 4721 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Jan 28 18:38:51 crc kubenswrapper[4721]: I0128 18:38:51.495928 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Jan 28 18:38:51 crc kubenswrapper[4721]: I0128 18:38:51.526256 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 28 18:38:51 crc kubenswrapper[4721]: I0128 18:38:51.560140 4721 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Jan 28 18:38:51 crc kubenswrapper[4721]: I0128 18:38:51.595232 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Jan 28 18:38:51 crc kubenswrapper[4721]: I0128 18:38:51.598493 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Jan 28 18:38:51 crc kubenswrapper[4721]: I0128 18:38:51.627476 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Jan 28 18:38:51 crc kubenswrapper[4721]: I0128 18:38:51.649770 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Jan 28 18:38:51 crc kubenswrapper[4721]: I0128 18:38:51.943146 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Jan 28 18:38:51 crc kubenswrapper[4721]: I0128 18:38:51.990659 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Jan 28 18:38:52 crc kubenswrapper[4721]: I0128 18:38:52.035064 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Jan 28 18:38:52 crc kubenswrapper[4721]: I0128 18:38:52.041051 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Jan 28 18:38:52 crc kubenswrapper[4721]: I0128 18:38:52.107059 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Jan 28 18:38:52 crc kubenswrapper[4721]: I0128 18:38:52.122715 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Jan 28 18:38:52 crc kubenswrapper[4721]: I0128 18:38:52.160464 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Jan 28 18:38:52 crc kubenswrapper[4721]: I0128 18:38:52.268437 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Jan 28 18:38:52 crc kubenswrapper[4721]: I0128 18:38:52.332340 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Jan 28 18:38:52 crc kubenswrapper[4721]: I0128 18:38:52.368022 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Jan 28 18:38:52 crc kubenswrapper[4721]: I0128 18:38:52.374268 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Jan 28 18:38:52 crc kubenswrapper[4721]: I0128 18:38:52.390370 4721 reflector.go:368] Caches populated for *v1.ConfigMap 
from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Jan 28 18:38:52 crc kubenswrapper[4721]: I0128 18:38:52.406912 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Jan 28 18:38:52 crc kubenswrapper[4721]: I0128 18:38:52.457963 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Jan 28 18:38:52 crc kubenswrapper[4721]: I0128 18:38:52.592824 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Jan 28 18:38:52 crc kubenswrapper[4721]: I0128 18:38:52.593056 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Jan 28 18:38:52 crc kubenswrapper[4721]: I0128 18:38:52.650683 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Jan 28 18:38:52 crc kubenswrapper[4721]: I0128 18:38:52.696697 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Jan 28 18:38:52 crc kubenswrapper[4721]: I0128 18:38:52.729625 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Jan 28 18:38:52 crc kubenswrapper[4721]: I0128 18:38:52.782678 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Jan 28 18:38:52 crc kubenswrapper[4721]: I0128 18:38:52.868711 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Jan 28 18:38:52 crc kubenswrapper[4721]: I0128 18:38:52.897407 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Jan 28 18:38:52 crc kubenswrapper[4721]: I0128 18:38:52.993595 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Jan 28 18:38:53 crc kubenswrapper[4721]: I0128 18:38:53.049276 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Jan 28 18:38:53 crc kubenswrapper[4721]: I0128 18:38:53.281803 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Jan 28 18:38:53 crc kubenswrapper[4721]: I0128 18:38:53.297103 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Jan 28 18:38:53 crc kubenswrapper[4721]: I0128 18:38:53.447294 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Jan 28 18:38:53 crc kubenswrapper[4721]: I0128 18:38:53.507194 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Jan 28 18:38:53 crc kubenswrapper[4721]: I0128 18:38:53.536395 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Jan 28 18:38:53 crc kubenswrapper[4721]: I0128 18:38:53.588705 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Jan 28 18:38:53 crc kubenswrapper[4721]: I0128 18:38:53.690839 4721 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Jan 28 18:38:53 crc kubenswrapper[4721]: I0128 18:38:53.715809 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 28 18:38:53 crc kubenswrapper[4721]: I0128 18:38:53.719419 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Jan 28 18:38:53 crc kubenswrapper[4721]: I0128 18:38:53.914782 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Jan 28 18:38:53 crc kubenswrapper[4721]: I0128 18:38:53.928657 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Jan 28 18:38:54 crc kubenswrapper[4721]: I0128 18:38:54.139410 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Jan 28 18:38:54 crc kubenswrapper[4721]: I0128 18:38:54.338957 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Jan 28 18:38:54 crc kubenswrapper[4721]: I0128 18:38:54.510618 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Jan 28 18:38:54 crc kubenswrapper[4721]: I0128 18:38:54.519949 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Jan 28 18:38:54 crc kubenswrapper[4721]: I0128 18:38:54.623937 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 28 18:38:54 crc kubenswrapper[4721]: I0128 18:38:54.648853 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Jan 28 18:38:54 crc kubenswrapper[4721]: I0128 18:38:54.654324 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Jan 28 18:38:54 crc kubenswrapper[4721]: I0128 18:38:54.669870 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Jan 28 18:38:54 crc kubenswrapper[4721]: I0128 18:38:54.771926 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Jan 28 18:38:54 crc kubenswrapper[4721]: I0128 18:38:54.897135 4721 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Jan 28 18:38:55 crc kubenswrapper[4721]: I0128 18:38:55.128027 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Jan 28 18:38:55 crc kubenswrapper[4721]: I0128 18:38:55.183074 4721 cert_rotation.go:91] certificate rotation detected, shutting down client connections to start using new credentials Jan 28 18:38:55 crc kubenswrapper[4721]: I0128 18:38:55.308053 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Jan 28 18:38:56 crc kubenswrapper[4721]: I0128 18:38:56.672033 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Jan 28 18:39:08 crc kubenswrapper[4721]: I0128 
18:39:08.232803 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Jan 28 18:39:08 crc kubenswrapper[4721]: I0128 18:39:08.687861 4721 generic.go:334] "Generic (PLEG): container finished" podID="12a4be20-2607-4502-b20d-b579c9987b57" containerID="87c7141690dd93f2f02e025283721b8565fe912c08eceadb291e678f52c51b2a" exitCode=0 Jan 28 18:39:08 crc kubenswrapper[4721]: I0128 18:39:08.687907 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-qp2vg" event={"ID":"12a4be20-2607-4502-b20d-b579c9987b57","Type":"ContainerDied","Data":"87c7141690dd93f2f02e025283721b8565fe912c08eceadb291e678f52c51b2a"} Jan 28 18:39:08 crc kubenswrapper[4721]: I0128 18:39:08.688380 4721 scope.go:117] "RemoveContainer" containerID="87c7141690dd93f2f02e025283721b8565fe912c08eceadb291e678f52c51b2a" Jan 28 18:39:09 crc kubenswrapper[4721]: I0128 18:39:09.694440 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-qp2vg" event={"ID":"12a4be20-2607-4502-b20d-b579c9987b57","Type":"ContainerStarted","Data":"08a1c430094123ef1e41d846835baeba3e8a7084d6596d9d1b26cb47d7764fd6"} Jan 28 18:39:09 crc kubenswrapper[4721]: I0128 18:39:09.694965 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-qp2vg" Jan 28 18:39:09 crc kubenswrapper[4721]: I0128 18:39:09.696317 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-qp2vg" Jan 28 18:39:10 crc kubenswrapper[4721]: I0128 18:39:10.264595 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Jan 28 18:39:14 crc kubenswrapper[4721]: I0128 18:39:14.727095 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/1.log" Jan 28 18:39:14 crc kubenswrapper[4721]: I0128 18:39:14.729103 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Jan 28 18:39:14 crc kubenswrapper[4721]: I0128 18:39:14.729196 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"cc002f895e45cf45d69fe433bcd7cd682d650727637ac217b294c0d3fa29c895"} Jan 28 18:39:14 crc kubenswrapper[4721]: I0128 18:39:14.729163 4721 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="cc002f895e45cf45d69fe433bcd7cd682d650727637ac217b294c0d3fa29c895" exitCode=137 Jan 28 18:39:14 crc kubenswrapper[4721]: I0128 18:39:14.729237 4721 scope.go:117] "RemoveContainer" containerID="3f712f75bf0603b358c696f1a62f2363254ab926615ab377291c7083308e3f2d" Jan 28 18:39:15 crc kubenswrapper[4721]: I0128 18:39:15.736968 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/1.log" Jan 28 18:39:15 crc kubenswrapper[4721]: I0128 18:39:15.738654 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" 
event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"04288937f630026bdf1db386a8c8aee13015cdf7e86444981af69c41a1a0fe6b"} Jan 28 18:39:17 crc kubenswrapper[4721]: I0128 18:39:17.862305 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 28 18:39:22 crc kubenswrapper[4721]: I0128 18:39:22.151007 4721 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Jan 28 18:39:22 crc kubenswrapper[4721]: I0128 18:39:22.155401 4721 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-jtc8t","openshift-kube-apiserver/kube-apiserver-crc"] Jan 28 18:39:22 crc kubenswrapper[4721]: I0128 18:39:22.155457 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-9fbfc7dc4-nxq75","openshift-kube-apiserver/kube-apiserver-crc"] Jan 28 18:39:22 crc kubenswrapper[4721]: E0128 18:39:22.155629 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="26a0a4f9-321f-4196-88ce-888b82380eb6" containerName="oauth-openshift" Jan 28 18:39:22 crc kubenswrapper[4721]: I0128 18:39:22.155651 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="26a0a4f9-321f-4196-88ce-888b82380eb6" containerName="oauth-openshift" Jan 28 18:39:22 crc kubenswrapper[4721]: E0128 18:39:22.155671 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3bab193a-eb38-435d-8a0e-c3199e0abc80" containerName="installer" Jan 28 18:39:22 crc kubenswrapper[4721]: I0128 18:39:22.155681 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="3bab193a-eb38-435d-8a0e-c3199e0abc80" containerName="installer" Jan 28 18:39:22 crc kubenswrapper[4721]: I0128 18:39:22.155793 4721 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="94f18835-9a2d-4427-bc71-e4cd48b94c19" Jan 28 18:39:22 crc kubenswrapper[4721]: I0128 18:39:22.155815 4721 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="94f18835-9a2d-4427-bc71-e4cd48b94c19" Jan 28 18:39:22 crc kubenswrapper[4721]: I0128 18:39:22.155801 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="3bab193a-eb38-435d-8a0e-c3199e0abc80" containerName="installer" Jan 28 18:39:22 crc kubenswrapper[4721]: I0128 18:39:22.155967 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="26a0a4f9-321f-4196-88ce-888b82380eb6" containerName="oauth-openshift" Jan 28 18:39:22 crc kubenswrapper[4721]: I0128 18:39:22.156409 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-9fbfc7dc4-nxq75" Jan 28 18:39:22 crc kubenswrapper[4721]: I0128 18:39:22.159902 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Jan 28 18:39:22 crc kubenswrapper[4721]: I0128 18:39:22.159978 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Jan 28 18:39:22 crc kubenswrapper[4721]: I0128 18:39:22.160279 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 28 18:39:22 crc kubenswrapper[4721]: I0128 18:39:22.160370 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Jan 28 18:39:22 crc kubenswrapper[4721]: I0128 18:39:22.160471 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Jan 28 18:39:22 crc kubenswrapper[4721]: I0128 18:39:22.160527 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Jan 28 18:39:22 crc kubenswrapper[4721]: I0128 18:39:22.160680 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Jan 28 18:39:22 crc kubenswrapper[4721]: I0128 18:39:22.160738 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Jan 28 18:39:22 crc kubenswrapper[4721]: I0128 18:39:22.160371 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Jan 28 18:39:22 crc kubenswrapper[4721]: I0128 18:39:22.161038 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Jan 28 18:39:22 crc kubenswrapper[4721]: I0128 18:39:22.161751 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Jan 28 18:39:22 crc kubenswrapper[4721]: I0128 18:39:22.162138 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Jan 28 18:39:22 crc kubenswrapper[4721]: I0128 18:39:22.170149 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Jan 28 18:39:22 crc kubenswrapper[4721]: I0128 18:39:22.183868 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Jan 28 18:39:22 crc kubenswrapper[4721]: I0128 18:39:22.185495 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Jan 28 18:39:22 crc kubenswrapper[4721]: I0128 18:39:22.189612 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Jan 28 18:39:22 crc kubenswrapper[4721]: I0128 18:39:22.195833 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=53.195817736 podStartE2EDuration="53.195817736s" podCreationTimestamp="2026-01-28 18:38:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-01-28 18:39:22.19313765 +0000 UTC m=+327.918443210" watchObservedRunningTime="2026-01-28 18:39:22.195817736 +0000 UTC m=+327.921123296" Jan 28 18:39:22 crc kubenswrapper[4721]: I0128 18:39:22.314783 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/a924d471-72bb-4dc5-b234-cc5679983d55-v4-0-config-system-router-certs\") pod \"oauth-openshift-9fbfc7dc4-nxq75\" (UID: \"a924d471-72bb-4dc5-b234-cc5679983d55\") " pod="openshift-authentication/oauth-openshift-9fbfc7dc4-nxq75" Jan 28 18:39:22 crc kubenswrapper[4721]: I0128 18:39:22.315096 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/a924d471-72bb-4dc5-b234-cc5679983d55-v4-0-config-system-service-ca\") pod \"oauth-openshift-9fbfc7dc4-nxq75\" (UID: \"a924d471-72bb-4dc5-b234-cc5679983d55\") " pod="openshift-authentication/oauth-openshift-9fbfc7dc4-nxq75" Jan 28 18:39:22 crc kubenswrapper[4721]: I0128 18:39:22.315251 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/a924d471-72bb-4dc5-b234-cc5679983d55-v4-0-config-user-template-login\") pod \"oauth-openshift-9fbfc7dc4-nxq75\" (UID: \"a924d471-72bb-4dc5-b234-cc5679983d55\") " pod="openshift-authentication/oauth-openshift-9fbfc7dc4-nxq75" Jan 28 18:39:22 crc kubenswrapper[4721]: I0128 18:39:22.315351 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/a924d471-72bb-4dc5-b234-cc5679983d55-v4-0-config-user-template-error\") pod \"oauth-openshift-9fbfc7dc4-nxq75\" (UID: \"a924d471-72bb-4dc5-b234-cc5679983d55\") " pod="openshift-authentication/oauth-openshift-9fbfc7dc4-nxq75" Jan 28 18:39:22 crc kubenswrapper[4721]: I0128 18:39:22.315439 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rqqgc\" (UniqueName: \"kubernetes.io/projected/a924d471-72bb-4dc5-b234-cc5679983d55-kube-api-access-rqqgc\") pod \"oauth-openshift-9fbfc7dc4-nxq75\" (UID: \"a924d471-72bb-4dc5-b234-cc5679983d55\") " pod="openshift-authentication/oauth-openshift-9fbfc7dc4-nxq75" Jan 28 18:39:22 crc kubenswrapper[4721]: I0128 18:39:22.315535 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a924d471-72bb-4dc5-b234-cc5679983d55-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-9fbfc7dc4-nxq75\" (UID: \"a924d471-72bb-4dc5-b234-cc5679983d55\") " pod="openshift-authentication/oauth-openshift-9fbfc7dc4-nxq75" Jan 28 18:39:22 crc kubenswrapper[4721]: I0128 18:39:22.315652 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/a924d471-72bb-4dc5-b234-cc5679983d55-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-9fbfc7dc4-nxq75\" (UID: \"a924d471-72bb-4dc5-b234-cc5679983d55\") " pod="openshift-authentication/oauth-openshift-9fbfc7dc4-nxq75" Jan 28 18:39:22 crc kubenswrapper[4721]: I0128 18:39:22.315764 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/a924d471-72bb-4dc5-b234-cc5679983d55-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-9fbfc7dc4-nxq75\" (UID: \"a924d471-72bb-4dc5-b234-cc5679983d55\") " pod="openshift-authentication/oauth-openshift-9fbfc7dc4-nxq75" Jan 28 18:39:22 crc kubenswrapper[4721]: I0128 18:39:22.315859 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/a924d471-72bb-4dc5-b234-cc5679983d55-audit-dir\") pod \"oauth-openshift-9fbfc7dc4-nxq75\" (UID: \"a924d471-72bb-4dc5-b234-cc5679983d55\") " pod="openshift-authentication/oauth-openshift-9fbfc7dc4-nxq75" Jan 28 18:39:22 crc kubenswrapper[4721]: I0128 18:39:22.315938 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/a924d471-72bb-4dc5-b234-cc5679983d55-audit-policies\") pod \"oauth-openshift-9fbfc7dc4-nxq75\" (UID: \"a924d471-72bb-4dc5-b234-cc5679983d55\") " pod="openshift-authentication/oauth-openshift-9fbfc7dc4-nxq75" Jan 28 18:39:22 crc kubenswrapper[4721]: I0128 18:39:22.316033 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/a924d471-72bb-4dc5-b234-cc5679983d55-v4-0-config-system-session\") pod \"oauth-openshift-9fbfc7dc4-nxq75\" (UID: \"a924d471-72bb-4dc5-b234-cc5679983d55\") " pod="openshift-authentication/oauth-openshift-9fbfc7dc4-nxq75" Jan 28 18:39:22 crc kubenswrapper[4721]: I0128 18:39:22.316139 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/a924d471-72bb-4dc5-b234-cc5679983d55-v4-0-config-system-serving-cert\") pod \"oauth-openshift-9fbfc7dc4-nxq75\" (UID: \"a924d471-72bb-4dc5-b234-cc5679983d55\") " pod="openshift-authentication/oauth-openshift-9fbfc7dc4-nxq75" Jan 28 18:39:22 crc kubenswrapper[4721]: I0128 18:39:22.316285 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/a924d471-72bb-4dc5-b234-cc5679983d55-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-9fbfc7dc4-nxq75\" (UID: \"a924d471-72bb-4dc5-b234-cc5679983d55\") " pod="openshift-authentication/oauth-openshift-9fbfc7dc4-nxq75" Jan 28 18:39:22 crc kubenswrapper[4721]: I0128 18:39:22.316393 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/a924d471-72bb-4dc5-b234-cc5679983d55-v4-0-config-system-cliconfig\") pod \"oauth-openshift-9fbfc7dc4-nxq75\" (UID: \"a924d471-72bb-4dc5-b234-cc5679983d55\") " pod="openshift-authentication/oauth-openshift-9fbfc7dc4-nxq75" Jan 28 18:39:22 crc kubenswrapper[4721]: I0128 18:39:22.418055 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/a924d471-72bb-4dc5-b234-cc5679983d55-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-9fbfc7dc4-nxq75\" (UID: \"a924d471-72bb-4dc5-b234-cc5679983d55\") " pod="openshift-authentication/oauth-openshift-9fbfc7dc4-nxq75" Jan 28 18:39:22 crc kubenswrapper[4721]: I0128 18:39:22.418419 4721 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/a924d471-72bb-4dc5-b234-cc5679983d55-v4-0-config-system-cliconfig\") pod \"oauth-openshift-9fbfc7dc4-nxq75\" (UID: \"a924d471-72bb-4dc5-b234-cc5679983d55\") " pod="openshift-authentication/oauth-openshift-9fbfc7dc4-nxq75" Jan 28 18:39:22 crc kubenswrapper[4721]: I0128 18:39:22.418445 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/a924d471-72bb-4dc5-b234-cc5679983d55-v4-0-config-system-router-certs\") pod \"oauth-openshift-9fbfc7dc4-nxq75\" (UID: \"a924d471-72bb-4dc5-b234-cc5679983d55\") " pod="openshift-authentication/oauth-openshift-9fbfc7dc4-nxq75" Jan 28 18:39:22 crc kubenswrapper[4721]: I0128 18:39:22.418470 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/a924d471-72bb-4dc5-b234-cc5679983d55-v4-0-config-system-service-ca\") pod \"oauth-openshift-9fbfc7dc4-nxq75\" (UID: \"a924d471-72bb-4dc5-b234-cc5679983d55\") " pod="openshift-authentication/oauth-openshift-9fbfc7dc4-nxq75" Jan 28 18:39:22 crc kubenswrapper[4721]: I0128 18:39:22.418492 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/a924d471-72bb-4dc5-b234-cc5679983d55-v4-0-config-user-template-login\") pod \"oauth-openshift-9fbfc7dc4-nxq75\" (UID: \"a924d471-72bb-4dc5-b234-cc5679983d55\") " pod="openshift-authentication/oauth-openshift-9fbfc7dc4-nxq75" Jan 28 18:39:22 crc kubenswrapper[4721]: I0128 18:39:22.418508 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/a924d471-72bb-4dc5-b234-cc5679983d55-v4-0-config-user-template-error\") pod \"oauth-openshift-9fbfc7dc4-nxq75\" (UID: \"a924d471-72bb-4dc5-b234-cc5679983d55\") " pod="openshift-authentication/oauth-openshift-9fbfc7dc4-nxq75" Jan 28 18:39:22 crc kubenswrapper[4721]: I0128 18:39:22.418524 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rqqgc\" (UniqueName: \"kubernetes.io/projected/a924d471-72bb-4dc5-b234-cc5679983d55-kube-api-access-rqqgc\") pod \"oauth-openshift-9fbfc7dc4-nxq75\" (UID: \"a924d471-72bb-4dc5-b234-cc5679983d55\") " pod="openshift-authentication/oauth-openshift-9fbfc7dc4-nxq75" Jan 28 18:39:22 crc kubenswrapper[4721]: I0128 18:39:22.418561 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a924d471-72bb-4dc5-b234-cc5679983d55-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-9fbfc7dc4-nxq75\" (UID: \"a924d471-72bb-4dc5-b234-cc5679983d55\") " pod="openshift-authentication/oauth-openshift-9fbfc7dc4-nxq75" Jan 28 18:39:22 crc kubenswrapper[4721]: I0128 18:39:22.418592 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/a924d471-72bb-4dc5-b234-cc5679983d55-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-9fbfc7dc4-nxq75\" (UID: \"a924d471-72bb-4dc5-b234-cc5679983d55\") " pod="openshift-authentication/oauth-openshift-9fbfc7dc4-nxq75" Jan 28 18:39:22 crc kubenswrapper[4721]: I0128 18:39:22.418613 4721 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/a924d471-72bb-4dc5-b234-cc5679983d55-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-9fbfc7dc4-nxq75\" (UID: \"a924d471-72bb-4dc5-b234-cc5679983d55\") " pod="openshift-authentication/oauth-openshift-9fbfc7dc4-nxq75" Jan 28 18:39:22 crc kubenswrapper[4721]: I0128 18:39:22.418648 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/a924d471-72bb-4dc5-b234-cc5679983d55-audit-dir\") pod \"oauth-openshift-9fbfc7dc4-nxq75\" (UID: \"a924d471-72bb-4dc5-b234-cc5679983d55\") " pod="openshift-authentication/oauth-openshift-9fbfc7dc4-nxq75" Jan 28 18:39:22 crc kubenswrapper[4721]: I0128 18:39:22.418667 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/a924d471-72bb-4dc5-b234-cc5679983d55-audit-policies\") pod \"oauth-openshift-9fbfc7dc4-nxq75\" (UID: \"a924d471-72bb-4dc5-b234-cc5679983d55\") " pod="openshift-authentication/oauth-openshift-9fbfc7dc4-nxq75" Jan 28 18:39:22 crc kubenswrapper[4721]: I0128 18:39:22.418684 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/a924d471-72bb-4dc5-b234-cc5679983d55-v4-0-config-system-session\") pod \"oauth-openshift-9fbfc7dc4-nxq75\" (UID: \"a924d471-72bb-4dc5-b234-cc5679983d55\") " pod="openshift-authentication/oauth-openshift-9fbfc7dc4-nxq75" Jan 28 18:39:22 crc kubenswrapper[4721]: I0128 18:39:22.418826 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/a924d471-72bb-4dc5-b234-cc5679983d55-v4-0-config-system-serving-cert\") pod \"oauth-openshift-9fbfc7dc4-nxq75\" (UID: \"a924d471-72bb-4dc5-b234-cc5679983d55\") " pod="openshift-authentication/oauth-openshift-9fbfc7dc4-nxq75" Jan 28 18:39:22 crc kubenswrapper[4721]: I0128 18:39:22.419369 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/a924d471-72bb-4dc5-b234-cc5679983d55-v4-0-config-system-service-ca\") pod \"oauth-openshift-9fbfc7dc4-nxq75\" (UID: \"a924d471-72bb-4dc5-b234-cc5679983d55\") " pod="openshift-authentication/oauth-openshift-9fbfc7dc4-nxq75" Jan 28 18:39:22 crc kubenswrapper[4721]: I0128 18:39:22.419457 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/a924d471-72bb-4dc5-b234-cc5679983d55-audit-dir\") pod \"oauth-openshift-9fbfc7dc4-nxq75\" (UID: \"a924d471-72bb-4dc5-b234-cc5679983d55\") " pod="openshift-authentication/oauth-openshift-9fbfc7dc4-nxq75" Jan 28 18:39:22 crc kubenswrapper[4721]: I0128 18:39:22.419852 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a924d471-72bb-4dc5-b234-cc5679983d55-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-9fbfc7dc4-nxq75\" (UID: \"a924d471-72bb-4dc5-b234-cc5679983d55\") " pod="openshift-authentication/oauth-openshift-9fbfc7dc4-nxq75" Jan 28 18:39:22 crc kubenswrapper[4721]: I0128 18:39:22.419966 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: 
\"kubernetes.io/configmap/a924d471-72bb-4dc5-b234-cc5679983d55-v4-0-config-system-cliconfig\") pod \"oauth-openshift-9fbfc7dc4-nxq75\" (UID: \"a924d471-72bb-4dc5-b234-cc5679983d55\") " pod="openshift-authentication/oauth-openshift-9fbfc7dc4-nxq75" Jan 28 18:39:22 crc kubenswrapper[4721]: I0128 18:39:22.420099 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/a924d471-72bb-4dc5-b234-cc5679983d55-audit-policies\") pod \"oauth-openshift-9fbfc7dc4-nxq75\" (UID: \"a924d471-72bb-4dc5-b234-cc5679983d55\") " pod="openshift-authentication/oauth-openshift-9fbfc7dc4-nxq75" Jan 28 18:39:22 crc kubenswrapper[4721]: I0128 18:39:22.424142 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/a924d471-72bb-4dc5-b234-cc5679983d55-v4-0-config-user-template-error\") pod \"oauth-openshift-9fbfc7dc4-nxq75\" (UID: \"a924d471-72bb-4dc5-b234-cc5679983d55\") " pod="openshift-authentication/oauth-openshift-9fbfc7dc4-nxq75" Jan 28 18:39:22 crc kubenswrapper[4721]: I0128 18:39:22.424461 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/a924d471-72bb-4dc5-b234-cc5679983d55-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-9fbfc7dc4-nxq75\" (UID: \"a924d471-72bb-4dc5-b234-cc5679983d55\") " pod="openshift-authentication/oauth-openshift-9fbfc7dc4-nxq75" Jan 28 18:39:22 crc kubenswrapper[4721]: I0128 18:39:22.424490 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/a924d471-72bb-4dc5-b234-cc5679983d55-v4-0-config-system-router-certs\") pod \"oauth-openshift-9fbfc7dc4-nxq75\" (UID: \"a924d471-72bb-4dc5-b234-cc5679983d55\") " pod="openshift-authentication/oauth-openshift-9fbfc7dc4-nxq75" Jan 28 18:39:22 crc kubenswrapper[4721]: I0128 18:39:22.424679 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/a924d471-72bb-4dc5-b234-cc5679983d55-v4-0-config-system-serving-cert\") pod \"oauth-openshift-9fbfc7dc4-nxq75\" (UID: \"a924d471-72bb-4dc5-b234-cc5679983d55\") " pod="openshift-authentication/oauth-openshift-9fbfc7dc4-nxq75" Jan 28 18:39:22 crc kubenswrapper[4721]: I0128 18:39:22.424920 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/a924d471-72bb-4dc5-b234-cc5679983d55-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-9fbfc7dc4-nxq75\" (UID: \"a924d471-72bb-4dc5-b234-cc5679983d55\") " pod="openshift-authentication/oauth-openshift-9fbfc7dc4-nxq75" Jan 28 18:39:22 crc kubenswrapper[4721]: I0128 18:39:22.426233 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/a924d471-72bb-4dc5-b234-cc5679983d55-v4-0-config-user-template-login\") pod \"oauth-openshift-9fbfc7dc4-nxq75\" (UID: \"a924d471-72bb-4dc5-b234-cc5679983d55\") " pod="openshift-authentication/oauth-openshift-9fbfc7dc4-nxq75" Jan 28 18:39:22 crc kubenswrapper[4721]: I0128 18:39:22.428673 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/a924d471-72bb-4dc5-b234-cc5679983d55-v4-0-config-system-session\") pod 
\"oauth-openshift-9fbfc7dc4-nxq75\" (UID: \"a924d471-72bb-4dc5-b234-cc5679983d55\") " pod="openshift-authentication/oauth-openshift-9fbfc7dc4-nxq75" Jan 28 18:39:22 crc kubenswrapper[4721]: I0128 18:39:22.431757 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/a924d471-72bb-4dc5-b234-cc5679983d55-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-9fbfc7dc4-nxq75\" (UID: \"a924d471-72bb-4dc5-b234-cc5679983d55\") " pod="openshift-authentication/oauth-openshift-9fbfc7dc4-nxq75" Jan 28 18:39:22 crc kubenswrapper[4721]: I0128 18:39:22.481940 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rqqgc\" (UniqueName: \"kubernetes.io/projected/a924d471-72bb-4dc5-b234-cc5679983d55-kube-api-access-rqqgc\") pod \"oauth-openshift-9fbfc7dc4-nxq75\" (UID: \"a924d471-72bb-4dc5-b234-cc5679983d55\") " pod="openshift-authentication/oauth-openshift-9fbfc7dc4-nxq75" Jan 28 18:39:22 crc kubenswrapper[4721]: I0128 18:39:22.543116 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 28 18:39:22 crc kubenswrapper[4721]: I0128 18:39:22.776217 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-9fbfc7dc4-nxq75" Jan 28 18:39:22 crc kubenswrapper[4721]: I0128 18:39:22.963451 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-9fbfc7dc4-nxq75"] Jan 28 18:39:22 crc kubenswrapper[4721]: W0128 18:39:22.970909 4721 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda924d471_72bb_4dc5_b234_cc5679983d55.slice/crio-8fa4eb9d68ad3d8235801606ab58fcb1265bd0b92757e8b56f3c70d6ceb704f8 WatchSource:0}: Error finding container 8fa4eb9d68ad3d8235801606ab58fcb1265bd0b92757e8b56f3c70d6ceb704f8: Status 404 returned error can't find the container with id 8fa4eb9d68ad3d8235801606ab58fcb1265bd0b92757e8b56f3c70d6ceb704f8 Jan 28 18:39:23 crc kubenswrapper[4721]: I0128 18:39:23.536593 4721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="26a0a4f9-321f-4196-88ce-888b82380eb6" path="/var/lib/kubelet/pods/26a0a4f9-321f-4196-88ce-888b82380eb6/volumes" Jan 28 18:39:23 crc kubenswrapper[4721]: I0128 18:39:23.930133 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-9fbfc7dc4-nxq75" event={"ID":"a924d471-72bb-4dc5-b234-cc5679983d55","Type":"ContainerStarted","Data":"15668ec3bcab0a73251f0dcf359ab58bf8ff6e4ebb11f55115e0b96b0d63fa01"} Jan 28 18:39:23 crc kubenswrapper[4721]: I0128 18:39:23.930203 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-9fbfc7dc4-nxq75" event={"ID":"a924d471-72bb-4dc5-b234-cc5679983d55","Type":"ContainerStarted","Data":"8fa4eb9d68ad3d8235801606ab58fcb1265bd0b92757e8b56f3c70d6ceb704f8"} Jan 28 18:39:23 crc kubenswrapper[4721]: I0128 18:39:23.930403 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-9fbfc7dc4-nxq75" Jan 28 18:39:23 crc kubenswrapper[4721]: I0128 18:39:23.935089 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-9fbfc7dc4-nxq75" Jan 28 18:39:23 crc kubenswrapper[4721]: I0128 18:39:23.945458 4721 pod_startup_latency_tracker.go:104] "Observed 
pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podStartSLOduration=1.945441711 podStartE2EDuration="1.945441711s" podCreationTimestamp="2026-01-28 18:39:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:39:23.945081069 +0000 UTC m=+329.670386629" watchObservedRunningTime="2026-01-28 18:39:23.945441711 +0000 UTC m=+329.670747271" Jan 28 18:39:23 crc kubenswrapper[4721]: I0128 18:39:23.963094 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-9fbfc7dc4-nxq75" podStartSLOduration=83.963072186 podStartE2EDuration="1m23.963072186s" podCreationTimestamp="2026-01-28 18:38:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:39:23.962571609 +0000 UTC m=+329.687877169" watchObservedRunningTime="2026-01-28 18:39:23.963072186 +0000 UTC m=+329.688377746" Jan 28 18:39:24 crc kubenswrapper[4721]: I0128 18:39:24.136828 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-ktm7m"] Jan 28 18:39:24 crc kubenswrapper[4721]: I0128 18:39:24.137126 4721 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-ktm7m" podUID="d093e4ed-b49f-4abb-9cab-67d8072aea98" containerName="registry-server" containerID="cri-o://a2dc7fd9af99eda303fd6ee0e72336c97ec71e21abbffe20cb06bc78beb50dca" gracePeriod=2 Jan 28 18:39:24 crc kubenswrapper[4721]: I0128 18:39:24.558126 4721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-ktm7m" Jan 28 18:39:24 crc kubenswrapper[4721]: I0128 18:39:24.579721 4721 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 28 18:39:24 crc kubenswrapper[4721]: I0128 18:39:24.583092 4721 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 28 18:39:24 crc kubenswrapper[4721]: I0128 18:39:24.645216 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dw86b\" (UniqueName: \"kubernetes.io/projected/d093e4ed-b49f-4abb-9cab-67d8072aea98-kube-api-access-dw86b\") pod \"d093e4ed-b49f-4abb-9cab-67d8072aea98\" (UID: \"d093e4ed-b49f-4abb-9cab-67d8072aea98\") " Jan 28 18:39:24 crc kubenswrapper[4721]: I0128 18:39:24.645572 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d093e4ed-b49f-4abb-9cab-67d8072aea98-catalog-content\") pod \"d093e4ed-b49f-4abb-9cab-67d8072aea98\" (UID: \"d093e4ed-b49f-4abb-9cab-67d8072aea98\") " Jan 28 18:39:24 crc kubenswrapper[4721]: I0128 18:39:24.645659 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d093e4ed-b49f-4abb-9cab-67d8072aea98-utilities\") pod \"d093e4ed-b49f-4abb-9cab-67d8072aea98\" (UID: \"d093e4ed-b49f-4abb-9cab-67d8072aea98\") " Jan 28 18:39:24 crc kubenswrapper[4721]: I0128 18:39:24.646272 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d093e4ed-b49f-4abb-9cab-67d8072aea98-utilities" (OuterVolumeSpecName: "utilities") pod 
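The two pod_startup_latency_tracker lines record the kubelet's startup SLO metric: podStartSLOduration is the observed running time minus the pod creation timestamp, less any image-pull window. Both pods report the zero time ("0001-01-01 00:00:00") for firstStartedPulling and lastFinishedPulling, meaning no pull happened, so the SLO and E2E durations coincide: 1.945s for the startup monitor, and 83.963s (reported as "1m23.963072186s") for oauth-openshift, which was created at 18:38:00 but only became runnable here. A quick check of that arithmetic:

```go
// slocheck.go - verify the podStartSLOduration figure in the oauth-openshift
// line above: observed running time minus pod creation time (pull window is zero).
package main

import (
	"fmt"
	"time"
)

func main() {
	created, _ := time.Parse(time.RFC3339Nano, "2026-01-28T18:38:00Z")
	running, _ := time.Parse(time.RFC3339Nano, "2026-01-28T18:39:23.963072186Z")
	fmt.Println(running.Sub(created)) // 1m23.963072186s, matching podStartE2EDuration
}
```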
"d093e4ed-b49f-4abb-9cab-67d8072aea98" (UID: "d093e4ed-b49f-4abb-9cab-67d8072aea98"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 18:39:24 crc kubenswrapper[4721]: I0128 18:39:24.651912 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d093e4ed-b49f-4abb-9cab-67d8072aea98-kube-api-access-dw86b" (OuterVolumeSpecName: "kube-api-access-dw86b") pod "d093e4ed-b49f-4abb-9cab-67d8072aea98" (UID: "d093e4ed-b49f-4abb-9cab-67d8072aea98"). InnerVolumeSpecName "kube-api-access-dw86b". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:39:24 crc kubenswrapper[4721]: I0128 18:39:24.691554 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d093e4ed-b49f-4abb-9cab-67d8072aea98-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d093e4ed-b49f-4abb-9cab-67d8072aea98" (UID: "d093e4ed-b49f-4abb-9cab-67d8072aea98"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 18:39:24 crc kubenswrapper[4721]: I0128 18:39:24.747491 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dw86b\" (UniqueName: \"kubernetes.io/projected/d093e4ed-b49f-4abb-9cab-67d8072aea98-kube-api-access-dw86b\") on node \"crc\" DevicePath \"\"" Jan 28 18:39:24 crc kubenswrapper[4721]: I0128 18:39:24.747518 4721 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d093e4ed-b49f-4abb-9cab-67d8072aea98-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 18:39:24 crc kubenswrapper[4721]: I0128 18:39:24.747528 4721 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d093e4ed-b49f-4abb-9cab-67d8072aea98-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 18:39:24 crc kubenswrapper[4721]: I0128 18:39:24.940757 4721 generic.go:334] "Generic (PLEG): container finished" podID="d093e4ed-b49f-4abb-9cab-67d8072aea98" containerID="a2dc7fd9af99eda303fd6ee0e72336c97ec71e21abbffe20cb06bc78beb50dca" exitCode=0 Jan 28 18:39:24 crc kubenswrapper[4721]: I0128 18:39:24.940841 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ktm7m" event={"ID":"d093e4ed-b49f-4abb-9cab-67d8072aea98","Type":"ContainerDied","Data":"a2dc7fd9af99eda303fd6ee0e72336c97ec71e21abbffe20cb06bc78beb50dca"} Jan 28 18:39:24 crc kubenswrapper[4721]: I0128 18:39:24.940903 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ktm7m" event={"ID":"d093e4ed-b49f-4abb-9cab-67d8072aea98","Type":"ContainerDied","Data":"d15c110d736d7c554b8835f215f55a3d17e2a585c85959fcbd2be0da6f8ad4b0"} Jan 28 18:39:24 crc kubenswrapper[4721]: I0128 18:39:24.940898 4721 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-ktm7m" Jan 28 18:39:24 crc kubenswrapper[4721]: I0128 18:39:24.940953 4721 scope.go:117] "RemoveContainer" containerID="a2dc7fd9af99eda303fd6ee0e72336c97ec71e21abbffe20cb06bc78beb50dca" Jan 28 18:39:24 crc kubenswrapper[4721]: I0128 18:39:24.945212 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 28 18:39:24 crc kubenswrapper[4721]: I0128 18:39:24.957086 4721 scope.go:117] "RemoveContainer" containerID="1b8faab341a50286ffc43ecd78aa35068eb08e18b993ec22b7a447b0d5d71806" Jan 28 18:39:24 crc kubenswrapper[4721]: I0128 18:39:24.976222 4721 scope.go:117] "RemoveContainer" containerID="7c29e5de8642a087902608baf1b1982f8df96f8583fc280f2e626f81ee441ca0" Jan 28 18:39:24 crc kubenswrapper[4721]: I0128 18:39:24.981263 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-ktm7m"] Jan 28 18:39:24 crc kubenswrapper[4721]: I0128 18:39:24.983247 4721 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-ktm7m"] Jan 28 18:39:25 crc kubenswrapper[4721]: I0128 18:39:25.008701 4721 scope.go:117] "RemoveContainer" containerID="a2dc7fd9af99eda303fd6ee0e72336c97ec71e21abbffe20cb06bc78beb50dca" Jan 28 18:39:25 crc kubenswrapper[4721]: E0128 18:39:25.009137 4721 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a2dc7fd9af99eda303fd6ee0e72336c97ec71e21abbffe20cb06bc78beb50dca\": container with ID starting with a2dc7fd9af99eda303fd6ee0e72336c97ec71e21abbffe20cb06bc78beb50dca not found: ID does not exist" containerID="a2dc7fd9af99eda303fd6ee0e72336c97ec71e21abbffe20cb06bc78beb50dca" Jan 28 18:39:25 crc kubenswrapper[4721]: I0128 18:39:25.009193 4721 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a2dc7fd9af99eda303fd6ee0e72336c97ec71e21abbffe20cb06bc78beb50dca"} err="failed to get container status \"a2dc7fd9af99eda303fd6ee0e72336c97ec71e21abbffe20cb06bc78beb50dca\": rpc error: code = NotFound desc = could not find container \"a2dc7fd9af99eda303fd6ee0e72336c97ec71e21abbffe20cb06bc78beb50dca\": container with ID starting with a2dc7fd9af99eda303fd6ee0e72336c97ec71e21abbffe20cb06bc78beb50dca not found: ID does not exist" Jan 28 18:39:25 crc kubenswrapper[4721]: I0128 18:39:25.009220 4721 scope.go:117] "RemoveContainer" containerID="1b8faab341a50286ffc43ecd78aa35068eb08e18b993ec22b7a447b0d5d71806" Jan 28 18:39:25 crc kubenswrapper[4721]: E0128 18:39:25.009527 4721 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1b8faab341a50286ffc43ecd78aa35068eb08e18b993ec22b7a447b0d5d71806\": container with ID starting with 1b8faab341a50286ffc43ecd78aa35068eb08e18b993ec22b7a447b0d5d71806 not found: ID does not exist" containerID="1b8faab341a50286ffc43ecd78aa35068eb08e18b993ec22b7a447b0d5d71806" Jan 28 18:39:25 crc kubenswrapper[4721]: I0128 18:39:25.009554 4721 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1b8faab341a50286ffc43ecd78aa35068eb08e18b993ec22b7a447b0d5d71806"} err="failed to get container status \"1b8faab341a50286ffc43ecd78aa35068eb08e18b993ec22b7a447b0d5d71806\": rpc error: code = NotFound desc = could not find container \"1b8faab341a50286ffc43ecd78aa35068eb08e18b993ec22b7a447b0d5d71806\": container with ID starting with 
1b8faab341a50286ffc43ecd78aa35068eb08e18b993ec22b7a447b0d5d71806 not found: ID does not exist"
Jan 28 18:39:25 crc kubenswrapper[4721]: I0128 18:39:25.009573 4721 scope.go:117] "RemoveContainer" containerID="7c29e5de8642a087902608baf1b1982f8df96f8583fc280f2e626f81ee441ca0"
Jan 28 18:39:25 crc kubenswrapper[4721]: E0128 18:39:25.009828 4721 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7c29e5de8642a087902608baf1b1982f8df96f8583fc280f2e626f81ee441ca0\": container with ID starting with 7c29e5de8642a087902608baf1b1982f8df96f8583fc280f2e626f81ee441ca0 not found: ID does not exist" containerID="7c29e5de8642a087902608baf1b1982f8df96f8583fc280f2e626f81ee441ca0"
Jan 28 18:39:25 crc kubenswrapper[4721]: I0128 18:39:25.009859 4721 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7c29e5de8642a087902608baf1b1982f8df96f8583fc280f2e626f81ee441ca0"} err="failed to get container status \"7c29e5de8642a087902608baf1b1982f8df96f8583fc280f2e626f81ee441ca0\": rpc error: code = NotFound desc = could not find container \"7c29e5de8642a087902608baf1b1982f8df96f8583fc280f2e626f81ee441ca0\": container with ID starting with 7c29e5de8642a087902608baf1b1982f8df96f8583fc280f2e626f81ee441ca0 not found: ID does not exist"
Jan 28 18:39:25 crc kubenswrapper[4721]: I0128 18:39:25.535085 4721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d093e4ed-b49f-4abb-9cab-67d8072aea98" path="/var/lib/kubelet/pods/d093e4ed-b49f-4abb-9cab-67d8072aea98/volumes"
Jan 28 18:39:25 crc kubenswrapper[4721]: I0128 18:39:25.890499 4721 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"]
Jan 28 18:39:25 crc kubenswrapper[4721]: I0128 18:39:25.890751 4721 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" containerID="cri-o://2d74681de970c7e709fa9d17c39e5cf88f883f6fa6c7b831d6db8760e700649f" gracePeriod=5
Jan 28 18:39:26 crc kubenswrapper[4721]: I0128 18:39:26.737341 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-nx6vw"]
Jan 28 18:39:26 crc kubenswrapper[4721]: I0128 18:39:26.737613 4721 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-nx6vw" podUID="7ac4d9d7-c104-455a-b162-75b3bbf2a879" containerName="registry-server" containerID="cri-o://2612160059c4e82d7ea37c966930f8e2f5dc64664f4182c5ee54a9707750349d" gracePeriod=2
Jan 28 18:39:26 crc kubenswrapper[4721]: I0128 18:39:26.953808 4721 generic.go:334] "Generic (PLEG): container finished" podID="7ac4d9d7-c104-455a-b162-75b3bbf2a879" containerID="2612160059c4e82d7ea37c966930f8e2f5dc64664f4182c5ee54a9707750349d" exitCode=0
Jan 28 18:39:26 crc kubenswrapper[4721]: I0128 18:39:26.953878 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nx6vw" event={"ID":"7ac4d9d7-c104-455a-b162-75b3bbf2a879","Type":"ContainerDied","Data":"2612160059c4e82d7ea37c966930f8e2f5dc64664f4182c5ee54a9707750349d"}
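The RemoveContainer / "ContainerStatus from runtime service failed" / "DeleteContainer returned error" triplets above look alarming but are a benign race: by the time the kubelet re-queries CRI-O for a container it has just removed, the runtime answers NotFound, and cleanup converges anyway. A minimal sketch of the usual way to make such a deletion idempotent, assuming a CRI-style gRPC error surface (the remove callback here is a stand-in, not the kubelet's actual runtime client):

```go
// removeidempotent.go - sketch of the pattern implied by the NotFound errors
// above: when cleanup races with runtime garbage collection, "already gone"
// should count as success.
package main

import (
	"fmt"

	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

// removeContainer treats codes.NotFound from the runtime as a no-op success,
// mirroring how the deletions above still converge after the 404s.
func removeContainer(remove func(id string) error, id string) error {
	if err := remove(id); status.Code(err) != codes.NotFound {
		return err
	}
	return nil // container already deleted; nothing left to do
}

func main() {
	gone := func(id string) error {
		return status.Error(codes.NotFound, "could not find container "+id)
	}
	fmt.Println(removeContainer(gone, "a2dc7fd9") == nil) // true
}
```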
Need to start a new one" pod="openshift-marketplace/redhat-operators-nx6vw" Jan 28 18:39:27 crc kubenswrapper[4721]: I0128 18:39:27.177352 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7ac4d9d7-c104-455a-b162-75b3bbf2a879-catalog-content\") pod \"7ac4d9d7-c104-455a-b162-75b3bbf2a879\" (UID: \"7ac4d9d7-c104-455a-b162-75b3bbf2a879\") " Jan 28 18:39:27 crc kubenswrapper[4721]: I0128 18:39:27.177408 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ghb65\" (UniqueName: \"kubernetes.io/projected/7ac4d9d7-c104-455a-b162-75b3bbf2a879-kube-api-access-ghb65\") pod \"7ac4d9d7-c104-455a-b162-75b3bbf2a879\" (UID: \"7ac4d9d7-c104-455a-b162-75b3bbf2a879\") " Jan 28 18:39:27 crc kubenswrapper[4721]: I0128 18:39:27.177437 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7ac4d9d7-c104-455a-b162-75b3bbf2a879-utilities\") pod \"7ac4d9d7-c104-455a-b162-75b3bbf2a879\" (UID: \"7ac4d9d7-c104-455a-b162-75b3bbf2a879\") " Jan 28 18:39:27 crc kubenswrapper[4721]: I0128 18:39:27.178571 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7ac4d9d7-c104-455a-b162-75b3bbf2a879-utilities" (OuterVolumeSpecName: "utilities") pod "7ac4d9d7-c104-455a-b162-75b3bbf2a879" (UID: "7ac4d9d7-c104-455a-b162-75b3bbf2a879"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 18:39:27 crc kubenswrapper[4721]: I0128 18:39:27.178865 4721 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7ac4d9d7-c104-455a-b162-75b3bbf2a879-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 18:39:27 crc kubenswrapper[4721]: I0128 18:39:27.182046 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7ac4d9d7-c104-455a-b162-75b3bbf2a879-kube-api-access-ghb65" (OuterVolumeSpecName: "kube-api-access-ghb65") pod "7ac4d9d7-c104-455a-b162-75b3bbf2a879" (UID: "7ac4d9d7-c104-455a-b162-75b3bbf2a879"). InnerVolumeSpecName "kube-api-access-ghb65". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:39:27 crc kubenswrapper[4721]: I0128 18:39:27.280669 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ghb65\" (UniqueName: \"kubernetes.io/projected/7ac4d9d7-c104-455a-b162-75b3bbf2a879-kube-api-access-ghb65\") on node \"crc\" DevicePath \"\"" Jan 28 18:39:27 crc kubenswrapper[4721]: I0128 18:39:27.302147 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7ac4d9d7-c104-455a-b162-75b3bbf2a879-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7ac4d9d7-c104-455a-b162-75b3bbf2a879" (UID: "7ac4d9d7-c104-455a-b162-75b3bbf2a879"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 18:39:27 crc kubenswrapper[4721]: I0128 18:39:27.381763 4721 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7ac4d9d7-c104-455a-b162-75b3bbf2a879-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 18:39:27 crc kubenswrapper[4721]: I0128 18:39:27.961565 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nx6vw" event={"ID":"7ac4d9d7-c104-455a-b162-75b3bbf2a879","Type":"ContainerDied","Data":"0be87c44f1e74be508ed9039f19911c12808dc6414e2af6b6b99d43e0068057d"} Jan 28 18:39:27 crc kubenswrapper[4721]: I0128 18:39:27.961936 4721 scope.go:117] "RemoveContainer" containerID="2612160059c4e82d7ea37c966930f8e2f5dc64664f4182c5ee54a9707750349d" Jan 28 18:39:27 crc kubenswrapper[4721]: I0128 18:39:27.961654 4721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-nx6vw" Jan 28 18:39:27 crc kubenswrapper[4721]: I0128 18:39:27.985958 4721 scope.go:117] "RemoveContainer" containerID="d24051cf5d6bd62982863fc9b7a15142560e14c6ed128af69bc8f9bbf7279dba" Jan 28 18:39:27 crc kubenswrapper[4721]: I0128 18:39:27.992219 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-nx6vw"] Jan 28 18:39:27 crc kubenswrapper[4721]: I0128 18:39:27.996629 4721 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-nx6vw"] Jan 28 18:39:28 crc kubenswrapper[4721]: I0128 18:39:28.001788 4721 scope.go:117] "RemoveContainer" containerID="e8b3caca6e984df9986a3e5e71f37acefc7d817be54ae2a9bb6331d18260198b" Jan 28 18:39:29 crc kubenswrapper[4721]: I0128 18:39:29.535694 4721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7ac4d9d7-c104-455a-b162-75b3bbf2a879" path="/var/lib/kubelet/pods/7ac4d9d7-c104-455a-b162-75b3bbf2a879/volumes" Jan 28 18:39:30 crc kubenswrapper[4721]: I0128 18:39:30.978765 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Jan 28 18:39:30 crc kubenswrapper[4721]: I0128 18:39:30.979153 4721 generic.go:334] "Generic (PLEG): container finished" podID="f85e55b1a89d02b0cb034b1ea31ed45a" containerID="2d74681de970c7e709fa9d17c39e5cf88f883f6fa6c7b831d6db8760e700649f" exitCode=137 Jan 28 18:39:31 crc kubenswrapper[4721]: I0128 18:39:31.226349 4721 patch_prober.go:28] interesting pod/machine-config-daemon-76rx2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 18:39:31 crc kubenswrapper[4721]: I0128 18:39:31.226487 4721 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-76rx2" podUID="6e3427a4-9a03-4a08-bf7f-7a5e96290ad6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 18:39:31 crc kubenswrapper[4721]: I0128 18:39:31.458210 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Jan 28 18:39:31 crc kubenswrapper[4721]: I0128 18:39:31.458273 4721 util.go:48] "No ready sandbox 
Jan 28 18:39:31 crc kubenswrapper[4721]: I0128 18:39:31.458273 4721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Jan 28 18:39:31 crc kubenswrapper[4721]: I0128 18:39:31.500951 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4"
Jan 28 18:39:31 crc kubenswrapper[4721]: I0128 18:39:31.535151 4721 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID=""
Jan 28 18:39:31 crc kubenswrapper[4721]: I0128 18:39:31.547122 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") "
Jan 28 18:39:31 crc kubenswrapper[4721]: I0128 18:39:31.547252 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") "
Jan 28 18:39:31 crc kubenswrapper[4721]: I0128 18:39:31.547379 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") "
Jan 28 18:39:31 crc kubenswrapper[4721]: I0128 18:39:31.547380 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock" (OuterVolumeSpecName: "var-lock") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 28 18:39:31 crc kubenswrapper[4721]: I0128 18:39:31.547424 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") "
Jan 28 18:39:31 crc kubenswrapper[4721]: I0128 18:39:31.547401 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests" (OuterVolumeSpecName: "manifests") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 28 18:39:31 crc kubenswrapper[4721]: I0128 18:39:31.547438 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "resource-dir".
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 18:39:31 crc kubenswrapper[4721]: I0128 18:39:31.547465 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 28 18:39:31 crc kubenswrapper[4721]: I0128 18:39:31.547492 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log" (OuterVolumeSpecName: "var-log") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 18:39:31 crc kubenswrapper[4721]: I0128 18:39:31.547838 4721 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") on node \"crc\" DevicePath \"\"" Jan 28 18:39:31 crc kubenswrapper[4721]: I0128 18:39:31.547857 4721 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 28 18:39:31 crc kubenswrapper[4721]: I0128 18:39:31.547867 4721 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") on node \"crc\" DevicePath \"\"" Jan 28 18:39:31 crc kubenswrapper[4721]: I0128 18:39:31.547878 4721 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") on node \"crc\" DevicePath \"\"" Jan 28 18:39:31 crc kubenswrapper[4721]: I0128 18:39:31.551295 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 28 18:39:31 crc kubenswrapper[4721]: I0128 18:39:31.551334 4721 kubelet.go:2649] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="29fbb4df-61fa-4aec-b56d-8ca9f61332c2" Jan 28 18:39:31 crc kubenswrapper[4721]: I0128 18:39:31.554564 4721 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 28 18:39:31 crc kubenswrapper[4721]: I0128 18:39:31.554639 4721 kubelet.go:2673] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="29fbb4df-61fa-4aec-b56d-8ca9f61332c2" Jan 28 18:39:31 crc kubenswrapper[4721]: I0128 18:39:31.558401 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "pod-resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 18:39:31 crc kubenswrapper[4721]: I0128 18:39:31.649648 4721 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 28 18:39:31 crc kubenswrapper[4721]: I0128 18:39:31.986964 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Jan 28 18:39:31 crc kubenswrapper[4721]: I0128 18:39:31.987032 4721 scope.go:117] "RemoveContainer" containerID="2d74681de970c7e709fa9d17c39e5cf88f883f6fa6c7b831d6db8760e700649f" Jan 28 18:39:31 crc kubenswrapper[4721]: I0128 18:39:31.987156 4721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 28 18:39:33 crc kubenswrapper[4721]: I0128 18:39:33.536546 4721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" path="/var/lib/kubelet/pods/f85e55b1a89d02b0cb034b1ea31ed45a/volumes" Jan 28 18:39:39 crc kubenswrapper[4721]: I0128 18:39:39.652790 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-6n8x8"] Jan 28 18:39:39 crc kubenswrapper[4721]: I0128 18:39:39.654017 4721 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-6n8x8" podUID="a8ac5f19-3f57-4e3a-8f53-dc493fcceea1" containerName="route-controller-manager" containerID="cri-o://c1a0dc6e5b5a7283b3189a83a4d6ce388eeef0edc349858942b194400384cfd4" gracePeriod=30 Jan 28 18:39:39 crc kubenswrapper[4721]: I0128 18:39:39.688810 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-c9dk6"] Jan 28 18:39:39 crc kubenswrapper[4721]: I0128 18:39:39.689059 4721 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-c9dk6" podUID="13b4ddde-7262-4219-8aac-fb34883b9608" containerName="controller-manager" containerID="cri-o://00a75734892b0f995f4ecca4e1c2197943c9c19a58ad893ce00a141221eb8b75" gracePeriod=30 Jan 28 18:39:40 crc kubenswrapper[4721]: I0128 18:39:40.031869 4721 generic.go:334] "Generic (PLEG): container finished" podID="13b4ddde-7262-4219-8aac-fb34883b9608" containerID="00a75734892b0f995f4ecca4e1c2197943c9c19a58ad893ce00a141221eb8b75" exitCode=0 Jan 28 18:39:40 crc kubenswrapper[4721]: I0128 18:39:40.032220 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-c9dk6" event={"ID":"13b4ddde-7262-4219-8aac-fb34883b9608","Type":"ContainerDied","Data":"00a75734892b0f995f4ecca4e1c2197943c9c19a58ad893ce00a141221eb8b75"} Jan 28 18:39:40 crc kubenswrapper[4721]: I0128 18:39:40.034329 4721 generic.go:334] "Generic (PLEG): container finished" podID="a8ac5f19-3f57-4e3a-8f53-dc493fcceea1" containerID="c1a0dc6e5b5a7283b3189a83a4d6ce388eeef0edc349858942b194400384cfd4" exitCode=0 Jan 28 18:39:40 crc kubenswrapper[4721]: I0128 18:39:40.034357 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-6n8x8" 
event={"ID":"a8ac5f19-3f57-4e3a-8f53-dc493fcceea1","Type":"ContainerDied","Data":"c1a0dc6e5b5a7283b3189a83a4d6ce388eeef0edc349858942b194400384cfd4"} Jan 28 18:39:40 crc kubenswrapper[4721]: I0128 18:39:40.128361 4721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-6n8x8" Jan 28 18:39:40 crc kubenswrapper[4721]: I0128 18:39:40.164903 4721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-c9dk6" Jan 28 18:39:40 crc kubenswrapper[4721]: I0128 18:39:40.259643 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a8ac5f19-3f57-4e3a-8f53-dc493fcceea1-serving-cert\") pod \"a8ac5f19-3f57-4e3a-8f53-dc493fcceea1\" (UID: \"a8ac5f19-3f57-4e3a-8f53-dc493fcceea1\") " Jan 28 18:39:40 crc kubenswrapper[4721]: I0128 18:39:40.259748 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a8ac5f19-3f57-4e3a-8f53-dc493fcceea1-config\") pod \"a8ac5f19-3f57-4e3a-8f53-dc493fcceea1\" (UID: \"a8ac5f19-3f57-4e3a-8f53-dc493fcceea1\") " Jan 28 18:39:40 crc kubenswrapper[4721]: I0128 18:39:40.259804 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a8ac5f19-3f57-4e3a-8f53-dc493fcceea1-client-ca\") pod \"a8ac5f19-3f57-4e3a-8f53-dc493fcceea1\" (UID: \"a8ac5f19-3f57-4e3a-8f53-dc493fcceea1\") " Jan 28 18:39:40 crc kubenswrapper[4721]: I0128 18:39:40.259868 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fc28j\" (UniqueName: \"kubernetes.io/projected/a8ac5f19-3f57-4e3a-8f53-dc493fcceea1-kube-api-access-fc28j\") pod \"a8ac5f19-3f57-4e3a-8f53-dc493fcceea1\" (UID: \"a8ac5f19-3f57-4e3a-8f53-dc493fcceea1\") " Jan 28 18:39:40 crc kubenswrapper[4721]: I0128 18:39:40.260627 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a8ac5f19-3f57-4e3a-8f53-dc493fcceea1-client-ca" (OuterVolumeSpecName: "client-ca") pod "a8ac5f19-3f57-4e3a-8f53-dc493fcceea1" (UID: "a8ac5f19-3f57-4e3a-8f53-dc493fcceea1"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:39:40 crc kubenswrapper[4721]: I0128 18:39:40.260721 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a8ac5f19-3f57-4e3a-8f53-dc493fcceea1-config" (OuterVolumeSpecName: "config") pod "a8ac5f19-3f57-4e3a-8f53-dc493fcceea1" (UID: "a8ac5f19-3f57-4e3a-8f53-dc493fcceea1"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:39:40 crc kubenswrapper[4721]: I0128 18:39:40.265337 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a8ac5f19-3f57-4e3a-8f53-dc493fcceea1-kube-api-access-fc28j" (OuterVolumeSpecName: "kube-api-access-fc28j") pod "a8ac5f19-3f57-4e3a-8f53-dc493fcceea1" (UID: "a8ac5f19-3f57-4e3a-8f53-dc493fcceea1"). InnerVolumeSpecName "kube-api-access-fc28j". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:39:40 crc kubenswrapper[4721]: I0128 18:39:40.265360 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a8ac5f19-3f57-4e3a-8f53-dc493fcceea1-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "a8ac5f19-3f57-4e3a-8f53-dc493fcceea1" (UID: "a8ac5f19-3f57-4e3a-8f53-dc493fcceea1"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:39:40 crc kubenswrapper[4721]: I0128 18:39:40.360864 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-49cdl\" (UniqueName: \"kubernetes.io/projected/13b4ddde-7262-4219-8aac-fb34883b9608-kube-api-access-49cdl\") pod \"13b4ddde-7262-4219-8aac-fb34883b9608\" (UID: \"13b4ddde-7262-4219-8aac-fb34883b9608\") " Jan 28 18:39:40 crc kubenswrapper[4721]: I0128 18:39:40.360902 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/13b4ddde-7262-4219-8aac-fb34883b9608-config\") pod \"13b4ddde-7262-4219-8aac-fb34883b9608\" (UID: \"13b4ddde-7262-4219-8aac-fb34883b9608\") " Jan 28 18:39:40 crc kubenswrapper[4721]: I0128 18:39:40.360996 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/13b4ddde-7262-4219-8aac-fb34883b9608-client-ca\") pod \"13b4ddde-7262-4219-8aac-fb34883b9608\" (UID: \"13b4ddde-7262-4219-8aac-fb34883b9608\") " Jan 28 18:39:40 crc kubenswrapper[4721]: I0128 18:39:40.361013 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/13b4ddde-7262-4219-8aac-fb34883b9608-proxy-ca-bundles\") pod \"13b4ddde-7262-4219-8aac-fb34883b9608\" (UID: \"13b4ddde-7262-4219-8aac-fb34883b9608\") " Jan 28 18:39:40 crc kubenswrapper[4721]: I0128 18:39:40.361040 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/13b4ddde-7262-4219-8aac-fb34883b9608-serving-cert\") pod \"13b4ddde-7262-4219-8aac-fb34883b9608\" (UID: \"13b4ddde-7262-4219-8aac-fb34883b9608\") " Jan 28 18:39:40 crc kubenswrapper[4721]: I0128 18:39:40.361253 4721 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a8ac5f19-3f57-4e3a-8f53-dc493fcceea1-client-ca\") on node \"crc\" DevicePath \"\"" Jan 28 18:39:40 crc kubenswrapper[4721]: I0128 18:39:40.361266 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fc28j\" (UniqueName: \"kubernetes.io/projected/a8ac5f19-3f57-4e3a-8f53-dc493fcceea1-kube-api-access-fc28j\") on node \"crc\" DevicePath \"\"" Jan 28 18:39:40 crc kubenswrapper[4721]: I0128 18:39:40.361276 4721 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a8ac5f19-3f57-4e3a-8f53-dc493fcceea1-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 28 18:39:40 crc kubenswrapper[4721]: I0128 18:39:40.361285 4721 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a8ac5f19-3f57-4e3a-8f53-dc493fcceea1-config\") on node \"crc\" DevicePath \"\"" Jan 28 18:39:40 crc kubenswrapper[4721]: I0128 18:39:40.362226 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/13b4ddde-7262-4219-8aac-fb34883b9608-proxy-ca-bundles" (OuterVolumeSpecName: 
"proxy-ca-bundles") pod "13b4ddde-7262-4219-8aac-fb34883b9608" (UID: "13b4ddde-7262-4219-8aac-fb34883b9608"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:39:40 crc kubenswrapper[4721]: I0128 18:39:40.362281 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/13b4ddde-7262-4219-8aac-fb34883b9608-client-ca" (OuterVolumeSpecName: "client-ca") pod "13b4ddde-7262-4219-8aac-fb34883b9608" (UID: "13b4ddde-7262-4219-8aac-fb34883b9608"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:39:40 crc kubenswrapper[4721]: I0128 18:39:40.362461 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/13b4ddde-7262-4219-8aac-fb34883b9608-config" (OuterVolumeSpecName: "config") pod "13b4ddde-7262-4219-8aac-fb34883b9608" (UID: "13b4ddde-7262-4219-8aac-fb34883b9608"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:39:40 crc kubenswrapper[4721]: I0128 18:39:40.364943 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/13b4ddde-7262-4219-8aac-fb34883b9608-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "13b4ddde-7262-4219-8aac-fb34883b9608" (UID: "13b4ddde-7262-4219-8aac-fb34883b9608"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:39:40 crc kubenswrapper[4721]: I0128 18:39:40.365146 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/13b4ddde-7262-4219-8aac-fb34883b9608-kube-api-access-49cdl" (OuterVolumeSpecName: "kube-api-access-49cdl") pod "13b4ddde-7262-4219-8aac-fb34883b9608" (UID: "13b4ddde-7262-4219-8aac-fb34883b9608"). InnerVolumeSpecName "kube-api-access-49cdl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:39:40 crc kubenswrapper[4721]: I0128 18:39:40.462054 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-49cdl\" (UniqueName: \"kubernetes.io/projected/13b4ddde-7262-4219-8aac-fb34883b9608-kube-api-access-49cdl\") on node \"crc\" DevicePath \"\"" Jan 28 18:39:40 crc kubenswrapper[4721]: I0128 18:39:40.462084 4721 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/13b4ddde-7262-4219-8aac-fb34883b9608-config\") on node \"crc\" DevicePath \"\"" Jan 28 18:39:40 crc kubenswrapper[4721]: I0128 18:39:40.462093 4721 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/13b4ddde-7262-4219-8aac-fb34883b9608-client-ca\") on node \"crc\" DevicePath \"\"" Jan 28 18:39:40 crc kubenswrapper[4721]: I0128 18:39:40.462102 4721 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/13b4ddde-7262-4219-8aac-fb34883b9608-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 28 18:39:40 crc kubenswrapper[4721]: I0128 18:39:40.462110 4721 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/13b4ddde-7262-4219-8aac-fb34883b9608-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 28 18:39:41 crc kubenswrapper[4721]: I0128 18:39:41.041007 4721 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-c9dk6" Jan 28 18:39:41 crc kubenswrapper[4721]: I0128 18:39:41.041019 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-c9dk6" event={"ID":"13b4ddde-7262-4219-8aac-fb34883b9608","Type":"ContainerDied","Data":"dfad2cedaf51c86061bb343f8931c6d0ac0f135b0212ccec479018e53202c572"} Jan 28 18:39:41 crc kubenswrapper[4721]: I0128 18:39:41.041065 4721 scope.go:117] "RemoveContainer" containerID="00a75734892b0f995f4ecca4e1c2197943c9c19a58ad893ce00a141221eb8b75" Jan 28 18:39:41 crc kubenswrapper[4721]: I0128 18:39:41.043595 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-6n8x8" event={"ID":"a8ac5f19-3f57-4e3a-8f53-dc493fcceea1","Type":"ContainerDied","Data":"236d9b13a4c4bd07c6ea135f049bb2bd9433a0e6f3dc793c5af396e081007866"} Jan 28 18:39:41 crc kubenswrapper[4721]: I0128 18:39:41.043774 4721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-6n8x8" Jan 28 18:39:41 crc kubenswrapper[4721]: I0128 18:39:41.058808 4721 scope.go:117] "RemoveContainer" containerID="c1a0dc6e5b5a7283b3189a83a4d6ce388eeef0edc349858942b194400384cfd4" Jan 28 18:39:41 crc kubenswrapper[4721]: I0128 18:39:41.073219 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-c9dk6"] Jan 28 18:39:41 crc kubenswrapper[4721]: I0128 18:39:41.077245 4721 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-c9dk6"] Jan 28 18:39:41 crc kubenswrapper[4721]: I0128 18:39:41.092275 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-6n8x8"] Jan 28 18:39:41 crc kubenswrapper[4721]: I0128 18:39:41.096981 4721 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-6n8x8"] Jan 28 18:39:41 crc kubenswrapper[4721]: I0128 18:39:41.281524 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5487bbd6d7-n2bq2"] Jan 28 18:39:41 crc kubenswrapper[4721]: E0128 18:39:41.282130 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7ac4d9d7-c104-455a-b162-75b3bbf2a879" containerName="registry-server" Jan 28 18:39:41 crc kubenswrapper[4721]: I0128 18:39:41.282155 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="7ac4d9d7-c104-455a-b162-75b3bbf2a879" containerName="registry-server" Jan 28 18:39:41 crc kubenswrapper[4721]: E0128 18:39:41.282181 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d093e4ed-b49f-4abb-9cab-67d8072aea98" containerName="extract-content" Jan 28 18:39:41 crc kubenswrapper[4721]: I0128 18:39:41.282191 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="d093e4ed-b49f-4abb-9cab-67d8072aea98" containerName="extract-content" Jan 28 18:39:41 crc kubenswrapper[4721]: E0128 18:39:41.282204 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7ac4d9d7-c104-455a-b162-75b3bbf2a879" containerName="extract-content" Jan 28 18:39:41 crc kubenswrapper[4721]: I0128 18:39:41.282214 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="7ac4d9d7-c104-455a-b162-75b3bbf2a879" containerName="extract-content" Jan 28 
18:39:41 crc kubenswrapper[4721]: E0128 18:39:41.282227 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d093e4ed-b49f-4abb-9cab-67d8072aea98" containerName="extract-utilities"
Jan 28 18:39:41 crc kubenswrapper[4721]: I0128 18:39:41.282236 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="d093e4ed-b49f-4abb-9cab-67d8072aea98" containerName="extract-utilities"
Jan 28 18:39:41 crc kubenswrapper[4721]: E0128 18:39:41.282249 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor"
Jan 28 18:39:41 crc kubenswrapper[4721]: I0128 18:39:41.282257 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor"
Jan 28 18:39:41 crc kubenswrapper[4721]: E0128 18:39:41.282268 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a8ac5f19-3f57-4e3a-8f53-dc493fcceea1" containerName="route-controller-manager"
Jan 28 18:39:41 crc kubenswrapper[4721]: I0128 18:39:41.282275 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="a8ac5f19-3f57-4e3a-8f53-dc493fcceea1" containerName="route-controller-manager"
Jan 28 18:39:41 crc kubenswrapper[4721]: E0128 18:39:41.282289 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7ac4d9d7-c104-455a-b162-75b3bbf2a879" containerName="extract-utilities"
Jan 28 18:39:41 crc kubenswrapper[4721]: I0128 18:39:41.282299 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="7ac4d9d7-c104-455a-b162-75b3bbf2a879" containerName="extract-utilities"
Jan 28 18:39:41 crc kubenswrapper[4721]: E0128 18:39:41.282316 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="13b4ddde-7262-4219-8aac-fb34883b9608" containerName="controller-manager"
Jan 28 18:39:41 crc kubenswrapper[4721]: I0128 18:39:41.282324 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="13b4ddde-7262-4219-8aac-fb34883b9608" containerName="controller-manager"
Jan 28 18:39:41 crc kubenswrapper[4721]: E0128 18:39:41.282341 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d093e4ed-b49f-4abb-9cab-67d8072aea98" containerName="registry-server"
Jan 28 18:39:41 crc kubenswrapper[4721]: I0128 18:39:41.282350 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="d093e4ed-b49f-4abb-9cab-67d8072aea98" containerName="registry-server"
Jan 28 18:39:41 crc kubenswrapper[4721]: I0128 18:39:41.282469 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="d093e4ed-b49f-4abb-9cab-67d8072aea98" containerName="registry-server"
Jan 28 18:39:41 crc kubenswrapper[4721]: I0128 18:39:41.282482 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor"
Jan 28 18:39:41 crc kubenswrapper[4721]: I0128 18:39:41.282495 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="a8ac5f19-3f57-4e3a-8f53-dc493fcceea1" containerName="route-controller-manager"
Jan 28 18:39:41 crc kubenswrapper[4721]: I0128 18:39:41.282512 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="7ac4d9d7-c104-455a-b162-75b3bbf2a879" containerName="registry-server"
Jan 28 18:39:41 crc kubenswrapper[4721]: I0128 18:39:41.282522 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="13b4ddde-7262-4219-8aac-fb34883b9608" containerName="controller-manager"
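The burst of E-level cpu_manager RemoveStaleState lines is routine despite the severity: admitting the replacement pods triggers a sweep that drops CPU- and memory-manager state for containers whose pod UIDs (d093e4ed, 7ac4d9d7, f85e55b1, a8ac5f19, 13b4ddde) all belong to pods torn down earlier in this log. A sketch of that kind of sweep, using an illustrative map layout rather than the kubelet's actual state format:

```go
// stalestate.go - sketch of the cleanup behind the RemoveStaleState lines:
// drop per-container resource assignments whose pod UID is no longer active.
package main

import "fmt"

func removeStaleState(assignments map[string]map[string]string, active map[string]bool) {
	for podUID, containers := range assignments {
		if active[podUID] {
			continue
		}
		for name := range containers {
			// Mirrors "RemoveStaleState: removing container" + "Deleted CPUSet assignment".
			fmt.Printf("removing container podUID=%q containerName=%q\n", podUID, name)
		}
		delete(assignments, podUID) // deleting the current key during range is safe in Go
	}
}

func main() {
	assignments := map[string]map[string]string{
		"7ac4d9d7-c104-455a-b162-75b3bbf2a879": {"registry-server": "2-3"}, // torn down above
		"0e16417b-f61d-44fc-9761-d670c2bde3d1": {"route-controller-manager": "0-1"},
	}
	active := map[string]bool{"0e16417b-f61d-44fc-9761-d670c2bde3d1": true}
	removeStaleState(assignments, active)
	fmt.Println(len(assignments)) // 1: only the active pod's state remains
}
```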
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5487bbd6d7-n2bq2" Jan 28 18:39:41 crc kubenswrapper[4721]: I0128 18:39:41.284768 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 28 18:39:41 crc kubenswrapper[4721]: I0128 18:39:41.285017 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 28 18:39:41 crc kubenswrapper[4721]: I0128 18:39:41.285095 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 28 18:39:41 crc kubenswrapper[4721]: I0128 18:39:41.285265 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 28 18:39:41 crc kubenswrapper[4721]: I0128 18:39:41.286122 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-c8d4f465c-fd9mv"] Jan 28 18:39:41 crc kubenswrapper[4721]: I0128 18:39:41.286889 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-c8d4f465c-fd9mv" Jan 28 18:39:41 crc kubenswrapper[4721]: I0128 18:39:41.287864 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 28 18:39:41 crc kubenswrapper[4721]: I0128 18:39:41.288096 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 28 18:39:41 crc kubenswrapper[4721]: I0128 18:39:41.294421 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 28 18:39:41 crc kubenswrapper[4721]: I0128 18:39:41.295118 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 28 18:39:41 crc kubenswrapper[4721]: I0128 18:39:41.295304 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 28 18:39:41 crc kubenswrapper[4721]: I0128 18:39:41.295400 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 28 18:39:41 crc kubenswrapper[4721]: I0128 18:39:41.295544 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 28 18:39:41 crc kubenswrapper[4721]: I0128 18:39:41.296831 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5487bbd6d7-n2bq2"] Jan 28 18:39:41 crc kubenswrapper[4721]: I0128 18:39:41.297456 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 28 18:39:41 crc kubenswrapper[4721]: I0128 18:39:41.301451 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-c8d4f465c-fd9mv"] Jan 28 18:39:41 crc kubenswrapper[4721]: I0128 18:39:41.310301 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 28 18:39:41 crc kubenswrapper[4721]: I0128 18:39:41.375867 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hncsd\" (UniqueName: 
\"kubernetes.io/projected/0e16417b-f61d-44fc-9761-d670c2bde3d1-kube-api-access-hncsd\") pod \"route-controller-manager-5487bbd6d7-n2bq2\" (UID: \"0e16417b-f61d-44fc-9761-d670c2bde3d1\") " pod="openshift-route-controller-manager/route-controller-manager-5487bbd6d7-n2bq2" Jan 28 18:39:41 crc kubenswrapper[4721]: I0128 18:39:41.375929 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6f5c7f7b-340f-46a3-b959-d5627d7cd517-client-ca\") pod \"controller-manager-c8d4f465c-fd9mv\" (UID: \"6f5c7f7b-340f-46a3-b959-d5627d7cd517\") " pod="openshift-controller-manager/controller-manager-c8d4f465c-fd9mv" Jan 28 18:39:41 crc kubenswrapper[4721]: I0128 18:39:41.375955 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0e16417b-f61d-44fc-9761-d670c2bde3d1-client-ca\") pod \"route-controller-manager-5487bbd6d7-n2bq2\" (UID: \"0e16417b-f61d-44fc-9761-d670c2bde3d1\") " pod="openshift-route-controller-manager/route-controller-manager-5487bbd6d7-n2bq2" Jan 28 18:39:41 crc kubenswrapper[4721]: I0128 18:39:41.376017 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6f5c7f7b-340f-46a3-b959-d5627d7cd517-serving-cert\") pod \"controller-manager-c8d4f465c-fd9mv\" (UID: \"6f5c7f7b-340f-46a3-b959-d5627d7cd517\") " pod="openshift-controller-manager/controller-manager-c8d4f465c-fd9mv" Jan 28 18:39:41 crc kubenswrapper[4721]: I0128 18:39:41.376061 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cgkvx\" (UniqueName: \"kubernetes.io/projected/6f5c7f7b-340f-46a3-b959-d5627d7cd517-kube-api-access-cgkvx\") pod \"controller-manager-c8d4f465c-fd9mv\" (UID: \"6f5c7f7b-340f-46a3-b959-d5627d7cd517\") " pod="openshift-controller-manager/controller-manager-c8d4f465c-fd9mv" Jan 28 18:39:41 crc kubenswrapper[4721]: I0128 18:39:41.376106 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6f5c7f7b-340f-46a3-b959-d5627d7cd517-config\") pod \"controller-manager-c8d4f465c-fd9mv\" (UID: \"6f5c7f7b-340f-46a3-b959-d5627d7cd517\") " pod="openshift-controller-manager/controller-manager-c8d4f465c-fd9mv" Jan 28 18:39:41 crc kubenswrapper[4721]: I0128 18:39:41.376134 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0e16417b-f61d-44fc-9761-d670c2bde3d1-config\") pod \"route-controller-manager-5487bbd6d7-n2bq2\" (UID: \"0e16417b-f61d-44fc-9761-d670c2bde3d1\") " pod="openshift-route-controller-manager/route-controller-manager-5487bbd6d7-n2bq2" Jan 28 18:39:41 crc kubenswrapper[4721]: I0128 18:39:41.376160 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0e16417b-f61d-44fc-9761-d670c2bde3d1-serving-cert\") pod \"route-controller-manager-5487bbd6d7-n2bq2\" (UID: \"0e16417b-f61d-44fc-9761-d670c2bde3d1\") " pod="openshift-route-controller-manager/route-controller-manager-5487bbd6d7-n2bq2" Jan 28 18:39:41 crc kubenswrapper[4721]: I0128 18:39:41.376202 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: 
\"kubernetes.io/configmap/6f5c7f7b-340f-46a3-b959-d5627d7cd517-proxy-ca-bundles\") pod \"controller-manager-c8d4f465c-fd9mv\" (UID: \"6f5c7f7b-340f-46a3-b959-d5627d7cd517\") " pod="openshift-controller-manager/controller-manager-c8d4f465c-fd9mv" Jan 28 18:39:41 crc kubenswrapper[4721]: I0128 18:39:41.476967 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6f5c7f7b-340f-46a3-b959-d5627d7cd517-config\") pod \"controller-manager-c8d4f465c-fd9mv\" (UID: \"6f5c7f7b-340f-46a3-b959-d5627d7cd517\") " pod="openshift-controller-manager/controller-manager-c8d4f465c-fd9mv" Jan 28 18:39:41 crc kubenswrapper[4721]: I0128 18:39:41.477021 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0e16417b-f61d-44fc-9761-d670c2bde3d1-config\") pod \"route-controller-manager-5487bbd6d7-n2bq2\" (UID: \"0e16417b-f61d-44fc-9761-d670c2bde3d1\") " pod="openshift-route-controller-manager/route-controller-manager-5487bbd6d7-n2bq2" Jan 28 18:39:41 crc kubenswrapper[4721]: I0128 18:39:41.477050 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0e16417b-f61d-44fc-9761-d670c2bde3d1-serving-cert\") pod \"route-controller-manager-5487bbd6d7-n2bq2\" (UID: \"0e16417b-f61d-44fc-9761-d670c2bde3d1\") " pod="openshift-route-controller-manager/route-controller-manager-5487bbd6d7-n2bq2" Jan 28 18:39:41 crc kubenswrapper[4721]: I0128 18:39:41.477071 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6f5c7f7b-340f-46a3-b959-d5627d7cd517-proxy-ca-bundles\") pod \"controller-manager-c8d4f465c-fd9mv\" (UID: \"6f5c7f7b-340f-46a3-b959-d5627d7cd517\") " pod="openshift-controller-manager/controller-manager-c8d4f465c-fd9mv" Jan 28 18:39:41 crc kubenswrapper[4721]: I0128 18:39:41.477096 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hncsd\" (UniqueName: \"kubernetes.io/projected/0e16417b-f61d-44fc-9761-d670c2bde3d1-kube-api-access-hncsd\") pod \"route-controller-manager-5487bbd6d7-n2bq2\" (UID: \"0e16417b-f61d-44fc-9761-d670c2bde3d1\") " pod="openshift-route-controller-manager/route-controller-manager-5487bbd6d7-n2bq2" Jan 28 18:39:41 crc kubenswrapper[4721]: I0128 18:39:41.477128 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6f5c7f7b-340f-46a3-b959-d5627d7cd517-client-ca\") pod \"controller-manager-c8d4f465c-fd9mv\" (UID: \"6f5c7f7b-340f-46a3-b959-d5627d7cd517\") " pod="openshift-controller-manager/controller-manager-c8d4f465c-fd9mv" Jan 28 18:39:41 crc kubenswrapper[4721]: I0128 18:39:41.477148 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0e16417b-f61d-44fc-9761-d670c2bde3d1-client-ca\") pod \"route-controller-manager-5487bbd6d7-n2bq2\" (UID: \"0e16417b-f61d-44fc-9761-d670c2bde3d1\") " pod="openshift-route-controller-manager/route-controller-manager-5487bbd6d7-n2bq2" Jan 28 18:39:41 crc kubenswrapper[4721]: I0128 18:39:41.477242 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6f5c7f7b-340f-46a3-b959-d5627d7cd517-serving-cert\") pod \"controller-manager-c8d4f465c-fd9mv\" (UID: 
\"6f5c7f7b-340f-46a3-b959-d5627d7cd517\") " pod="openshift-controller-manager/controller-manager-c8d4f465c-fd9mv" Jan 28 18:39:41 crc kubenswrapper[4721]: I0128 18:39:41.477273 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cgkvx\" (UniqueName: \"kubernetes.io/projected/6f5c7f7b-340f-46a3-b959-d5627d7cd517-kube-api-access-cgkvx\") pod \"controller-manager-c8d4f465c-fd9mv\" (UID: \"6f5c7f7b-340f-46a3-b959-d5627d7cd517\") " pod="openshift-controller-manager/controller-manager-c8d4f465c-fd9mv" Jan 28 18:39:41 crc kubenswrapper[4721]: I0128 18:39:41.479204 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6f5c7f7b-340f-46a3-b959-d5627d7cd517-config\") pod \"controller-manager-c8d4f465c-fd9mv\" (UID: \"6f5c7f7b-340f-46a3-b959-d5627d7cd517\") " pod="openshift-controller-manager/controller-manager-c8d4f465c-fd9mv" Jan 28 18:39:41 crc kubenswrapper[4721]: I0128 18:39:41.483192 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0e16417b-f61d-44fc-9761-d670c2bde3d1-config\") pod \"route-controller-manager-5487bbd6d7-n2bq2\" (UID: \"0e16417b-f61d-44fc-9761-d670c2bde3d1\") " pod="openshift-route-controller-manager/route-controller-manager-5487bbd6d7-n2bq2" Jan 28 18:39:41 crc kubenswrapper[4721]: I0128 18:39:41.484452 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6f5c7f7b-340f-46a3-b959-d5627d7cd517-proxy-ca-bundles\") pod \"controller-manager-c8d4f465c-fd9mv\" (UID: \"6f5c7f7b-340f-46a3-b959-d5627d7cd517\") " pod="openshift-controller-manager/controller-manager-c8d4f465c-fd9mv" Jan 28 18:39:41 crc kubenswrapper[4721]: I0128 18:39:41.485057 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6f5c7f7b-340f-46a3-b959-d5627d7cd517-serving-cert\") pod \"controller-manager-c8d4f465c-fd9mv\" (UID: \"6f5c7f7b-340f-46a3-b959-d5627d7cd517\") " pod="openshift-controller-manager/controller-manager-c8d4f465c-fd9mv" Jan 28 18:39:41 crc kubenswrapper[4721]: I0128 18:39:41.485060 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6f5c7f7b-340f-46a3-b959-d5627d7cd517-client-ca\") pod \"controller-manager-c8d4f465c-fd9mv\" (UID: \"6f5c7f7b-340f-46a3-b959-d5627d7cd517\") " pod="openshift-controller-manager/controller-manager-c8d4f465c-fd9mv" Jan 28 18:39:41 crc kubenswrapper[4721]: I0128 18:39:41.485997 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0e16417b-f61d-44fc-9761-d670c2bde3d1-client-ca\") pod \"route-controller-manager-5487bbd6d7-n2bq2\" (UID: \"0e16417b-f61d-44fc-9761-d670c2bde3d1\") " pod="openshift-route-controller-manager/route-controller-manager-5487bbd6d7-n2bq2" Jan 28 18:39:41 crc kubenswrapper[4721]: I0128 18:39:41.489350 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0e16417b-f61d-44fc-9761-d670c2bde3d1-serving-cert\") pod \"route-controller-manager-5487bbd6d7-n2bq2\" (UID: \"0e16417b-f61d-44fc-9761-d670c2bde3d1\") " pod="openshift-route-controller-manager/route-controller-manager-5487bbd6d7-n2bq2" Jan 28 18:39:41 crc kubenswrapper[4721]: I0128 18:39:41.508783 4721 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"kube-api-access-cgkvx\" (UniqueName: \"kubernetes.io/projected/6f5c7f7b-340f-46a3-b959-d5627d7cd517-kube-api-access-cgkvx\") pod \"controller-manager-c8d4f465c-fd9mv\" (UID: \"6f5c7f7b-340f-46a3-b959-d5627d7cd517\") " pod="openshift-controller-manager/controller-manager-c8d4f465c-fd9mv" Jan 28 18:39:41 crc kubenswrapper[4721]: I0128 18:39:41.509994 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hncsd\" (UniqueName: \"kubernetes.io/projected/0e16417b-f61d-44fc-9761-d670c2bde3d1-kube-api-access-hncsd\") pod \"route-controller-manager-5487bbd6d7-n2bq2\" (UID: \"0e16417b-f61d-44fc-9761-d670c2bde3d1\") " pod="openshift-route-controller-manager/route-controller-manager-5487bbd6d7-n2bq2" Jan 28 18:39:41 crc kubenswrapper[4721]: I0128 18:39:41.537253 4721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="13b4ddde-7262-4219-8aac-fb34883b9608" path="/var/lib/kubelet/pods/13b4ddde-7262-4219-8aac-fb34883b9608/volumes" Jan 28 18:39:41 crc kubenswrapper[4721]: I0128 18:39:41.538388 4721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a8ac5f19-3f57-4e3a-8f53-dc493fcceea1" path="/var/lib/kubelet/pods/a8ac5f19-3f57-4e3a-8f53-dc493fcceea1/volumes" Jan 28 18:39:41 crc kubenswrapper[4721]: I0128 18:39:41.599153 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5487bbd6d7-n2bq2" Jan 28 18:39:41 crc kubenswrapper[4721]: I0128 18:39:41.609358 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-c8d4f465c-fd9mv" Jan 28 18:39:41 crc kubenswrapper[4721]: I0128 18:39:41.796575 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-c8d4f465c-fd9mv"] Jan 28 18:39:42 crc kubenswrapper[4721]: I0128 18:39:42.040247 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5487bbd6d7-n2bq2"] Jan 28 18:39:42 crc kubenswrapper[4721]: W0128 18:39:42.047586 4721 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0e16417b_f61d_44fc_9761_d670c2bde3d1.slice/crio-1227dd311e0d8f20ed280b814ec523c99379d7c2d14ba5c7ba8173ee21b3745c WatchSource:0}: Error finding container 1227dd311e0d8f20ed280b814ec523c99379d7c2d14ba5c7ba8173ee21b3745c: Status 404 returned error can't find the container with id 1227dd311e0d8f20ed280b814ec523c99379d7c2d14ba5c7ba8173ee21b3745c Jan 28 18:39:42 crc kubenswrapper[4721]: I0128 18:39:42.058302 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-c8d4f465c-fd9mv" event={"ID":"6f5c7f7b-340f-46a3-b959-d5627d7cd517","Type":"ContainerStarted","Data":"b618a11cda92ba2b33b8c00e87d0a47d6366a5e2496fa35238725668107226eb"} Jan 28 18:39:42 crc kubenswrapper[4721]: I0128 18:39:42.058364 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-c8d4f465c-fd9mv" event={"ID":"6f5c7f7b-340f-46a3-b959-d5627d7cd517","Type":"ContainerStarted","Data":"92e36260230223f50589e31550dd71dc80ba477007d298fbdccbece68c04d420"} Jan 28 18:39:42 crc kubenswrapper[4721]: I0128 18:39:42.059793 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-c8d4f465c-fd9mv" Jan 28 18:39:42 crc kubenswrapper[4721]: I0128 18:39:42.072306 
4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-c8d4f465c-fd9mv" Jan 28 18:39:42 crc kubenswrapper[4721]: I0128 18:39:42.097575 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-c8d4f465c-fd9mv" podStartSLOduration=3.097550849 podStartE2EDuration="3.097550849s" podCreationTimestamp="2026-01-28 18:39:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:39:42.085061659 +0000 UTC m=+347.810367229" watchObservedRunningTime="2026-01-28 18:39:42.097550849 +0000 UTC m=+347.822856409" Jan 28 18:39:43 crc kubenswrapper[4721]: I0128 18:39:43.070281 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5487bbd6d7-n2bq2" event={"ID":"0e16417b-f61d-44fc-9761-d670c2bde3d1","Type":"ContainerStarted","Data":"6936a4d5766344c2dddd2eac0deb59ec503bc31c88969dc531a4de9f4522cb46"} Jan 28 18:39:43 crc kubenswrapper[4721]: I0128 18:39:43.070559 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5487bbd6d7-n2bq2" event={"ID":"0e16417b-f61d-44fc-9761-d670c2bde3d1","Type":"ContainerStarted","Data":"1227dd311e0d8f20ed280b814ec523c99379d7c2d14ba5c7ba8173ee21b3745c"} Jan 28 18:39:43 crc kubenswrapper[4721]: I0128 18:39:43.087296 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-5487bbd6d7-n2bq2" podStartSLOduration=4.087273045 podStartE2EDuration="4.087273045s" podCreationTimestamp="2026-01-28 18:39:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:39:43.083937398 +0000 UTC m=+348.809242968" watchObservedRunningTime="2026-01-28 18:39:43.087273045 +0000 UTC m=+348.812578605" Jan 28 18:39:44 crc kubenswrapper[4721]: I0128 18:39:44.074389 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-5487bbd6d7-n2bq2" Jan 28 18:39:44 crc kubenswrapper[4721]: I0128 18:39:44.078523 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-5487bbd6d7-n2bq2" Jan 28 18:39:58 crc kubenswrapper[4721]: I0128 18:39:58.570069 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-6k9rr"] Jan 28 18:39:58 crc kubenswrapper[4721]: I0128 18:39:58.571341 4721 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-6k9rr" podUID="e1764268-02a2-46af-a94d-b9f32dabcab8" containerName="registry-server" containerID="cri-o://1566a298ab1642d61738f72e169d14efdfcd24f7361a3d0673531b65b4cf01ee" gracePeriod=30 Jan 28 18:39:58 crc kubenswrapper[4721]: I0128 18:39:58.576262 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-f9khl"] Jan 28 18:39:58 crc kubenswrapper[4721]: I0128 18:39:58.577642 4721 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-f9khl" podUID="384e21cc-b8a7-4a62-b817-d985bde07d66" containerName="registry-server" 
containerID="cri-o://66ba89a46c578985bb84484866316c8770c3c3271b0854aa7d679a816d32eea8" gracePeriod=30 Jan 28 18:39:58 crc kubenswrapper[4721]: I0128 18:39:58.591901 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-qp2vg"] Jan 28 18:39:58 crc kubenswrapper[4721]: I0128 18:39:58.592280 4721 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-qp2vg" podUID="12a4be20-2607-4502-b20d-b579c9987b57" containerName="marketplace-operator" containerID="cri-o://08a1c430094123ef1e41d846835baeba3e8a7084d6596d9d1b26cb47d7764fd6" gracePeriod=30 Jan 28 18:39:58 crc kubenswrapper[4721]: I0128 18:39:58.613192 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-cql6x"] Jan 28 18:39:58 crc kubenswrapper[4721]: I0128 18:39:58.613466 4721 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-cql6x" podUID="36456b90-3e11-4480-b235-5909103844ba" containerName="registry-server" containerID="cri-o://b1c084b5ea1ca4855f89bf6c6c16d4e8214f0fa646e47650dfc5757bb8d21fc0" gracePeriod=30 Jan 28 18:39:58 crc kubenswrapper[4721]: I0128 18:39:58.621500 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-dk9tw"] Jan 28 18:39:58 crc kubenswrapper[4721]: I0128 18:39:58.622411 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-dk9tw" Jan 28 18:39:58 crc kubenswrapper[4721]: I0128 18:39:58.626474 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-dk9tw"] Jan 28 18:39:58 crc kubenswrapper[4721]: I0128 18:39:58.631480 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-d4q59"] Jan 28 18:39:58 crc kubenswrapper[4721]: I0128 18:39:58.631777 4721 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-d4q59" podUID="0d6d2129-7840-4dc5-941b-541507dfd482" containerName="registry-server" containerID="cri-o://2080634d0a8cd61ebf7da4bc9efe14cbada2488bf22a9018492716f920e1dad5" gracePeriod=30 Jan 28 18:39:58 crc kubenswrapper[4721]: I0128 18:39:58.699262 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7nclh\" (UniqueName: \"kubernetes.io/projected/c24ece18-1c22-49c3-ae82-e63bdc44ab1f-kube-api-access-7nclh\") pod \"marketplace-operator-79b997595-dk9tw\" (UID: \"c24ece18-1c22-49c3-ae82-e63bdc44ab1f\") " pod="openshift-marketplace/marketplace-operator-79b997595-dk9tw" Jan 28 18:39:58 crc kubenswrapper[4721]: I0128 18:39:58.699348 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c24ece18-1c22-49c3-ae82-e63bdc44ab1f-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-dk9tw\" (UID: \"c24ece18-1c22-49c3-ae82-e63bdc44ab1f\") " pod="openshift-marketplace/marketplace-operator-79b997595-dk9tw" Jan 28 18:39:58 crc kubenswrapper[4721]: I0128 18:39:58.699375 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/c24ece18-1c22-49c3-ae82-e63bdc44ab1f-marketplace-operator-metrics\") pod 
\"marketplace-operator-79b997595-dk9tw\" (UID: \"c24ece18-1c22-49c3-ae82-e63bdc44ab1f\") " pod="openshift-marketplace/marketplace-operator-79b997595-dk9tw" Jan 28 18:39:58 crc kubenswrapper[4721]: I0128 18:39:58.800278 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7nclh\" (UniqueName: \"kubernetes.io/projected/c24ece18-1c22-49c3-ae82-e63bdc44ab1f-kube-api-access-7nclh\") pod \"marketplace-operator-79b997595-dk9tw\" (UID: \"c24ece18-1c22-49c3-ae82-e63bdc44ab1f\") " pod="openshift-marketplace/marketplace-operator-79b997595-dk9tw" Jan 28 18:39:58 crc kubenswrapper[4721]: I0128 18:39:58.800355 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c24ece18-1c22-49c3-ae82-e63bdc44ab1f-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-dk9tw\" (UID: \"c24ece18-1c22-49c3-ae82-e63bdc44ab1f\") " pod="openshift-marketplace/marketplace-operator-79b997595-dk9tw" Jan 28 18:39:58 crc kubenswrapper[4721]: I0128 18:39:58.800385 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/c24ece18-1c22-49c3-ae82-e63bdc44ab1f-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-dk9tw\" (UID: \"c24ece18-1c22-49c3-ae82-e63bdc44ab1f\") " pod="openshift-marketplace/marketplace-operator-79b997595-dk9tw" Jan 28 18:39:58 crc kubenswrapper[4721]: I0128 18:39:58.802475 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c24ece18-1c22-49c3-ae82-e63bdc44ab1f-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-dk9tw\" (UID: \"c24ece18-1c22-49c3-ae82-e63bdc44ab1f\") " pod="openshift-marketplace/marketplace-operator-79b997595-dk9tw" Jan 28 18:39:58 crc kubenswrapper[4721]: I0128 18:39:58.806852 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/c24ece18-1c22-49c3-ae82-e63bdc44ab1f-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-dk9tw\" (UID: \"c24ece18-1c22-49c3-ae82-e63bdc44ab1f\") " pod="openshift-marketplace/marketplace-operator-79b997595-dk9tw" Jan 28 18:39:58 crc kubenswrapper[4721]: I0128 18:39:58.817927 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7nclh\" (UniqueName: \"kubernetes.io/projected/c24ece18-1c22-49c3-ae82-e63bdc44ab1f-kube-api-access-7nclh\") pod \"marketplace-operator-79b997595-dk9tw\" (UID: \"c24ece18-1c22-49c3-ae82-e63bdc44ab1f\") " pod="openshift-marketplace/marketplace-operator-79b997595-dk9tw" Jan 28 18:39:59 crc kubenswrapper[4721]: I0128 18:39:59.054479 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-dk9tw" Jan 28 18:39:59 crc kubenswrapper[4721]: I0128 18:39:59.107943 4721 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-f9khl" Jan 28 18:39:59 crc kubenswrapper[4721]: I0128 18:39:59.157407 4721 generic.go:334] "Generic (PLEG): container finished" podID="12a4be20-2607-4502-b20d-b579c9987b57" containerID="08a1c430094123ef1e41d846835baeba3e8a7084d6596d9d1b26cb47d7764fd6" exitCode=0 Jan 28 18:39:59 crc kubenswrapper[4721]: I0128 18:39:59.157483 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-qp2vg" event={"ID":"12a4be20-2607-4502-b20d-b579c9987b57","Type":"ContainerDied","Data":"08a1c430094123ef1e41d846835baeba3e8a7084d6596d9d1b26cb47d7764fd6"} Jan 28 18:39:59 crc kubenswrapper[4721]: I0128 18:39:59.157521 4721 scope.go:117] "RemoveContainer" containerID="87c7141690dd93f2f02e025283721b8565fe912c08eceadb291e678f52c51b2a" Jan 28 18:39:59 crc kubenswrapper[4721]: I0128 18:39:59.170700 4721 generic.go:334] "Generic (PLEG): container finished" podID="0d6d2129-7840-4dc5-941b-541507dfd482" containerID="2080634d0a8cd61ebf7da4bc9efe14cbada2488bf22a9018492716f920e1dad5" exitCode=0 Jan 28 18:39:59 crc kubenswrapper[4721]: I0128 18:39:59.170801 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-d4q59" event={"ID":"0d6d2129-7840-4dc5-941b-541507dfd482","Type":"ContainerDied","Data":"2080634d0a8cd61ebf7da4bc9efe14cbada2488bf22a9018492716f920e1dad5"} Jan 28 18:39:59 crc kubenswrapper[4721]: I0128 18:39:59.173076 4721 generic.go:334] "Generic (PLEG): container finished" podID="36456b90-3e11-4480-b235-5909103844ba" containerID="b1c084b5ea1ca4855f89bf6c6c16d4e8214f0fa646e47650dfc5757bb8d21fc0" exitCode=0 Jan 28 18:39:59 crc kubenswrapper[4721]: I0128 18:39:59.173118 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cql6x" event={"ID":"36456b90-3e11-4480-b235-5909103844ba","Type":"ContainerDied","Data":"b1c084b5ea1ca4855f89bf6c6c16d4e8214f0fa646e47650dfc5757bb8d21fc0"} Jan 28 18:39:59 crc kubenswrapper[4721]: I0128 18:39:59.174925 4721 generic.go:334] "Generic (PLEG): container finished" podID="384e21cc-b8a7-4a62-b817-d985bde07d66" containerID="66ba89a46c578985bb84484866316c8770c3c3271b0854aa7d679a816d32eea8" exitCode=0 Jan 28 18:39:59 crc kubenswrapper[4721]: I0128 18:39:59.174965 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-f9khl" event={"ID":"384e21cc-b8a7-4a62-b817-d985bde07d66","Type":"ContainerDied","Data":"66ba89a46c578985bb84484866316c8770c3c3271b0854aa7d679a816d32eea8"} Jan 28 18:39:59 crc kubenswrapper[4721]: I0128 18:39:59.174985 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-f9khl" event={"ID":"384e21cc-b8a7-4a62-b817-d985bde07d66","Type":"ContainerDied","Data":"a46eb6affac7cac086b159218e6230201fd728ca8f70ccc6fc00dad3fe8b7832"} Jan 28 18:39:59 crc kubenswrapper[4721]: I0128 18:39:59.175055 4721 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-f9khl" Jan 28 18:39:59 crc kubenswrapper[4721]: I0128 18:39:59.183717 4721 generic.go:334] "Generic (PLEG): container finished" podID="e1764268-02a2-46af-a94d-b9f32dabcab8" containerID="1566a298ab1642d61738f72e169d14efdfcd24f7361a3d0673531b65b4cf01ee" exitCode=0 Jan 28 18:39:59 crc kubenswrapper[4721]: I0128 18:39:59.183773 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6k9rr" event={"ID":"e1764268-02a2-46af-a94d-b9f32dabcab8","Type":"ContainerDied","Data":"1566a298ab1642d61738f72e169d14efdfcd24f7361a3d0673531b65b4cf01ee"} Jan 28 18:39:59 crc kubenswrapper[4721]: I0128 18:39:59.202730 4721 scope.go:117] "RemoveContainer" containerID="66ba89a46c578985bb84484866316c8770c3c3271b0854aa7d679a816d32eea8" Jan 28 18:39:59 crc kubenswrapper[4721]: I0128 18:39:59.204761 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/384e21cc-b8a7-4a62-b817-d985bde07d66-catalog-content\") pod \"384e21cc-b8a7-4a62-b817-d985bde07d66\" (UID: \"384e21cc-b8a7-4a62-b817-d985bde07d66\") " Jan 28 18:39:59 crc kubenswrapper[4721]: I0128 18:39:59.204882 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fnzk5\" (UniqueName: \"kubernetes.io/projected/384e21cc-b8a7-4a62-b817-d985bde07d66-kube-api-access-fnzk5\") pod \"384e21cc-b8a7-4a62-b817-d985bde07d66\" (UID: \"384e21cc-b8a7-4a62-b817-d985bde07d66\") " Jan 28 18:39:59 crc kubenswrapper[4721]: I0128 18:39:59.204935 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/384e21cc-b8a7-4a62-b817-d985bde07d66-utilities\") pod \"384e21cc-b8a7-4a62-b817-d985bde07d66\" (UID: \"384e21cc-b8a7-4a62-b817-d985bde07d66\") " Jan 28 18:39:59 crc kubenswrapper[4721]: I0128 18:39:59.206742 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/384e21cc-b8a7-4a62-b817-d985bde07d66-utilities" (OuterVolumeSpecName: "utilities") pod "384e21cc-b8a7-4a62-b817-d985bde07d66" (UID: "384e21cc-b8a7-4a62-b817-d985bde07d66"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 18:39:59 crc kubenswrapper[4721]: I0128 18:39:59.212109 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/384e21cc-b8a7-4a62-b817-d985bde07d66-kube-api-access-fnzk5" (OuterVolumeSpecName: "kube-api-access-fnzk5") pod "384e21cc-b8a7-4a62-b817-d985bde07d66" (UID: "384e21cc-b8a7-4a62-b817-d985bde07d66"). InnerVolumeSpecName "kube-api-access-fnzk5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:39:59 crc kubenswrapper[4721]: I0128 18:39:59.232347 4721 scope.go:117] "RemoveContainer" containerID="d0fa5da46bab78be9c5e6a72ab825e5975154d6f7e47f2846f271acd649cabd5" Jan 28 18:39:59 crc kubenswrapper[4721]: E0128 18:39:59.253429 4721 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 1566a298ab1642d61738f72e169d14efdfcd24f7361a3d0673531b65b4cf01ee is running failed: container process not found" containerID="1566a298ab1642d61738f72e169d14efdfcd24f7361a3d0673531b65b4cf01ee" cmd=["grpc_health_probe","-addr=:50051"] Jan 28 18:39:59 crc kubenswrapper[4721]: E0128 18:39:59.255503 4721 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 1566a298ab1642d61738f72e169d14efdfcd24f7361a3d0673531b65b4cf01ee is running failed: container process not found" containerID="1566a298ab1642d61738f72e169d14efdfcd24f7361a3d0673531b65b4cf01ee" cmd=["grpc_health_probe","-addr=:50051"] Jan 28 18:39:59 crc kubenswrapper[4721]: E0128 18:39:59.255967 4721 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 1566a298ab1642d61738f72e169d14efdfcd24f7361a3d0673531b65b4cf01ee is running failed: container process not found" containerID="1566a298ab1642d61738f72e169d14efdfcd24f7361a3d0673531b65b4cf01ee" cmd=["grpc_health_probe","-addr=:50051"] Jan 28 18:39:59 crc kubenswrapper[4721]: E0128 18:39:59.256004 4721 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 1566a298ab1642d61738f72e169d14efdfcd24f7361a3d0673531b65b4cf01ee is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/certified-operators-6k9rr" podUID="e1764268-02a2-46af-a94d-b9f32dabcab8" containerName="registry-server" Jan 28 18:39:59 crc kubenswrapper[4721]: I0128 18:39:59.279286 4721 scope.go:117] "RemoveContainer" containerID="ef13e6465f8ab87fa5b7eea95f62cd51bb557f9f662b65b7376fd62e7d10fa5d" Jan 28 18:39:59 crc kubenswrapper[4721]: I0128 18:39:59.282983 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/384e21cc-b8a7-4a62-b817-d985bde07d66-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "384e21cc-b8a7-4a62-b817-d985bde07d66" (UID: "384e21cc-b8a7-4a62-b817-d985bde07d66"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 18:39:59 crc kubenswrapper[4721]: I0128 18:39:59.307086 4721 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/384e21cc-b8a7-4a62-b817-d985bde07d66-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 18:39:59 crc kubenswrapper[4721]: I0128 18:39:59.307109 4721 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/384e21cc-b8a7-4a62-b817-d985bde07d66-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 18:39:59 crc kubenswrapper[4721]: I0128 18:39:59.307120 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fnzk5\" (UniqueName: \"kubernetes.io/projected/384e21cc-b8a7-4a62-b817-d985bde07d66-kube-api-access-fnzk5\") on node \"crc\" DevicePath \"\"" Jan 28 18:39:59 crc kubenswrapper[4721]: I0128 18:39:59.308497 4721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-6k9rr" Jan 28 18:39:59 crc kubenswrapper[4721]: I0128 18:39:59.310420 4721 scope.go:117] "RemoveContainer" containerID="66ba89a46c578985bb84484866316c8770c3c3271b0854aa7d679a816d32eea8" Jan 28 18:39:59 crc kubenswrapper[4721]: E0128 18:39:59.311272 4721 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"66ba89a46c578985bb84484866316c8770c3c3271b0854aa7d679a816d32eea8\": container with ID starting with 66ba89a46c578985bb84484866316c8770c3c3271b0854aa7d679a816d32eea8 not found: ID does not exist" containerID="66ba89a46c578985bb84484866316c8770c3c3271b0854aa7d679a816d32eea8" Jan 28 18:39:59 crc kubenswrapper[4721]: I0128 18:39:59.311346 4721 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"66ba89a46c578985bb84484866316c8770c3c3271b0854aa7d679a816d32eea8"} err="failed to get container status \"66ba89a46c578985bb84484866316c8770c3c3271b0854aa7d679a816d32eea8\": rpc error: code = NotFound desc = could not find container \"66ba89a46c578985bb84484866316c8770c3c3271b0854aa7d679a816d32eea8\": container with ID starting with 66ba89a46c578985bb84484866316c8770c3c3271b0854aa7d679a816d32eea8 not found: ID does not exist" Jan 28 18:39:59 crc kubenswrapper[4721]: I0128 18:39:59.311402 4721 scope.go:117] "RemoveContainer" containerID="d0fa5da46bab78be9c5e6a72ab825e5975154d6f7e47f2846f271acd649cabd5" Jan 28 18:39:59 crc kubenswrapper[4721]: E0128 18:39:59.315004 4721 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d0fa5da46bab78be9c5e6a72ab825e5975154d6f7e47f2846f271acd649cabd5\": container with ID starting with d0fa5da46bab78be9c5e6a72ab825e5975154d6f7e47f2846f271acd649cabd5 not found: ID does not exist" containerID="d0fa5da46bab78be9c5e6a72ab825e5975154d6f7e47f2846f271acd649cabd5" Jan 28 18:39:59 crc kubenswrapper[4721]: I0128 18:39:59.315038 4721 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d0fa5da46bab78be9c5e6a72ab825e5975154d6f7e47f2846f271acd649cabd5"} err="failed to get container status \"d0fa5da46bab78be9c5e6a72ab825e5975154d6f7e47f2846f271acd649cabd5\": rpc error: code = NotFound desc = could not find container \"d0fa5da46bab78be9c5e6a72ab825e5975154d6f7e47f2846f271acd649cabd5\": container with ID starting with d0fa5da46bab78be9c5e6a72ab825e5975154d6f7e47f2846f271acd649cabd5 not found: ID does not exist" Jan 28 18:39:59 
crc kubenswrapper[4721]: I0128 18:39:59.315059 4721 scope.go:117] "RemoveContainer" containerID="ef13e6465f8ab87fa5b7eea95f62cd51bb557f9f662b65b7376fd62e7d10fa5d" Jan 28 18:39:59 crc kubenswrapper[4721]: E0128 18:39:59.315392 4721 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ef13e6465f8ab87fa5b7eea95f62cd51bb557f9f662b65b7376fd62e7d10fa5d\": container with ID starting with ef13e6465f8ab87fa5b7eea95f62cd51bb557f9f662b65b7376fd62e7d10fa5d not found: ID does not exist" containerID="ef13e6465f8ab87fa5b7eea95f62cd51bb557f9f662b65b7376fd62e7d10fa5d" Jan 28 18:39:59 crc kubenswrapper[4721]: I0128 18:39:59.315439 4721 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ef13e6465f8ab87fa5b7eea95f62cd51bb557f9f662b65b7376fd62e7d10fa5d"} err="failed to get container status \"ef13e6465f8ab87fa5b7eea95f62cd51bb557f9f662b65b7376fd62e7d10fa5d\": rpc error: code = NotFound desc = could not find container \"ef13e6465f8ab87fa5b7eea95f62cd51bb557f9f662b65b7376fd62e7d10fa5d\": container with ID starting with ef13e6465f8ab87fa5b7eea95f62cd51bb557f9f662b65b7376fd62e7d10fa5d not found: ID does not exist" Jan 28 18:39:59 crc kubenswrapper[4721]: I0128 18:39:59.316061 4721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-qp2vg" Jan 28 18:39:59 crc kubenswrapper[4721]: I0128 18:39:59.332559 4721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-cql6x" Jan 28 18:39:59 crc kubenswrapper[4721]: I0128 18:39:59.339728 4721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-d4q59" Jan 28 18:39:59 crc kubenswrapper[4721]: I0128 18:39:59.407580 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/12a4be20-2607-4502-b20d-b579c9987b57-marketplace-trusted-ca\") pod \"12a4be20-2607-4502-b20d-b579c9987b57\" (UID: \"12a4be20-2607-4502-b20d-b579c9987b57\") " Jan 28 18:39:59 crc kubenswrapper[4721]: I0128 18:39:59.407640 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0d6d2129-7840-4dc5-941b-541507dfd482-catalog-content\") pod \"0d6d2129-7840-4dc5-941b-541507dfd482\" (UID: \"0d6d2129-7840-4dc5-941b-541507dfd482\") " Jan 28 18:39:59 crc kubenswrapper[4721]: I0128 18:39:59.407682 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dcl56\" (UniqueName: \"kubernetes.io/projected/0d6d2129-7840-4dc5-941b-541507dfd482-kube-api-access-dcl56\") pod \"0d6d2129-7840-4dc5-941b-541507dfd482\" (UID: \"0d6d2129-7840-4dc5-941b-541507dfd482\") " Jan 28 18:39:59 crc kubenswrapper[4721]: I0128 18:39:59.407714 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/12a4be20-2607-4502-b20d-b579c9987b57-marketplace-operator-metrics\") pod \"12a4be20-2607-4502-b20d-b579c9987b57\" (UID: \"12a4be20-2607-4502-b20d-b579c9987b57\") " Jan 28 18:39:59 crc kubenswrapper[4721]: I0128 18:39:59.407760 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/36456b90-3e11-4480-b235-5909103844ba-catalog-content\") pod \"36456b90-3e11-4480-b235-5909103844ba\" (UID: \"36456b90-3e11-4480-b235-5909103844ba\") " Jan 28 18:39:59 crc kubenswrapper[4721]: I0128 18:39:59.407783 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rxjs9\" (UniqueName: \"kubernetes.io/projected/e1764268-02a2-46af-a94d-b9f32dabcab8-kube-api-access-rxjs9\") pod \"e1764268-02a2-46af-a94d-b9f32dabcab8\" (UID: \"e1764268-02a2-46af-a94d-b9f32dabcab8\") " Jan 28 18:39:59 crc kubenswrapper[4721]: I0128 18:39:59.407799 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0d6d2129-7840-4dc5-941b-541507dfd482-utilities\") pod \"0d6d2129-7840-4dc5-941b-541507dfd482\" (UID: \"0d6d2129-7840-4dc5-941b-541507dfd482\") " Jan 28 18:39:59 crc kubenswrapper[4721]: I0128 18:39:59.407827 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9x5w6\" (UniqueName: \"kubernetes.io/projected/12a4be20-2607-4502-b20d-b579c9987b57-kube-api-access-9x5w6\") pod \"12a4be20-2607-4502-b20d-b579c9987b57\" (UID: \"12a4be20-2607-4502-b20d-b579c9987b57\") " Jan 28 18:39:59 crc kubenswrapper[4721]: I0128 18:39:59.407847 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4n8s9\" (UniqueName: \"kubernetes.io/projected/36456b90-3e11-4480-b235-5909103844ba-kube-api-access-4n8s9\") pod \"36456b90-3e11-4480-b235-5909103844ba\" (UID: \"36456b90-3e11-4480-b235-5909103844ba\") " Jan 28 18:39:59 crc kubenswrapper[4721]: I0128 18:39:59.407865 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e1764268-02a2-46af-a94d-b9f32dabcab8-utilities\") pod \"e1764268-02a2-46af-a94d-b9f32dabcab8\" (UID: \"e1764268-02a2-46af-a94d-b9f32dabcab8\") " Jan 28 18:39:59 crc kubenswrapper[4721]: I0128 18:39:59.407881 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e1764268-02a2-46af-a94d-b9f32dabcab8-catalog-content\") pod \"e1764268-02a2-46af-a94d-b9f32dabcab8\" (UID: \"e1764268-02a2-46af-a94d-b9f32dabcab8\") " Jan 28 18:39:59 crc kubenswrapper[4721]: I0128 18:39:59.407908 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/36456b90-3e11-4480-b235-5909103844ba-utilities\") pod \"36456b90-3e11-4480-b235-5909103844ba\" (UID: \"36456b90-3e11-4480-b235-5909103844ba\") " Jan 28 18:39:59 crc kubenswrapper[4721]: I0128 18:39:59.409074 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/36456b90-3e11-4480-b235-5909103844ba-utilities" (OuterVolumeSpecName: "utilities") pod "36456b90-3e11-4480-b235-5909103844ba" (UID: "36456b90-3e11-4480-b235-5909103844ba"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 18:39:59 crc kubenswrapper[4721]: I0128 18:39:59.409966 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e1764268-02a2-46af-a94d-b9f32dabcab8-utilities" (OuterVolumeSpecName: "utilities") pod "e1764268-02a2-46af-a94d-b9f32dabcab8" (UID: "e1764268-02a2-46af-a94d-b9f32dabcab8"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 18:39:59 crc kubenswrapper[4721]: I0128 18:39:59.410383 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0d6d2129-7840-4dc5-941b-541507dfd482-utilities" (OuterVolumeSpecName: "utilities") pod "0d6d2129-7840-4dc5-941b-541507dfd482" (UID: "0d6d2129-7840-4dc5-941b-541507dfd482"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 18:39:59 crc kubenswrapper[4721]: I0128 18:39:59.412636 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/12a4be20-2607-4502-b20d-b579c9987b57-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "12a4be20-2607-4502-b20d-b579c9987b57" (UID: "12a4be20-2607-4502-b20d-b579c9987b57"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:39:59 crc kubenswrapper[4721]: I0128 18:39:59.413640 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/36456b90-3e11-4480-b235-5909103844ba-kube-api-access-4n8s9" (OuterVolumeSpecName: "kube-api-access-4n8s9") pod "36456b90-3e11-4480-b235-5909103844ba" (UID: "36456b90-3e11-4480-b235-5909103844ba"). InnerVolumeSpecName "kube-api-access-4n8s9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:39:59 crc kubenswrapper[4721]: I0128 18:39:59.414637 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/12a4be20-2607-4502-b20d-b579c9987b57-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "12a4be20-2607-4502-b20d-b579c9987b57" (UID: "12a4be20-2607-4502-b20d-b579c9987b57"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:39:59 crc kubenswrapper[4721]: I0128 18:39:59.419902 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e1764268-02a2-46af-a94d-b9f32dabcab8-kube-api-access-rxjs9" (OuterVolumeSpecName: "kube-api-access-rxjs9") pod "e1764268-02a2-46af-a94d-b9f32dabcab8" (UID: "e1764268-02a2-46af-a94d-b9f32dabcab8"). InnerVolumeSpecName "kube-api-access-rxjs9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:39:59 crc kubenswrapper[4721]: I0128 18:39:59.430635 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/12a4be20-2607-4502-b20d-b579c9987b57-kube-api-access-9x5w6" (OuterVolumeSpecName: "kube-api-access-9x5w6") pod "12a4be20-2607-4502-b20d-b579c9987b57" (UID: "12a4be20-2607-4502-b20d-b579c9987b57"). InnerVolumeSpecName "kube-api-access-9x5w6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:39:59 crc kubenswrapper[4721]: I0128 18:39:59.432361 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0d6d2129-7840-4dc5-941b-541507dfd482-kube-api-access-dcl56" (OuterVolumeSpecName: "kube-api-access-dcl56") pod "0d6d2129-7840-4dc5-941b-541507dfd482" (UID: "0d6d2129-7840-4dc5-941b-541507dfd482"). InnerVolumeSpecName "kube-api-access-dcl56". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:39:59 crc kubenswrapper[4721]: I0128 18:39:59.444963 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/36456b90-3e11-4480-b235-5909103844ba-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "36456b90-3e11-4480-b235-5909103844ba" (UID: "36456b90-3e11-4480-b235-5909103844ba"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 18:39:59 crc kubenswrapper[4721]: I0128 18:39:59.466815 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e1764268-02a2-46af-a94d-b9f32dabcab8-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e1764268-02a2-46af-a94d-b9f32dabcab8" (UID: "e1764268-02a2-46af-a94d-b9f32dabcab8"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 18:39:59 crc kubenswrapper[4721]: I0128 18:39:59.508627 4721 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/12a4be20-2607-4502-b20d-b579c9987b57-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 28 18:39:59 crc kubenswrapper[4721]: I0128 18:39:59.508656 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dcl56\" (UniqueName: \"kubernetes.io/projected/0d6d2129-7840-4dc5-941b-541507dfd482-kube-api-access-dcl56\") on node \"crc\" DevicePath \"\"" Jan 28 18:39:59 crc kubenswrapper[4721]: I0128 18:39:59.508666 4721 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/12a4be20-2607-4502-b20d-b579c9987b57-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Jan 28 18:39:59 crc kubenswrapper[4721]: I0128 18:39:59.508678 4721 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/36456b90-3e11-4480-b235-5909103844ba-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 18:39:59 crc kubenswrapper[4721]: I0128 18:39:59.508688 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rxjs9\" (UniqueName: \"kubernetes.io/projected/e1764268-02a2-46af-a94d-b9f32dabcab8-kube-api-access-rxjs9\") on node \"crc\" DevicePath \"\"" Jan 28 18:39:59 crc kubenswrapper[4721]: I0128 18:39:59.508698 4721 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0d6d2129-7840-4dc5-941b-541507dfd482-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 18:39:59 crc kubenswrapper[4721]: I0128 18:39:59.508707 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9x5w6\" (UniqueName: \"kubernetes.io/projected/12a4be20-2607-4502-b20d-b579c9987b57-kube-api-access-9x5w6\") on node \"crc\" DevicePath \"\"" Jan 28 18:39:59 crc kubenswrapper[4721]: I0128 18:39:59.508718 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4n8s9\" (UniqueName: \"kubernetes.io/projected/36456b90-3e11-4480-b235-5909103844ba-kube-api-access-4n8s9\") on node \"crc\" DevicePath \"\"" Jan 28 18:39:59 crc kubenswrapper[4721]: I0128 18:39:59.508728 4721 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e1764268-02a2-46af-a94d-b9f32dabcab8-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 18:39:59 crc kubenswrapper[4721]: I0128 18:39:59.508738 4721 reconciler_common.go:293] "Volume 
detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e1764268-02a2-46af-a94d-b9f32dabcab8-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 18:39:59 crc kubenswrapper[4721]: I0128 18:39:59.508748 4721 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/36456b90-3e11-4480-b235-5909103844ba-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 18:39:59 crc kubenswrapper[4721]: I0128 18:39:59.523568 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-f9khl"] Jan 28 18:39:59 crc kubenswrapper[4721]: I0128 18:39:59.537809 4721 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-f9khl"] Jan 28 18:39:59 crc kubenswrapper[4721]: I0128 18:39:59.556450 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0d6d2129-7840-4dc5-941b-541507dfd482-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0d6d2129-7840-4dc5-941b-541507dfd482" (UID: "0d6d2129-7840-4dc5-941b-541507dfd482"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 18:39:59 crc kubenswrapper[4721]: I0128 18:39:59.609275 4721 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0d6d2129-7840-4dc5-941b-541507dfd482-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 18:39:59 crc kubenswrapper[4721]: I0128 18:39:59.670303 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-dk9tw"] Jan 28 18:39:59 crc kubenswrapper[4721]: W0128 18:39:59.675349 4721 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc24ece18_1c22_49c3_ae82_e63bdc44ab1f.slice/crio-91650e861e21d347bf93410262b3936ace010373f3d38946ca31dee15e2aa4c9 WatchSource:0}: Error finding container 91650e861e21d347bf93410262b3936ace010373f3d38946ca31dee15e2aa4c9: Status 404 returned error can't find the container with id 91650e861e21d347bf93410262b3936ace010373f3d38946ca31dee15e2aa4c9 Jan 28 18:40:00 crc kubenswrapper[4721]: I0128 18:40:00.192102 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6k9rr" event={"ID":"e1764268-02a2-46af-a94d-b9f32dabcab8","Type":"ContainerDied","Data":"d86539b904cf398da206fa5711a8e91c8bacb97f91c5f41384b2d81a9aa658ff"} Jan 28 18:40:00 crc kubenswrapper[4721]: I0128 18:40:00.192474 4721 scope.go:117] "RemoveContainer" containerID="1566a298ab1642d61738f72e169d14efdfcd24f7361a3d0673531b65b4cf01ee" Jan 28 18:40:00 crc kubenswrapper[4721]: I0128 18:40:00.192378 4721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-6k9rr" Jan 28 18:40:00 crc kubenswrapper[4721]: I0128 18:40:00.193826 4721 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-qp2vg" Jan 28 18:40:00 crc kubenswrapper[4721]: I0128 18:40:00.193825 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-qp2vg" event={"ID":"12a4be20-2607-4502-b20d-b579c9987b57","Type":"ContainerDied","Data":"6bd932bc1b2a4628c85b6263fbcc02011e0361a67427e01a134b17e5b1dd21e6"} Jan 28 18:40:00 crc kubenswrapper[4721]: I0128 18:40:00.196404 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-d4q59" event={"ID":"0d6d2129-7840-4dc5-941b-541507dfd482","Type":"ContainerDied","Data":"3927c821a13cd79edba1fd9b2f31e1d62752c4e94fd99e7904439dec8005ef1e"} Jan 28 18:40:00 crc kubenswrapper[4721]: I0128 18:40:00.196419 4721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-d4q59" Jan 28 18:40:00 crc kubenswrapper[4721]: I0128 18:40:00.199339 4721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-cql6x" Jan 28 18:40:00 crc kubenswrapper[4721]: I0128 18:40:00.199338 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cql6x" event={"ID":"36456b90-3e11-4480-b235-5909103844ba","Type":"ContainerDied","Data":"d1be8c0b94e5d18eaf97f1ee63331444ce8027842fd6e03ff7d77403e38f464c"} Jan 28 18:40:00 crc kubenswrapper[4721]: I0128 18:40:00.201511 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-dk9tw" event={"ID":"c24ece18-1c22-49c3-ae82-e63bdc44ab1f","Type":"ContainerStarted","Data":"bc2df783111100f76f7e8346824398428f1d44c17110287e613b4ab048c81a17"} Jan 28 18:40:00 crc kubenswrapper[4721]: I0128 18:40:00.201555 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-dk9tw" event={"ID":"c24ece18-1c22-49c3-ae82-e63bdc44ab1f","Type":"ContainerStarted","Data":"91650e861e21d347bf93410262b3936ace010373f3d38946ca31dee15e2aa4c9"} Jan 28 18:40:00 crc kubenswrapper[4721]: I0128 18:40:00.201752 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-dk9tw" Jan 28 18:40:00 crc kubenswrapper[4721]: I0128 18:40:00.209618 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-qp2vg"] Jan 28 18:40:00 crc kubenswrapper[4721]: I0128 18:40:00.213961 4721 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-qp2vg"] Jan 28 18:40:00 crc kubenswrapper[4721]: I0128 18:40:00.217704 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-dk9tw" Jan 28 18:40:00 crc kubenswrapper[4721]: I0128 18:40:00.220351 4721 scope.go:117] "RemoveContainer" containerID="8400105d8ec8b30cb1e4583431a4b7b606cd879d7ee50f10f30f3e00ae655c58" Jan 28 18:40:00 crc kubenswrapper[4721]: I0128 18:40:00.231487 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-6k9rr"] Jan 28 18:40:00 crc kubenswrapper[4721]: I0128 18:40:00.238577 4721 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-6k9rr"] Jan 28 18:40:00 crc kubenswrapper[4721]: I0128 18:40:00.248338 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/redhat-operators-d4q59"] Jan 28 18:40:00 crc kubenswrapper[4721]: I0128 18:40:00.251554 4721 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-d4q59"] Jan 28 18:40:00 crc kubenswrapper[4721]: I0128 18:40:00.263655 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-dk9tw" podStartSLOduration=2.263632017 podStartE2EDuration="2.263632017s" podCreationTimestamp="2026-01-28 18:39:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:40:00.25685367 +0000 UTC m=+365.982159250" watchObservedRunningTime="2026-01-28 18:40:00.263632017 +0000 UTC m=+365.988937567" Jan 28 18:40:00 crc kubenswrapper[4721]: I0128 18:40:00.270893 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-cql6x"] Jan 28 18:40:00 crc kubenswrapper[4721]: I0128 18:40:00.271410 4721 scope.go:117] "RemoveContainer" containerID="8fb48a3b018e2a8308eafd384f4c56af40ce9007e976e65123da3dccd3b29cb4" Jan 28 18:40:00 crc kubenswrapper[4721]: I0128 18:40:00.279364 4721 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-cql6x"] Jan 28 18:40:00 crc kubenswrapper[4721]: I0128 18:40:00.289013 4721 scope.go:117] "RemoveContainer" containerID="08a1c430094123ef1e41d846835baeba3e8a7084d6596d9d1b26cb47d7764fd6" Jan 28 18:40:00 crc kubenswrapper[4721]: I0128 18:40:00.309286 4721 scope.go:117] "RemoveContainer" containerID="2080634d0a8cd61ebf7da4bc9efe14cbada2488bf22a9018492716f920e1dad5" Jan 28 18:40:00 crc kubenswrapper[4721]: I0128 18:40:00.322408 4721 scope.go:117] "RemoveContainer" containerID="c8b821f0856f747c84b73bb7c7765244da2938b90db349c03f3d858d7d176847" Jan 28 18:40:00 crc kubenswrapper[4721]: I0128 18:40:00.339720 4721 scope.go:117] "RemoveContainer" containerID="a6f32e661da9b97ce626db3ca86097f8fa8ac7d805b8d350ccc96832ab330a95" Jan 28 18:40:00 crc kubenswrapper[4721]: I0128 18:40:00.360772 4721 scope.go:117] "RemoveContainer" containerID="b1c084b5ea1ca4855f89bf6c6c16d4e8214f0fa646e47650dfc5757bb8d21fc0" Jan 28 18:40:00 crc kubenswrapper[4721]: I0128 18:40:00.372179 4721 scope.go:117] "RemoveContainer" containerID="84984f7a09c278bbcbda1504ebba2e4b03e177c8aeb88f856330944b79632fb5" Jan 28 18:40:00 crc kubenswrapper[4721]: I0128 18:40:00.387437 4721 scope.go:117] "RemoveContainer" containerID="8b1368d82d594e2e8c675381a8cc7164c78bb1519332270fa656997eeae34e93" Jan 28 18:40:00 crc kubenswrapper[4721]: I0128 18:40:00.948926 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-6zmqq"] Jan 28 18:40:00 crc kubenswrapper[4721]: E0128 18:40:00.949748 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="12a4be20-2607-4502-b20d-b579c9987b57" containerName="marketplace-operator" Jan 28 18:40:00 crc kubenswrapper[4721]: I0128 18:40:00.949766 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="12a4be20-2607-4502-b20d-b579c9987b57" containerName="marketplace-operator" Jan 28 18:40:00 crc kubenswrapper[4721]: E0128 18:40:00.949780 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e1764268-02a2-46af-a94d-b9f32dabcab8" containerName="extract-utilities" Jan 28 18:40:00 crc kubenswrapper[4721]: I0128 18:40:00.949787 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="e1764268-02a2-46af-a94d-b9f32dabcab8" 
containerName="extract-utilities" Jan 28 18:40:00 crc kubenswrapper[4721]: E0128 18:40:00.949797 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="384e21cc-b8a7-4a62-b817-d985bde07d66" containerName="extract-content" Jan 28 18:40:00 crc kubenswrapper[4721]: I0128 18:40:00.949804 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="384e21cc-b8a7-4a62-b817-d985bde07d66" containerName="extract-content" Jan 28 18:40:00 crc kubenswrapper[4721]: E0128 18:40:00.949814 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0d6d2129-7840-4dc5-941b-541507dfd482" containerName="registry-server" Jan 28 18:40:00 crc kubenswrapper[4721]: I0128 18:40:00.949824 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="0d6d2129-7840-4dc5-941b-541507dfd482" containerName="registry-server" Jan 28 18:40:00 crc kubenswrapper[4721]: E0128 18:40:00.949834 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="384e21cc-b8a7-4a62-b817-d985bde07d66" containerName="extract-utilities" Jan 28 18:40:00 crc kubenswrapper[4721]: I0128 18:40:00.949842 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="384e21cc-b8a7-4a62-b817-d985bde07d66" containerName="extract-utilities" Jan 28 18:40:00 crc kubenswrapper[4721]: E0128 18:40:00.949853 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="36456b90-3e11-4480-b235-5909103844ba" containerName="extract-content" Jan 28 18:40:00 crc kubenswrapper[4721]: I0128 18:40:00.949860 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="36456b90-3e11-4480-b235-5909103844ba" containerName="extract-content" Jan 28 18:40:00 crc kubenswrapper[4721]: E0128 18:40:00.949875 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="36456b90-3e11-4480-b235-5909103844ba" containerName="registry-server" Jan 28 18:40:00 crc kubenswrapper[4721]: I0128 18:40:00.949882 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="36456b90-3e11-4480-b235-5909103844ba" containerName="registry-server" Jan 28 18:40:00 crc kubenswrapper[4721]: E0128 18:40:00.949892 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e1764268-02a2-46af-a94d-b9f32dabcab8" containerName="extract-content" Jan 28 18:40:00 crc kubenswrapper[4721]: I0128 18:40:00.949899 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="e1764268-02a2-46af-a94d-b9f32dabcab8" containerName="extract-content" Jan 28 18:40:00 crc kubenswrapper[4721]: E0128 18:40:00.949907 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0d6d2129-7840-4dc5-941b-541507dfd482" containerName="extract-utilities" Jan 28 18:40:00 crc kubenswrapper[4721]: I0128 18:40:00.949914 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="0d6d2129-7840-4dc5-941b-541507dfd482" containerName="extract-utilities" Jan 28 18:40:00 crc kubenswrapper[4721]: E0128 18:40:00.949921 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="384e21cc-b8a7-4a62-b817-d985bde07d66" containerName="registry-server" Jan 28 18:40:00 crc kubenswrapper[4721]: I0128 18:40:00.949928 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="384e21cc-b8a7-4a62-b817-d985bde07d66" containerName="registry-server" Jan 28 18:40:00 crc kubenswrapper[4721]: E0128 18:40:00.949937 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0d6d2129-7840-4dc5-941b-541507dfd482" containerName="extract-content" Jan 28 18:40:00 crc kubenswrapper[4721]: I0128 18:40:00.949945 4721 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="0d6d2129-7840-4dc5-941b-541507dfd482" containerName="extract-content" Jan 28 18:40:00 crc kubenswrapper[4721]: E0128 18:40:00.949954 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e1764268-02a2-46af-a94d-b9f32dabcab8" containerName="registry-server" Jan 28 18:40:00 crc kubenswrapper[4721]: I0128 18:40:00.949961 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="e1764268-02a2-46af-a94d-b9f32dabcab8" containerName="registry-server" Jan 28 18:40:00 crc kubenswrapper[4721]: E0128 18:40:00.949973 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="36456b90-3e11-4480-b235-5909103844ba" containerName="extract-utilities" Jan 28 18:40:00 crc kubenswrapper[4721]: I0128 18:40:00.949981 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="36456b90-3e11-4480-b235-5909103844ba" containerName="extract-utilities" Jan 28 18:40:00 crc kubenswrapper[4721]: I0128 18:40:00.950080 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="0d6d2129-7840-4dc5-941b-541507dfd482" containerName="registry-server" Jan 28 18:40:00 crc kubenswrapper[4721]: I0128 18:40:00.950091 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="12a4be20-2607-4502-b20d-b579c9987b57" containerName="marketplace-operator" Jan 28 18:40:00 crc kubenswrapper[4721]: I0128 18:40:00.950099 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="e1764268-02a2-46af-a94d-b9f32dabcab8" containerName="registry-server" Jan 28 18:40:00 crc kubenswrapper[4721]: I0128 18:40:00.950107 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="384e21cc-b8a7-4a62-b817-d985bde07d66" containerName="registry-server" Jan 28 18:40:00 crc kubenswrapper[4721]: I0128 18:40:00.950116 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="12a4be20-2607-4502-b20d-b579c9987b57" containerName="marketplace-operator" Jan 28 18:40:00 crc kubenswrapper[4721]: I0128 18:40:00.950125 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="36456b90-3e11-4480-b235-5909103844ba" containerName="registry-server" Jan 28 18:40:00 crc kubenswrapper[4721]: E0128 18:40:00.950267 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="12a4be20-2607-4502-b20d-b579c9987b57" containerName="marketplace-operator" Jan 28 18:40:00 crc kubenswrapper[4721]: I0128 18:40:00.950279 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="12a4be20-2607-4502-b20d-b579c9987b57" containerName="marketplace-operator" Jan 28 18:40:00 crc kubenswrapper[4721]: I0128 18:40:00.951043 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-6zmqq" Jan 28 18:40:00 crc kubenswrapper[4721]: I0128 18:40:00.953391 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 28 18:40:00 crc kubenswrapper[4721]: I0128 18:40:00.961070 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-6zmqq"] Jan 28 18:40:01 crc kubenswrapper[4721]: I0128 18:40:01.044726 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nwz9d\" (UniqueName: \"kubernetes.io/projected/73a3f613-b50c-4873-b63e-78983b1c60af-kube-api-access-nwz9d\") pod \"certified-operators-6zmqq\" (UID: \"73a3f613-b50c-4873-b63e-78983b1c60af\") " pod="openshift-marketplace/certified-operators-6zmqq" Jan 28 18:40:01 crc kubenswrapper[4721]: I0128 18:40:01.045078 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/73a3f613-b50c-4873-b63e-78983b1c60af-catalog-content\") pod \"certified-operators-6zmqq\" (UID: \"73a3f613-b50c-4873-b63e-78983b1c60af\") " pod="openshift-marketplace/certified-operators-6zmqq" Jan 28 18:40:01 crc kubenswrapper[4721]: I0128 18:40:01.045233 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/73a3f613-b50c-4873-b63e-78983b1c60af-utilities\") pod \"certified-operators-6zmqq\" (UID: \"73a3f613-b50c-4873-b63e-78983b1c60af\") " pod="openshift-marketplace/certified-operators-6zmqq" Jan 28 18:40:01 crc kubenswrapper[4721]: I0128 18:40:01.145291 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-nm7c2"] Jan 28 18:40:01 crc kubenswrapper[4721]: I0128 18:40:01.146242 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/73a3f613-b50c-4873-b63e-78983b1c60af-catalog-content\") pod \"certified-operators-6zmqq\" (UID: \"73a3f613-b50c-4873-b63e-78983b1c60af\") " pod="openshift-marketplace/certified-operators-6zmqq" Jan 28 18:40:01 crc kubenswrapper[4721]: I0128 18:40:01.146360 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-nm7c2" Jan 28 18:40:01 crc kubenswrapper[4721]: I0128 18:40:01.146375 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/73a3f613-b50c-4873-b63e-78983b1c60af-utilities\") pod \"certified-operators-6zmqq\" (UID: \"73a3f613-b50c-4873-b63e-78983b1c60af\") " pod="openshift-marketplace/certified-operators-6zmqq" Jan 28 18:40:01 crc kubenswrapper[4721]: I0128 18:40:01.146431 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nwz9d\" (UniqueName: \"kubernetes.io/projected/73a3f613-b50c-4873-b63e-78983b1c60af-kube-api-access-nwz9d\") pod \"certified-operators-6zmqq\" (UID: \"73a3f613-b50c-4873-b63e-78983b1c60af\") " pod="openshift-marketplace/certified-operators-6zmqq" Jan 28 18:40:01 crc kubenswrapper[4721]: I0128 18:40:01.146911 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/73a3f613-b50c-4873-b63e-78983b1c60af-utilities\") pod \"certified-operators-6zmqq\" (UID: \"73a3f613-b50c-4873-b63e-78983b1c60af\") " pod="openshift-marketplace/certified-operators-6zmqq" Jan 28 18:40:01 crc kubenswrapper[4721]: I0128 18:40:01.146916 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/73a3f613-b50c-4873-b63e-78983b1c60af-catalog-content\") pod \"certified-operators-6zmqq\" (UID: \"73a3f613-b50c-4873-b63e-78983b1c60af\") " pod="openshift-marketplace/certified-operators-6zmqq" Jan 28 18:40:01 crc kubenswrapper[4721]: I0128 18:40:01.149468 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 28 18:40:01 crc kubenswrapper[4721]: I0128 18:40:01.164492 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-nm7c2"] Jan 28 18:40:01 crc kubenswrapper[4721]: I0128 18:40:01.181645 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nwz9d\" (UniqueName: \"kubernetes.io/projected/73a3f613-b50c-4873-b63e-78983b1c60af-kube-api-access-nwz9d\") pod \"certified-operators-6zmqq\" (UID: \"73a3f613-b50c-4873-b63e-78983b1c60af\") " pod="openshift-marketplace/certified-operators-6zmqq" Jan 28 18:40:01 crc kubenswrapper[4721]: I0128 18:40:01.225116 4721 patch_prober.go:28] interesting pod/machine-config-daemon-76rx2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 18:40:01 crc kubenswrapper[4721]: I0128 18:40:01.225183 4721 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-76rx2" podUID="6e3427a4-9a03-4a08-bf7f-7a5e96290ad6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 18:40:01 crc kubenswrapper[4721]: I0128 18:40:01.247448 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/53ad6fb5-bf3c-4da3-af1c-72c1d1fa0bfe-catalog-content\") pod \"community-operators-nm7c2\" (UID: \"53ad6fb5-bf3c-4da3-af1c-72c1d1fa0bfe\") " 
pod="openshift-marketplace/community-operators-nm7c2" Jan 28 18:40:01 crc kubenswrapper[4721]: I0128 18:40:01.247500 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/53ad6fb5-bf3c-4da3-af1c-72c1d1fa0bfe-utilities\") pod \"community-operators-nm7c2\" (UID: \"53ad6fb5-bf3c-4da3-af1c-72c1d1fa0bfe\") " pod="openshift-marketplace/community-operators-nm7c2" Jan 28 18:40:01 crc kubenswrapper[4721]: I0128 18:40:01.247529 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2m97v\" (UniqueName: \"kubernetes.io/projected/53ad6fb5-bf3c-4da3-af1c-72c1d1fa0bfe-kube-api-access-2m97v\") pod \"community-operators-nm7c2\" (UID: \"53ad6fb5-bf3c-4da3-af1c-72c1d1fa0bfe\") " pod="openshift-marketplace/community-operators-nm7c2" Jan 28 18:40:01 crc kubenswrapper[4721]: I0128 18:40:01.282606 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-6zmqq" Jan 28 18:40:01 crc kubenswrapper[4721]: I0128 18:40:01.349957 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/53ad6fb5-bf3c-4da3-af1c-72c1d1fa0bfe-catalog-content\") pod \"community-operators-nm7c2\" (UID: \"53ad6fb5-bf3c-4da3-af1c-72c1d1fa0bfe\") " pod="openshift-marketplace/community-operators-nm7c2" Jan 28 18:40:01 crc kubenswrapper[4721]: I0128 18:40:01.350000 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/53ad6fb5-bf3c-4da3-af1c-72c1d1fa0bfe-utilities\") pod \"community-operators-nm7c2\" (UID: \"53ad6fb5-bf3c-4da3-af1c-72c1d1fa0bfe\") " pod="openshift-marketplace/community-operators-nm7c2" Jan 28 18:40:01 crc kubenswrapper[4721]: I0128 18:40:01.350025 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2m97v\" (UniqueName: \"kubernetes.io/projected/53ad6fb5-bf3c-4da3-af1c-72c1d1fa0bfe-kube-api-access-2m97v\") pod \"community-operators-nm7c2\" (UID: \"53ad6fb5-bf3c-4da3-af1c-72c1d1fa0bfe\") " pod="openshift-marketplace/community-operators-nm7c2" Jan 28 18:40:01 crc kubenswrapper[4721]: I0128 18:40:01.350750 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/53ad6fb5-bf3c-4da3-af1c-72c1d1fa0bfe-catalog-content\") pod \"community-operators-nm7c2\" (UID: \"53ad6fb5-bf3c-4da3-af1c-72c1d1fa0bfe\") " pod="openshift-marketplace/community-operators-nm7c2" Jan 28 18:40:01 crc kubenswrapper[4721]: I0128 18:40:01.350769 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/53ad6fb5-bf3c-4da3-af1c-72c1d1fa0bfe-utilities\") pod \"community-operators-nm7c2\" (UID: \"53ad6fb5-bf3c-4da3-af1c-72c1d1fa0bfe\") " pod="openshift-marketplace/community-operators-nm7c2" Jan 28 18:40:01 crc kubenswrapper[4721]: I0128 18:40:01.377249 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2m97v\" (UniqueName: \"kubernetes.io/projected/53ad6fb5-bf3c-4da3-af1c-72c1d1fa0bfe-kube-api-access-2m97v\") pod \"community-operators-nm7c2\" (UID: \"53ad6fb5-bf3c-4da3-af1c-72c1d1fa0bfe\") " pod="openshift-marketplace/community-operators-nm7c2" Jan 28 18:40:01 crc kubenswrapper[4721]: I0128 18:40:01.470406 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-nm7c2" Jan 28 18:40:01 crc kubenswrapper[4721]: I0128 18:40:01.540808 4721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0d6d2129-7840-4dc5-941b-541507dfd482" path="/var/lib/kubelet/pods/0d6d2129-7840-4dc5-941b-541507dfd482/volumes" Jan 28 18:40:01 crc kubenswrapper[4721]: I0128 18:40:01.541745 4721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="12a4be20-2607-4502-b20d-b579c9987b57" path="/var/lib/kubelet/pods/12a4be20-2607-4502-b20d-b579c9987b57/volumes" Jan 28 18:40:01 crc kubenswrapper[4721]: I0128 18:40:01.542262 4721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="36456b90-3e11-4480-b235-5909103844ba" path="/var/lib/kubelet/pods/36456b90-3e11-4480-b235-5909103844ba/volumes" Jan 28 18:40:01 crc kubenswrapper[4721]: I0128 18:40:01.545523 4721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="384e21cc-b8a7-4a62-b817-d985bde07d66" path="/var/lib/kubelet/pods/384e21cc-b8a7-4a62-b817-d985bde07d66/volumes" Jan 28 18:40:01 crc kubenswrapper[4721]: I0128 18:40:01.546251 4721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e1764268-02a2-46af-a94d-b9f32dabcab8" path="/var/lib/kubelet/pods/e1764268-02a2-46af-a94d-b9f32dabcab8/volumes" Jan 28 18:40:01 crc kubenswrapper[4721]: I0128 18:40:01.674970 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-6zmqq"] Jan 28 18:40:01 crc kubenswrapper[4721]: I0128 18:40:01.872374 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-nm7c2"] Jan 28 18:40:01 crc kubenswrapper[4721]: W0128 18:40:01.928428 4721 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod53ad6fb5_bf3c_4da3_af1c_72c1d1fa0bfe.slice/crio-944c0d0cc9b7d6b8e9a6695d4be6462708ad18a22bb8e592741f947880d512c7 WatchSource:0}: Error finding container 944c0d0cc9b7d6b8e9a6695d4be6462708ad18a22bb8e592741f947880d512c7: Status 404 returned error can't find the container with id 944c0d0cc9b7d6b8e9a6695d4be6462708ad18a22bb8e592741f947880d512c7 Jan 28 18:40:02 crc kubenswrapper[4721]: I0128 18:40:02.215971 4721 generic.go:334] "Generic (PLEG): container finished" podID="53ad6fb5-bf3c-4da3-af1c-72c1d1fa0bfe" containerID="afe62e95207a80a907be0f6b971cba4923af0ad0136dfd15624cf2c34790b63b" exitCode=0 Jan 28 18:40:02 crc kubenswrapper[4721]: I0128 18:40:02.216070 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nm7c2" event={"ID":"53ad6fb5-bf3c-4da3-af1c-72c1d1fa0bfe","Type":"ContainerDied","Data":"afe62e95207a80a907be0f6b971cba4923af0ad0136dfd15624cf2c34790b63b"} Jan 28 18:40:02 crc kubenswrapper[4721]: I0128 18:40:02.216124 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nm7c2" event={"ID":"53ad6fb5-bf3c-4da3-af1c-72c1d1fa0bfe","Type":"ContainerStarted","Data":"944c0d0cc9b7d6b8e9a6695d4be6462708ad18a22bb8e592741f947880d512c7"} Jan 28 18:40:02 crc kubenswrapper[4721]: I0128 18:40:02.219796 4721 generic.go:334] "Generic (PLEG): container finished" podID="73a3f613-b50c-4873-b63e-78983b1c60af" containerID="c5e95ec516d320d96ec94c5c0da00de7dd08503a867aecf7d9369a3436c86735" exitCode=0 Jan 28 18:40:02 crc kubenswrapper[4721]: I0128 18:40:02.219864 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6zmqq" 
event={"ID":"73a3f613-b50c-4873-b63e-78983b1c60af","Type":"ContainerDied","Data":"c5e95ec516d320d96ec94c5c0da00de7dd08503a867aecf7d9369a3436c86735"} Jan 28 18:40:02 crc kubenswrapper[4721]: I0128 18:40:02.219907 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6zmqq" event={"ID":"73a3f613-b50c-4873-b63e-78983b1c60af","Type":"ContainerStarted","Data":"1dbb89761a64bc679bf16e3a7c2de2f77bec90f6c7fa911f08a7def0c07dfeb9"} Jan 28 18:40:03 crc kubenswrapper[4721]: I0128 18:40:03.346204 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-7trfs"] Jan 28 18:40:03 crc kubenswrapper[4721]: I0128 18:40:03.347883 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7trfs" Jan 28 18:40:03 crc kubenswrapper[4721]: I0128 18:40:03.360828 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 28 18:40:03 crc kubenswrapper[4721]: I0128 18:40:03.366629 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-7trfs"] Jan 28 18:40:03 crc kubenswrapper[4721]: I0128 18:40:03.377558 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/13c22ad4-c5a1-4e52-accb-81598f08a144-catalog-content\") pod \"redhat-marketplace-7trfs\" (UID: \"13c22ad4-c5a1-4e52-accb-81598f08a144\") " pod="openshift-marketplace/redhat-marketplace-7trfs" Jan 28 18:40:03 crc kubenswrapper[4721]: I0128 18:40:03.377614 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s4mf4\" (UniqueName: \"kubernetes.io/projected/13c22ad4-c5a1-4e52-accb-81598f08a144-kube-api-access-s4mf4\") pod \"redhat-marketplace-7trfs\" (UID: \"13c22ad4-c5a1-4e52-accb-81598f08a144\") " pod="openshift-marketplace/redhat-marketplace-7trfs" Jan 28 18:40:03 crc kubenswrapper[4721]: I0128 18:40:03.377685 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/13c22ad4-c5a1-4e52-accb-81598f08a144-utilities\") pod \"redhat-marketplace-7trfs\" (UID: \"13c22ad4-c5a1-4e52-accb-81598f08a144\") " pod="openshift-marketplace/redhat-marketplace-7trfs" Jan 28 18:40:03 crc kubenswrapper[4721]: I0128 18:40:03.478757 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/13c22ad4-c5a1-4e52-accb-81598f08a144-catalog-content\") pod \"redhat-marketplace-7trfs\" (UID: \"13c22ad4-c5a1-4e52-accb-81598f08a144\") " pod="openshift-marketplace/redhat-marketplace-7trfs" Jan 28 18:40:03 crc kubenswrapper[4721]: I0128 18:40:03.478811 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s4mf4\" (UniqueName: \"kubernetes.io/projected/13c22ad4-c5a1-4e52-accb-81598f08a144-kube-api-access-s4mf4\") pod \"redhat-marketplace-7trfs\" (UID: \"13c22ad4-c5a1-4e52-accb-81598f08a144\") " pod="openshift-marketplace/redhat-marketplace-7trfs" Jan 28 18:40:03 crc kubenswrapper[4721]: I0128 18:40:03.478880 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/13c22ad4-c5a1-4e52-accb-81598f08a144-utilities\") pod \"redhat-marketplace-7trfs\" (UID: 
\"13c22ad4-c5a1-4e52-accb-81598f08a144\") " pod="openshift-marketplace/redhat-marketplace-7trfs" Jan 28 18:40:03 crc kubenswrapper[4721]: I0128 18:40:03.479398 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/13c22ad4-c5a1-4e52-accb-81598f08a144-catalog-content\") pod \"redhat-marketplace-7trfs\" (UID: \"13c22ad4-c5a1-4e52-accb-81598f08a144\") " pod="openshift-marketplace/redhat-marketplace-7trfs" Jan 28 18:40:03 crc kubenswrapper[4721]: I0128 18:40:03.479425 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/13c22ad4-c5a1-4e52-accb-81598f08a144-utilities\") pod \"redhat-marketplace-7trfs\" (UID: \"13c22ad4-c5a1-4e52-accb-81598f08a144\") " pod="openshift-marketplace/redhat-marketplace-7trfs" Jan 28 18:40:03 crc kubenswrapper[4721]: I0128 18:40:03.499465 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s4mf4\" (UniqueName: \"kubernetes.io/projected/13c22ad4-c5a1-4e52-accb-81598f08a144-kube-api-access-s4mf4\") pod \"redhat-marketplace-7trfs\" (UID: \"13c22ad4-c5a1-4e52-accb-81598f08a144\") " pod="openshift-marketplace/redhat-marketplace-7trfs" Jan 28 18:40:03 crc kubenswrapper[4721]: I0128 18:40:03.542623 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-mdtqb"] Jan 28 18:40:03 crc kubenswrapper[4721]: I0128 18:40:03.543569 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-mdtqb" Jan 28 18:40:03 crc kubenswrapper[4721]: W0128 18:40:03.545503 4721 reflector.go:561] object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh": failed to list *v1.Secret: secrets "redhat-operators-dockercfg-ct8rh" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-marketplace": no relationship found between node 'crc' and this object Jan 28 18:40:03 crc kubenswrapper[4721]: E0128 18:40:03.545534 4721 reflector.go:158] "Unhandled Error" err="object-\"openshift-marketplace\"/\"redhat-operators-dockercfg-ct8rh\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"redhat-operators-dockercfg-ct8rh\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-marketplace\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 28 18:40:03 crc kubenswrapper[4721]: I0128 18:40:03.559207 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-mdtqb"] Jan 28 18:40:03 crc kubenswrapper[4721]: I0128 18:40:03.580240 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zc7rf\" (UniqueName: \"kubernetes.io/projected/025f6d5f-7086-4108-823a-10ef1b8b608d-kube-api-access-zc7rf\") pod \"redhat-operators-mdtqb\" (UID: \"025f6d5f-7086-4108-823a-10ef1b8b608d\") " pod="openshift-marketplace/redhat-operators-mdtqb" Jan 28 18:40:03 crc kubenswrapper[4721]: I0128 18:40:03.580334 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/025f6d5f-7086-4108-823a-10ef1b8b608d-utilities\") pod \"redhat-operators-mdtqb\" (UID: \"025f6d5f-7086-4108-823a-10ef1b8b608d\") " pod="openshift-marketplace/redhat-operators-mdtqb" Jan 28 18:40:03 crc kubenswrapper[4721]: I0128 
18:40:03.580407 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/025f6d5f-7086-4108-823a-10ef1b8b608d-catalog-content\") pod \"redhat-operators-mdtqb\" (UID: \"025f6d5f-7086-4108-823a-10ef1b8b608d\") " pod="openshift-marketplace/redhat-operators-mdtqb" Jan 28 18:40:03 crc kubenswrapper[4721]: I0128 18:40:03.681282 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/025f6d5f-7086-4108-823a-10ef1b8b608d-utilities\") pod \"redhat-operators-mdtqb\" (UID: \"025f6d5f-7086-4108-823a-10ef1b8b608d\") " pod="openshift-marketplace/redhat-operators-mdtqb" Jan 28 18:40:03 crc kubenswrapper[4721]: I0128 18:40:03.681353 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/025f6d5f-7086-4108-823a-10ef1b8b608d-catalog-content\") pod \"redhat-operators-mdtqb\" (UID: \"025f6d5f-7086-4108-823a-10ef1b8b608d\") " pod="openshift-marketplace/redhat-operators-mdtqb" Jan 28 18:40:03 crc kubenswrapper[4721]: I0128 18:40:03.681418 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zc7rf\" (UniqueName: \"kubernetes.io/projected/025f6d5f-7086-4108-823a-10ef1b8b608d-kube-api-access-zc7rf\") pod \"redhat-operators-mdtqb\" (UID: \"025f6d5f-7086-4108-823a-10ef1b8b608d\") " pod="openshift-marketplace/redhat-operators-mdtqb" Jan 28 18:40:03 crc kubenswrapper[4721]: I0128 18:40:03.681822 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/025f6d5f-7086-4108-823a-10ef1b8b608d-utilities\") pod \"redhat-operators-mdtqb\" (UID: \"025f6d5f-7086-4108-823a-10ef1b8b608d\") " pod="openshift-marketplace/redhat-operators-mdtqb" Jan 28 18:40:03 crc kubenswrapper[4721]: I0128 18:40:03.681868 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/025f6d5f-7086-4108-823a-10ef1b8b608d-catalog-content\") pod \"redhat-operators-mdtqb\" (UID: \"025f6d5f-7086-4108-823a-10ef1b8b608d\") " pod="openshift-marketplace/redhat-operators-mdtqb" Jan 28 18:40:03 crc kubenswrapper[4721]: I0128 18:40:03.699412 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zc7rf\" (UniqueName: \"kubernetes.io/projected/025f6d5f-7086-4108-823a-10ef1b8b608d-kube-api-access-zc7rf\") pod \"redhat-operators-mdtqb\" (UID: \"025f6d5f-7086-4108-823a-10ef1b8b608d\") " pod="openshift-marketplace/redhat-operators-mdtqb" Jan 28 18:40:03 crc kubenswrapper[4721]: I0128 18:40:03.703341 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7trfs" Jan 28 18:40:04 crc kubenswrapper[4721]: I0128 18:40:04.093483 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-7trfs"] Jan 28 18:40:04 crc kubenswrapper[4721]: W0128 18:40:04.099556 4721 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod13c22ad4_c5a1_4e52_accb_81598f08a144.slice/crio-3c798679b2607da7ddb29f0b1ab672ffe0779133ac8755fc83cbcef34d4424ee WatchSource:0}: Error finding container 3c798679b2607da7ddb29f0b1ab672ffe0779133ac8755fc83cbcef34d4424ee: Status 404 returned error can't find the container with id 3c798679b2607da7ddb29f0b1ab672ffe0779133ac8755fc83cbcef34d4424ee Jan 28 18:40:04 crc kubenswrapper[4721]: I0128 18:40:04.238534 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7trfs" event={"ID":"13c22ad4-c5a1-4e52-accb-81598f08a144","Type":"ContainerStarted","Data":"b9b93e6ea103279b5a43f0d54777edec40917f668065d8c41f7b9cf46a60252c"} Jan 28 18:40:04 crc kubenswrapper[4721]: I0128 18:40:04.238898 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7trfs" event={"ID":"13c22ad4-c5a1-4e52-accb-81598f08a144","Type":"ContainerStarted","Data":"3c798679b2607da7ddb29f0b1ab672ffe0779133ac8755fc83cbcef34d4424ee"} Jan 28 18:40:04 crc kubenswrapper[4721]: I0128 18:40:04.240292 4721 generic.go:334] "Generic (PLEG): container finished" podID="53ad6fb5-bf3c-4da3-af1c-72c1d1fa0bfe" containerID="4fbe52132e17a063342e9430a5dcbd845b052b1f615c36f264fc427d4a8bef9b" exitCode=0 Jan 28 18:40:04 crc kubenswrapper[4721]: I0128 18:40:04.240397 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nm7c2" event={"ID":"53ad6fb5-bf3c-4da3-af1c-72c1d1fa0bfe","Type":"ContainerDied","Data":"4fbe52132e17a063342e9430a5dcbd845b052b1f615c36f264fc427d4a8bef9b"} Jan 28 18:40:04 crc kubenswrapper[4721]: I0128 18:40:04.245979 4721 generic.go:334] "Generic (PLEG): container finished" podID="73a3f613-b50c-4873-b63e-78983b1c60af" containerID="24d341c56aae96eeb2959e646c93a3985400b040e663520ec4876bac96a1b429" exitCode=0 Jan 28 18:40:04 crc kubenswrapper[4721]: I0128 18:40:04.245872 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6zmqq" event={"ID":"73a3f613-b50c-4873-b63e-78983b1c60af","Type":"ContainerDied","Data":"24d341c56aae96eeb2959e646c93a3985400b040e663520ec4876bac96a1b429"} Jan 28 18:40:04 crc kubenswrapper[4721]: I0128 18:40:04.738110 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 28 18:40:04 crc kubenswrapper[4721]: I0128 18:40:04.740361 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-mdtqb" Jan 28 18:40:05 crc kubenswrapper[4721]: I0128 18:40:05.157757 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-mdtqb"] Jan 28 18:40:05 crc kubenswrapper[4721]: I0128 18:40:05.260345 4721 generic.go:334] "Generic (PLEG): container finished" podID="13c22ad4-c5a1-4e52-accb-81598f08a144" containerID="b9b93e6ea103279b5a43f0d54777edec40917f668065d8c41f7b9cf46a60252c" exitCode=0 Jan 28 18:40:05 crc kubenswrapper[4721]: I0128 18:40:05.260384 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7trfs" event={"ID":"13c22ad4-c5a1-4e52-accb-81598f08a144","Type":"ContainerDied","Data":"b9b93e6ea103279b5a43f0d54777edec40917f668065d8c41f7b9cf46a60252c"} Jan 28 18:40:05 crc kubenswrapper[4721]: I0128 18:40:05.263718 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nm7c2" event={"ID":"53ad6fb5-bf3c-4da3-af1c-72c1d1fa0bfe","Type":"ContainerStarted","Data":"ae252e170612e19f0916a29251daf60c25765b011259169e394a5c710cc21c0d"} Jan 28 18:40:05 crc kubenswrapper[4721]: I0128 18:40:05.266202 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mdtqb" event={"ID":"025f6d5f-7086-4108-823a-10ef1b8b608d","Type":"ContainerStarted","Data":"3adbf9935d8d5f1dddc1f29e856ad51b07f3446adffefc54e801d9d24debe384"} Jan 28 18:40:05 crc kubenswrapper[4721]: I0128 18:40:05.270123 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6zmqq" event={"ID":"73a3f613-b50c-4873-b63e-78983b1c60af","Type":"ContainerStarted","Data":"e462c18517f9decf65e1821db2d4e7768c1bb00616ad6f2d47c67f355781e9ff"} Jan 28 18:40:05 crc kubenswrapper[4721]: I0128 18:40:05.307038 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-nm7c2" podStartSLOduration=1.431506591 podStartE2EDuration="4.30701738s" podCreationTimestamp="2026-01-28 18:40:01 +0000 UTC" firstStartedPulling="2026-01-28 18:40:02.217759268 +0000 UTC m=+367.943064828" lastFinishedPulling="2026-01-28 18:40:05.093270057 +0000 UTC m=+370.818575617" observedRunningTime="2026-01-28 18:40:05.301948568 +0000 UTC m=+371.027254148" watchObservedRunningTime="2026-01-28 18:40:05.30701738 +0000 UTC m=+371.032322940" Jan 28 18:40:05 crc kubenswrapper[4721]: I0128 18:40:05.326734 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-6zmqq" podStartSLOduration=2.807580302 podStartE2EDuration="5.326711151s" podCreationTimestamp="2026-01-28 18:40:00 +0000 UTC" firstStartedPulling="2026-01-28 18:40:02.221784847 +0000 UTC m=+367.947090407" lastFinishedPulling="2026-01-28 18:40:04.740915696 +0000 UTC m=+370.466221256" observedRunningTime="2026-01-28 18:40:05.323750336 +0000 UTC m=+371.049055896" watchObservedRunningTime="2026-01-28 18:40:05.326711151 +0000 UTC m=+371.052016711" Jan 28 18:40:06 crc kubenswrapper[4721]: I0128 18:40:06.276521 4721 generic.go:334] "Generic (PLEG): container finished" podID="13c22ad4-c5a1-4e52-accb-81598f08a144" containerID="3ca67fef0c33a9b482557e0261d6ae1cc4354eb2308edd619ca9b79a85045918" exitCode=0 Jan 28 18:40:06 crc kubenswrapper[4721]: I0128 18:40:06.276615 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7trfs" 
event={"ID":"13c22ad4-c5a1-4e52-accb-81598f08a144","Type":"ContainerDied","Data":"3ca67fef0c33a9b482557e0261d6ae1cc4354eb2308edd619ca9b79a85045918"} Jan 28 18:40:06 crc kubenswrapper[4721]: I0128 18:40:06.278254 4721 generic.go:334] "Generic (PLEG): container finished" podID="025f6d5f-7086-4108-823a-10ef1b8b608d" containerID="07dcebd2a7eee403148b096d611be21729608a3add5e627c7b8a882be3838d11" exitCode=0 Jan 28 18:40:06 crc kubenswrapper[4721]: I0128 18:40:06.278516 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mdtqb" event={"ID":"025f6d5f-7086-4108-823a-10ef1b8b608d","Type":"ContainerDied","Data":"07dcebd2a7eee403148b096d611be21729608a3add5e627c7b8a882be3838d11"} Jan 28 18:40:07 crc kubenswrapper[4721]: I0128 18:40:07.291556 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7trfs" event={"ID":"13c22ad4-c5a1-4e52-accb-81598f08a144","Type":"ContainerStarted","Data":"aa2ce6c9c8d5b6f5ae9db79fdf51ac2a19afabe4a6ba0cc2c826bc262255b532"} Jan 28 18:40:07 crc kubenswrapper[4721]: I0128 18:40:07.294757 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mdtqb" event={"ID":"025f6d5f-7086-4108-823a-10ef1b8b608d","Type":"ContainerStarted","Data":"d621a53672d7c63266298184d402de62f8af3e1d41e6ea7545158198f54fa0b8"} Jan 28 18:40:07 crc kubenswrapper[4721]: I0128 18:40:07.313716 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-7trfs" podStartSLOduration=2.894263481 podStartE2EDuration="4.313675323s" podCreationTimestamp="2026-01-28 18:40:03 +0000 UTC" firstStartedPulling="2026-01-28 18:40:05.262447714 +0000 UTC m=+370.987753284" lastFinishedPulling="2026-01-28 18:40:06.681859566 +0000 UTC m=+372.407165126" observedRunningTime="2026-01-28 18:40:07.30827442 +0000 UTC m=+373.033579990" watchObservedRunningTime="2026-01-28 18:40:07.313675323 +0000 UTC m=+373.038980893" Jan 28 18:40:08 crc kubenswrapper[4721]: I0128 18:40:08.304998 4721 generic.go:334] "Generic (PLEG): container finished" podID="025f6d5f-7086-4108-823a-10ef1b8b608d" containerID="d621a53672d7c63266298184d402de62f8af3e1d41e6ea7545158198f54fa0b8" exitCode=0 Jan 28 18:40:08 crc kubenswrapper[4721]: I0128 18:40:08.305094 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mdtqb" event={"ID":"025f6d5f-7086-4108-823a-10ef1b8b608d","Type":"ContainerDied","Data":"d621a53672d7c63266298184d402de62f8af3e1d41e6ea7545158198f54fa0b8"} Jan 28 18:40:10 crc kubenswrapper[4721]: I0128 18:40:10.319820 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mdtqb" event={"ID":"025f6d5f-7086-4108-823a-10ef1b8b608d","Type":"ContainerStarted","Data":"eb7c73b4b5d2fbc81c14cab95a4f126ac32188146f4fe52f3e04989d2fbe3771"} Jan 28 18:40:11 crc kubenswrapper[4721]: I0128 18:40:11.283707 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-6zmqq" Jan 28 18:40:11 crc kubenswrapper[4721]: I0128 18:40:11.283758 4721 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-6zmqq" Jan 28 18:40:11 crc kubenswrapper[4721]: I0128 18:40:11.320597 4721 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-6zmqq" Jan 28 18:40:11 crc kubenswrapper[4721]: I0128 18:40:11.338510 4721 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-mdtqb" podStartSLOduration=5.816489346 podStartE2EDuration="8.338492728s" podCreationTimestamp="2026-01-28 18:40:03 +0000 UTC" firstStartedPulling="2026-01-28 18:40:06.279735302 +0000 UTC m=+372.005040862" lastFinishedPulling="2026-01-28 18:40:08.801738684 +0000 UTC m=+374.527044244" observedRunningTime="2026-01-28 18:40:10.338533224 +0000 UTC m=+376.063838804" watchObservedRunningTime="2026-01-28 18:40:11.338492728 +0000 UTC m=+377.063798288" Jan 28 18:40:11 crc kubenswrapper[4721]: I0128 18:40:11.368737 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-6zmqq" Jan 28 18:40:11 crc kubenswrapper[4721]: I0128 18:40:11.470576 4721 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-nm7c2" Jan 28 18:40:11 crc kubenswrapper[4721]: I0128 18:40:11.471688 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-nm7c2" Jan 28 18:40:11 crc kubenswrapper[4721]: I0128 18:40:11.580069 4721 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-nm7c2" Jan 28 18:40:12 crc kubenswrapper[4721]: I0128 18:40:12.370615 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-nm7c2" Jan 28 18:40:13 crc kubenswrapper[4721]: I0128 18:40:13.704344 4721 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-7trfs" Jan 28 18:40:13 crc kubenswrapper[4721]: I0128 18:40:13.704650 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-7trfs" Jan 28 18:40:13 crc kubenswrapper[4721]: I0128 18:40:13.748226 4721 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-7trfs" Jan 28 18:40:14 crc kubenswrapper[4721]: I0128 18:40:14.395904 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-7trfs" Jan 28 18:40:14 crc kubenswrapper[4721]: I0128 18:40:14.740836 4721 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-mdtqb" Jan 28 18:40:14 crc kubenswrapper[4721]: I0128 18:40:14.741207 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-mdtqb" Jan 28 18:40:15 crc kubenswrapper[4721]: I0128 18:40:15.777144 4721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-mdtqb" podUID="025f6d5f-7086-4108-823a-10ef1b8b608d" containerName="registry-server" probeResult="failure" output=< Jan 28 18:40:15 crc kubenswrapper[4721]: timeout: failed to connect service ":50051" within 1s Jan 28 18:40:15 crc kubenswrapper[4721]: > Jan 28 18:40:24 crc kubenswrapper[4721]: I0128 18:40:24.778052 4721 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-mdtqb" Jan 28 18:40:24 crc kubenswrapper[4721]: I0128 18:40:24.821668 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-mdtqb" Jan 28 18:40:31 crc kubenswrapper[4721]: I0128 18:40:31.225047 4721 patch_prober.go:28] interesting 
pod/machine-config-daemon-76rx2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 18:40:31 crc kubenswrapper[4721]: I0128 18:40:31.225438 4721 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-76rx2" podUID="6e3427a4-9a03-4a08-bf7f-7a5e96290ad6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 18:40:31 crc kubenswrapper[4721]: I0128 18:40:31.225490 4721 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-76rx2" Jan 28 18:40:31 crc kubenswrapper[4721]: I0128 18:40:31.226056 4721 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"10860096ec91e5eac0dde1e9c86fd3c5c5e845b25209bb97d51e42151804a191"} pod="openshift-machine-config-operator/machine-config-daemon-76rx2" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 28 18:40:31 crc kubenswrapper[4721]: I0128 18:40:31.226112 4721 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-76rx2" podUID="6e3427a4-9a03-4a08-bf7f-7a5e96290ad6" containerName="machine-config-daemon" containerID="cri-o://10860096ec91e5eac0dde1e9c86fd3c5c5e845b25209bb97d51e42151804a191" gracePeriod=600 Jan 28 18:40:31 crc kubenswrapper[4721]: I0128 18:40:31.441650 4721 generic.go:334] "Generic (PLEG): container finished" podID="6e3427a4-9a03-4a08-bf7f-7a5e96290ad6" containerID="10860096ec91e5eac0dde1e9c86fd3c5c5e845b25209bb97d51e42151804a191" exitCode=0 Jan 28 18:40:31 crc kubenswrapper[4721]: I0128 18:40:31.441722 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-76rx2" event={"ID":"6e3427a4-9a03-4a08-bf7f-7a5e96290ad6","Type":"ContainerDied","Data":"10860096ec91e5eac0dde1e9c86fd3c5c5e845b25209bb97d51e42151804a191"} Jan 28 18:40:31 crc kubenswrapper[4721]: I0128 18:40:31.442092 4721 scope.go:117] "RemoveContainer" containerID="bbd26672afb8ed608228f3f2101f430b1eaa5ccea0e8074fad597dec347c4522" Jan 28 18:40:32 crc kubenswrapper[4721]: I0128 18:40:32.448916 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-76rx2" event={"ID":"6e3427a4-9a03-4a08-bf7f-7a5e96290ad6","Type":"ContainerStarted","Data":"760f911b45297553f15fd5d7594848accfaf0eb2624491e25ca92b5519181df7"} Jan 28 18:42:31 crc kubenswrapper[4721]: I0128 18:42:31.224622 4721 patch_prober.go:28] interesting pod/machine-config-daemon-76rx2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 18:42:31 crc kubenswrapper[4721]: I0128 18:42:31.225360 4721 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-76rx2" podUID="6e3427a4-9a03-4a08-bf7f-7a5e96290ad6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 18:43:01 
crc kubenswrapper[4721]: I0128 18:43:01.224781 4721 patch_prober.go:28] interesting pod/machine-config-daemon-76rx2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 18:43:01 crc kubenswrapper[4721]: I0128 18:43:01.226280 4721 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-76rx2" podUID="6e3427a4-9a03-4a08-bf7f-7a5e96290ad6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 18:43:31 crc kubenswrapper[4721]: I0128 18:43:31.224829 4721 patch_prober.go:28] interesting pod/machine-config-daemon-76rx2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 18:43:31 crc kubenswrapper[4721]: I0128 18:43:31.225517 4721 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-76rx2" podUID="6e3427a4-9a03-4a08-bf7f-7a5e96290ad6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 18:43:31 crc kubenswrapper[4721]: I0128 18:43:31.225714 4721 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-76rx2" Jan 28 18:43:31 crc kubenswrapper[4721]: I0128 18:43:31.226308 4721 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"760f911b45297553f15fd5d7594848accfaf0eb2624491e25ca92b5519181df7"} pod="openshift-machine-config-operator/machine-config-daemon-76rx2" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 28 18:43:31 crc kubenswrapper[4721]: I0128 18:43:31.226383 4721 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-76rx2" podUID="6e3427a4-9a03-4a08-bf7f-7a5e96290ad6" containerName="machine-config-daemon" containerID="cri-o://760f911b45297553f15fd5d7594848accfaf0eb2624491e25ca92b5519181df7" gracePeriod=600 Jan 28 18:43:31 crc kubenswrapper[4721]: I0128 18:43:31.787825 4721 generic.go:334] "Generic (PLEG): container finished" podID="6e3427a4-9a03-4a08-bf7f-7a5e96290ad6" containerID="760f911b45297553f15fd5d7594848accfaf0eb2624491e25ca92b5519181df7" exitCode=0 Jan 28 18:43:31 crc kubenswrapper[4721]: I0128 18:43:31.787953 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-76rx2" event={"ID":"6e3427a4-9a03-4a08-bf7f-7a5e96290ad6","Type":"ContainerDied","Data":"760f911b45297553f15fd5d7594848accfaf0eb2624491e25ca92b5519181df7"} Jan 28 18:43:31 crc kubenswrapper[4721]: I0128 18:43:31.788511 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-76rx2" event={"ID":"6e3427a4-9a03-4a08-bf7f-7a5e96290ad6","Type":"ContainerStarted","Data":"1d9cb44706b2f5923bc65487fc2d438c7475d17f3368442164e195f17c4693d2"} Jan 28 18:43:31 crc kubenswrapper[4721]: I0128 18:43:31.788539 4721 scope.go:117] "RemoveContainer" 
containerID="10860096ec91e5eac0dde1e9c86fd3c5c5e845b25209bb97d51e42151804a191" Jan 28 18:45:00 crc kubenswrapper[4721]: I0128 18:45:00.185328 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29493765-5hjw8"] Jan 28 18:45:00 crc kubenswrapper[4721]: I0128 18:45:00.187541 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29493765-5hjw8" Jan 28 18:45:00 crc kubenswrapper[4721]: I0128 18:45:00.190432 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 28 18:45:00 crc kubenswrapper[4721]: I0128 18:45:00.193703 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 28 18:45:00 crc kubenswrapper[4721]: I0128 18:45:00.195431 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29493765-5hjw8"] Jan 28 18:45:00 crc kubenswrapper[4721]: I0128 18:45:00.306818 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ctm8p\" (UniqueName: \"kubernetes.io/projected/16161beb-545f-4539-975b-4b48264e4189-kube-api-access-ctm8p\") pod \"collect-profiles-29493765-5hjw8\" (UID: \"16161beb-545f-4539-975b-4b48264e4189\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493765-5hjw8" Jan 28 18:45:00 crc kubenswrapper[4721]: I0128 18:45:00.307200 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/16161beb-545f-4539-975b-4b48264e4189-config-volume\") pod \"collect-profiles-29493765-5hjw8\" (UID: \"16161beb-545f-4539-975b-4b48264e4189\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493765-5hjw8" Jan 28 18:45:00 crc kubenswrapper[4721]: I0128 18:45:00.307312 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/16161beb-545f-4539-975b-4b48264e4189-secret-volume\") pod \"collect-profiles-29493765-5hjw8\" (UID: \"16161beb-545f-4539-975b-4b48264e4189\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493765-5hjw8" Jan 28 18:45:00 crc kubenswrapper[4721]: I0128 18:45:00.408853 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ctm8p\" (UniqueName: \"kubernetes.io/projected/16161beb-545f-4539-975b-4b48264e4189-kube-api-access-ctm8p\") pod \"collect-profiles-29493765-5hjw8\" (UID: \"16161beb-545f-4539-975b-4b48264e4189\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493765-5hjw8" Jan 28 18:45:00 crc kubenswrapper[4721]: I0128 18:45:00.409270 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/16161beb-545f-4539-975b-4b48264e4189-config-volume\") pod \"collect-profiles-29493765-5hjw8\" (UID: \"16161beb-545f-4539-975b-4b48264e4189\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493765-5hjw8" Jan 28 18:45:00 crc kubenswrapper[4721]: I0128 18:45:00.409498 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/16161beb-545f-4539-975b-4b48264e4189-secret-volume\") pod 
\"collect-profiles-29493765-5hjw8\" (UID: \"16161beb-545f-4539-975b-4b48264e4189\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493765-5hjw8" Jan 28 18:45:00 crc kubenswrapper[4721]: I0128 18:45:00.411273 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/16161beb-545f-4539-975b-4b48264e4189-config-volume\") pod \"collect-profiles-29493765-5hjw8\" (UID: \"16161beb-545f-4539-975b-4b48264e4189\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493765-5hjw8" Jan 28 18:45:00 crc kubenswrapper[4721]: I0128 18:45:00.417114 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/16161beb-545f-4539-975b-4b48264e4189-secret-volume\") pod \"collect-profiles-29493765-5hjw8\" (UID: \"16161beb-545f-4539-975b-4b48264e4189\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493765-5hjw8" Jan 28 18:45:00 crc kubenswrapper[4721]: I0128 18:45:00.428448 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ctm8p\" (UniqueName: \"kubernetes.io/projected/16161beb-545f-4539-975b-4b48264e4189-kube-api-access-ctm8p\") pod \"collect-profiles-29493765-5hjw8\" (UID: \"16161beb-545f-4539-975b-4b48264e4189\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493765-5hjw8" Jan 28 18:45:00 crc kubenswrapper[4721]: I0128 18:45:00.504017 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29493765-5hjw8" Jan 28 18:45:00 crc kubenswrapper[4721]: I0128 18:45:00.692755 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29493765-5hjw8"] Jan 28 18:45:01 crc kubenswrapper[4721]: I0128 18:45:01.269534 4721 generic.go:334] "Generic (PLEG): container finished" podID="16161beb-545f-4539-975b-4b48264e4189" containerID="892bfb296a65ce9869dc777c199aa356e653584535e7e0ec44f2ff7ba4c24f9b" exitCode=0 Jan 28 18:45:01 crc kubenswrapper[4721]: I0128 18:45:01.269880 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29493765-5hjw8" event={"ID":"16161beb-545f-4539-975b-4b48264e4189","Type":"ContainerDied","Data":"892bfb296a65ce9869dc777c199aa356e653584535e7e0ec44f2ff7ba4c24f9b"} Jan 28 18:45:01 crc kubenswrapper[4721]: I0128 18:45:01.269910 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29493765-5hjw8" event={"ID":"16161beb-545f-4539-975b-4b48264e4189","Type":"ContainerStarted","Data":"9abd6fb2bece7fddd59ddae0454a973fb2fcafd6691f44ee1908203220e01e66"} Jan 28 18:45:02 crc kubenswrapper[4721]: I0128 18:45:02.482933 4721 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29493765-5hjw8" Jan 28 18:45:02 crc kubenswrapper[4721]: I0128 18:45:02.636721 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/16161beb-545f-4539-975b-4b48264e4189-secret-volume\") pod \"16161beb-545f-4539-975b-4b48264e4189\" (UID: \"16161beb-545f-4539-975b-4b48264e4189\") " Jan 28 18:45:02 crc kubenswrapper[4721]: I0128 18:45:02.636900 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ctm8p\" (UniqueName: \"kubernetes.io/projected/16161beb-545f-4539-975b-4b48264e4189-kube-api-access-ctm8p\") pod \"16161beb-545f-4539-975b-4b48264e4189\" (UID: \"16161beb-545f-4539-975b-4b48264e4189\") " Jan 28 18:45:02 crc kubenswrapper[4721]: I0128 18:45:02.637003 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/16161beb-545f-4539-975b-4b48264e4189-config-volume\") pod \"16161beb-545f-4539-975b-4b48264e4189\" (UID: \"16161beb-545f-4539-975b-4b48264e4189\") " Jan 28 18:45:02 crc kubenswrapper[4721]: I0128 18:45:02.637702 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/16161beb-545f-4539-975b-4b48264e4189-config-volume" (OuterVolumeSpecName: "config-volume") pod "16161beb-545f-4539-975b-4b48264e4189" (UID: "16161beb-545f-4539-975b-4b48264e4189"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:45:02 crc kubenswrapper[4721]: I0128 18:45:02.641790 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/16161beb-545f-4539-975b-4b48264e4189-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "16161beb-545f-4539-975b-4b48264e4189" (UID: "16161beb-545f-4539-975b-4b48264e4189"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:45:02 crc kubenswrapper[4721]: I0128 18:45:02.641871 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/16161beb-545f-4539-975b-4b48264e4189-kube-api-access-ctm8p" (OuterVolumeSpecName: "kube-api-access-ctm8p") pod "16161beb-545f-4539-975b-4b48264e4189" (UID: "16161beb-545f-4539-975b-4b48264e4189"). InnerVolumeSpecName "kube-api-access-ctm8p". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:45:02 crc kubenswrapper[4721]: I0128 18:45:02.738097 4721 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/16161beb-545f-4539-975b-4b48264e4189-config-volume\") on node \"crc\" DevicePath \"\"" Jan 28 18:45:02 crc kubenswrapper[4721]: I0128 18:45:02.738138 4721 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/16161beb-545f-4539-975b-4b48264e4189-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 28 18:45:02 crc kubenswrapper[4721]: I0128 18:45:02.738149 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ctm8p\" (UniqueName: \"kubernetes.io/projected/16161beb-545f-4539-975b-4b48264e4189-kube-api-access-ctm8p\") on node \"crc\" DevicePath \"\"" Jan 28 18:45:03 crc kubenswrapper[4721]: I0128 18:45:03.282156 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29493765-5hjw8" event={"ID":"16161beb-545f-4539-975b-4b48264e4189","Type":"ContainerDied","Data":"9abd6fb2bece7fddd59ddae0454a973fb2fcafd6691f44ee1908203220e01e66"} Jan 28 18:45:03 crc kubenswrapper[4721]: I0128 18:45:03.282234 4721 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9abd6fb2bece7fddd59ddae0454a973fb2fcafd6691f44ee1908203220e01e66" Jan 28 18:45:03 crc kubenswrapper[4721]: I0128 18:45:03.282668 4721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29493765-5hjw8" Jan 28 18:45:06 crc kubenswrapper[4721]: I0128 18:45:06.969617 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-6rmcr"] Jan 28 18:45:06 crc kubenswrapper[4721]: E0128 18:45:06.970267 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="16161beb-545f-4539-975b-4b48264e4189" containerName="collect-profiles" Jan 28 18:45:06 crc kubenswrapper[4721]: I0128 18:45:06.970284 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="16161beb-545f-4539-975b-4b48264e4189" containerName="collect-profiles" Jan 28 18:45:06 crc kubenswrapper[4721]: I0128 18:45:06.970417 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="16161beb-545f-4539-975b-4b48264e4189" containerName="collect-profiles" Jan 28 18:45:06 crc kubenswrapper[4721]: I0128 18:45:06.970871 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-6rmcr" Jan 28 18:45:06 crc kubenswrapper[4721]: I0128 18:45:06.987984 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-6rmcr"] Jan 28 18:45:07 crc kubenswrapper[4721]: I0128 18:45:07.097016 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/c3ef2fb3-524e-41da-be98-0959e6116ee7-installation-pull-secrets\") pod \"image-registry-66df7c8f76-6rmcr\" (UID: \"c3ef2fb3-524e-41da-be98-0959e6116ee7\") " pod="openshift-image-registry/image-registry-66df7c8f76-6rmcr" Jan 28 18:45:07 crc kubenswrapper[4721]: I0128 18:45:07.097121 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/c3ef2fb3-524e-41da-be98-0959e6116ee7-registry-certificates\") pod \"image-registry-66df7c8f76-6rmcr\" (UID: \"c3ef2fb3-524e-41da-be98-0959e6116ee7\") " pod="openshift-image-registry/image-registry-66df7c8f76-6rmcr" Jan 28 18:45:07 crc kubenswrapper[4721]: I0128 18:45:07.097240 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-6rmcr\" (UID: \"c3ef2fb3-524e-41da-be98-0959e6116ee7\") " pod="openshift-image-registry/image-registry-66df7c8f76-6rmcr" Jan 28 18:45:07 crc kubenswrapper[4721]: I0128 18:45:07.097292 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c3ef2fb3-524e-41da-be98-0959e6116ee7-bound-sa-token\") pod \"image-registry-66df7c8f76-6rmcr\" (UID: \"c3ef2fb3-524e-41da-be98-0959e6116ee7\") " pod="openshift-image-registry/image-registry-66df7c8f76-6rmcr" Jan 28 18:45:07 crc kubenswrapper[4721]: I0128 18:45:07.097371 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/c3ef2fb3-524e-41da-be98-0959e6116ee7-ca-trust-extracted\") pod \"image-registry-66df7c8f76-6rmcr\" (UID: \"c3ef2fb3-524e-41da-be98-0959e6116ee7\") " pod="openshift-image-registry/image-registry-66df7c8f76-6rmcr" Jan 28 18:45:07 crc kubenswrapper[4721]: I0128 18:45:07.097411 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c3ef2fb3-524e-41da-be98-0959e6116ee7-trusted-ca\") pod \"image-registry-66df7c8f76-6rmcr\" (UID: \"c3ef2fb3-524e-41da-be98-0959e6116ee7\") " pod="openshift-image-registry/image-registry-66df7c8f76-6rmcr" Jan 28 18:45:07 crc kubenswrapper[4721]: I0128 18:45:07.097446 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pslrq\" (UniqueName: \"kubernetes.io/projected/c3ef2fb3-524e-41da-be98-0959e6116ee7-kube-api-access-pslrq\") pod \"image-registry-66df7c8f76-6rmcr\" (UID: \"c3ef2fb3-524e-41da-be98-0959e6116ee7\") " pod="openshift-image-registry/image-registry-66df7c8f76-6rmcr" Jan 28 18:45:07 crc kubenswrapper[4721]: I0128 18:45:07.097482 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: 
\"kubernetes.io/projected/c3ef2fb3-524e-41da-be98-0959e6116ee7-registry-tls\") pod \"image-registry-66df7c8f76-6rmcr\" (UID: \"c3ef2fb3-524e-41da-be98-0959e6116ee7\") " pod="openshift-image-registry/image-registry-66df7c8f76-6rmcr" Jan 28 18:45:07 crc kubenswrapper[4721]: I0128 18:45:07.126437 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-6rmcr\" (UID: \"c3ef2fb3-524e-41da-be98-0959e6116ee7\") " pod="openshift-image-registry/image-registry-66df7c8f76-6rmcr" Jan 28 18:45:07 crc kubenswrapper[4721]: I0128 18:45:07.200472 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/c3ef2fb3-524e-41da-be98-0959e6116ee7-ca-trust-extracted\") pod \"image-registry-66df7c8f76-6rmcr\" (UID: \"c3ef2fb3-524e-41da-be98-0959e6116ee7\") " pod="openshift-image-registry/image-registry-66df7c8f76-6rmcr" Jan 28 18:45:07 crc kubenswrapper[4721]: I0128 18:45:07.200557 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c3ef2fb3-524e-41da-be98-0959e6116ee7-trusted-ca\") pod \"image-registry-66df7c8f76-6rmcr\" (UID: \"c3ef2fb3-524e-41da-be98-0959e6116ee7\") " pod="openshift-image-registry/image-registry-66df7c8f76-6rmcr" Jan 28 18:45:07 crc kubenswrapper[4721]: I0128 18:45:07.200585 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pslrq\" (UniqueName: \"kubernetes.io/projected/c3ef2fb3-524e-41da-be98-0959e6116ee7-kube-api-access-pslrq\") pod \"image-registry-66df7c8f76-6rmcr\" (UID: \"c3ef2fb3-524e-41da-be98-0959e6116ee7\") " pod="openshift-image-registry/image-registry-66df7c8f76-6rmcr" Jan 28 18:45:07 crc kubenswrapper[4721]: I0128 18:45:07.200611 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/c3ef2fb3-524e-41da-be98-0959e6116ee7-registry-tls\") pod \"image-registry-66df7c8f76-6rmcr\" (UID: \"c3ef2fb3-524e-41da-be98-0959e6116ee7\") " pod="openshift-image-registry/image-registry-66df7c8f76-6rmcr" Jan 28 18:45:07 crc kubenswrapper[4721]: I0128 18:45:07.200640 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/c3ef2fb3-524e-41da-be98-0959e6116ee7-installation-pull-secrets\") pod \"image-registry-66df7c8f76-6rmcr\" (UID: \"c3ef2fb3-524e-41da-be98-0959e6116ee7\") " pod="openshift-image-registry/image-registry-66df7c8f76-6rmcr" Jan 28 18:45:07 crc kubenswrapper[4721]: I0128 18:45:07.200668 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/c3ef2fb3-524e-41da-be98-0959e6116ee7-registry-certificates\") pod \"image-registry-66df7c8f76-6rmcr\" (UID: \"c3ef2fb3-524e-41da-be98-0959e6116ee7\") " pod="openshift-image-registry/image-registry-66df7c8f76-6rmcr" Jan 28 18:45:07 crc kubenswrapper[4721]: I0128 18:45:07.200722 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c3ef2fb3-524e-41da-be98-0959e6116ee7-bound-sa-token\") pod \"image-registry-66df7c8f76-6rmcr\" (UID: \"c3ef2fb3-524e-41da-be98-0959e6116ee7\") " 
pod="openshift-image-registry/image-registry-66df7c8f76-6rmcr" Jan 28 18:45:07 crc kubenswrapper[4721]: I0128 18:45:07.201587 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/c3ef2fb3-524e-41da-be98-0959e6116ee7-ca-trust-extracted\") pod \"image-registry-66df7c8f76-6rmcr\" (UID: \"c3ef2fb3-524e-41da-be98-0959e6116ee7\") " pod="openshift-image-registry/image-registry-66df7c8f76-6rmcr" Jan 28 18:45:07 crc kubenswrapper[4721]: I0128 18:45:07.202733 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c3ef2fb3-524e-41da-be98-0959e6116ee7-trusted-ca\") pod \"image-registry-66df7c8f76-6rmcr\" (UID: \"c3ef2fb3-524e-41da-be98-0959e6116ee7\") " pod="openshift-image-registry/image-registry-66df7c8f76-6rmcr" Jan 28 18:45:07 crc kubenswrapper[4721]: I0128 18:45:07.207493 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/c3ef2fb3-524e-41da-be98-0959e6116ee7-registry-tls\") pod \"image-registry-66df7c8f76-6rmcr\" (UID: \"c3ef2fb3-524e-41da-be98-0959e6116ee7\") " pod="openshift-image-registry/image-registry-66df7c8f76-6rmcr" Jan 28 18:45:07 crc kubenswrapper[4721]: I0128 18:45:07.208584 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/c3ef2fb3-524e-41da-be98-0959e6116ee7-installation-pull-secrets\") pod \"image-registry-66df7c8f76-6rmcr\" (UID: \"c3ef2fb3-524e-41da-be98-0959e6116ee7\") " pod="openshift-image-registry/image-registry-66df7c8f76-6rmcr" Jan 28 18:45:07 crc kubenswrapper[4721]: I0128 18:45:07.208943 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/c3ef2fb3-524e-41da-be98-0959e6116ee7-registry-certificates\") pod \"image-registry-66df7c8f76-6rmcr\" (UID: \"c3ef2fb3-524e-41da-be98-0959e6116ee7\") " pod="openshift-image-registry/image-registry-66df7c8f76-6rmcr" Jan 28 18:45:07 crc kubenswrapper[4721]: I0128 18:45:07.221446 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c3ef2fb3-524e-41da-be98-0959e6116ee7-bound-sa-token\") pod \"image-registry-66df7c8f76-6rmcr\" (UID: \"c3ef2fb3-524e-41da-be98-0959e6116ee7\") " pod="openshift-image-registry/image-registry-66df7c8f76-6rmcr" Jan 28 18:45:07 crc kubenswrapper[4721]: I0128 18:45:07.226606 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pslrq\" (UniqueName: \"kubernetes.io/projected/c3ef2fb3-524e-41da-be98-0959e6116ee7-kube-api-access-pslrq\") pod \"image-registry-66df7c8f76-6rmcr\" (UID: \"c3ef2fb3-524e-41da-be98-0959e6116ee7\") " pod="openshift-image-registry/image-registry-66df7c8f76-6rmcr" Jan 28 18:45:07 crc kubenswrapper[4721]: I0128 18:45:07.287944 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-6rmcr" Jan 28 18:45:07 crc kubenswrapper[4721]: I0128 18:45:07.674312 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-6rmcr"] Jan 28 18:45:08 crc kubenswrapper[4721]: I0128 18:45:08.307761 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-6rmcr" event={"ID":"c3ef2fb3-524e-41da-be98-0959e6116ee7","Type":"ContainerStarted","Data":"d4c35b0b5c8ff261521a163761a3f5e6ece85a744e3bb89c9095c8d98c4ca478"} Jan 28 18:45:08 crc kubenswrapper[4721]: I0128 18:45:08.308085 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-6rmcr" Jan 28 18:45:08 crc kubenswrapper[4721]: I0128 18:45:08.308098 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-6rmcr" event={"ID":"c3ef2fb3-524e-41da-be98-0959e6116ee7","Type":"ContainerStarted","Data":"94560be69b403a9352603f0d7d3820c6855cd29864fa24794a1774405715fd9b"} Jan 28 18:45:08 crc kubenswrapper[4721]: I0128 18:45:08.325632 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-6rmcr" podStartSLOduration=2.325612379 podStartE2EDuration="2.325612379s" podCreationTimestamp="2026-01-28 18:45:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:45:08.325271458 +0000 UTC m=+674.050577028" watchObservedRunningTime="2026-01-28 18:45:08.325612379 +0000 UTC m=+674.050917939" Jan 28 18:45:22 crc kubenswrapper[4721]: I0128 18:45:22.742344 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0852wht"] Jan 28 18:45:22 crc kubenswrapper[4721]: I0128 18:45:22.744584 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0852wht" Jan 28 18:45:22 crc kubenswrapper[4721]: I0128 18:45:22.747214 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Jan 28 18:45:22 crc kubenswrapper[4721]: I0128 18:45:22.757708 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0852wht"] Jan 28 18:45:22 crc kubenswrapper[4721]: I0128 18:45:22.823081 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/e3e10f04-ed38-4461-a28c-b53f458cd84d-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0852wht\" (UID: \"e3e10f04-ed38-4461-a28c-b53f458cd84d\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0852wht" Jan 28 18:45:22 crc kubenswrapper[4721]: I0128 18:45:22.823486 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vd27w\" (UniqueName: \"kubernetes.io/projected/e3e10f04-ed38-4461-a28c-b53f458cd84d-kube-api-access-vd27w\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0852wht\" (UID: \"e3e10f04-ed38-4461-a28c-b53f458cd84d\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0852wht" Jan 28 18:45:22 crc kubenswrapper[4721]: I0128 18:45:22.823602 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/e3e10f04-ed38-4461-a28c-b53f458cd84d-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0852wht\" (UID: \"e3e10f04-ed38-4461-a28c-b53f458cd84d\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0852wht" Jan 28 18:45:22 crc kubenswrapper[4721]: I0128 18:45:22.925295 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/e3e10f04-ed38-4461-a28c-b53f458cd84d-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0852wht\" (UID: \"e3e10f04-ed38-4461-a28c-b53f458cd84d\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0852wht" Jan 28 18:45:22 crc kubenswrapper[4721]: I0128 18:45:22.925925 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vd27w\" (UniqueName: \"kubernetes.io/projected/e3e10f04-ed38-4461-a28c-b53f458cd84d-kube-api-access-vd27w\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0852wht\" (UID: \"e3e10f04-ed38-4461-a28c-b53f458cd84d\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0852wht" Jan 28 18:45:22 crc kubenswrapper[4721]: I0128 18:45:22.926019 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/e3e10f04-ed38-4461-a28c-b53f458cd84d-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0852wht\" (UID: \"e3e10f04-ed38-4461-a28c-b53f458cd84d\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0852wht" Jan 28 18:45:22 crc kubenswrapper[4721]: I0128 18:45:22.925963 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/e3e10f04-ed38-4461-a28c-b53f458cd84d-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0852wht\" (UID: \"e3e10f04-ed38-4461-a28c-b53f458cd84d\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0852wht" Jan 28 18:45:22 crc kubenswrapper[4721]: I0128 18:45:22.926486 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/e3e10f04-ed38-4461-a28c-b53f458cd84d-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0852wht\" (UID: \"e3e10f04-ed38-4461-a28c-b53f458cd84d\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0852wht" Jan 28 18:45:22 crc kubenswrapper[4721]: I0128 18:45:22.945956 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vd27w\" (UniqueName: \"kubernetes.io/projected/e3e10f04-ed38-4461-a28c-b53f458cd84d-kube-api-access-vd27w\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0852wht\" (UID: \"e3e10f04-ed38-4461-a28c-b53f458cd84d\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0852wht" Jan 28 18:45:23 crc kubenswrapper[4721]: I0128 18:45:23.065157 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0852wht" Jan 28 18:45:23 crc kubenswrapper[4721]: I0128 18:45:23.251678 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0852wht"] Jan 28 18:45:23 crc kubenswrapper[4721]: I0128 18:45:23.389352 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0852wht" event={"ID":"e3e10f04-ed38-4461-a28c-b53f458cd84d","Type":"ContainerStarted","Data":"e303a5315d44461141a3915c39cffc1af28c712b6356574bf45e61d52bce09eb"} Jan 28 18:45:24 crc kubenswrapper[4721]: I0128 18:45:24.396811 4721 generic.go:334] "Generic (PLEG): container finished" podID="e3e10f04-ed38-4461-a28c-b53f458cd84d" containerID="722be4fcb9104d5335fb0563c6b9b7fd897c4fc6873746e2ff2ec1f646f635d6" exitCode=0 Jan 28 18:45:24 crc kubenswrapper[4721]: I0128 18:45:24.396878 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0852wht" event={"ID":"e3e10f04-ed38-4461-a28c-b53f458cd84d","Type":"ContainerDied","Data":"722be4fcb9104d5335fb0563c6b9b7fd897c4fc6873746e2ff2ec1f646f635d6"} Jan 28 18:45:24 crc kubenswrapper[4721]: I0128 18:45:24.402140 4721 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 28 18:45:26 crc kubenswrapper[4721]: I0128 18:45:26.410968 4721 generic.go:334] "Generic (PLEG): container finished" podID="e3e10f04-ed38-4461-a28c-b53f458cd84d" containerID="7c41fec241824aab6d0d4f9b997b0eb6ecb6438ac0b46ec7ae611761f26bc509" exitCode=0 Jan 28 18:45:26 crc kubenswrapper[4721]: I0128 18:45:26.411045 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0852wht" event={"ID":"e3e10f04-ed38-4461-a28c-b53f458cd84d","Type":"ContainerDied","Data":"7c41fec241824aab6d0d4f9b997b0eb6ecb6438ac0b46ec7ae611761f26bc509"} Jan 28 18:45:27 crc kubenswrapper[4721]: I0128 18:45:27.294795 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-image-registry/image-registry-66df7c8f76-6rmcr" Jan 28 18:45:27 crc kubenswrapper[4721]: I0128 18:45:27.349753 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-b42n2"] Jan 28 18:45:27 crc kubenswrapper[4721]: I0128 18:45:27.420219 4721 generic.go:334] "Generic (PLEG): container finished" podID="e3e10f04-ed38-4461-a28c-b53f458cd84d" containerID="089b42ce381c3ac21d86b305b98f583a8dcc66cd4c2d4ae1731defb53305e027" exitCode=0 Jan 28 18:45:27 crc kubenswrapper[4721]: I0128 18:45:27.420281 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0852wht" event={"ID":"e3e10f04-ed38-4461-a28c-b53f458cd84d","Type":"ContainerDied","Data":"089b42ce381c3ac21d86b305b98f583a8dcc66cd4c2d4ae1731defb53305e027"} Jan 28 18:45:28 crc kubenswrapper[4721]: I0128 18:45:28.680480 4721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0852wht" Jan 28 18:45:28 crc kubenswrapper[4721]: I0128 18:45:28.840002 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/e3e10f04-ed38-4461-a28c-b53f458cd84d-util\") pod \"e3e10f04-ed38-4461-a28c-b53f458cd84d\" (UID: \"e3e10f04-ed38-4461-a28c-b53f458cd84d\") " Jan 28 18:45:28 crc kubenswrapper[4721]: I0128 18:45:28.840263 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/e3e10f04-ed38-4461-a28c-b53f458cd84d-bundle\") pod \"e3e10f04-ed38-4461-a28c-b53f458cd84d\" (UID: \"e3e10f04-ed38-4461-a28c-b53f458cd84d\") " Jan 28 18:45:28 crc kubenswrapper[4721]: I0128 18:45:28.840307 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vd27w\" (UniqueName: \"kubernetes.io/projected/e3e10f04-ed38-4461-a28c-b53f458cd84d-kube-api-access-vd27w\") pod \"e3e10f04-ed38-4461-a28c-b53f458cd84d\" (UID: \"e3e10f04-ed38-4461-a28c-b53f458cd84d\") " Jan 28 18:45:28 crc kubenswrapper[4721]: I0128 18:45:28.843070 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e3e10f04-ed38-4461-a28c-b53f458cd84d-bundle" (OuterVolumeSpecName: "bundle") pod "e3e10f04-ed38-4461-a28c-b53f458cd84d" (UID: "e3e10f04-ed38-4461-a28c-b53f458cd84d"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 18:45:28 crc kubenswrapper[4721]: I0128 18:45:28.847232 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e3e10f04-ed38-4461-a28c-b53f458cd84d-kube-api-access-vd27w" (OuterVolumeSpecName: "kube-api-access-vd27w") pod "e3e10f04-ed38-4461-a28c-b53f458cd84d" (UID: "e3e10f04-ed38-4461-a28c-b53f458cd84d"). InnerVolumeSpecName "kube-api-access-vd27w". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:45:28 crc kubenswrapper[4721]: I0128 18:45:28.852811 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e3e10f04-ed38-4461-a28c-b53f458cd84d-util" (OuterVolumeSpecName: "util") pod "e3e10f04-ed38-4461-a28c-b53f458cd84d" (UID: "e3e10f04-ed38-4461-a28c-b53f458cd84d"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 18:45:28 crc kubenswrapper[4721]: I0128 18:45:28.942209 4721 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/e3e10f04-ed38-4461-a28c-b53f458cd84d-util\") on node \"crc\" DevicePath \"\"" Jan 28 18:45:28 crc kubenswrapper[4721]: I0128 18:45:28.942550 4721 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/e3e10f04-ed38-4461-a28c-b53f458cd84d-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 18:45:28 crc kubenswrapper[4721]: I0128 18:45:28.942625 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vd27w\" (UniqueName: \"kubernetes.io/projected/e3e10f04-ed38-4461-a28c-b53f458cd84d-kube-api-access-vd27w\") on node \"crc\" DevicePath \"\"" Jan 28 18:45:29 crc kubenswrapper[4721]: I0128 18:45:29.432394 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0852wht" event={"ID":"e3e10f04-ed38-4461-a28c-b53f458cd84d","Type":"ContainerDied","Data":"e303a5315d44461141a3915c39cffc1af28c712b6356574bf45e61d52bce09eb"} Jan 28 18:45:29 crc kubenswrapper[4721]: I0128 18:45:29.432446 4721 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e303a5315d44461141a3915c39cffc1af28c712b6356574bf45e61d52bce09eb" Jan 28 18:45:29 crc kubenswrapper[4721]: I0128 18:45:29.432544 4721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0852wht" Jan 28 18:45:31 crc kubenswrapper[4721]: I0128 18:45:31.225439 4721 patch_prober.go:28] interesting pod/machine-config-daemon-76rx2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 18:45:31 crc kubenswrapper[4721]: I0128 18:45:31.226076 4721 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-76rx2" podUID="6e3427a4-9a03-4a08-bf7f-7a5e96290ad6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 18:45:33 crc kubenswrapper[4721]: I0128 18:45:33.901727 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-wr282"] Jan 28 18:45:33 crc kubenswrapper[4721]: I0128 18:45:33.902583 4721 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-wr282" podUID="70686e42-b434-4ff9-9753-cfc870beef82" containerName="ovn-controller" containerID="cri-o://44279a74a34d26b6eee05ac26d51a21492dddab4288ca5e09665c191cbacd90c" gracePeriod=30 Jan 28 18:45:33 crc kubenswrapper[4721]: I0128 18:45:33.902654 4721 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-wr282" podUID="70686e42-b434-4ff9-9753-cfc870beef82" containerName="nbdb" containerID="cri-o://b2cc9ef8160c40e72ae736136ef051b06552caea2539fad7969c5f923eba0c2c" gracePeriod=30 Jan 28 18:45:33 crc kubenswrapper[4721]: I0128 18:45:33.902701 4721 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-wr282" podUID="70686e42-b434-4ff9-9753-cfc870beef82" 
containerName="kube-rbac-proxy-node" containerID="cri-o://9ac8a8ea8e677c233fbcb1ac69446f0251b2e390e84a39cd77c7ccf1a2041908" gracePeriod=30 Jan 28 18:45:33 crc kubenswrapper[4721]: I0128 18:45:33.902682 4721 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-wr282" podUID="70686e42-b434-4ff9-9753-cfc870beef82" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://989948e16979a2cd8568ed93334056da9feb5bd7226f4124428dcffc09f13d41" gracePeriod=30 Jan 28 18:45:33 crc kubenswrapper[4721]: I0128 18:45:33.902738 4721 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-wr282" podUID="70686e42-b434-4ff9-9753-cfc870beef82" containerName="ovn-acl-logging" containerID="cri-o://11610a0739792fa0417c894ad5b1c0d46d7188b585f8f4c1a1e7e66cd6d8bd46" gracePeriod=30 Jan 28 18:45:33 crc kubenswrapper[4721]: I0128 18:45:33.902873 4721 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-wr282" podUID="70686e42-b434-4ff9-9753-cfc870beef82" containerName="northd" containerID="cri-o://7df50ebf5999c8b004f27f801fc166377523c7b42171fec957ab82c88b529794" gracePeriod=30 Jan 28 18:45:33 crc kubenswrapper[4721]: I0128 18:45:33.902968 4721 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-wr282" podUID="70686e42-b434-4ff9-9753-cfc870beef82" containerName="sbdb" containerID="cri-o://373ab38644697d3dbfe782ed66982ac1d5510d0a3bc39072c8113f71cc9e97f2" gracePeriod=30 Jan 28 18:45:33 crc kubenswrapper[4721]: I0128 18:45:33.932510 4721 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-wr282" podUID="70686e42-b434-4ff9-9753-cfc870beef82" containerName="ovnkube-controller" containerID="cri-o://7a26f6b8dd079402f04c84abf939388f7afb0bf78f0495cd4dfb46df9a301b16" gracePeriod=30 Jan 28 18:45:34 crc kubenswrapper[4721]: I0128 18:45:34.346149 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-wr282_70686e42-b434-4ff9-9753-cfc870beef82/ovnkube-controller/3.log" Jan 28 18:45:34 crc kubenswrapper[4721]: I0128 18:45:34.350089 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-wr282_70686e42-b434-4ff9-9753-cfc870beef82/ovn-acl-logging/0.log" Jan 28 18:45:34 crc kubenswrapper[4721]: I0128 18:45:34.350838 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-wr282_70686e42-b434-4ff9-9753-cfc870beef82/ovn-controller/0.log" Jan 28 18:45:34 crc kubenswrapper[4721]: I0128 18:45:34.351575 4721 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-wr282" Jan 28 18:45:34 crc kubenswrapper[4721]: I0128 18:45:34.421110 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-vhgbz"] Jan 28 18:45:34 crc kubenswrapper[4721]: E0128 18:45:34.421383 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="70686e42-b434-4ff9-9753-cfc870beef82" containerName="sbdb" Jan 28 18:45:34 crc kubenswrapper[4721]: I0128 18:45:34.421400 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="70686e42-b434-4ff9-9753-cfc870beef82" containerName="sbdb" Jan 28 18:45:34 crc kubenswrapper[4721]: E0128 18:45:34.421414 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="70686e42-b434-4ff9-9753-cfc870beef82" containerName="ovnkube-controller" Jan 28 18:45:34 crc kubenswrapper[4721]: I0128 18:45:34.421422 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="70686e42-b434-4ff9-9753-cfc870beef82" containerName="ovnkube-controller" Jan 28 18:45:34 crc kubenswrapper[4721]: E0128 18:45:34.421432 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="70686e42-b434-4ff9-9753-cfc870beef82" containerName="ovnkube-controller" Jan 28 18:45:34 crc kubenswrapper[4721]: I0128 18:45:34.421439 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="70686e42-b434-4ff9-9753-cfc870beef82" containerName="ovnkube-controller" Jan 28 18:45:34 crc kubenswrapper[4721]: E0128 18:45:34.421447 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="70686e42-b434-4ff9-9753-cfc870beef82" containerName="northd" Jan 28 18:45:34 crc kubenswrapper[4721]: I0128 18:45:34.421455 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="70686e42-b434-4ff9-9753-cfc870beef82" containerName="northd" Jan 28 18:45:34 crc kubenswrapper[4721]: E0128 18:45:34.421464 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="70686e42-b434-4ff9-9753-cfc870beef82" containerName="kube-rbac-proxy-node" Jan 28 18:45:34 crc kubenswrapper[4721]: I0128 18:45:34.421471 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="70686e42-b434-4ff9-9753-cfc870beef82" containerName="kube-rbac-proxy-node" Jan 28 18:45:34 crc kubenswrapper[4721]: E0128 18:45:34.421486 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="70686e42-b434-4ff9-9753-cfc870beef82" containerName="kubecfg-setup" Jan 28 18:45:34 crc kubenswrapper[4721]: I0128 18:45:34.421493 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="70686e42-b434-4ff9-9753-cfc870beef82" containerName="kubecfg-setup" Jan 28 18:45:34 crc kubenswrapper[4721]: E0128 18:45:34.421502 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e3e10f04-ed38-4461-a28c-b53f458cd84d" containerName="extract" Jan 28 18:45:34 crc kubenswrapper[4721]: I0128 18:45:34.421509 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="e3e10f04-ed38-4461-a28c-b53f458cd84d" containerName="extract" Jan 28 18:45:34 crc kubenswrapper[4721]: E0128 18:45:34.421523 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="70686e42-b434-4ff9-9753-cfc870beef82" containerName="ovnkube-controller" Jan 28 18:45:34 crc kubenswrapper[4721]: I0128 18:45:34.421529 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="70686e42-b434-4ff9-9753-cfc870beef82" containerName="ovnkube-controller" Jan 28 18:45:34 crc kubenswrapper[4721]: E0128 18:45:34.421538 4721 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="70686e42-b434-4ff9-9753-cfc870beef82" containerName="nbdb" Jan 28 18:45:34 crc kubenswrapper[4721]: I0128 18:45:34.421545 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="70686e42-b434-4ff9-9753-cfc870beef82" containerName="nbdb" Jan 28 18:45:34 crc kubenswrapper[4721]: E0128 18:45:34.421555 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e3e10f04-ed38-4461-a28c-b53f458cd84d" containerName="util" Jan 28 18:45:34 crc kubenswrapper[4721]: I0128 18:45:34.421562 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="e3e10f04-ed38-4461-a28c-b53f458cd84d" containerName="util" Jan 28 18:45:34 crc kubenswrapper[4721]: E0128 18:45:34.421572 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e3e10f04-ed38-4461-a28c-b53f458cd84d" containerName="pull" Jan 28 18:45:34 crc kubenswrapper[4721]: I0128 18:45:34.421579 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="e3e10f04-ed38-4461-a28c-b53f458cd84d" containerName="pull" Jan 28 18:45:34 crc kubenswrapper[4721]: E0128 18:45:34.421591 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="70686e42-b434-4ff9-9753-cfc870beef82" containerName="ovn-controller" Jan 28 18:45:34 crc kubenswrapper[4721]: I0128 18:45:34.421598 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="70686e42-b434-4ff9-9753-cfc870beef82" containerName="ovn-controller" Jan 28 18:45:34 crc kubenswrapper[4721]: E0128 18:45:34.421613 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="70686e42-b434-4ff9-9753-cfc870beef82" containerName="kube-rbac-proxy-ovn-metrics" Jan 28 18:45:34 crc kubenswrapper[4721]: I0128 18:45:34.421620 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="70686e42-b434-4ff9-9753-cfc870beef82" containerName="kube-rbac-proxy-ovn-metrics" Jan 28 18:45:34 crc kubenswrapper[4721]: E0128 18:45:34.421631 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="70686e42-b434-4ff9-9753-cfc870beef82" containerName="ovn-acl-logging" Jan 28 18:45:34 crc kubenswrapper[4721]: I0128 18:45:34.421639 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="70686e42-b434-4ff9-9753-cfc870beef82" containerName="ovn-acl-logging" Jan 28 18:45:34 crc kubenswrapper[4721]: I0128 18:45:34.421748 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="70686e42-b434-4ff9-9753-cfc870beef82" containerName="kube-rbac-proxy-ovn-metrics" Jan 28 18:45:34 crc kubenswrapper[4721]: I0128 18:45:34.421762 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="70686e42-b434-4ff9-9753-cfc870beef82" containerName="northd" Jan 28 18:45:34 crc kubenswrapper[4721]: I0128 18:45:34.421773 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="70686e42-b434-4ff9-9753-cfc870beef82" containerName="kube-rbac-proxy-node" Jan 28 18:45:34 crc kubenswrapper[4721]: I0128 18:45:34.421782 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="70686e42-b434-4ff9-9753-cfc870beef82" containerName="ovnkube-controller" Jan 28 18:45:34 crc kubenswrapper[4721]: I0128 18:45:34.421791 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="70686e42-b434-4ff9-9753-cfc870beef82" containerName="sbdb" Jan 28 18:45:34 crc kubenswrapper[4721]: I0128 18:45:34.421802 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="70686e42-b434-4ff9-9753-cfc870beef82" containerName="ovnkube-controller" Jan 28 18:45:34 crc kubenswrapper[4721]: I0128 18:45:34.421814 4721 memory_manager.go:354] "RemoveStaleState removing 
state" podUID="70686e42-b434-4ff9-9753-cfc870beef82" containerName="nbdb" Jan 28 18:45:34 crc kubenswrapper[4721]: I0128 18:45:34.421823 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="70686e42-b434-4ff9-9753-cfc870beef82" containerName="ovn-controller" Jan 28 18:45:34 crc kubenswrapper[4721]: I0128 18:45:34.421832 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="70686e42-b434-4ff9-9753-cfc870beef82" containerName="ovn-acl-logging" Jan 28 18:45:34 crc kubenswrapper[4721]: I0128 18:45:34.421840 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="70686e42-b434-4ff9-9753-cfc870beef82" containerName="ovnkube-controller" Jan 28 18:45:34 crc kubenswrapper[4721]: I0128 18:45:34.421850 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="e3e10f04-ed38-4461-a28c-b53f458cd84d" containerName="extract" Jan 28 18:45:34 crc kubenswrapper[4721]: E0128 18:45:34.421972 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="70686e42-b434-4ff9-9753-cfc870beef82" containerName="ovnkube-controller" Jan 28 18:45:34 crc kubenswrapper[4721]: I0128 18:45:34.421982 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="70686e42-b434-4ff9-9753-cfc870beef82" containerName="ovnkube-controller" Jan 28 18:45:34 crc kubenswrapper[4721]: E0128 18:45:34.421997 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="70686e42-b434-4ff9-9753-cfc870beef82" containerName="ovnkube-controller" Jan 28 18:45:34 crc kubenswrapper[4721]: I0128 18:45:34.422004 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="70686e42-b434-4ff9-9753-cfc870beef82" containerName="ovnkube-controller" Jan 28 18:45:34 crc kubenswrapper[4721]: I0128 18:45:34.422139 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="70686e42-b434-4ff9-9753-cfc870beef82" containerName="ovnkube-controller" Jan 28 18:45:34 crc kubenswrapper[4721]: I0128 18:45:34.422154 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="70686e42-b434-4ff9-9753-cfc870beef82" containerName="ovnkube-controller" Jan 28 18:45:34 crc kubenswrapper[4721]: I0128 18:45:34.424161 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-vhgbz" Jan 28 18:45:34 crc kubenswrapper[4721]: I0128 18:45:34.481621 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-rgqdt_c0a22020-3f34-4895-beec-2ed5d829ea79/kube-multus/2.log" Jan 28 18:45:34 crc kubenswrapper[4721]: I0128 18:45:34.482333 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-rgqdt_c0a22020-3f34-4895-beec-2ed5d829ea79/kube-multus/1.log" Jan 28 18:45:34 crc kubenswrapper[4721]: I0128 18:45:34.482387 4721 generic.go:334] "Generic (PLEG): container finished" podID="c0a22020-3f34-4895-beec-2ed5d829ea79" containerID="09078904e276a9f5eb4aafabbe371ff67e22dd1b352aa67825ea2de56709d503" exitCode=2 Jan 28 18:45:34 crc kubenswrapper[4721]: I0128 18:45:34.482452 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-rgqdt" event={"ID":"c0a22020-3f34-4895-beec-2ed5d829ea79","Type":"ContainerDied","Data":"09078904e276a9f5eb4aafabbe371ff67e22dd1b352aa67825ea2de56709d503"} Jan 28 18:45:34 crc kubenswrapper[4721]: I0128 18:45:34.482505 4721 scope.go:117] "RemoveContainer" containerID="2588c3d36133bd9b96114f5d12622916ac785bea9be47d12a3d76d8585c3e0ab" Jan 28 18:45:34 crc kubenswrapper[4721]: I0128 18:45:34.483865 4721 scope.go:117] "RemoveContainer" containerID="09078904e276a9f5eb4aafabbe371ff67e22dd1b352aa67825ea2de56709d503" Jan 28 18:45:34 crc kubenswrapper[4721]: E0128 18:45:34.484464 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-rgqdt_openshift-multus(c0a22020-3f34-4895-beec-2ed5d829ea79)\"" pod="openshift-multus/multus-rgqdt" podUID="c0a22020-3f34-4895-beec-2ed5d829ea79" Jan 28 18:45:34 crc kubenswrapper[4721]: I0128 18:45:34.488238 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-wr282_70686e42-b434-4ff9-9753-cfc870beef82/ovnkube-controller/3.log" Jan 28 18:45:34 crc kubenswrapper[4721]: I0128 18:45:34.495814 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-wr282_70686e42-b434-4ff9-9753-cfc870beef82/ovn-acl-logging/0.log" Jan 28 18:45:34 crc kubenswrapper[4721]: I0128 18:45:34.496538 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-wr282_70686e42-b434-4ff9-9753-cfc870beef82/ovn-controller/0.log" Jan 28 18:45:34 crc kubenswrapper[4721]: I0128 18:45:34.497429 4721 generic.go:334] "Generic (PLEG): container finished" podID="70686e42-b434-4ff9-9753-cfc870beef82" containerID="7a26f6b8dd079402f04c84abf939388f7afb0bf78f0495cd4dfb46df9a301b16" exitCode=0 Jan 28 18:45:34 crc kubenswrapper[4721]: I0128 18:45:34.497465 4721 generic.go:334] "Generic (PLEG): container finished" podID="70686e42-b434-4ff9-9753-cfc870beef82" containerID="373ab38644697d3dbfe782ed66982ac1d5510d0a3bc39072c8113f71cc9e97f2" exitCode=0 Jan 28 18:45:34 crc kubenswrapper[4721]: I0128 18:45:34.497476 4721 generic.go:334] "Generic (PLEG): container finished" podID="70686e42-b434-4ff9-9753-cfc870beef82" containerID="b2cc9ef8160c40e72ae736136ef051b06552caea2539fad7969c5f923eba0c2c" exitCode=0 Jan 28 18:45:34 crc kubenswrapper[4721]: I0128 18:45:34.497486 4721 generic.go:334] "Generic (PLEG): container finished" podID="70686e42-b434-4ff9-9753-cfc870beef82" containerID="7df50ebf5999c8b004f27f801fc166377523c7b42171fec957ab82c88b529794" exitCode=0 Jan 28 
18:45:34 crc kubenswrapper[4721]: I0128 18:45:34.497495 4721 generic.go:334] "Generic (PLEG): container finished" podID="70686e42-b434-4ff9-9753-cfc870beef82" containerID="989948e16979a2cd8568ed93334056da9feb5bd7226f4124428dcffc09f13d41" exitCode=0 Jan 28 18:45:34 crc kubenswrapper[4721]: I0128 18:45:34.497502 4721 generic.go:334] "Generic (PLEG): container finished" podID="70686e42-b434-4ff9-9753-cfc870beef82" containerID="9ac8a8ea8e677c233fbcb1ac69446f0251b2e390e84a39cd77c7ccf1a2041908" exitCode=0 Jan 28 18:45:34 crc kubenswrapper[4721]: I0128 18:45:34.497509 4721 generic.go:334] "Generic (PLEG): container finished" podID="70686e42-b434-4ff9-9753-cfc870beef82" containerID="11610a0739792fa0417c894ad5b1c0d46d7188b585f8f4c1a1e7e66cd6d8bd46" exitCode=143 Jan 28 18:45:34 crc kubenswrapper[4721]: I0128 18:45:34.497519 4721 generic.go:334] "Generic (PLEG): container finished" podID="70686e42-b434-4ff9-9753-cfc870beef82" containerID="44279a74a34d26b6eee05ac26d51a21492dddab4288ca5e09665c191cbacd90c" exitCode=143 Jan 28 18:45:34 crc kubenswrapper[4721]: I0128 18:45:34.497541 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wr282" event={"ID":"70686e42-b434-4ff9-9753-cfc870beef82","Type":"ContainerDied","Data":"7a26f6b8dd079402f04c84abf939388f7afb0bf78f0495cd4dfb46df9a301b16"} Jan 28 18:45:34 crc kubenswrapper[4721]: I0128 18:45:34.497577 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wr282" event={"ID":"70686e42-b434-4ff9-9753-cfc870beef82","Type":"ContainerDied","Data":"373ab38644697d3dbfe782ed66982ac1d5510d0a3bc39072c8113f71cc9e97f2"} Jan 28 18:45:34 crc kubenswrapper[4721]: I0128 18:45:34.497592 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wr282" event={"ID":"70686e42-b434-4ff9-9753-cfc870beef82","Type":"ContainerDied","Data":"b2cc9ef8160c40e72ae736136ef051b06552caea2539fad7969c5f923eba0c2c"} Jan 28 18:45:34 crc kubenswrapper[4721]: I0128 18:45:34.497603 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wr282" event={"ID":"70686e42-b434-4ff9-9753-cfc870beef82","Type":"ContainerDied","Data":"7df50ebf5999c8b004f27f801fc166377523c7b42171fec957ab82c88b529794"} Jan 28 18:45:34 crc kubenswrapper[4721]: I0128 18:45:34.497613 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wr282" event={"ID":"70686e42-b434-4ff9-9753-cfc870beef82","Type":"ContainerDied","Data":"989948e16979a2cd8568ed93334056da9feb5bd7226f4124428dcffc09f13d41"} Jan 28 18:45:34 crc kubenswrapper[4721]: I0128 18:45:34.497624 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wr282" event={"ID":"70686e42-b434-4ff9-9753-cfc870beef82","Type":"ContainerDied","Data":"9ac8a8ea8e677c233fbcb1ac69446f0251b2e390e84a39cd77c7ccf1a2041908"} Jan 28 18:45:34 crc kubenswrapper[4721]: I0128 18:45:34.497636 4721 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7a26f6b8dd079402f04c84abf939388f7afb0bf78f0495cd4dfb46df9a301b16"} Jan 28 18:45:34 crc kubenswrapper[4721]: I0128 18:45:34.497647 4721 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"693b094ac66f7858f7020708df804e9e12fa5a8c510841171e65ac69cb6a0e11"} Jan 28 18:45:34 crc kubenswrapper[4721]: I0128 18:45:34.497653 4721 pod_container_deletor.go:114] "Failed to issue the request to 
remove container" containerID={"Type":"cri-o","ID":"373ab38644697d3dbfe782ed66982ac1d5510d0a3bc39072c8113f71cc9e97f2"} Jan 28 18:45:34 crc kubenswrapper[4721]: I0128 18:45:34.497659 4721 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b2cc9ef8160c40e72ae736136ef051b06552caea2539fad7969c5f923eba0c2c"} Jan 28 18:45:34 crc kubenswrapper[4721]: I0128 18:45:34.497664 4721 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7df50ebf5999c8b004f27f801fc166377523c7b42171fec957ab82c88b529794"} Jan 28 18:45:34 crc kubenswrapper[4721]: I0128 18:45:34.497670 4721 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"989948e16979a2cd8568ed93334056da9feb5bd7226f4124428dcffc09f13d41"} Jan 28 18:45:34 crc kubenswrapper[4721]: I0128 18:45:34.497675 4721 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"9ac8a8ea8e677c233fbcb1ac69446f0251b2e390e84a39cd77c7ccf1a2041908"} Jan 28 18:45:34 crc kubenswrapper[4721]: I0128 18:45:34.497680 4721 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"11610a0739792fa0417c894ad5b1c0d46d7188b585f8f4c1a1e7e66cd6d8bd46"} Jan 28 18:45:34 crc kubenswrapper[4721]: I0128 18:45:34.497686 4721 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"44279a74a34d26b6eee05ac26d51a21492dddab4288ca5e09665c191cbacd90c"} Jan 28 18:45:34 crc kubenswrapper[4721]: I0128 18:45:34.497691 4721 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"733c81890010402d252a1945284a85a3278b603ede433530865f680a1be02477"} Jan 28 18:45:34 crc kubenswrapper[4721]: I0128 18:45:34.497698 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wr282" event={"ID":"70686e42-b434-4ff9-9753-cfc870beef82","Type":"ContainerDied","Data":"11610a0739792fa0417c894ad5b1c0d46d7188b585f8f4c1a1e7e66cd6d8bd46"} Jan 28 18:45:34 crc kubenswrapper[4721]: I0128 18:45:34.497707 4721 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7a26f6b8dd079402f04c84abf939388f7afb0bf78f0495cd4dfb46df9a301b16"} Jan 28 18:45:34 crc kubenswrapper[4721]: I0128 18:45:34.497715 4721 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"693b094ac66f7858f7020708df804e9e12fa5a8c510841171e65ac69cb6a0e11"} Jan 28 18:45:34 crc kubenswrapper[4721]: I0128 18:45:34.497721 4721 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"373ab38644697d3dbfe782ed66982ac1d5510d0a3bc39072c8113f71cc9e97f2"} Jan 28 18:45:34 crc kubenswrapper[4721]: I0128 18:45:34.497729 4721 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b2cc9ef8160c40e72ae736136ef051b06552caea2539fad7969c5f923eba0c2c"} Jan 28 18:45:34 crc kubenswrapper[4721]: I0128 18:45:34.497734 4721 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7df50ebf5999c8b004f27f801fc166377523c7b42171fec957ab82c88b529794"} Jan 28 18:45:34 crc kubenswrapper[4721]: I0128 18:45:34.497740 4721 pod_container_deletor.go:114] "Failed to issue the request to 
remove container" containerID={"Type":"cri-o","ID":"989948e16979a2cd8568ed93334056da9feb5bd7226f4124428dcffc09f13d41"} Jan 28 18:45:34 crc kubenswrapper[4721]: I0128 18:45:34.497746 4721 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"9ac8a8ea8e677c233fbcb1ac69446f0251b2e390e84a39cd77c7ccf1a2041908"} Jan 28 18:45:34 crc kubenswrapper[4721]: I0128 18:45:34.497753 4721 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"11610a0739792fa0417c894ad5b1c0d46d7188b585f8f4c1a1e7e66cd6d8bd46"} Jan 28 18:45:34 crc kubenswrapper[4721]: I0128 18:45:34.497761 4721 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"44279a74a34d26b6eee05ac26d51a21492dddab4288ca5e09665c191cbacd90c"} Jan 28 18:45:34 crc kubenswrapper[4721]: I0128 18:45:34.497768 4721 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"733c81890010402d252a1945284a85a3278b603ede433530865f680a1be02477"} Jan 28 18:45:34 crc kubenswrapper[4721]: I0128 18:45:34.497776 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wr282" event={"ID":"70686e42-b434-4ff9-9753-cfc870beef82","Type":"ContainerDied","Data":"44279a74a34d26b6eee05ac26d51a21492dddab4288ca5e09665c191cbacd90c"} Jan 28 18:45:34 crc kubenswrapper[4721]: I0128 18:45:34.497792 4721 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7a26f6b8dd079402f04c84abf939388f7afb0bf78f0495cd4dfb46df9a301b16"} Jan 28 18:45:34 crc kubenswrapper[4721]: I0128 18:45:34.497798 4721 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"693b094ac66f7858f7020708df804e9e12fa5a8c510841171e65ac69cb6a0e11"} Jan 28 18:45:34 crc kubenswrapper[4721]: I0128 18:45:34.497803 4721 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"373ab38644697d3dbfe782ed66982ac1d5510d0a3bc39072c8113f71cc9e97f2"} Jan 28 18:45:34 crc kubenswrapper[4721]: I0128 18:45:34.497809 4721 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b2cc9ef8160c40e72ae736136ef051b06552caea2539fad7969c5f923eba0c2c"} Jan 28 18:45:34 crc kubenswrapper[4721]: I0128 18:45:34.497814 4721 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7df50ebf5999c8b004f27f801fc166377523c7b42171fec957ab82c88b529794"} Jan 28 18:45:34 crc kubenswrapper[4721]: I0128 18:45:34.497819 4721 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"989948e16979a2cd8568ed93334056da9feb5bd7226f4124428dcffc09f13d41"} Jan 28 18:45:34 crc kubenswrapper[4721]: I0128 18:45:34.497824 4721 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"9ac8a8ea8e677c233fbcb1ac69446f0251b2e390e84a39cd77c7ccf1a2041908"} Jan 28 18:45:34 crc kubenswrapper[4721]: I0128 18:45:34.497830 4721 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"11610a0739792fa0417c894ad5b1c0d46d7188b585f8f4c1a1e7e66cd6d8bd46"} Jan 28 18:45:34 crc kubenswrapper[4721]: I0128 18:45:34.497835 4721 pod_container_deletor.go:114] "Failed to issue the request to 
remove container" containerID={"Type":"cri-o","ID":"44279a74a34d26b6eee05ac26d51a21492dddab4288ca5e09665c191cbacd90c"} Jan 28 18:45:34 crc kubenswrapper[4721]: I0128 18:45:34.497840 4721 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"733c81890010402d252a1945284a85a3278b603ede433530865f680a1be02477"} Jan 28 18:45:34 crc kubenswrapper[4721]: I0128 18:45:34.497847 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wr282" event={"ID":"70686e42-b434-4ff9-9753-cfc870beef82","Type":"ContainerDied","Data":"f09b4c32b88c09bbbda6325a1c46dc1a2127a8c6ad924249908667da133345b2"} Jan 28 18:45:34 crc kubenswrapper[4721]: I0128 18:45:34.497856 4721 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7a26f6b8dd079402f04c84abf939388f7afb0bf78f0495cd4dfb46df9a301b16"} Jan 28 18:45:34 crc kubenswrapper[4721]: I0128 18:45:34.497863 4721 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"693b094ac66f7858f7020708df804e9e12fa5a8c510841171e65ac69cb6a0e11"} Jan 28 18:45:34 crc kubenswrapper[4721]: I0128 18:45:34.497868 4721 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"373ab38644697d3dbfe782ed66982ac1d5510d0a3bc39072c8113f71cc9e97f2"} Jan 28 18:45:34 crc kubenswrapper[4721]: I0128 18:45:34.497876 4721 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b2cc9ef8160c40e72ae736136ef051b06552caea2539fad7969c5f923eba0c2c"} Jan 28 18:45:34 crc kubenswrapper[4721]: I0128 18:45:34.497881 4721 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7df50ebf5999c8b004f27f801fc166377523c7b42171fec957ab82c88b529794"} Jan 28 18:45:34 crc kubenswrapper[4721]: I0128 18:45:34.497886 4721 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"989948e16979a2cd8568ed93334056da9feb5bd7226f4124428dcffc09f13d41"} Jan 28 18:45:34 crc kubenswrapper[4721]: I0128 18:45:34.497893 4721 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"9ac8a8ea8e677c233fbcb1ac69446f0251b2e390e84a39cd77c7ccf1a2041908"} Jan 28 18:45:34 crc kubenswrapper[4721]: I0128 18:45:34.497900 4721 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"11610a0739792fa0417c894ad5b1c0d46d7188b585f8f4c1a1e7e66cd6d8bd46"} Jan 28 18:45:34 crc kubenswrapper[4721]: I0128 18:45:34.497906 4721 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"44279a74a34d26b6eee05ac26d51a21492dddab4288ca5e09665c191cbacd90c"} Jan 28 18:45:34 crc kubenswrapper[4721]: I0128 18:45:34.497911 4721 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"733c81890010402d252a1945284a85a3278b603ede433530865f680a1be02477"} Jan 28 18:45:34 crc kubenswrapper[4721]: I0128 18:45:34.498026 4721 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-wr282" Jan 28 18:45:34 crc kubenswrapper[4721]: I0128 18:45:34.512465 4721 scope.go:117] "RemoveContainer" containerID="7a26f6b8dd079402f04c84abf939388f7afb0bf78f0495cd4dfb46df9a301b16" Jan 28 18:45:34 crc kubenswrapper[4721]: I0128 18:45:34.531779 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/70686e42-b434-4ff9-9753-cfc870beef82-run-ovn\") pod \"70686e42-b434-4ff9-9753-cfc870beef82\" (UID: \"70686e42-b434-4ff9-9753-cfc870beef82\") " Jan 28 18:45:34 crc kubenswrapper[4721]: I0128 18:45:34.531837 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/70686e42-b434-4ff9-9753-cfc870beef82-host-cni-netd\") pod \"70686e42-b434-4ff9-9753-cfc870beef82\" (UID: \"70686e42-b434-4ff9-9753-cfc870beef82\") " Jan 28 18:45:34 crc kubenswrapper[4721]: I0128 18:45:34.531876 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7lkbp\" (UniqueName: \"kubernetes.io/projected/70686e42-b434-4ff9-9753-cfc870beef82-kube-api-access-7lkbp\") pod \"70686e42-b434-4ff9-9753-cfc870beef82\" (UID: \"70686e42-b434-4ff9-9753-cfc870beef82\") " Jan 28 18:45:34 crc kubenswrapper[4721]: I0128 18:45:34.531906 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/70686e42-b434-4ff9-9753-cfc870beef82-ovnkube-script-lib\") pod \"70686e42-b434-4ff9-9753-cfc870beef82\" (UID: \"70686e42-b434-4ff9-9753-cfc870beef82\") " Jan 28 18:45:34 crc kubenswrapper[4721]: I0128 18:45:34.531903 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/70686e42-b434-4ff9-9753-cfc870beef82-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "70686e42-b434-4ff9-9753-cfc870beef82" (UID: "70686e42-b434-4ff9-9753-cfc870beef82"). InnerVolumeSpecName "run-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 18:45:34 crc kubenswrapper[4721]: I0128 18:45:34.531947 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/70686e42-b434-4ff9-9753-cfc870beef82-run-systemd\") pod \"70686e42-b434-4ff9-9753-cfc870beef82\" (UID: \"70686e42-b434-4ff9-9753-cfc870beef82\") " Jan 28 18:45:34 crc kubenswrapper[4721]: I0128 18:45:34.531969 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/70686e42-b434-4ff9-9753-cfc870beef82-node-log\") pod \"70686e42-b434-4ff9-9753-cfc870beef82\" (UID: \"70686e42-b434-4ff9-9753-cfc870beef82\") " Jan 28 18:45:34 crc kubenswrapper[4721]: I0128 18:45:34.531996 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/70686e42-b434-4ff9-9753-cfc870beef82-run-openvswitch\") pod \"70686e42-b434-4ff9-9753-cfc870beef82\" (UID: \"70686e42-b434-4ff9-9753-cfc870beef82\") " Jan 28 18:45:34 crc kubenswrapper[4721]: I0128 18:45:34.532019 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/70686e42-b434-4ff9-9753-cfc870beef82-host-slash\") pod \"70686e42-b434-4ff9-9753-cfc870beef82\" (UID: \"70686e42-b434-4ff9-9753-cfc870beef82\") " Jan 28 18:45:34 crc kubenswrapper[4721]: I0128 18:45:34.532053 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/70686e42-b434-4ff9-9753-cfc870beef82-host-run-netns\") pod \"70686e42-b434-4ff9-9753-cfc870beef82\" (UID: \"70686e42-b434-4ff9-9753-cfc870beef82\") " Jan 28 18:45:34 crc kubenswrapper[4721]: I0128 18:45:34.532083 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/70686e42-b434-4ff9-9753-cfc870beef82-ovn-node-metrics-cert\") pod \"70686e42-b434-4ff9-9753-cfc870beef82\" (UID: \"70686e42-b434-4ff9-9753-cfc870beef82\") " Jan 28 18:45:34 crc kubenswrapper[4721]: I0128 18:45:34.532106 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/70686e42-b434-4ff9-9753-cfc870beef82-host-run-ovn-kubernetes\") pod \"70686e42-b434-4ff9-9753-cfc870beef82\" (UID: \"70686e42-b434-4ff9-9753-cfc870beef82\") " Jan 28 18:45:34 crc kubenswrapper[4721]: I0128 18:45:34.532124 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/70686e42-b434-4ff9-9753-cfc870beef82-log-socket\") pod \"70686e42-b434-4ff9-9753-cfc870beef82\" (UID: \"70686e42-b434-4ff9-9753-cfc870beef82\") " Jan 28 18:45:34 crc kubenswrapper[4721]: I0128 18:45:34.532147 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/70686e42-b434-4ff9-9753-cfc870beef82-var-lib-openvswitch\") pod \"70686e42-b434-4ff9-9753-cfc870beef82\" (UID: \"70686e42-b434-4ff9-9753-cfc870beef82\") " Jan 28 18:45:34 crc kubenswrapper[4721]: I0128 18:45:34.532186 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: 
\"kubernetes.io/host-path/70686e42-b434-4ff9-9753-cfc870beef82-host-var-lib-cni-networks-ovn-kubernetes\") pod \"70686e42-b434-4ff9-9753-cfc870beef82\" (UID: \"70686e42-b434-4ff9-9753-cfc870beef82\") " Jan 28 18:45:34 crc kubenswrapper[4721]: I0128 18:45:34.532227 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/70686e42-b434-4ff9-9753-cfc870beef82-ovnkube-config\") pod \"70686e42-b434-4ff9-9753-cfc870beef82\" (UID: \"70686e42-b434-4ff9-9753-cfc870beef82\") " Jan 28 18:45:34 crc kubenswrapper[4721]: I0128 18:45:34.532247 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/70686e42-b434-4ff9-9753-cfc870beef82-env-overrides\") pod \"70686e42-b434-4ff9-9753-cfc870beef82\" (UID: \"70686e42-b434-4ff9-9753-cfc870beef82\") " Jan 28 18:45:34 crc kubenswrapper[4721]: I0128 18:45:34.532275 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/70686e42-b434-4ff9-9753-cfc870beef82-systemd-units\") pod \"70686e42-b434-4ff9-9753-cfc870beef82\" (UID: \"70686e42-b434-4ff9-9753-cfc870beef82\") " Jan 28 18:45:34 crc kubenswrapper[4721]: I0128 18:45:34.532305 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/70686e42-b434-4ff9-9753-cfc870beef82-host-cni-bin\") pod \"70686e42-b434-4ff9-9753-cfc870beef82\" (UID: \"70686e42-b434-4ff9-9753-cfc870beef82\") " Jan 28 18:45:34 crc kubenswrapper[4721]: I0128 18:45:34.532300 4721 scope.go:117] "RemoveContainer" containerID="693b094ac66f7858f7020708df804e9e12fa5a8c510841171e65ac69cb6a0e11" Jan 28 18:45:34 crc kubenswrapper[4721]: I0128 18:45:34.532342 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/70686e42-b434-4ff9-9753-cfc870beef82-host-kubelet\") pod \"70686e42-b434-4ff9-9753-cfc870beef82\" (UID: \"70686e42-b434-4ff9-9753-cfc870beef82\") " Jan 28 18:45:34 crc kubenswrapper[4721]: I0128 18:45:34.532371 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/70686e42-b434-4ff9-9753-cfc870beef82-etc-openvswitch\") pod \"70686e42-b434-4ff9-9753-cfc870beef82\" (UID: \"70686e42-b434-4ff9-9753-cfc870beef82\") " Jan 28 18:45:34 crc kubenswrapper[4721]: I0128 18:45:34.532542 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/70686e42-b434-4ff9-9753-cfc870beef82-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "70686e42-b434-4ff9-9753-cfc870beef82" (UID: "70686e42-b434-4ff9-9753-cfc870beef82"). InnerVolumeSpecName "host-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 18:45:34 crc kubenswrapper[4721]: I0128 18:45:34.532557 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/80833028-6365-4e88-80dc-98b4bcd4dbe6-host-cni-netd\") pod \"ovnkube-node-vhgbz\" (UID: \"80833028-6365-4e88-80dc-98b4bcd4dbe6\") " pod="openshift-ovn-kubernetes/ovnkube-node-vhgbz" Jan 28 18:45:34 crc kubenswrapper[4721]: I0128 18:45:34.532680 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/80833028-6365-4e88-80dc-98b4bcd4dbe6-ovnkube-config\") pod \"ovnkube-node-vhgbz\" (UID: \"80833028-6365-4e88-80dc-98b4bcd4dbe6\") " pod="openshift-ovn-kubernetes/ovnkube-node-vhgbz" Jan 28 18:45:34 crc kubenswrapper[4721]: I0128 18:45:34.532773 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/80833028-6365-4e88-80dc-98b4bcd4dbe6-run-systemd\") pod \"ovnkube-node-vhgbz\" (UID: \"80833028-6365-4e88-80dc-98b4bcd4dbe6\") " pod="openshift-ovn-kubernetes/ovnkube-node-vhgbz" Jan 28 18:45:34 crc kubenswrapper[4721]: I0128 18:45:34.532805 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/80833028-6365-4e88-80dc-98b4bcd4dbe6-env-overrides\") pod \"ovnkube-node-vhgbz\" (UID: \"80833028-6365-4e88-80dc-98b4bcd4dbe6\") " pod="openshift-ovn-kubernetes/ovnkube-node-vhgbz" Jan 28 18:45:34 crc kubenswrapper[4721]: I0128 18:45:34.532921 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-95spk\" (UniqueName: \"kubernetes.io/projected/80833028-6365-4e88-80dc-98b4bcd4dbe6-kube-api-access-95spk\") pod \"ovnkube-node-vhgbz\" (UID: \"80833028-6365-4e88-80dc-98b4bcd4dbe6\") " pod="openshift-ovn-kubernetes/ovnkube-node-vhgbz" Jan 28 18:45:34 crc kubenswrapper[4721]: I0128 18:45:34.532956 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/80833028-6365-4e88-80dc-98b4bcd4dbe6-systemd-units\") pod \"ovnkube-node-vhgbz\" (UID: \"80833028-6365-4e88-80dc-98b4bcd4dbe6\") " pod="openshift-ovn-kubernetes/ovnkube-node-vhgbz" Jan 28 18:45:34 crc kubenswrapper[4721]: I0128 18:45:34.533010 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/80833028-6365-4e88-80dc-98b4bcd4dbe6-node-log\") pod \"ovnkube-node-vhgbz\" (UID: \"80833028-6365-4e88-80dc-98b4bcd4dbe6\") " pod="openshift-ovn-kubernetes/ovnkube-node-vhgbz" Jan 28 18:45:34 crc kubenswrapper[4721]: I0128 18:45:34.533050 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/80833028-6365-4e88-80dc-98b4bcd4dbe6-host-run-ovn-kubernetes\") pod \"ovnkube-node-vhgbz\" (UID: \"80833028-6365-4e88-80dc-98b4bcd4dbe6\") " pod="openshift-ovn-kubernetes/ovnkube-node-vhgbz" Jan 28 18:45:34 crc kubenswrapper[4721]: I0128 18:45:34.533073 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: 
\"kubernetes.io/configmap/80833028-6365-4e88-80dc-98b4bcd4dbe6-ovnkube-script-lib\") pod \"ovnkube-node-vhgbz\" (UID: \"80833028-6365-4e88-80dc-98b4bcd4dbe6\") " pod="openshift-ovn-kubernetes/ovnkube-node-vhgbz" Jan 28 18:45:34 crc kubenswrapper[4721]: I0128 18:45:34.533106 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/70686e42-b434-4ff9-9753-cfc870beef82-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "70686e42-b434-4ff9-9753-cfc870beef82" (UID: "70686e42-b434-4ff9-9753-cfc870beef82"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:45:34 crc kubenswrapper[4721]: I0128 18:45:34.533138 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/80833028-6365-4e88-80dc-98b4bcd4dbe6-host-slash\") pod \"ovnkube-node-vhgbz\" (UID: \"80833028-6365-4e88-80dc-98b4bcd4dbe6\") " pod="openshift-ovn-kubernetes/ovnkube-node-vhgbz" Jan 28 18:45:34 crc kubenswrapper[4721]: I0128 18:45:34.533208 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/80833028-6365-4e88-80dc-98b4bcd4dbe6-etc-openvswitch\") pod \"ovnkube-node-vhgbz\" (UID: \"80833028-6365-4e88-80dc-98b4bcd4dbe6\") " pod="openshift-ovn-kubernetes/ovnkube-node-vhgbz" Jan 28 18:45:34 crc kubenswrapper[4721]: I0128 18:45:34.533256 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/80833028-6365-4e88-80dc-98b4bcd4dbe6-run-ovn\") pod \"ovnkube-node-vhgbz\" (UID: \"80833028-6365-4e88-80dc-98b4bcd4dbe6\") " pod="openshift-ovn-kubernetes/ovnkube-node-vhgbz" Jan 28 18:45:34 crc kubenswrapper[4721]: I0128 18:45:34.533302 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/80833028-6365-4e88-80dc-98b4bcd4dbe6-host-cni-bin\") pod \"ovnkube-node-vhgbz\" (UID: \"80833028-6365-4e88-80dc-98b4bcd4dbe6\") " pod="openshift-ovn-kubernetes/ovnkube-node-vhgbz" Jan 28 18:45:34 crc kubenswrapper[4721]: I0128 18:45:34.533350 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/80833028-6365-4e88-80dc-98b4bcd4dbe6-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-vhgbz\" (UID: \"80833028-6365-4e88-80dc-98b4bcd4dbe6\") " pod="openshift-ovn-kubernetes/ovnkube-node-vhgbz" Jan 28 18:45:34 crc kubenswrapper[4721]: I0128 18:45:34.533371 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/70686e42-b434-4ff9-9753-cfc870beef82-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "70686e42-b434-4ff9-9753-cfc870beef82" (UID: "70686e42-b434-4ff9-9753-cfc870beef82"). InnerVolumeSpecName "var-lib-openvswitch". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 18:45:34 crc kubenswrapper[4721]: I0128 18:45:34.533404 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/80833028-6365-4e88-80dc-98b4bcd4dbe6-run-openvswitch\") pod \"ovnkube-node-vhgbz\" (UID: \"80833028-6365-4e88-80dc-98b4bcd4dbe6\") " pod="openshift-ovn-kubernetes/ovnkube-node-vhgbz" Jan 28 18:45:34 crc kubenswrapper[4721]: I0128 18:45:34.533420 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/70686e42-b434-4ff9-9753-cfc870beef82-node-log" (OuterVolumeSpecName: "node-log") pod "70686e42-b434-4ff9-9753-cfc870beef82" (UID: "70686e42-b434-4ff9-9753-cfc870beef82"). InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 18:45:34 crc kubenswrapper[4721]: I0128 18:45:34.533426 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/80833028-6365-4e88-80dc-98b4bcd4dbe6-log-socket\") pod \"ovnkube-node-vhgbz\" (UID: \"80833028-6365-4e88-80dc-98b4bcd4dbe6\") " pod="openshift-ovn-kubernetes/ovnkube-node-vhgbz" Jan 28 18:45:34 crc kubenswrapper[4721]: I0128 18:45:34.533452 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/70686e42-b434-4ff9-9753-cfc870beef82-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "70686e42-b434-4ff9-9753-cfc870beef82" (UID: "70686e42-b434-4ff9-9753-cfc870beef82"). InnerVolumeSpecName "host-run-netns". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 18:45:34 crc kubenswrapper[4721]: I0128 18:45:34.533494 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/70686e42-b434-4ff9-9753-cfc870beef82-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "70686e42-b434-4ff9-9753-cfc870beef82" (UID: "70686e42-b434-4ff9-9753-cfc870beef82"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 18:45:34 crc kubenswrapper[4721]: I0128 18:45:34.533510 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/70686e42-b434-4ff9-9753-cfc870beef82-host-slash" (OuterVolumeSpecName: "host-slash") pod "70686e42-b434-4ff9-9753-cfc870beef82" (UID: "70686e42-b434-4ff9-9753-cfc870beef82"). InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 18:45:34 crc kubenswrapper[4721]: I0128 18:45:34.533539 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/70686e42-b434-4ff9-9753-cfc870beef82-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "70686e42-b434-4ff9-9753-cfc870beef82" (UID: "70686e42-b434-4ff9-9753-cfc870beef82"). InnerVolumeSpecName "host-run-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 18:45:34 crc kubenswrapper[4721]: I0128 18:45:34.533544 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/70686e42-b434-4ff9-9753-cfc870beef82-log-socket" (OuterVolumeSpecName: "log-socket") pod "70686e42-b434-4ff9-9753-cfc870beef82" (UID: "70686e42-b434-4ff9-9753-cfc870beef82"). InnerVolumeSpecName "log-socket". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 18:45:34 crc kubenswrapper[4721]: I0128 18:45:34.533579 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/70686e42-b434-4ff9-9753-cfc870beef82-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "70686e42-b434-4ff9-9753-cfc870beef82" (UID: "70686e42-b434-4ff9-9753-cfc870beef82"). InnerVolumeSpecName "systemd-units". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 18:45:34 crc kubenswrapper[4721]: I0128 18:45:34.533617 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/80833028-6365-4e88-80dc-98b4bcd4dbe6-host-run-netns\") pod \"ovnkube-node-vhgbz\" (UID: \"80833028-6365-4e88-80dc-98b4bcd4dbe6\") " pod="openshift-ovn-kubernetes/ovnkube-node-vhgbz" Jan 28 18:45:34 crc kubenswrapper[4721]: I0128 18:45:34.533651 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/80833028-6365-4e88-80dc-98b4bcd4dbe6-host-kubelet\") pod \"ovnkube-node-vhgbz\" (UID: \"80833028-6365-4e88-80dc-98b4bcd4dbe6\") " pod="openshift-ovn-kubernetes/ovnkube-node-vhgbz" Jan 28 18:45:34 crc kubenswrapper[4721]: I0128 18:45:34.533614 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/70686e42-b434-4ff9-9753-cfc870beef82-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "70686e42-b434-4ff9-9753-cfc870beef82" (UID: "70686e42-b434-4ff9-9753-cfc870beef82"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 18:45:34 crc kubenswrapper[4721]: I0128 18:45:34.533720 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/80833028-6365-4e88-80dc-98b4bcd4dbe6-ovn-node-metrics-cert\") pod \"ovnkube-node-vhgbz\" (UID: \"80833028-6365-4e88-80dc-98b4bcd4dbe6\") " pod="openshift-ovn-kubernetes/ovnkube-node-vhgbz" Jan 28 18:45:34 crc kubenswrapper[4721]: I0128 18:45:34.533773 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/80833028-6365-4e88-80dc-98b4bcd4dbe6-var-lib-openvswitch\") pod \"ovnkube-node-vhgbz\" (UID: \"80833028-6365-4e88-80dc-98b4bcd4dbe6\") " pod="openshift-ovn-kubernetes/ovnkube-node-vhgbz" Jan 28 18:45:34 crc kubenswrapper[4721]: I0128 18:45:34.533883 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/70686e42-b434-4ff9-9753-cfc870beef82-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "70686e42-b434-4ff9-9753-cfc870beef82" (UID: "70686e42-b434-4ff9-9753-cfc870beef82"). InnerVolumeSpecName "host-cni-bin". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 18:45:34 crc kubenswrapper[4721]: I0128 18:45:34.533934 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/70686e42-b434-4ff9-9753-cfc870beef82-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "70686e42-b434-4ff9-9753-cfc870beef82" (UID: "70686e42-b434-4ff9-9753-cfc870beef82"). InnerVolumeSpecName "host-kubelet". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 18:45:34 crc kubenswrapper[4721]: I0128 18:45:34.533964 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/70686e42-b434-4ff9-9753-cfc870beef82-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "70686e42-b434-4ff9-9753-cfc870beef82" (UID: "70686e42-b434-4ff9-9753-cfc870beef82"). InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 18:45:34 crc kubenswrapper[4721]: I0128 18:45:34.534200 4721 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/70686e42-b434-4ff9-9753-cfc870beef82-systemd-units\") on node \"crc\" DevicePath \"\"" Jan 28 18:45:34 crc kubenswrapper[4721]: I0128 18:45:34.534225 4721 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/70686e42-b434-4ff9-9753-cfc870beef82-run-ovn\") on node \"crc\" DevicePath \"\"" Jan 28 18:45:34 crc kubenswrapper[4721]: I0128 18:45:34.534237 4721 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/70686e42-b434-4ff9-9753-cfc870beef82-host-cni-netd\") on node \"crc\" DevicePath \"\"" Jan 28 18:45:34 crc kubenswrapper[4721]: I0128 18:45:34.534249 4721 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/70686e42-b434-4ff9-9753-cfc870beef82-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Jan 28 18:45:34 crc kubenswrapper[4721]: I0128 18:45:34.534253 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/70686e42-b434-4ff9-9753-cfc870beef82-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "70686e42-b434-4ff9-9753-cfc870beef82" (UID: "70686e42-b434-4ff9-9753-cfc870beef82"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:45:34 crc kubenswrapper[4721]: I0128 18:45:34.534262 4721 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/70686e42-b434-4ff9-9753-cfc870beef82-node-log\") on node \"crc\" DevicePath \"\"" Jan 28 18:45:34 crc kubenswrapper[4721]: I0128 18:45:34.534345 4721 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/70686e42-b434-4ff9-9753-cfc870beef82-run-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 28 18:45:34 crc kubenswrapper[4721]: I0128 18:45:34.534364 4721 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/70686e42-b434-4ff9-9753-cfc870beef82-host-run-netns\") on node \"crc\" DevicePath \"\"" Jan 28 18:45:34 crc kubenswrapper[4721]: I0128 18:45:34.534377 4721 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/70686e42-b434-4ff9-9753-cfc870beef82-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 28 18:45:34 crc kubenswrapper[4721]: I0128 18:45:34.534714 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/70686e42-b434-4ff9-9753-cfc870beef82-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "70686e42-b434-4ff9-9753-cfc870beef82" (UID: "70686e42-b434-4ff9-9753-cfc870beef82"). InnerVolumeSpecName "ovnkube-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:45:34 crc kubenswrapper[4721]: I0128 18:45:34.544816 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/70686e42-b434-4ff9-9753-cfc870beef82-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "70686e42-b434-4ff9-9753-cfc870beef82" (UID: "70686e42-b434-4ff9-9753-cfc870beef82"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:45:34 crc kubenswrapper[4721]: I0128 18:45:34.545826 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/70686e42-b434-4ff9-9753-cfc870beef82-kube-api-access-7lkbp" (OuterVolumeSpecName: "kube-api-access-7lkbp") pod "70686e42-b434-4ff9-9753-cfc870beef82" (UID: "70686e42-b434-4ff9-9753-cfc870beef82"). InnerVolumeSpecName "kube-api-access-7lkbp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:45:34 crc kubenswrapper[4721]: I0128 18:45:34.556630 4721 scope.go:117] "RemoveContainer" containerID="373ab38644697d3dbfe782ed66982ac1d5510d0a3bc39072c8113f71cc9e97f2" Jan 28 18:45:34 crc kubenswrapper[4721]: I0128 18:45:34.565315 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/70686e42-b434-4ff9-9753-cfc870beef82-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "70686e42-b434-4ff9-9753-cfc870beef82" (UID: "70686e42-b434-4ff9-9753-cfc870beef82"). InnerVolumeSpecName "run-systemd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 18:45:34 crc kubenswrapper[4721]: I0128 18:45:34.585434 4721 scope.go:117] "RemoveContainer" containerID="b2cc9ef8160c40e72ae736136ef051b06552caea2539fad7969c5f923eba0c2c" Jan 28 18:45:34 crc kubenswrapper[4721]: I0128 18:45:34.611002 4721 scope.go:117] "RemoveContainer" containerID="7df50ebf5999c8b004f27f801fc166377523c7b42171fec957ab82c88b529794" Jan 28 18:45:34 crc kubenswrapper[4721]: I0128 18:45:34.632495 4721 scope.go:117] "RemoveContainer" containerID="989948e16979a2cd8568ed93334056da9feb5bd7226f4124428dcffc09f13d41" Jan 28 18:45:34 crc kubenswrapper[4721]: I0128 18:45:34.636013 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/80833028-6365-4e88-80dc-98b4bcd4dbe6-host-cni-bin\") pod \"ovnkube-node-vhgbz\" (UID: \"80833028-6365-4e88-80dc-98b4bcd4dbe6\") " pod="openshift-ovn-kubernetes/ovnkube-node-vhgbz" Jan 28 18:45:34 crc kubenswrapper[4721]: I0128 18:45:34.636061 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/80833028-6365-4e88-80dc-98b4bcd4dbe6-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-vhgbz\" (UID: \"80833028-6365-4e88-80dc-98b4bcd4dbe6\") " pod="openshift-ovn-kubernetes/ovnkube-node-vhgbz" Jan 28 18:45:34 crc kubenswrapper[4721]: I0128 18:45:34.636099 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/80833028-6365-4e88-80dc-98b4bcd4dbe6-run-openvswitch\") pod \"ovnkube-node-vhgbz\" (UID: \"80833028-6365-4e88-80dc-98b4bcd4dbe6\") " pod="openshift-ovn-kubernetes/ovnkube-node-vhgbz" Jan 28 18:45:34 crc kubenswrapper[4721]: I0128 18:45:34.636121 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: 
\"kubernetes.io/host-path/80833028-6365-4e88-80dc-98b4bcd4dbe6-log-socket\") pod \"ovnkube-node-vhgbz\" (UID: \"80833028-6365-4e88-80dc-98b4bcd4dbe6\") " pod="openshift-ovn-kubernetes/ovnkube-node-vhgbz" Jan 28 18:45:34 crc kubenswrapper[4721]: I0128 18:45:34.636150 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/80833028-6365-4e88-80dc-98b4bcd4dbe6-host-run-netns\") pod \"ovnkube-node-vhgbz\" (UID: \"80833028-6365-4e88-80dc-98b4bcd4dbe6\") " pod="openshift-ovn-kubernetes/ovnkube-node-vhgbz" Jan 28 18:45:34 crc kubenswrapper[4721]: I0128 18:45:34.636206 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/80833028-6365-4e88-80dc-98b4bcd4dbe6-host-kubelet\") pod \"ovnkube-node-vhgbz\" (UID: \"80833028-6365-4e88-80dc-98b4bcd4dbe6\") " pod="openshift-ovn-kubernetes/ovnkube-node-vhgbz" Jan 28 18:45:34 crc kubenswrapper[4721]: I0128 18:45:34.636211 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/80833028-6365-4e88-80dc-98b4bcd4dbe6-host-cni-bin\") pod \"ovnkube-node-vhgbz\" (UID: \"80833028-6365-4e88-80dc-98b4bcd4dbe6\") " pod="openshift-ovn-kubernetes/ovnkube-node-vhgbz" Jan 28 18:45:34 crc kubenswrapper[4721]: I0128 18:45:34.636246 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/80833028-6365-4e88-80dc-98b4bcd4dbe6-ovn-node-metrics-cert\") pod \"ovnkube-node-vhgbz\" (UID: \"80833028-6365-4e88-80dc-98b4bcd4dbe6\") " pod="openshift-ovn-kubernetes/ovnkube-node-vhgbz" Jan 28 18:45:34 crc kubenswrapper[4721]: I0128 18:45:34.636293 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/80833028-6365-4e88-80dc-98b4bcd4dbe6-log-socket\") pod \"ovnkube-node-vhgbz\" (UID: \"80833028-6365-4e88-80dc-98b4bcd4dbe6\") " pod="openshift-ovn-kubernetes/ovnkube-node-vhgbz" Jan 28 18:45:34 crc kubenswrapper[4721]: I0128 18:45:34.636307 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/80833028-6365-4e88-80dc-98b4bcd4dbe6-var-lib-openvswitch\") pod \"ovnkube-node-vhgbz\" (UID: \"80833028-6365-4e88-80dc-98b4bcd4dbe6\") " pod="openshift-ovn-kubernetes/ovnkube-node-vhgbz" Jan 28 18:45:34 crc kubenswrapper[4721]: I0128 18:45:34.636363 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/80833028-6365-4e88-80dc-98b4bcd4dbe6-var-lib-openvswitch\") pod \"ovnkube-node-vhgbz\" (UID: \"80833028-6365-4e88-80dc-98b4bcd4dbe6\") " pod="openshift-ovn-kubernetes/ovnkube-node-vhgbz" Jan 28 18:45:34 crc kubenswrapper[4721]: I0128 18:45:34.636389 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/80833028-6365-4e88-80dc-98b4bcd4dbe6-host-cni-netd\") pod \"ovnkube-node-vhgbz\" (UID: \"80833028-6365-4e88-80dc-98b4bcd4dbe6\") " pod="openshift-ovn-kubernetes/ovnkube-node-vhgbz" Jan 28 18:45:34 crc kubenswrapper[4721]: I0128 18:45:34.636400 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/80833028-6365-4e88-80dc-98b4bcd4dbe6-host-run-netns\") pod \"ovnkube-node-vhgbz\" (UID: 
\"80833028-6365-4e88-80dc-98b4bcd4dbe6\") " pod="openshift-ovn-kubernetes/ovnkube-node-vhgbz" Jan 28 18:45:34 crc kubenswrapper[4721]: I0128 18:45:34.636430 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/80833028-6365-4e88-80dc-98b4bcd4dbe6-host-kubelet\") pod \"ovnkube-node-vhgbz\" (UID: \"80833028-6365-4e88-80dc-98b4bcd4dbe6\") " pod="openshift-ovn-kubernetes/ovnkube-node-vhgbz" Jan 28 18:45:34 crc kubenswrapper[4721]: I0128 18:45:34.636458 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/80833028-6365-4e88-80dc-98b4bcd4dbe6-ovnkube-config\") pod \"ovnkube-node-vhgbz\" (UID: \"80833028-6365-4e88-80dc-98b4bcd4dbe6\") " pod="openshift-ovn-kubernetes/ovnkube-node-vhgbz" Jan 28 18:45:34 crc kubenswrapper[4721]: I0128 18:45:34.636532 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/80833028-6365-4e88-80dc-98b4bcd4dbe6-run-systemd\") pod \"ovnkube-node-vhgbz\" (UID: \"80833028-6365-4e88-80dc-98b4bcd4dbe6\") " pod="openshift-ovn-kubernetes/ovnkube-node-vhgbz" Jan 28 18:45:34 crc kubenswrapper[4721]: I0128 18:45:34.636571 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/80833028-6365-4e88-80dc-98b4bcd4dbe6-env-overrides\") pod \"ovnkube-node-vhgbz\" (UID: \"80833028-6365-4e88-80dc-98b4bcd4dbe6\") " pod="openshift-ovn-kubernetes/ovnkube-node-vhgbz" Jan 28 18:45:34 crc kubenswrapper[4721]: I0128 18:45:34.636707 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/80833028-6365-4e88-80dc-98b4bcd4dbe6-systemd-units\") pod \"ovnkube-node-vhgbz\" (UID: \"80833028-6365-4e88-80dc-98b4bcd4dbe6\") " pod="openshift-ovn-kubernetes/ovnkube-node-vhgbz" Jan 28 18:45:34 crc kubenswrapper[4721]: I0128 18:45:34.636742 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-95spk\" (UniqueName: \"kubernetes.io/projected/80833028-6365-4e88-80dc-98b4bcd4dbe6-kube-api-access-95spk\") pod \"ovnkube-node-vhgbz\" (UID: \"80833028-6365-4e88-80dc-98b4bcd4dbe6\") " pod="openshift-ovn-kubernetes/ovnkube-node-vhgbz" Jan 28 18:45:34 crc kubenswrapper[4721]: I0128 18:45:34.636782 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/80833028-6365-4e88-80dc-98b4bcd4dbe6-node-log\") pod \"ovnkube-node-vhgbz\" (UID: \"80833028-6365-4e88-80dc-98b4bcd4dbe6\") " pod="openshift-ovn-kubernetes/ovnkube-node-vhgbz" Jan 28 18:45:34 crc kubenswrapper[4721]: I0128 18:45:34.636823 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/80833028-6365-4e88-80dc-98b4bcd4dbe6-host-run-ovn-kubernetes\") pod \"ovnkube-node-vhgbz\" (UID: \"80833028-6365-4e88-80dc-98b4bcd4dbe6\") " pod="openshift-ovn-kubernetes/ovnkube-node-vhgbz" Jan 28 18:45:34 crc kubenswrapper[4721]: I0128 18:45:34.636848 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/80833028-6365-4e88-80dc-98b4bcd4dbe6-ovnkube-script-lib\") pod \"ovnkube-node-vhgbz\" (UID: \"80833028-6365-4e88-80dc-98b4bcd4dbe6\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-vhgbz" Jan 28 18:45:34 crc kubenswrapper[4721]: I0128 18:45:34.636918 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/80833028-6365-4e88-80dc-98b4bcd4dbe6-host-slash\") pod \"ovnkube-node-vhgbz\" (UID: \"80833028-6365-4e88-80dc-98b4bcd4dbe6\") " pod="openshift-ovn-kubernetes/ovnkube-node-vhgbz" Jan 28 18:45:34 crc kubenswrapper[4721]: I0128 18:45:34.636989 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/80833028-6365-4e88-80dc-98b4bcd4dbe6-etc-openvswitch\") pod \"ovnkube-node-vhgbz\" (UID: \"80833028-6365-4e88-80dc-98b4bcd4dbe6\") " pod="openshift-ovn-kubernetes/ovnkube-node-vhgbz" Jan 28 18:45:34 crc kubenswrapper[4721]: I0128 18:45:34.637025 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/80833028-6365-4e88-80dc-98b4bcd4dbe6-run-ovn\") pod \"ovnkube-node-vhgbz\" (UID: \"80833028-6365-4e88-80dc-98b4bcd4dbe6\") " pod="openshift-ovn-kubernetes/ovnkube-node-vhgbz" Jan 28 18:45:34 crc kubenswrapper[4721]: I0128 18:45:34.637145 4721 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/70686e42-b434-4ff9-9753-cfc870beef82-host-cni-bin\") on node \"crc\" DevicePath \"\"" Jan 28 18:45:34 crc kubenswrapper[4721]: I0128 18:45:34.637188 4721 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/70686e42-b434-4ff9-9753-cfc870beef82-host-kubelet\") on node \"crc\" DevicePath \"\"" Jan 28 18:45:34 crc kubenswrapper[4721]: I0128 18:45:34.637203 4721 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/70686e42-b434-4ff9-9753-cfc870beef82-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 28 18:45:34 crc kubenswrapper[4721]: I0128 18:45:34.637218 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7lkbp\" (UniqueName: \"kubernetes.io/projected/70686e42-b434-4ff9-9753-cfc870beef82-kube-api-access-7lkbp\") on node \"crc\" DevicePath \"\"" Jan 28 18:45:34 crc kubenswrapper[4721]: I0128 18:45:34.637235 4721 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/70686e42-b434-4ff9-9753-cfc870beef82-run-systemd\") on node \"crc\" DevicePath \"\"" Jan 28 18:45:34 crc kubenswrapper[4721]: I0128 18:45:34.637251 4721 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/70686e42-b434-4ff9-9753-cfc870beef82-host-slash\") on node \"crc\" DevicePath \"\"" Jan 28 18:45:34 crc kubenswrapper[4721]: I0128 18:45:34.637263 4721 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/70686e42-b434-4ff9-9753-cfc870beef82-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 28 18:45:34 crc kubenswrapper[4721]: I0128 18:45:34.637276 4721 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/70686e42-b434-4ff9-9753-cfc870beef82-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Jan 28 18:45:34 crc kubenswrapper[4721]: I0128 18:45:34.637289 4721 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: 
\"kubernetes.io/host-path/70686e42-b434-4ff9-9753-cfc870beef82-log-socket\") on node \"crc\" DevicePath \"\"" Jan 28 18:45:34 crc kubenswrapper[4721]: I0128 18:45:34.637303 4721 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/70686e42-b434-4ff9-9753-cfc870beef82-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Jan 28 18:45:34 crc kubenswrapper[4721]: I0128 18:45:34.637319 4721 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/70686e42-b434-4ff9-9753-cfc870beef82-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 28 18:45:34 crc kubenswrapper[4721]: I0128 18:45:34.637333 4721 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/70686e42-b434-4ff9-9753-cfc870beef82-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 28 18:45:34 crc kubenswrapper[4721]: I0128 18:45:34.637371 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/80833028-6365-4e88-80dc-98b4bcd4dbe6-run-ovn\") pod \"ovnkube-node-vhgbz\" (UID: \"80833028-6365-4e88-80dc-98b4bcd4dbe6\") " pod="openshift-ovn-kubernetes/ovnkube-node-vhgbz" Jan 28 18:45:34 crc kubenswrapper[4721]: I0128 18:45:34.637414 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/80833028-6365-4e88-80dc-98b4bcd4dbe6-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-vhgbz\" (UID: \"80833028-6365-4e88-80dc-98b4bcd4dbe6\") " pod="openshift-ovn-kubernetes/ovnkube-node-vhgbz" Jan 28 18:45:34 crc kubenswrapper[4721]: I0128 18:45:34.637447 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/80833028-6365-4e88-80dc-98b4bcd4dbe6-run-openvswitch\") pod \"ovnkube-node-vhgbz\" (UID: \"80833028-6365-4e88-80dc-98b4bcd4dbe6\") " pod="openshift-ovn-kubernetes/ovnkube-node-vhgbz" Jan 28 18:45:34 crc kubenswrapper[4721]: I0128 18:45:34.637476 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/80833028-6365-4e88-80dc-98b4bcd4dbe6-host-cni-netd\") pod \"ovnkube-node-vhgbz\" (UID: \"80833028-6365-4e88-80dc-98b4bcd4dbe6\") " pod="openshift-ovn-kubernetes/ovnkube-node-vhgbz" Jan 28 18:45:34 crc kubenswrapper[4721]: I0128 18:45:34.637510 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/80833028-6365-4e88-80dc-98b4bcd4dbe6-node-log\") pod \"ovnkube-node-vhgbz\" (UID: \"80833028-6365-4e88-80dc-98b4bcd4dbe6\") " pod="openshift-ovn-kubernetes/ovnkube-node-vhgbz" Jan 28 18:45:34 crc kubenswrapper[4721]: I0128 18:45:34.637540 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/80833028-6365-4e88-80dc-98b4bcd4dbe6-host-run-ovn-kubernetes\") pod \"ovnkube-node-vhgbz\" (UID: \"80833028-6365-4e88-80dc-98b4bcd4dbe6\") " pod="openshift-ovn-kubernetes/ovnkube-node-vhgbz" Jan 28 18:45:34 crc kubenswrapper[4721]: I0128 18:45:34.638483 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/80833028-6365-4e88-80dc-98b4bcd4dbe6-etc-openvswitch\") pod \"ovnkube-node-vhgbz\" (UID: 
\"80833028-6365-4e88-80dc-98b4bcd4dbe6\") " pod="openshift-ovn-kubernetes/ovnkube-node-vhgbz" Jan 28 18:45:34 crc kubenswrapper[4721]: I0128 18:45:34.638497 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/80833028-6365-4e88-80dc-98b4bcd4dbe6-host-slash\") pod \"ovnkube-node-vhgbz\" (UID: \"80833028-6365-4e88-80dc-98b4bcd4dbe6\") " pod="openshift-ovn-kubernetes/ovnkube-node-vhgbz" Jan 28 18:45:34 crc kubenswrapper[4721]: I0128 18:45:34.638542 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/80833028-6365-4e88-80dc-98b4bcd4dbe6-ovnkube-config\") pod \"ovnkube-node-vhgbz\" (UID: \"80833028-6365-4e88-80dc-98b4bcd4dbe6\") " pod="openshift-ovn-kubernetes/ovnkube-node-vhgbz" Jan 28 18:45:34 crc kubenswrapper[4721]: I0128 18:45:34.638628 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/80833028-6365-4e88-80dc-98b4bcd4dbe6-systemd-units\") pod \"ovnkube-node-vhgbz\" (UID: \"80833028-6365-4e88-80dc-98b4bcd4dbe6\") " pod="openshift-ovn-kubernetes/ovnkube-node-vhgbz" Jan 28 18:45:34 crc kubenswrapper[4721]: I0128 18:45:34.638679 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/80833028-6365-4e88-80dc-98b4bcd4dbe6-run-systemd\") pod \"ovnkube-node-vhgbz\" (UID: \"80833028-6365-4e88-80dc-98b4bcd4dbe6\") " pod="openshift-ovn-kubernetes/ovnkube-node-vhgbz" Jan 28 18:45:34 crc kubenswrapper[4721]: I0128 18:45:34.639121 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/80833028-6365-4e88-80dc-98b4bcd4dbe6-env-overrides\") pod \"ovnkube-node-vhgbz\" (UID: \"80833028-6365-4e88-80dc-98b4bcd4dbe6\") " pod="openshift-ovn-kubernetes/ovnkube-node-vhgbz" Jan 28 18:45:34 crc kubenswrapper[4721]: I0128 18:45:34.639403 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/80833028-6365-4e88-80dc-98b4bcd4dbe6-ovnkube-script-lib\") pod \"ovnkube-node-vhgbz\" (UID: \"80833028-6365-4e88-80dc-98b4bcd4dbe6\") " pod="openshift-ovn-kubernetes/ovnkube-node-vhgbz" Jan 28 18:45:34 crc kubenswrapper[4721]: I0128 18:45:34.644110 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/80833028-6365-4e88-80dc-98b4bcd4dbe6-ovn-node-metrics-cert\") pod \"ovnkube-node-vhgbz\" (UID: \"80833028-6365-4e88-80dc-98b4bcd4dbe6\") " pod="openshift-ovn-kubernetes/ovnkube-node-vhgbz" Jan 28 18:45:34 crc kubenswrapper[4721]: I0128 18:45:34.671345 4721 scope.go:117] "RemoveContainer" containerID="9ac8a8ea8e677c233fbcb1ac69446f0251b2e390e84a39cd77c7ccf1a2041908" Jan 28 18:45:34 crc kubenswrapper[4721]: I0128 18:45:34.671888 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-95spk\" (UniqueName: \"kubernetes.io/projected/80833028-6365-4e88-80dc-98b4bcd4dbe6-kube-api-access-95spk\") pod \"ovnkube-node-vhgbz\" (UID: \"80833028-6365-4e88-80dc-98b4bcd4dbe6\") " pod="openshift-ovn-kubernetes/ovnkube-node-vhgbz" Jan 28 18:45:34 crc kubenswrapper[4721]: I0128 18:45:34.694278 4721 scope.go:117] "RemoveContainer" containerID="11610a0739792fa0417c894ad5b1c0d46d7188b585f8f4c1a1e7e66cd6d8bd46" Jan 28 18:45:34 crc kubenswrapper[4721]: I0128 18:45:34.714448 4721 
scope.go:117] "RemoveContainer" containerID="44279a74a34d26b6eee05ac26d51a21492dddab4288ca5e09665c191cbacd90c" Jan 28 18:45:34 crc kubenswrapper[4721]: I0128 18:45:34.733345 4721 scope.go:117] "RemoveContainer" containerID="733c81890010402d252a1945284a85a3278b603ede433530865f680a1be02477" Jan 28 18:45:34 crc kubenswrapper[4721]: I0128 18:45:34.739619 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-vhgbz" Jan 28 18:45:34 crc kubenswrapper[4721]: I0128 18:45:34.775634 4721 scope.go:117] "RemoveContainer" containerID="7a26f6b8dd079402f04c84abf939388f7afb0bf78f0495cd4dfb46df9a301b16" Jan 28 18:45:34 crc kubenswrapper[4721]: E0128 18:45:34.776131 4721 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7a26f6b8dd079402f04c84abf939388f7afb0bf78f0495cd4dfb46df9a301b16\": container with ID starting with 7a26f6b8dd079402f04c84abf939388f7afb0bf78f0495cd4dfb46df9a301b16 not found: ID does not exist" containerID="7a26f6b8dd079402f04c84abf939388f7afb0bf78f0495cd4dfb46df9a301b16" Jan 28 18:45:34 crc kubenswrapper[4721]: I0128 18:45:34.776210 4721 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7a26f6b8dd079402f04c84abf939388f7afb0bf78f0495cd4dfb46df9a301b16"} err="failed to get container status \"7a26f6b8dd079402f04c84abf939388f7afb0bf78f0495cd4dfb46df9a301b16\": rpc error: code = NotFound desc = could not find container \"7a26f6b8dd079402f04c84abf939388f7afb0bf78f0495cd4dfb46df9a301b16\": container with ID starting with 7a26f6b8dd079402f04c84abf939388f7afb0bf78f0495cd4dfb46df9a301b16 not found: ID does not exist" Jan 28 18:45:34 crc kubenswrapper[4721]: I0128 18:45:34.776254 4721 scope.go:117] "RemoveContainer" containerID="693b094ac66f7858f7020708df804e9e12fa5a8c510841171e65ac69cb6a0e11" Jan 28 18:45:34 crc kubenswrapper[4721]: E0128 18:45:34.777087 4721 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"693b094ac66f7858f7020708df804e9e12fa5a8c510841171e65ac69cb6a0e11\": container with ID starting with 693b094ac66f7858f7020708df804e9e12fa5a8c510841171e65ac69cb6a0e11 not found: ID does not exist" containerID="693b094ac66f7858f7020708df804e9e12fa5a8c510841171e65ac69cb6a0e11" Jan 28 18:45:34 crc kubenswrapper[4721]: I0128 18:45:34.777117 4721 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"693b094ac66f7858f7020708df804e9e12fa5a8c510841171e65ac69cb6a0e11"} err="failed to get container status \"693b094ac66f7858f7020708df804e9e12fa5a8c510841171e65ac69cb6a0e11\": rpc error: code = NotFound desc = could not find container \"693b094ac66f7858f7020708df804e9e12fa5a8c510841171e65ac69cb6a0e11\": container with ID starting with 693b094ac66f7858f7020708df804e9e12fa5a8c510841171e65ac69cb6a0e11 not found: ID does not exist" Jan 28 18:45:34 crc kubenswrapper[4721]: I0128 18:45:34.777142 4721 scope.go:117] "RemoveContainer" containerID="373ab38644697d3dbfe782ed66982ac1d5510d0a3bc39072c8113f71cc9e97f2" Jan 28 18:45:34 crc kubenswrapper[4721]: E0128 18:45:34.777412 4721 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"373ab38644697d3dbfe782ed66982ac1d5510d0a3bc39072c8113f71cc9e97f2\": container with ID starting with 373ab38644697d3dbfe782ed66982ac1d5510d0a3bc39072c8113f71cc9e97f2 not found: ID does not exist" 
containerID="373ab38644697d3dbfe782ed66982ac1d5510d0a3bc39072c8113f71cc9e97f2" Jan 28 18:45:34 crc kubenswrapper[4721]: I0128 18:45:34.777458 4721 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"373ab38644697d3dbfe782ed66982ac1d5510d0a3bc39072c8113f71cc9e97f2"} err="failed to get container status \"373ab38644697d3dbfe782ed66982ac1d5510d0a3bc39072c8113f71cc9e97f2\": rpc error: code = NotFound desc = could not find container \"373ab38644697d3dbfe782ed66982ac1d5510d0a3bc39072c8113f71cc9e97f2\": container with ID starting with 373ab38644697d3dbfe782ed66982ac1d5510d0a3bc39072c8113f71cc9e97f2 not found: ID does not exist" Jan 28 18:45:34 crc kubenswrapper[4721]: I0128 18:45:34.777478 4721 scope.go:117] "RemoveContainer" containerID="b2cc9ef8160c40e72ae736136ef051b06552caea2539fad7969c5f923eba0c2c" Jan 28 18:45:34 crc kubenswrapper[4721]: E0128 18:45:34.777742 4721 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b2cc9ef8160c40e72ae736136ef051b06552caea2539fad7969c5f923eba0c2c\": container with ID starting with b2cc9ef8160c40e72ae736136ef051b06552caea2539fad7969c5f923eba0c2c not found: ID does not exist" containerID="b2cc9ef8160c40e72ae736136ef051b06552caea2539fad7969c5f923eba0c2c" Jan 28 18:45:34 crc kubenswrapper[4721]: I0128 18:45:34.777780 4721 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b2cc9ef8160c40e72ae736136ef051b06552caea2539fad7969c5f923eba0c2c"} err="failed to get container status \"b2cc9ef8160c40e72ae736136ef051b06552caea2539fad7969c5f923eba0c2c\": rpc error: code = NotFound desc = could not find container \"b2cc9ef8160c40e72ae736136ef051b06552caea2539fad7969c5f923eba0c2c\": container with ID starting with b2cc9ef8160c40e72ae736136ef051b06552caea2539fad7969c5f923eba0c2c not found: ID does not exist" Jan 28 18:45:34 crc kubenswrapper[4721]: I0128 18:45:34.777808 4721 scope.go:117] "RemoveContainer" containerID="7df50ebf5999c8b004f27f801fc166377523c7b42171fec957ab82c88b529794" Jan 28 18:45:34 crc kubenswrapper[4721]: E0128 18:45:34.778059 4721 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7df50ebf5999c8b004f27f801fc166377523c7b42171fec957ab82c88b529794\": container with ID starting with 7df50ebf5999c8b004f27f801fc166377523c7b42171fec957ab82c88b529794 not found: ID does not exist" containerID="7df50ebf5999c8b004f27f801fc166377523c7b42171fec957ab82c88b529794" Jan 28 18:45:34 crc kubenswrapper[4721]: I0128 18:45:34.778141 4721 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7df50ebf5999c8b004f27f801fc166377523c7b42171fec957ab82c88b529794"} err="failed to get container status \"7df50ebf5999c8b004f27f801fc166377523c7b42171fec957ab82c88b529794\": rpc error: code = NotFound desc = could not find container \"7df50ebf5999c8b004f27f801fc166377523c7b42171fec957ab82c88b529794\": container with ID starting with 7df50ebf5999c8b004f27f801fc166377523c7b42171fec957ab82c88b529794 not found: ID does not exist" Jan 28 18:45:34 crc kubenswrapper[4721]: I0128 18:45:34.778159 4721 scope.go:117] "RemoveContainer" containerID="989948e16979a2cd8568ed93334056da9feb5bd7226f4124428dcffc09f13d41" Jan 28 18:45:34 crc kubenswrapper[4721]: E0128 18:45:34.779333 4721 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"989948e16979a2cd8568ed93334056da9feb5bd7226f4124428dcffc09f13d41\": container with ID starting with 989948e16979a2cd8568ed93334056da9feb5bd7226f4124428dcffc09f13d41 not found: ID does not exist" containerID="989948e16979a2cd8568ed93334056da9feb5bd7226f4124428dcffc09f13d41" Jan 28 18:45:34 crc kubenswrapper[4721]: I0128 18:45:34.779393 4721 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"989948e16979a2cd8568ed93334056da9feb5bd7226f4124428dcffc09f13d41"} err="failed to get container status \"989948e16979a2cd8568ed93334056da9feb5bd7226f4124428dcffc09f13d41\": rpc error: code = NotFound desc = could not find container \"989948e16979a2cd8568ed93334056da9feb5bd7226f4124428dcffc09f13d41\": container with ID starting with 989948e16979a2cd8568ed93334056da9feb5bd7226f4124428dcffc09f13d41 not found: ID does not exist" Jan 28 18:45:34 crc kubenswrapper[4721]: I0128 18:45:34.779415 4721 scope.go:117] "RemoveContainer" containerID="9ac8a8ea8e677c233fbcb1ac69446f0251b2e390e84a39cd77c7ccf1a2041908" Jan 28 18:45:34 crc kubenswrapper[4721]: E0128 18:45:34.779692 4721 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9ac8a8ea8e677c233fbcb1ac69446f0251b2e390e84a39cd77c7ccf1a2041908\": container with ID starting with 9ac8a8ea8e677c233fbcb1ac69446f0251b2e390e84a39cd77c7ccf1a2041908 not found: ID does not exist" containerID="9ac8a8ea8e677c233fbcb1ac69446f0251b2e390e84a39cd77c7ccf1a2041908" Jan 28 18:45:34 crc kubenswrapper[4721]: I0128 18:45:34.779712 4721 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9ac8a8ea8e677c233fbcb1ac69446f0251b2e390e84a39cd77c7ccf1a2041908"} err="failed to get container status \"9ac8a8ea8e677c233fbcb1ac69446f0251b2e390e84a39cd77c7ccf1a2041908\": rpc error: code = NotFound desc = could not find container \"9ac8a8ea8e677c233fbcb1ac69446f0251b2e390e84a39cd77c7ccf1a2041908\": container with ID starting with 9ac8a8ea8e677c233fbcb1ac69446f0251b2e390e84a39cd77c7ccf1a2041908 not found: ID does not exist" Jan 28 18:45:34 crc kubenswrapper[4721]: I0128 18:45:34.779728 4721 scope.go:117] "RemoveContainer" containerID="11610a0739792fa0417c894ad5b1c0d46d7188b585f8f4c1a1e7e66cd6d8bd46" Jan 28 18:45:34 crc kubenswrapper[4721]: E0128 18:45:34.779952 4721 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"11610a0739792fa0417c894ad5b1c0d46d7188b585f8f4c1a1e7e66cd6d8bd46\": container with ID starting with 11610a0739792fa0417c894ad5b1c0d46d7188b585f8f4c1a1e7e66cd6d8bd46 not found: ID does not exist" containerID="11610a0739792fa0417c894ad5b1c0d46d7188b585f8f4c1a1e7e66cd6d8bd46" Jan 28 18:45:34 crc kubenswrapper[4721]: I0128 18:45:34.779976 4721 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"11610a0739792fa0417c894ad5b1c0d46d7188b585f8f4c1a1e7e66cd6d8bd46"} err="failed to get container status \"11610a0739792fa0417c894ad5b1c0d46d7188b585f8f4c1a1e7e66cd6d8bd46\": rpc error: code = NotFound desc = could not find container \"11610a0739792fa0417c894ad5b1c0d46d7188b585f8f4c1a1e7e66cd6d8bd46\": container with ID starting with 11610a0739792fa0417c894ad5b1c0d46d7188b585f8f4c1a1e7e66cd6d8bd46 not found: ID does not exist" Jan 28 18:45:34 crc kubenswrapper[4721]: I0128 18:45:34.780000 4721 scope.go:117] "RemoveContainer" containerID="44279a74a34d26b6eee05ac26d51a21492dddab4288ca5e09665c191cbacd90c" Jan 28 18:45:34 crc 
kubenswrapper[4721]: E0128 18:45:34.780220 4721 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"44279a74a34d26b6eee05ac26d51a21492dddab4288ca5e09665c191cbacd90c\": container with ID starting with 44279a74a34d26b6eee05ac26d51a21492dddab4288ca5e09665c191cbacd90c not found: ID does not exist" containerID="44279a74a34d26b6eee05ac26d51a21492dddab4288ca5e09665c191cbacd90c" Jan 28 18:45:34 crc kubenswrapper[4721]: I0128 18:45:34.780245 4721 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"44279a74a34d26b6eee05ac26d51a21492dddab4288ca5e09665c191cbacd90c"} err="failed to get container status \"44279a74a34d26b6eee05ac26d51a21492dddab4288ca5e09665c191cbacd90c\": rpc error: code = NotFound desc = could not find container \"44279a74a34d26b6eee05ac26d51a21492dddab4288ca5e09665c191cbacd90c\": container with ID starting with 44279a74a34d26b6eee05ac26d51a21492dddab4288ca5e09665c191cbacd90c not found: ID does not exist" Jan 28 18:45:34 crc kubenswrapper[4721]: I0128 18:45:34.780262 4721 scope.go:117] "RemoveContainer" containerID="733c81890010402d252a1945284a85a3278b603ede433530865f680a1be02477" Jan 28 18:45:34 crc kubenswrapper[4721]: E0128 18:45:34.784050 4721 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"733c81890010402d252a1945284a85a3278b603ede433530865f680a1be02477\": container with ID starting with 733c81890010402d252a1945284a85a3278b603ede433530865f680a1be02477 not found: ID does not exist" containerID="733c81890010402d252a1945284a85a3278b603ede433530865f680a1be02477" Jan 28 18:45:34 crc kubenswrapper[4721]: I0128 18:45:34.784087 4721 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"733c81890010402d252a1945284a85a3278b603ede433530865f680a1be02477"} err="failed to get container status \"733c81890010402d252a1945284a85a3278b603ede433530865f680a1be02477\": rpc error: code = NotFound desc = could not find container \"733c81890010402d252a1945284a85a3278b603ede433530865f680a1be02477\": container with ID starting with 733c81890010402d252a1945284a85a3278b603ede433530865f680a1be02477 not found: ID does not exist" Jan 28 18:45:34 crc kubenswrapper[4721]: I0128 18:45:34.784110 4721 scope.go:117] "RemoveContainer" containerID="7a26f6b8dd079402f04c84abf939388f7afb0bf78f0495cd4dfb46df9a301b16" Jan 28 18:45:34 crc kubenswrapper[4721]: I0128 18:45:34.785336 4721 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7a26f6b8dd079402f04c84abf939388f7afb0bf78f0495cd4dfb46df9a301b16"} err="failed to get container status \"7a26f6b8dd079402f04c84abf939388f7afb0bf78f0495cd4dfb46df9a301b16\": rpc error: code = NotFound desc = could not find container \"7a26f6b8dd079402f04c84abf939388f7afb0bf78f0495cd4dfb46df9a301b16\": container with ID starting with 7a26f6b8dd079402f04c84abf939388f7afb0bf78f0495cd4dfb46df9a301b16 not found: ID does not exist" Jan 28 18:45:34 crc kubenswrapper[4721]: I0128 18:45:34.785363 4721 scope.go:117] "RemoveContainer" containerID="693b094ac66f7858f7020708df804e9e12fa5a8c510841171e65ac69cb6a0e11" Jan 28 18:45:34 crc kubenswrapper[4721]: I0128 18:45:34.785667 4721 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"693b094ac66f7858f7020708df804e9e12fa5a8c510841171e65ac69cb6a0e11"} err="failed to get container status 
\"693b094ac66f7858f7020708df804e9e12fa5a8c510841171e65ac69cb6a0e11\": rpc error: code = NotFound desc = could not find container \"693b094ac66f7858f7020708df804e9e12fa5a8c510841171e65ac69cb6a0e11\": container with ID starting with 693b094ac66f7858f7020708df804e9e12fa5a8c510841171e65ac69cb6a0e11 not found: ID does not exist" Jan 28 18:45:34 crc kubenswrapper[4721]: I0128 18:45:34.785712 4721 scope.go:117] "RemoveContainer" containerID="373ab38644697d3dbfe782ed66982ac1d5510d0a3bc39072c8113f71cc9e97f2" Jan 28 18:45:34 crc kubenswrapper[4721]: I0128 18:45:34.786050 4721 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"373ab38644697d3dbfe782ed66982ac1d5510d0a3bc39072c8113f71cc9e97f2"} err="failed to get container status \"373ab38644697d3dbfe782ed66982ac1d5510d0a3bc39072c8113f71cc9e97f2\": rpc error: code = NotFound desc = could not find container \"373ab38644697d3dbfe782ed66982ac1d5510d0a3bc39072c8113f71cc9e97f2\": container with ID starting with 373ab38644697d3dbfe782ed66982ac1d5510d0a3bc39072c8113f71cc9e97f2 not found: ID does not exist" Jan 28 18:45:34 crc kubenswrapper[4721]: I0128 18:45:34.786104 4721 scope.go:117] "RemoveContainer" containerID="b2cc9ef8160c40e72ae736136ef051b06552caea2539fad7969c5f923eba0c2c" Jan 28 18:45:34 crc kubenswrapper[4721]: I0128 18:45:34.786491 4721 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b2cc9ef8160c40e72ae736136ef051b06552caea2539fad7969c5f923eba0c2c"} err="failed to get container status \"b2cc9ef8160c40e72ae736136ef051b06552caea2539fad7969c5f923eba0c2c\": rpc error: code = NotFound desc = could not find container \"b2cc9ef8160c40e72ae736136ef051b06552caea2539fad7969c5f923eba0c2c\": container with ID starting with b2cc9ef8160c40e72ae736136ef051b06552caea2539fad7969c5f923eba0c2c not found: ID does not exist" Jan 28 18:45:34 crc kubenswrapper[4721]: I0128 18:45:34.786536 4721 scope.go:117] "RemoveContainer" containerID="7df50ebf5999c8b004f27f801fc166377523c7b42171fec957ab82c88b529794" Jan 28 18:45:34 crc kubenswrapper[4721]: I0128 18:45:34.786781 4721 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7df50ebf5999c8b004f27f801fc166377523c7b42171fec957ab82c88b529794"} err="failed to get container status \"7df50ebf5999c8b004f27f801fc166377523c7b42171fec957ab82c88b529794\": rpc error: code = NotFound desc = could not find container \"7df50ebf5999c8b004f27f801fc166377523c7b42171fec957ab82c88b529794\": container with ID starting with 7df50ebf5999c8b004f27f801fc166377523c7b42171fec957ab82c88b529794 not found: ID does not exist" Jan 28 18:45:34 crc kubenswrapper[4721]: I0128 18:45:34.786814 4721 scope.go:117] "RemoveContainer" containerID="989948e16979a2cd8568ed93334056da9feb5bd7226f4124428dcffc09f13d41" Jan 28 18:45:34 crc kubenswrapper[4721]: I0128 18:45:34.787094 4721 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"989948e16979a2cd8568ed93334056da9feb5bd7226f4124428dcffc09f13d41"} err="failed to get container status \"989948e16979a2cd8568ed93334056da9feb5bd7226f4124428dcffc09f13d41\": rpc error: code = NotFound desc = could not find container \"989948e16979a2cd8568ed93334056da9feb5bd7226f4124428dcffc09f13d41\": container with ID starting with 989948e16979a2cd8568ed93334056da9feb5bd7226f4124428dcffc09f13d41 not found: ID does not exist" Jan 28 18:45:34 crc kubenswrapper[4721]: I0128 18:45:34.787119 4721 scope.go:117] "RemoveContainer" 
containerID="9ac8a8ea8e677c233fbcb1ac69446f0251b2e390e84a39cd77c7ccf1a2041908" Jan 28 18:45:34 crc kubenswrapper[4721]: I0128 18:45:34.787387 4721 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9ac8a8ea8e677c233fbcb1ac69446f0251b2e390e84a39cd77c7ccf1a2041908"} err="failed to get container status \"9ac8a8ea8e677c233fbcb1ac69446f0251b2e390e84a39cd77c7ccf1a2041908\": rpc error: code = NotFound desc = could not find container \"9ac8a8ea8e677c233fbcb1ac69446f0251b2e390e84a39cd77c7ccf1a2041908\": container with ID starting with 9ac8a8ea8e677c233fbcb1ac69446f0251b2e390e84a39cd77c7ccf1a2041908 not found: ID does not exist" Jan 28 18:45:34 crc kubenswrapper[4721]: I0128 18:45:34.787412 4721 scope.go:117] "RemoveContainer" containerID="11610a0739792fa0417c894ad5b1c0d46d7188b585f8f4c1a1e7e66cd6d8bd46" Jan 28 18:45:34 crc kubenswrapper[4721]: I0128 18:45:34.787649 4721 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"11610a0739792fa0417c894ad5b1c0d46d7188b585f8f4c1a1e7e66cd6d8bd46"} err="failed to get container status \"11610a0739792fa0417c894ad5b1c0d46d7188b585f8f4c1a1e7e66cd6d8bd46\": rpc error: code = NotFound desc = could not find container \"11610a0739792fa0417c894ad5b1c0d46d7188b585f8f4c1a1e7e66cd6d8bd46\": container with ID starting with 11610a0739792fa0417c894ad5b1c0d46d7188b585f8f4c1a1e7e66cd6d8bd46 not found: ID does not exist" Jan 28 18:45:34 crc kubenswrapper[4721]: I0128 18:45:34.787677 4721 scope.go:117] "RemoveContainer" containerID="44279a74a34d26b6eee05ac26d51a21492dddab4288ca5e09665c191cbacd90c" Jan 28 18:45:34 crc kubenswrapper[4721]: I0128 18:45:34.787908 4721 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"44279a74a34d26b6eee05ac26d51a21492dddab4288ca5e09665c191cbacd90c"} err="failed to get container status \"44279a74a34d26b6eee05ac26d51a21492dddab4288ca5e09665c191cbacd90c\": rpc error: code = NotFound desc = could not find container \"44279a74a34d26b6eee05ac26d51a21492dddab4288ca5e09665c191cbacd90c\": container with ID starting with 44279a74a34d26b6eee05ac26d51a21492dddab4288ca5e09665c191cbacd90c not found: ID does not exist" Jan 28 18:45:34 crc kubenswrapper[4721]: I0128 18:45:34.787953 4721 scope.go:117] "RemoveContainer" containerID="733c81890010402d252a1945284a85a3278b603ede433530865f680a1be02477" Jan 28 18:45:34 crc kubenswrapper[4721]: I0128 18:45:34.788208 4721 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"733c81890010402d252a1945284a85a3278b603ede433530865f680a1be02477"} err="failed to get container status \"733c81890010402d252a1945284a85a3278b603ede433530865f680a1be02477\": rpc error: code = NotFound desc = could not find container \"733c81890010402d252a1945284a85a3278b603ede433530865f680a1be02477\": container with ID starting with 733c81890010402d252a1945284a85a3278b603ede433530865f680a1be02477 not found: ID does not exist" Jan 28 18:45:34 crc kubenswrapper[4721]: I0128 18:45:34.788230 4721 scope.go:117] "RemoveContainer" containerID="7a26f6b8dd079402f04c84abf939388f7afb0bf78f0495cd4dfb46df9a301b16" Jan 28 18:45:34 crc kubenswrapper[4721]: I0128 18:45:34.788432 4721 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7a26f6b8dd079402f04c84abf939388f7afb0bf78f0495cd4dfb46df9a301b16"} err="failed to get container status \"7a26f6b8dd079402f04c84abf939388f7afb0bf78f0495cd4dfb46df9a301b16\": rpc error: code = NotFound desc = could not find 
container \"7a26f6b8dd079402f04c84abf939388f7afb0bf78f0495cd4dfb46df9a301b16\": container with ID starting with 7a26f6b8dd079402f04c84abf939388f7afb0bf78f0495cd4dfb46df9a301b16 not found: ID does not exist" Jan 28 18:45:34 crc kubenswrapper[4721]: I0128 18:45:34.788481 4721 scope.go:117] "RemoveContainer" containerID="693b094ac66f7858f7020708df804e9e12fa5a8c510841171e65ac69cb6a0e11" Jan 28 18:45:34 crc kubenswrapper[4721]: I0128 18:45:34.788713 4721 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"693b094ac66f7858f7020708df804e9e12fa5a8c510841171e65ac69cb6a0e11"} err="failed to get container status \"693b094ac66f7858f7020708df804e9e12fa5a8c510841171e65ac69cb6a0e11\": rpc error: code = NotFound desc = could not find container \"693b094ac66f7858f7020708df804e9e12fa5a8c510841171e65ac69cb6a0e11\": container with ID starting with 693b094ac66f7858f7020708df804e9e12fa5a8c510841171e65ac69cb6a0e11 not found: ID does not exist" Jan 28 18:45:34 crc kubenswrapper[4721]: I0128 18:45:34.788744 4721 scope.go:117] "RemoveContainer" containerID="373ab38644697d3dbfe782ed66982ac1d5510d0a3bc39072c8113f71cc9e97f2" Jan 28 18:45:34 crc kubenswrapper[4721]: I0128 18:45:34.789094 4721 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"373ab38644697d3dbfe782ed66982ac1d5510d0a3bc39072c8113f71cc9e97f2"} err="failed to get container status \"373ab38644697d3dbfe782ed66982ac1d5510d0a3bc39072c8113f71cc9e97f2\": rpc error: code = NotFound desc = could not find container \"373ab38644697d3dbfe782ed66982ac1d5510d0a3bc39072c8113f71cc9e97f2\": container with ID starting with 373ab38644697d3dbfe782ed66982ac1d5510d0a3bc39072c8113f71cc9e97f2 not found: ID does not exist" Jan 28 18:45:34 crc kubenswrapper[4721]: I0128 18:45:34.789122 4721 scope.go:117] "RemoveContainer" containerID="b2cc9ef8160c40e72ae736136ef051b06552caea2539fad7969c5f923eba0c2c" Jan 28 18:45:34 crc kubenswrapper[4721]: I0128 18:45:34.789406 4721 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b2cc9ef8160c40e72ae736136ef051b06552caea2539fad7969c5f923eba0c2c"} err="failed to get container status \"b2cc9ef8160c40e72ae736136ef051b06552caea2539fad7969c5f923eba0c2c\": rpc error: code = NotFound desc = could not find container \"b2cc9ef8160c40e72ae736136ef051b06552caea2539fad7969c5f923eba0c2c\": container with ID starting with b2cc9ef8160c40e72ae736136ef051b06552caea2539fad7969c5f923eba0c2c not found: ID does not exist" Jan 28 18:45:34 crc kubenswrapper[4721]: I0128 18:45:34.789441 4721 scope.go:117] "RemoveContainer" containerID="7df50ebf5999c8b004f27f801fc166377523c7b42171fec957ab82c88b529794" Jan 28 18:45:34 crc kubenswrapper[4721]: I0128 18:45:34.789805 4721 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7df50ebf5999c8b004f27f801fc166377523c7b42171fec957ab82c88b529794"} err="failed to get container status \"7df50ebf5999c8b004f27f801fc166377523c7b42171fec957ab82c88b529794\": rpc error: code = NotFound desc = could not find container \"7df50ebf5999c8b004f27f801fc166377523c7b42171fec957ab82c88b529794\": container with ID starting with 7df50ebf5999c8b004f27f801fc166377523c7b42171fec957ab82c88b529794 not found: ID does not exist" Jan 28 18:45:34 crc kubenswrapper[4721]: I0128 18:45:34.789835 4721 scope.go:117] "RemoveContainer" containerID="989948e16979a2cd8568ed93334056da9feb5bd7226f4124428dcffc09f13d41" Jan 28 18:45:34 crc kubenswrapper[4721]: I0128 18:45:34.790348 4721 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"989948e16979a2cd8568ed93334056da9feb5bd7226f4124428dcffc09f13d41"} err="failed to get container status \"989948e16979a2cd8568ed93334056da9feb5bd7226f4124428dcffc09f13d41\": rpc error: code = NotFound desc = could not find container \"989948e16979a2cd8568ed93334056da9feb5bd7226f4124428dcffc09f13d41\": container with ID starting with 989948e16979a2cd8568ed93334056da9feb5bd7226f4124428dcffc09f13d41 not found: ID does not exist" Jan 28 18:45:34 crc kubenswrapper[4721]: I0128 18:45:34.790374 4721 scope.go:117] "RemoveContainer" containerID="9ac8a8ea8e677c233fbcb1ac69446f0251b2e390e84a39cd77c7ccf1a2041908" Jan 28 18:45:34 crc kubenswrapper[4721]: I0128 18:45:34.790678 4721 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9ac8a8ea8e677c233fbcb1ac69446f0251b2e390e84a39cd77c7ccf1a2041908"} err="failed to get container status \"9ac8a8ea8e677c233fbcb1ac69446f0251b2e390e84a39cd77c7ccf1a2041908\": rpc error: code = NotFound desc = could not find container \"9ac8a8ea8e677c233fbcb1ac69446f0251b2e390e84a39cd77c7ccf1a2041908\": container with ID starting with 9ac8a8ea8e677c233fbcb1ac69446f0251b2e390e84a39cd77c7ccf1a2041908 not found: ID does not exist" Jan 28 18:45:34 crc kubenswrapper[4721]: I0128 18:45:34.790704 4721 scope.go:117] "RemoveContainer" containerID="11610a0739792fa0417c894ad5b1c0d46d7188b585f8f4c1a1e7e66cd6d8bd46" Jan 28 18:45:34 crc kubenswrapper[4721]: I0128 18:45:34.790966 4721 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"11610a0739792fa0417c894ad5b1c0d46d7188b585f8f4c1a1e7e66cd6d8bd46"} err="failed to get container status \"11610a0739792fa0417c894ad5b1c0d46d7188b585f8f4c1a1e7e66cd6d8bd46\": rpc error: code = NotFound desc = could not find container \"11610a0739792fa0417c894ad5b1c0d46d7188b585f8f4c1a1e7e66cd6d8bd46\": container with ID starting with 11610a0739792fa0417c894ad5b1c0d46d7188b585f8f4c1a1e7e66cd6d8bd46 not found: ID does not exist" Jan 28 18:45:34 crc kubenswrapper[4721]: I0128 18:45:34.790992 4721 scope.go:117] "RemoveContainer" containerID="44279a74a34d26b6eee05ac26d51a21492dddab4288ca5e09665c191cbacd90c" Jan 28 18:45:34 crc kubenswrapper[4721]: I0128 18:45:34.791317 4721 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"44279a74a34d26b6eee05ac26d51a21492dddab4288ca5e09665c191cbacd90c"} err="failed to get container status \"44279a74a34d26b6eee05ac26d51a21492dddab4288ca5e09665c191cbacd90c\": rpc error: code = NotFound desc = could not find container \"44279a74a34d26b6eee05ac26d51a21492dddab4288ca5e09665c191cbacd90c\": container with ID starting with 44279a74a34d26b6eee05ac26d51a21492dddab4288ca5e09665c191cbacd90c not found: ID does not exist" Jan 28 18:45:34 crc kubenswrapper[4721]: I0128 18:45:34.791342 4721 scope.go:117] "RemoveContainer" containerID="733c81890010402d252a1945284a85a3278b603ede433530865f680a1be02477" Jan 28 18:45:34 crc kubenswrapper[4721]: I0128 18:45:34.797281 4721 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"733c81890010402d252a1945284a85a3278b603ede433530865f680a1be02477"} err="failed to get container status \"733c81890010402d252a1945284a85a3278b603ede433530865f680a1be02477\": rpc error: code = NotFound desc = could not find container \"733c81890010402d252a1945284a85a3278b603ede433530865f680a1be02477\": container with ID starting with 
733c81890010402d252a1945284a85a3278b603ede433530865f680a1be02477 not found: ID does not exist" Jan 28 18:45:34 crc kubenswrapper[4721]: I0128 18:45:34.797322 4721 scope.go:117] "RemoveContainer" containerID="7a26f6b8dd079402f04c84abf939388f7afb0bf78f0495cd4dfb46df9a301b16" Jan 28 18:45:34 crc kubenswrapper[4721]: I0128 18:45:34.798700 4721 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7a26f6b8dd079402f04c84abf939388f7afb0bf78f0495cd4dfb46df9a301b16"} err="failed to get container status \"7a26f6b8dd079402f04c84abf939388f7afb0bf78f0495cd4dfb46df9a301b16\": rpc error: code = NotFound desc = could not find container \"7a26f6b8dd079402f04c84abf939388f7afb0bf78f0495cd4dfb46df9a301b16\": container with ID starting with 7a26f6b8dd079402f04c84abf939388f7afb0bf78f0495cd4dfb46df9a301b16 not found: ID does not exist" Jan 28 18:45:34 crc kubenswrapper[4721]: I0128 18:45:34.798730 4721 scope.go:117] "RemoveContainer" containerID="693b094ac66f7858f7020708df804e9e12fa5a8c510841171e65ac69cb6a0e11" Jan 28 18:45:34 crc kubenswrapper[4721]: I0128 18:45:34.799003 4721 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"693b094ac66f7858f7020708df804e9e12fa5a8c510841171e65ac69cb6a0e11"} err="failed to get container status \"693b094ac66f7858f7020708df804e9e12fa5a8c510841171e65ac69cb6a0e11\": rpc error: code = NotFound desc = could not find container \"693b094ac66f7858f7020708df804e9e12fa5a8c510841171e65ac69cb6a0e11\": container with ID starting with 693b094ac66f7858f7020708df804e9e12fa5a8c510841171e65ac69cb6a0e11 not found: ID does not exist" Jan 28 18:45:34 crc kubenswrapper[4721]: I0128 18:45:34.799036 4721 scope.go:117] "RemoveContainer" containerID="373ab38644697d3dbfe782ed66982ac1d5510d0a3bc39072c8113f71cc9e97f2" Jan 28 18:45:34 crc kubenswrapper[4721]: I0128 18:45:34.800032 4721 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"373ab38644697d3dbfe782ed66982ac1d5510d0a3bc39072c8113f71cc9e97f2"} err="failed to get container status \"373ab38644697d3dbfe782ed66982ac1d5510d0a3bc39072c8113f71cc9e97f2\": rpc error: code = NotFound desc = could not find container \"373ab38644697d3dbfe782ed66982ac1d5510d0a3bc39072c8113f71cc9e97f2\": container with ID starting with 373ab38644697d3dbfe782ed66982ac1d5510d0a3bc39072c8113f71cc9e97f2 not found: ID does not exist" Jan 28 18:45:34 crc kubenswrapper[4721]: I0128 18:45:34.800054 4721 scope.go:117] "RemoveContainer" containerID="b2cc9ef8160c40e72ae736136ef051b06552caea2539fad7969c5f923eba0c2c" Jan 28 18:45:34 crc kubenswrapper[4721]: I0128 18:45:34.801667 4721 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b2cc9ef8160c40e72ae736136ef051b06552caea2539fad7969c5f923eba0c2c"} err="failed to get container status \"b2cc9ef8160c40e72ae736136ef051b06552caea2539fad7969c5f923eba0c2c\": rpc error: code = NotFound desc = could not find container \"b2cc9ef8160c40e72ae736136ef051b06552caea2539fad7969c5f923eba0c2c\": container with ID starting with b2cc9ef8160c40e72ae736136ef051b06552caea2539fad7969c5f923eba0c2c not found: ID does not exist" Jan 28 18:45:34 crc kubenswrapper[4721]: I0128 18:45:34.801728 4721 scope.go:117] "RemoveContainer" containerID="7df50ebf5999c8b004f27f801fc166377523c7b42171fec957ab82c88b529794" Jan 28 18:45:34 crc kubenswrapper[4721]: I0128 18:45:34.807811 4721 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"7df50ebf5999c8b004f27f801fc166377523c7b42171fec957ab82c88b529794"} err="failed to get container status \"7df50ebf5999c8b004f27f801fc166377523c7b42171fec957ab82c88b529794\": rpc error: code = NotFound desc = could not find container \"7df50ebf5999c8b004f27f801fc166377523c7b42171fec957ab82c88b529794\": container with ID starting with 7df50ebf5999c8b004f27f801fc166377523c7b42171fec957ab82c88b529794 not found: ID does not exist" Jan 28 18:45:34 crc kubenswrapper[4721]: I0128 18:45:34.807872 4721 scope.go:117] "RemoveContainer" containerID="989948e16979a2cd8568ed93334056da9feb5bd7226f4124428dcffc09f13d41" Jan 28 18:45:34 crc kubenswrapper[4721]: I0128 18:45:34.813380 4721 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"989948e16979a2cd8568ed93334056da9feb5bd7226f4124428dcffc09f13d41"} err="failed to get container status \"989948e16979a2cd8568ed93334056da9feb5bd7226f4124428dcffc09f13d41\": rpc error: code = NotFound desc = could not find container \"989948e16979a2cd8568ed93334056da9feb5bd7226f4124428dcffc09f13d41\": container with ID starting with 989948e16979a2cd8568ed93334056da9feb5bd7226f4124428dcffc09f13d41 not found: ID does not exist" Jan 28 18:45:34 crc kubenswrapper[4721]: I0128 18:45:34.813439 4721 scope.go:117] "RemoveContainer" containerID="9ac8a8ea8e677c233fbcb1ac69446f0251b2e390e84a39cd77c7ccf1a2041908" Jan 28 18:45:34 crc kubenswrapper[4721]: I0128 18:45:34.817716 4721 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9ac8a8ea8e677c233fbcb1ac69446f0251b2e390e84a39cd77c7ccf1a2041908"} err="failed to get container status \"9ac8a8ea8e677c233fbcb1ac69446f0251b2e390e84a39cd77c7ccf1a2041908\": rpc error: code = NotFound desc = could not find container \"9ac8a8ea8e677c233fbcb1ac69446f0251b2e390e84a39cd77c7ccf1a2041908\": container with ID starting with 9ac8a8ea8e677c233fbcb1ac69446f0251b2e390e84a39cd77c7ccf1a2041908 not found: ID does not exist" Jan 28 18:45:34 crc kubenswrapper[4721]: I0128 18:45:34.817775 4721 scope.go:117] "RemoveContainer" containerID="11610a0739792fa0417c894ad5b1c0d46d7188b585f8f4c1a1e7e66cd6d8bd46" Jan 28 18:45:34 crc kubenswrapper[4721]: I0128 18:45:34.821726 4721 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"11610a0739792fa0417c894ad5b1c0d46d7188b585f8f4c1a1e7e66cd6d8bd46"} err="failed to get container status \"11610a0739792fa0417c894ad5b1c0d46d7188b585f8f4c1a1e7e66cd6d8bd46\": rpc error: code = NotFound desc = could not find container \"11610a0739792fa0417c894ad5b1c0d46d7188b585f8f4c1a1e7e66cd6d8bd46\": container with ID starting with 11610a0739792fa0417c894ad5b1c0d46d7188b585f8f4c1a1e7e66cd6d8bd46 not found: ID does not exist" Jan 28 18:45:34 crc kubenswrapper[4721]: I0128 18:45:34.821771 4721 scope.go:117] "RemoveContainer" containerID="44279a74a34d26b6eee05ac26d51a21492dddab4288ca5e09665c191cbacd90c" Jan 28 18:45:34 crc kubenswrapper[4721]: I0128 18:45:34.822826 4721 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"44279a74a34d26b6eee05ac26d51a21492dddab4288ca5e09665c191cbacd90c"} err="failed to get container status \"44279a74a34d26b6eee05ac26d51a21492dddab4288ca5e09665c191cbacd90c\": rpc error: code = NotFound desc = could not find container \"44279a74a34d26b6eee05ac26d51a21492dddab4288ca5e09665c191cbacd90c\": container with ID starting with 44279a74a34d26b6eee05ac26d51a21492dddab4288ca5e09665c191cbacd90c not found: ID does not exist" Jan 
28 18:45:34 crc kubenswrapper[4721]: I0128 18:45:34.822877 4721 scope.go:117] "RemoveContainer" containerID="733c81890010402d252a1945284a85a3278b603ede433530865f680a1be02477" Jan 28 18:45:34 crc kubenswrapper[4721]: I0128 18:45:34.823151 4721 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"733c81890010402d252a1945284a85a3278b603ede433530865f680a1be02477"} err="failed to get container status \"733c81890010402d252a1945284a85a3278b603ede433530865f680a1be02477\": rpc error: code = NotFound desc = could not find container \"733c81890010402d252a1945284a85a3278b603ede433530865f680a1be02477\": container with ID starting with 733c81890010402d252a1945284a85a3278b603ede433530865f680a1be02477 not found: ID does not exist" Jan 28 18:45:34 crc kubenswrapper[4721]: I0128 18:45:34.849940 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-wr282"] Jan 28 18:45:34 crc kubenswrapper[4721]: I0128 18:45:34.855637 4721 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-wr282"] Jan 28 18:45:35 crc kubenswrapper[4721]: I0128 18:45:35.506494 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-rgqdt_c0a22020-3f34-4895-beec-2ed5d829ea79/kube-multus/2.log" Jan 28 18:45:35 crc kubenswrapper[4721]: I0128 18:45:35.512146 4721 generic.go:334] "Generic (PLEG): container finished" podID="80833028-6365-4e88-80dc-98b4bcd4dbe6" containerID="9163e7da36ba163751a4e13e373dfc903b7d73a71f1614cdf6bde024b7378e47" exitCode=0 Jan 28 18:45:35 crc kubenswrapper[4721]: I0128 18:45:35.512233 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vhgbz" event={"ID":"80833028-6365-4e88-80dc-98b4bcd4dbe6","Type":"ContainerDied","Data":"9163e7da36ba163751a4e13e373dfc903b7d73a71f1614cdf6bde024b7378e47"} Jan 28 18:45:35 crc kubenswrapper[4721]: I0128 18:45:35.512302 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vhgbz" event={"ID":"80833028-6365-4e88-80dc-98b4bcd4dbe6","Type":"ContainerStarted","Data":"f194c116117de56cf5bbcfa2d859382b43ee46735f8efef4ebdbe6a3d251e5a1"} Jan 28 18:45:35 crc kubenswrapper[4721]: I0128 18:45:35.555047 4721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="70686e42-b434-4ff9-9753-cfc870beef82" path="/var/lib/kubelet/pods/70686e42-b434-4ff9-9753-cfc870beef82/volumes" Jan 28 18:45:36 crc kubenswrapper[4721]: I0128 18:45:36.522953 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vhgbz" event={"ID":"80833028-6365-4e88-80dc-98b4bcd4dbe6","Type":"ContainerStarted","Data":"c308625778751e594c227580cfb82d3f8402265ca1ee3af86c7981554ab5138a"} Jan 28 18:45:36 crc kubenswrapper[4721]: I0128 18:45:36.523966 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vhgbz" event={"ID":"80833028-6365-4e88-80dc-98b4bcd4dbe6","Type":"ContainerStarted","Data":"0ca85079d1eafd1c2cd8fc4ee81de0d4496f76fb4fea10d4d1ba52f772928dd0"} Jan 28 18:45:36 crc kubenswrapper[4721]: I0128 18:45:36.523987 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vhgbz" event={"ID":"80833028-6365-4e88-80dc-98b4bcd4dbe6","Type":"ContainerStarted","Data":"9811cc06e30392eb611d273050338ec7ac17a70e8141abdaf83b3345170798f5"} Jan 28 18:45:36 crc kubenswrapper[4721]: I0128 18:45:36.524002 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-ovn-kubernetes/ovnkube-node-vhgbz" event={"ID":"80833028-6365-4e88-80dc-98b4bcd4dbe6","Type":"ContainerStarted","Data":"4ff7d5d29ef248cebadb321a438ebee85efd9599c5ad3e27cfa688fa71efc2b7"} Jan 28 18:45:36 crc kubenswrapper[4721]: I0128 18:45:36.524015 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vhgbz" event={"ID":"80833028-6365-4e88-80dc-98b4bcd4dbe6","Type":"ContainerStarted","Data":"68f65952c3a451e3b30ab3ff7ec60823b4efa071c3dc620e3344096000463349"} Jan 28 18:45:37 crc kubenswrapper[4721]: I0128 18:45:37.539846 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vhgbz" event={"ID":"80833028-6365-4e88-80dc-98b4bcd4dbe6","Type":"ContainerStarted","Data":"3f6aeb083b37fc561b1a3a7844301d0b2a516bce6a727b16974ba6c9a4c2f347"} Jan 28 18:45:39 crc kubenswrapper[4721]: I0128 18:45:39.549022 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vhgbz" event={"ID":"80833028-6365-4e88-80dc-98b4bcd4dbe6","Type":"ContainerStarted","Data":"5178f507de0618027e16669f84788a6d49cbdf13d9e2aa48886e284c2cc34a02"} Jan 28 18:45:40 crc kubenswrapper[4721]: I0128 18:45:40.655598 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-424xn"] Jan 28 18:45:40 crc kubenswrapper[4721]: I0128 18:45:40.656244 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-424xn" Jan 28 18:45:40 crc kubenswrapper[4721]: I0128 18:45:40.659393 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"openshift-service-ca.crt" Jan 28 18:45:40 crc kubenswrapper[4721]: I0128 18:45:40.660038 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"kube-root-ca.crt" Jan 28 18:45:40 crc kubenswrapper[4721]: I0128 18:45:40.669944 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-dockercfg-4q64r" Jan 28 18:45:40 crc kubenswrapper[4721]: I0128 18:45:40.788476 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-65d8d94d66-sbgj9"] Jan 28 18:45:40 crc kubenswrapper[4721]: I0128 18:45:40.789210 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-65d8d94d66-sbgj9" Jan 28 18:45:40 crc kubenswrapper[4721]: I0128 18:45:40.797043 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-service-cert" Jan 28 18:45:40 crc kubenswrapper[4721]: I0128 18:45:40.798383 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-dockercfg-pz7rs" Jan 28 18:45:40 crc kubenswrapper[4721]: I0128 18:45:40.808595 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-65d8d94d66-h2rfr"] Jan 28 18:45:40 crc kubenswrapper[4721]: I0128 18:45:40.809387 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-65d8d94d66-h2rfr" Jan 28 18:45:40 crc kubenswrapper[4721]: I0128 18:45:40.826626 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-npxt6\" (UniqueName: \"kubernetes.io/projected/cd50289b-aa27-438d-89a2-405552dbadf7-kube-api-access-npxt6\") pod \"obo-prometheus-operator-68bc856cb9-424xn\" (UID: \"cd50289b-aa27-438d-89a2-405552dbadf7\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-424xn" Jan 28 18:45:40 crc kubenswrapper[4721]: I0128 18:45:40.928209 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-npxt6\" (UniqueName: \"kubernetes.io/projected/cd50289b-aa27-438d-89a2-405552dbadf7-kube-api-access-npxt6\") pod \"obo-prometheus-operator-68bc856cb9-424xn\" (UID: \"cd50289b-aa27-438d-89a2-405552dbadf7\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-424xn" Jan 28 18:45:40 crc kubenswrapper[4721]: I0128 18:45:40.928323 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/e3cb407f-4a19-4f81-b388-4db383b55701-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-65d8d94d66-sbgj9\" (UID: \"e3cb407f-4a19-4f81-b388-4db383b55701\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-65d8d94d66-sbgj9" Jan 28 18:45:40 crc kubenswrapper[4721]: I0128 18:45:40.928361 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/e3cb407f-4a19-4f81-b388-4db383b55701-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-65d8d94d66-sbgj9\" (UID: \"e3cb407f-4a19-4f81-b388-4db383b55701\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-65d8d94d66-sbgj9" Jan 28 18:45:40 crc kubenswrapper[4721]: I0128 18:45:40.928446 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/8b291a65-1dc7-4312-a429-60bb0a86800d-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-65d8d94d66-h2rfr\" (UID: \"8b291a65-1dc7-4312-a429-60bb0a86800d\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-65d8d94d66-h2rfr" Jan 28 18:45:40 crc kubenswrapper[4721]: I0128 18:45:40.928616 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/8b291a65-1dc7-4312-a429-60bb0a86800d-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-65d8d94d66-h2rfr\" (UID: \"8b291a65-1dc7-4312-a429-60bb0a86800d\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-65d8d94d66-h2rfr" Jan 28 18:45:40 crc kubenswrapper[4721]: I0128 18:45:40.957542 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-npxt6\" (UniqueName: \"kubernetes.io/projected/cd50289b-aa27-438d-89a2-405552dbadf7-kube-api-access-npxt6\") pod \"obo-prometheus-operator-68bc856cb9-424xn\" (UID: \"cd50289b-aa27-438d-89a2-405552dbadf7\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-424xn" Jan 28 18:45:40 crc kubenswrapper[4721]: I0128 18:45:40.973837 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-424xn" Jan 28 18:45:41 crc kubenswrapper[4721]: I0128 18:45:41.004907 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-bdm2v"] Jan 28 18:45:41 crc kubenswrapper[4721]: I0128 18:45:41.005860 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-bdm2v" Jan 28 18:45:41 crc kubenswrapper[4721]: E0128 18:45:41.008494 4721 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-424xn_openshift-operators_cd50289b-aa27-438d-89a2-405552dbadf7_0(c07c76c8478fce82a0b29a9288728ed7e712e2985d7ad88ab12da5ace0fc32aa): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 28 18:45:41 crc kubenswrapper[4721]: E0128 18:45:41.008572 4721 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-424xn_openshift-operators_cd50289b-aa27-438d-89a2-405552dbadf7_0(c07c76c8478fce82a0b29a9288728ed7e712e2985d7ad88ab12da5ace0fc32aa): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-424xn" Jan 28 18:45:41 crc kubenswrapper[4721]: E0128 18:45:41.008603 4721 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-424xn_openshift-operators_cd50289b-aa27-438d-89a2-405552dbadf7_0(c07c76c8478fce82a0b29a9288728ed7e712e2985d7ad88ab12da5ace0fc32aa): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-424xn" Jan 28 18:45:41 crc kubenswrapper[4721]: E0128 18:45:41.008655 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-68bc856cb9-424xn_openshift-operators(cd50289b-aa27-438d-89a2-405552dbadf7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-68bc856cb9-424xn_openshift-operators(cd50289b-aa27-438d-89a2-405552dbadf7)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-424xn_openshift-operators_cd50289b-aa27-438d-89a2-405552dbadf7_0(c07c76c8478fce82a0b29a9288728ed7e712e2985d7ad88ab12da5ace0fc32aa): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-424xn" podUID="cd50289b-aa27-438d-89a2-405552dbadf7" Jan 28 18:45:41 crc kubenswrapper[4721]: I0128 18:45:41.009425 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-tls" Jan 28 18:45:41 crc kubenswrapper[4721]: I0128 18:45:41.009637 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-sa-dockercfg-78968" Jan 28 18:45:41 crc kubenswrapper[4721]: I0128 18:45:41.029692 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/e3cb407f-4a19-4f81-b388-4db383b55701-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-65d8d94d66-sbgj9\" (UID: \"e3cb407f-4a19-4f81-b388-4db383b55701\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-65d8d94d66-sbgj9" Jan 28 18:45:41 crc kubenswrapper[4721]: I0128 18:45:41.029740 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/e3cb407f-4a19-4f81-b388-4db383b55701-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-65d8d94d66-sbgj9\" (UID: \"e3cb407f-4a19-4f81-b388-4db383b55701\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-65d8d94d66-sbgj9" Jan 28 18:45:41 crc kubenswrapper[4721]: I0128 18:45:41.029785 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/8b291a65-1dc7-4312-a429-60bb0a86800d-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-65d8d94d66-h2rfr\" (UID: \"8b291a65-1dc7-4312-a429-60bb0a86800d\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-65d8d94d66-h2rfr" Jan 28 18:45:41 crc kubenswrapper[4721]: I0128 18:45:41.029806 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/8b291a65-1dc7-4312-a429-60bb0a86800d-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-65d8d94d66-h2rfr\" (UID: \"8b291a65-1dc7-4312-a429-60bb0a86800d\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-65d8d94d66-h2rfr" Jan 28 18:45:41 crc kubenswrapper[4721]: I0128 18:45:41.039224 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/8b291a65-1dc7-4312-a429-60bb0a86800d-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-65d8d94d66-h2rfr\" (UID: \"8b291a65-1dc7-4312-a429-60bb0a86800d\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-65d8d94d66-h2rfr" Jan 28 18:45:41 crc kubenswrapper[4721]: I0128 18:45:41.047770 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/e3cb407f-4a19-4f81-b388-4db383b55701-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-65d8d94d66-sbgj9\" (UID: \"e3cb407f-4a19-4f81-b388-4db383b55701\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-65d8d94d66-sbgj9" Jan 28 18:45:41 crc kubenswrapper[4721]: I0128 18:45:41.048108 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/8b291a65-1dc7-4312-a429-60bb0a86800d-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-65d8d94d66-h2rfr\" 
(UID: \"8b291a65-1dc7-4312-a429-60bb0a86800d\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-65d8d94d66-h2rfr" Jan 28 18:45:41 crc kubenswrapper[4721]: I0128 18:45:41.058776 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/e3cb407f-4a19-4f81-b388-4db383b55701-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-65d8d94d66-sbgj9\" (UID: \"e3cb407f-4a19-4f81-b388-4db383b55701\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-65d8d94d66-sbgj9" Jan 28 18:45:41 crc kubenswrapper[4721]: I0128 18:45:41.103303 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-65d8d94d66-sbgj9" Jan 28 18:45:41 crc kubenswrapper[4721]: I0128 18:45:41.124088 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-65d8d94d66-h2rfr" Jan 28 18:45:41 crc kubenswrapper[4721]: E0128 18:45:41.131542 4721 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-65d8d94d66-sbgj9_openshift-operators_e3cb407f-4a19-4f81-b388-4db383b55701_0(0c59f720f7626945cb13731bd172abcca5037d18bb8910fba747f05a11510940): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 28 18:45:41 crc kubenswrapper[4721]: I0128 18:45:41.131608 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/ab955356-2884-4e1b-9dfc-966a662c4095-observability-operator-tls\") pod \"observability-operator-59bdc8b94-bdm2v\" (UID: \"ab955356-2884-4e1b-9dfc-966a662c4095\") " pod="openshift-operators/observability-operator-59bdc8b94-bdm2v" Jan 28 18:45:41 crc kubenswrapper[4721]: I0128 18:45:41.131710 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tsf4s\" (UniqueName: \"kubernetes.io/projected/ab955356-2884-4e1b-9dfc-966a662c4095-kube-api-access-tsf4s\") pod \"observability-operator-59bdc8b94-bdm2v\" (UID: \"ab955356-2884-4e1b-9dfc-966a662c4095\") " pod="openshift-operators/observability-operator-59bdc8b94-bdm2v" Jan 28 18:45:41 crc kubenswrapper[4721]: E0128 18:45:41.131622 4721 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-65d8d94d66-sbgj9_openshift-operators_e3cb407f-4a19-4f81-b388-4db383b55701_0(0c59f720f7626945cb13731bd172abcca5037d18bb8910fba747f05a11510940): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-65d8d94d66-sbgj9" Jan 28 18:45:41 crc kubenswrapper[4721]: E0128 18:45:41.131801 4721 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-65d8d94d66-sbgj9_openshift-operators_e3cb407f-4a19-4f81-b388-4db383b55701_0(0c59f720f7626945cb13731bd172abcca5037d18bb8910fba747f05a11510940): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/obo-prometheus-operator-admission-webhook-65d8d94d66-sbgj9" Jan 28 18:45:41 crc kubenswrapper[4721]: E0128 18:45:41.131887 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-65d8d94d66-sbgj9_openshift-operators(e3cb407f-4a19-4f81-b388-4db383b55701)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-65d8d94d66-sbgj9_openshift-operators(e3cb407f-4a19-4f81-b388-4db383b55701)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-65d8d94d66-sbgj9_openshift-operators_e3cb407f-4a19-4f81-b388-4db383b55701_0(0c59f720f7626945cb13731bd172abcca5037d18bb8910fba747f05a11510940): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-65d8d94d66-sbgj9" podUID="e3cb407f-4a19-4f81-b388-4db383b55701" Jan 28 18:45:41 crc kubenswrapper[4721]: E0128 18:45:41.167986 4721 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-65d8d94d66-h2rfr_openshift-operators_8b291a65-1dc7-4312-a429-60bb0a86800d_0(7f22532c7533bcbf29d03363ff2794a99f8acbf56f12aaa23a0292001c410847): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 28 18:45:41 crc kubenswrapper[4721]: E0128 18:45:41.168478 4721 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-65d8d94d66-h2rfr_openshift-operators_8b291a65-1dc7-4312-a429-60bb0a86800d_0(7f22532c7533bcbf29d03363ff2794a99f8acbf56f12aaa23a0292001c410847): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-65d8d94d66-h2rfr" Jan 28 18:45:41 crc kubenswrapper[4721]: E0128 18:45:41.168500 4721 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-65d8d94d66-h2rfr_openshift-operators_8b291a65-1dc7-4312-a429-60bb0a86800d_0(7f22532c7533bcbf29d03363ff2794a99f8acbf56f12aaa23a0292001c410847): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-65d8d94d66-h2rfr" Jan 28 18:45:41 crc kubenswrapper[4721]: E0128 18:45:41.168552 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-65d8d94d66-h2rfr_openshift-operators(8b291a65-1dc7-4312-a429-60bb0a86800d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-65d8d94d66-h2rfr_openshift-operators(8b291a65-1dc7-4312-a429-60bb0a86800d)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-65d8d94d66-h2rfr_openshift-operators_8b291a65-1dc7-4312-a429-60bb0a86800d_0(7f22532c7533bcbf29d03363ff2794a99f8acbf56f12aaa23a0292001c410847): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-65d8d94d66-h2rfr" podUID="8b291a65-1dc7-4312-a429-60bb0a86800d" Jan 28 18:45:41 crc kubenswrapper[4721]: I0128 18:45:41.218757 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-fqs7q"] Jan 28 18:45:41 crc kubenswrapper[4721]: I0128 18:45:41.219770 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-fqs7q" Jan 28 18:45:41 crc kubenswrapper[4721]: I0128 18:45:41.221791 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"perses-operator-dockercfg-6jldp" Jan 28 18:45:41 crc kubenswrapper[4721]: I0128 18:45:41.233202 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/ab955356-2884-4e1b-9dfc-966a662c4095-observability-operator-tls\") pod \"observability-operator-59bdc8b94-bdm2v\" (UID: \"ab955356-2884-4e1b-9dfc-966a662c4095\") " pod="openshift-operators/observability-operator-59bdc8b94-bdm2v" Jan 28 18:45:41 crc kubenswrapper[4721]: I0128 18:45:41.233283 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tsf4s\" (UniqueName: \"kubernetes.io/projected/ab955356-2884-4e1b-9dfc-966a662c4095-kube-api-access-tsf4s\") pod \"observability-operator-59bdc8b94-bdm2v\" (UID: \"ab955356-2884-4e1b-9dfc-966a662c4095\") " pod="openshift-operators/observability-operator-59bdc8b94-bdm2v" Jan 28 18:45:41 crc kubenswrapper[4721]: I0128 18:45:41.237098 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/ab955356-2884-4e1b-9dfc-966a662c4095-observability-operator-tls\") pod \"observability-operator-59bdc8b94-bdm2v\" (UID: \"ab955356-2884-4e1b-9dfc-966a662c4095\") " pod="openshift-operators/observability-operator-59bdc8b94-bdm2v" Jan 28 18:45:41 crc kubenswrapper[4721]: I0128 18:45:41.269055 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tsf4s\" (UniqueName: \"kubernetes.io/projected/ab955356-2884-4e1b-9dfc-966a662c4095-kube-api-access-tsf4s\") pod \"observability-operator-59bdc8b94-bdm2v\" (UID: \"ab955356-2884-4e1b-9dfc-966a662c4095\") " pod="openshift-operators/observability-operator-59bdc8b94-bdm2v" Jan 28 18:45:41 crc kubenswrapper[4721]: I0128 18:45:41.335163 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/ed6d2e9b-bba7-4ef6-9f36-f9b77dd19117-openshift-service-ca\") pod \"perses-operator-5bf474d74f-fqs7q\" (UID: \"ed6d2e9b-bba7-4ef6-9f36-f9b77dd19117\") " pod="openshift-operators/perses-operator-5bf474d74f-fqs7q" Jan 28 18:45:41 crc kubenswrapper[4721]: I0128 18:45:41.335291 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6tv29\" (UniqueName: \"kubernetes.io/projected/ed6d2e9b-bba7-4ef6-9f36-f9b77dd19117-kube-api-access-6tv29\") pod \"perses-operator-5bf474d74f-fqs7q\" (UID: \"ed6d2e9b-bba7-4ef6-9f36-f9b77dd19117\") " pod="openshift-operators/perses-operator-5bf474d74f-fqs7q" Jan 28 18:45:41 crc kubenswrapper[4721]: I0128 18:45:41.398156 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-bdm2v" Jan 28 18:45:41 crc kubenswrapper[4721]: E0128 18:45:41.429831 4721 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-bdm2v_openshift-operators_ab955356-2884-4e1b-9dfc-966a662c4095_0(e0caf2e7ec3c013c22fce20980c566f8f4579315aa98e4495776ed18cf85aca4): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 28 18:45:41 crc kubenswrapper[4721]: E0128 18:45:41.429916 4721 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-bdm2v_openshift-operators_ab955356-2884-4e1b-9dfc-966a662c4095_0(e0caf2e7ec3c013c22fce20980c566f8f4579315aa98e4495776ed18cf85aca4): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/observability-operator-59bdc8b94-bdm2v" Jan 28 18:45:41 crc kubenswrapper[4721]: E0128 18:45:41.429945 4721 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-bdm2v_openshift-operators_ab955356-2884-4e1b-9dfc-966a662c4095_0(e0caf2e7ec3c013c22fce20980c566f8f4579315aa98e4495776ed18cf85aca4): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/observability-operator-59bdc8b94-bdm2v" Jan 28 18:45:41 crc kubenswrapper[4721]: E0128 18:45:41.430005 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"observability-operator-59bdc8b94-bdm2v_openshift-operators(ab955356-2884-4e1b-9dfc-966a662c4095)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"observability-operator-59bdc8b94-bdm2v_openshift-operators(ab955356-2884-4e1b-9dfc-966a662c4095)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-bdm2v_openshift-operators_ab955356-2884-4e1b-9dfc-966a662c4095_0(e0caf2e7ec3c013c22fce20980c566f8f4579315aa98e4495776ed18cf85aca4): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="openshift-operators/observability-operator-59bdc8b94-bdm2v" podUID="ab955356-2884-4e1b-9dfc-966a662c4095" Jan 28 18:45:41 crc kubenswrapper[4721]: I0128 18:45:41.436870 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/ed6d2e9b-bba7-4ef6-9f36-f9b77dd19117-openshift-service-ca\") pod \"perses-operator-5bf474d74f-fqs7q\" (UID: \"ed6d2e9b-bba7-4ef6-9f36-f9b77dd19117\") " pod="openshift-operators/perses-operator-5bf474d74f-fqs7q" Jan 28 18:45:41 crc kubenswrapper[4721]: I0128 18:45:41.436956 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6tv29\" (UniqueName: \"kubernetes.io/projected/ed6d2e9b-bba7-4ef6-9f36-f9b77dd19117-kube-api-access-6tv29\") pod \"perses-operator-5bf474d74f-fqs7q\" (UID: \"ed6d2e9b-bba7-4ef6-9f36-f9b77dd19117\") " pod="openshift-operators/perses-operator-5bf474d74f-fqs7q" Jan 28 18:45:41 crc kubenswrapper[4721]: I0128 18:45:41.438505 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/ed6d2e9b-bba7-4ef6-9f36-f9b77dd19117-openshift-service-ca\") pod \"perses-operator-5bf474d74f-fqs7q\" (UID: \"ed6d2e9b-bba7-4ef6-9f36-f9b77dd19117\") " pod="openshift-operators/perses-operator-5bf474d74f-fqs7q" Jan 28 18:45:41 crc kubenswrapper[4721]: I0128 18:45:41.459794 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6tv29\" (UniqueName: \"kubernetes.io/projected/ed6d2e9b-bba7-4ef6-9f36-f9b77dd19117-kube-api-access-6tv29\") pod \"perses-operator-5bf474d74f-fqs7q\" (UID: \"ed6d2e9b-bba7-4ef6-9f36-f9b77dd19117\") " pod="openshift-operators/perses-operator-5bf474d74f-fqs7q" Jan 28 18:45:41 crc kubenswrapper[4721]: I0128 18:45:41.535989 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-fqs7q" Jan 28 18:45:41 crc kubenswrapper[4721]: I0128 18:45:41.565523 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vhgbz" event={"ID":"80833028-6365-4e88-80dc-98b4bcd4dbe6","Type":"ContainerStarted","Data":"19c205a378d26ad81d7419345219c555be745de85ccbf960a00c006eaa74e1fa"} Jan 28 18:45:41 crc kubenswrapper[4721]: I0128 18:45:41.566779 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-vhgbz" Jan 28 18:45:41 crc kubenswrapper[4721]: I0128 18:45:41.566814 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-vhgbz" Jan 28 18:45:41 crc kubenswrapper[4721]: I0128 18:45:41.566856 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-vhgbz" Jan 28 18:45:41 crc kubenswrapper[4721]: E0128 18:45:41.567094 4721 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-fqs7q_openshift-operators_ed6d2e9b-bba7-4ef6-9f36-f9b77dd19117_0(0cebff4d16c128d769883e98d98172c21d10c1cad0a7d23b648718c35ac057ee): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Jan 28 18:45:41 crc kubenswrapper[4721]: E0128 18:45:41.567138 4721 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-fqs7q_openshift-operators_ed6d2e9b-bba7-4ef6-9f36-f9b77dd19117_0(0cebff4d16c128d769883e98d98172c21d10c1cad0a7d23b648718c35ac057ee): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/perses-operator-5bf474d74f-fqs7q" Jan 28 18:45:41 crc kubenswrapper[4721]: E0128 18:45:41.567157 4721 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-fqs7q_openshift-operators_ed6d2e9b-bba7-4ef6-9f36-f9b77dd19117_0(0cebff4d16c128d769883e98d98172c21d10c1cad0a7d23b648718c35ac057ee): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/perses-operator-5bf474d74f-fqs7q" Jan 28 18:45:41 crc kubenswrapper[4721]: E0128 18:45:41.567209 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"perses-operator-5bf474d74f-fqs7q_openshift-operators(ed6d2e9b-bba7-4ef6-9f36-f9b77dd19117)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"perses-operator-5bf474d74f-fqs7q_openshift-operators(ed6d2e9b-bba7-4ef6-9f36-f9b77dd19117)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-fqs7q_openshift-operators_ed6d2e9b-bba7-4ef6-9f36-f9b77dd19117_0(0cebff4d16c128d769883e98d98172c21d10c1cad0a7d23b648718c35ac057ee): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/perses-operator-5bf474d74f-fqs7q" podUID="ed6d2e9b-bba7-4ef6-9f36-f9b77dd19117" Jan 28 18:45:41 crc kubenswrapper[4721]: I0128 18:45:41.596486 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-vhgbz" podStartSLOduration=7.596464583 podStartE2EDuration="7.596464583s" podCreationTimestamp="2026-01-28 18:45:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:45:41.594829172 +0000 UTC m=+707.320134732" watchObservedRunningTime="2026-01-28 18:45:41.596464583 +0000 UTC m=+707.321770143" Jan 28 18:45:41 crc kubenswrapper[4721]: I0128 18:45:41.611842 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-vhgbz" Jan 28 18:45:41 crc kubenswrapper[4721]: I0128 18:45:41.612502 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-vhgbz" Jan 28 18:45:42 crc kubenswrapper[4721]: I0128 18:45:42.803280 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-65d8d94d66-sbgj9"] Jan 28 18:45:42 crc kubenswrapper[4721]: I0128 18:45:42.803739 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-65d8d94d66-sbgj9" Jan 28 18:45:42 crc kubenswrapper[4721]: I0128 18:45:42.804220 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-65d8d94d66-sbgj9" Jan 28 18:45:42 crc kubenswrapper[4721]: I0128 18:45:42.808917 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-65d8d94d66-h2rfr"] Jan 28 18:45:42 crc kubenswrapper[4721]: I0128 18:45:42.809203 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-65d8d94d66-h2rfr" Jan 28 18:45:42 crc kubenswrapper[4721]: I0128 18:45:42.809717 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-65d8d94d66-h2rfr" Jan 28 18:45:42 crc kubenswrapper[4721]: I0128 18:45:42.813100 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-fqs7q"] Jan 28 18:45:42 crc kubenswrapper[4721]: I0128 18:45:42.813321 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-fqs7q" Jan 28 18:45:42 crc kubenswrapper[4721]: I0128 18:45:42.814074 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-fqs7q" Jan 28 18:45:42 crc kubenswrapper[4721]: I0128 18:45:42.817009 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-bdm2v"] Jan 28 18:45:42 crc kubenswrapper[4721]: I0128 18:45:42.817143 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-bdm2v" Jan 28 18:45:42 crc kubenswrapper[4721]: I0128 18:45:42.817677 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-bdm2v" Jan 28 18:45:42 crc kubenswrapper[4721]: I0128 18:45:42.854070 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-424xn"] Jan 28 18:45:42 crc kubenswrapper[4721]: I0128 18:45:42.854698 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-424xn" Jan 28 18:45:42 crc kubenswrapper[4721]: I0128 18:45:42.855622 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-424xn" Jan 28 18:45:42 crc kubenswrapper[4721]: E0128 18:45:42.869497 4721 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-65d8d94d66-h2rfr_openshift-operators_8b291a65-1dc7-4312-a429-60bb0a86800d_0(6f7406a2b80a77d098180facaa3650a6356b75d612e6c51fb97611181858040b): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 28 18:45:42 crc kubenswrapper[4721]: E0128 18:45:42.869590 4721 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-65d8d94d66-h2rfr_openshift-operators_8b291a65-1dc7-4312-a429-60bb0a86800d_0(6f7406a2b80a77d098180facaa3650a6356b75d612e6c51fb97611181858040b): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/obo-prometheus-operator-admission-webhook-65d8d94d66-h2rfr" Jan 28 18:45:42 crc kubenswrapper[4721]: E0128 18:45:42.869618 4721 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-65d8d94d66-h2rfr_openshift-operators_8b291a65-1dc7-4312-a429-60bb0a86800d_0(6f7406a2b80a77d098180facaa3650a6356b75d612e6c51fb97611181858040b): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-65d8d94d66-h2rfr" Jan 28 18:45:42 crc kubenswrapper[4721]: E0128 18:45:42.869703 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-65d8d94d66-h2rfr_openshift-operators(8b291a65-1dc7-4312-a429-60bb0a86800d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-65d8d94d66-h2rfr_openshift-operators(8b291a65-1dc7-4312-a429-60bb0a86800d)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-65d8d94d66-h2rfr_openshift-operators_8b291a65-1dc7-4312-a429-60bb0a86800d_0(6f7406a2b80a77d098180facaa3650a6356b75d612e6c51fb97611181858040b): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-65d8d94d66-h2rfr" podUID="8b291a65-1dc7-4312-a429-60bb0a86800d" Jan 28 18:45:42 crc kubenswrapper[4721]: E0128 18:45:42.875665 4721 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-65d8d94d66-sbgj9_openshift-operators_e3cb407f-4a19-4f81-b388-4db383b55701_0(3f51358f8a962ffb9532dcb9f4264e618184ca7ff3143086d6fff809ffa18455): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 28 18:45:42 crc kubenswrapper[4721]: E0128 18:45:42.875742 4721 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-65d8d94d66-sbgj9_openshift-operators_e3cb407f-4a19-4f81-b388-4db383b55701_0(3f51358f8a962ffb9532dcb9f4264e618184ca7ff3143086d6fff809ffa18455): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-65d8d94d66-sbgj9" Jan 28 18:45:42 crc kubenswrapper[4721]: E0128 18:45:42.875770 4721 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-65d8d94d66-sbgj9_openshift-operators_e3cb407f-4a19-4f81-b388-4db383b55701_0(3f51358f8a962ffb9532dcb9f4264e618184ca7ff3143086d6fff809ffa18455): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/obo-prometheus-operator-admission-webhook-65d8d94d66-sbgj9" Jan 28 18:45:42 crc kubenswrapper[4721]: E0128 18:45:42.875824 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-65d8d94d66-sbgj9_openshift-operators(e3cb407f-4a19-4f81-b388-4db383b55701)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-65d8d94d66-sbgj9_openshift-operators(e3cb407f-4a19-4f81-b388-4db383b55701)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-65d8d94d66-sbgj9_openshift-operators_e3cb407f-4a19-4f81-b388-4db383b55701_0(3f51358f8a962ffb9532dcb9f4264e618184ca7ff3143086d6fff809ffa18455): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-65d8d94d66-sbgj9" podUID="e3cb407f-4a19-4f81-b388-4db383b55701" Jan 28 18:45:42 crc kubenswrapper[4721]: E0128 18:45:42.897583 4721 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-fqs7q_openshift-operators_ed6d2e9b-bba7-4ef6-9f36-f9b77dd19117_0(b16153d02b95108ff6c305db6d698a0f8f936f49d3c08da9ba3c07561f33247c): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 28 18:45:42 crc kubenswrapper[4721]: E0128 18:45:42.897650 4721 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-fqs7q_openshift-operators_ed6d2e9b-bba7-4ef6-9f36-f9b77dd19117_0(b16153d02b95108ff6c305db6d698a0f8f936f49d3c08da9ba3c07561f33247c): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/perses-operator-5bf474d74f-fqs7q" Jan 28 18:45:42 crc kubenswrapper[4721]: E0128 18:45:42.897678 4721 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-fqs7q_openshift-operators_ed6d2e9b-bba7-4ef6-9f36-f9b77dd19117_0(b16153d02b95108ff6c305db6d698a0f8f936f49d3c08da9ba3c07561f33247c): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/perses-operator-5bf474d74f-fqs7q" Jan 28 18:45:42 crc kubenswrapper[4721]: E0128 18:45:42.897728 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"perses-operator-5bf474d74f-fqs7q_openshift-operators(ed6d2e9b-bba7-4ef6-9f36-f9b77dd19117)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"perses-operator-5bf474d74f-fqs7q_openshift-operators(ed6d2e9b-bba7-4ef6-9f36-f9b77dd19117)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-fqs7q_openshift-operators_ed6d2e9b-bba7-4ef6-9f36-f9b77dd19117_0(b16153d02b95108ff6c305db6d698a0f8f936f49d3c08da9ba3c07561f33247c): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="openshift-operators/perses-operator-5bf474d74f-fqs7q" podUID="ed6d2e9b-bba7-4ef6-9f36-f9b77dd19117" Jan 28 18:45:42 crc kubenswrapper[4721]: E0128 18:45:42.922839 4721 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-bdm2v_openshift-operators_ab955356-2884-4e1b-9dfc-966a662c4095_0(e95cf595fbe122c7fd1eac39a6d0ea86f54cc752cfcf03a2e4deca55c36a9ebb): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 28 18:45:42 crc kubenswrapper[4721]: E0128 18:45:42.922911 4721 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-bdm2v_openshift-operators_ab955356-2884-4e1b-9dfc-966a662c4095_0(e95cf595fbe122c7fd1eac39a6d0ea86f54cc752cfcf03a2e4deca55c36a9ebb): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/observability-operator-59bdc8b94-bdm2v" Jan 28 18:45:42 crc kubenswrapper[4721]: E0128 18:45:42.922933 4721 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-bdm2v_openshift-operators_ab955356-2884-4e1b-9dfc-966a662c4095_0(e95cf595fbe122c7fd1eac39a6d0ea86f54cc752cfcf03a2e4deca55c36a9ebb): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/observability-operator-59bdc8b94-bdm2v" Jan 28 18:45:42 crc kubenswrapper[4721]: E0128 18:45:42.922988 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"observability-operator-59bdc8b94-bdm2v_openshift-operators(ab955356-2884-4e1b-9dfc-966a662c4095)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"observability-operator-59bdc8b94-bdm2v_openshift-operators(ab955356-2884-4e1b-9dfc-966a662c4095)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-bdm2v_openshift-operators_ab955356-2884-4e1b-9dfc-966a662c4095_0(e95cf595fbe122c7fd1eac39a6d0ea86f54cc752cfcf03a2e4deca55c36a9ebb): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/observability-operator-59bdc8b94-bdm2v" podUID="ab955356-2884-4e1b-9dfc-966a662c4095" Jan 28 18:45:42 crc kubenswrapper[4721]: E0128 18:45:42.937967 4721 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-424xn_openshift-operators_cd50289b-aa27-438d-89a2-405552dbadf7_0(7fd7dbd7fa408845fbfecd7bdb3712214efde1f8ee3766812dff74d9a275217d): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 28 18:45:42 crc kubenswrapper[4721]: E0128 18:45:42.938087 4721 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-424xn_openshift-operators_cd50289b-aa27-438d-89a2-405552dbadf7_0(7fd7dbd7fa408845fbfecd7bdb3712214efde1f8ee3766812dff74d9a275217d): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/obo-prometheus-operator-68bc856cb9-424xn" Jan 28 18:45:42 crc kubenswrapper[4721]: E0128 18:45:42.938122 4721 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-424xn_openshift-operators_cd50289b-aa27-438d-89a2-405552dbadf7_0(7fd7dbd7fa408845fbfecd7bdb3712214efde1f8ee3766812dff74d9a275217d): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-424xn" Jan 28 18:45:42 crc kubenswrapper[4721]: E0128 18:45:42.938226 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-68bc856cb9-424xn_openshift-operators(cd50289b-aa27-438d-89a2-405552dbadf7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-68bc856cb9-424xn_openshift-operators(cd50289b-aa27-438d-89a2-405552dbadf7)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-424xn_openshift-operators_cd50289b-aa27-438d-89a2-405552dbadf7_0(7fd7dbd7fa408845fbfecd7bdb3712214efde1f8ee3766812dff74d9a275217d): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-424xn" podUID="cd50289b-aa27-438d-89a2-405552dbadf7" Jan 28 18:45:45 crc kubenswrapper[4721]: I0128 18:45:45.539062 4721 scope.go:117] "RemoveContainer" containerID="09078904e276a9f5eb4aafabbe371ff67e22dd1b352aa67825ea2de56709d503" Jan 28 18:45:45 crc kubenswrapper[4721]: E0128 18:45:45.539658 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-rgqdt_openshift-multus(c0a22020-3f34-4895-beec-2ed5d829ea79)\"" pod="openshift-multus/multus-rgqdt" podUID="c0a22020-3f34-4895-beec-2ed5d829ea79" Jan 28 18:45:52 crc kubenswrapper[4721]: I0128 18:45:52.388281 4721 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-b42n2" podUID="6c3bbd44-e901-4e2a-b48c-f1e1ddb965dc" containerName="registry" containerID="cri-o://1033192df353e832de0c4ee8fdcdffd87f44695d410cae5349bf010ba6768cff" gracePeriod=30 Jan 28 18:45:52 crc kubenswrapper[4721]: I0128 18:45:52.619419 4721 generic.go:334] "Generic (PLEG): container finished" podID="6c3bbd44-e901-4e2a-b48c-f1e1ddb965dc" containerID="1033192df353e832de0c4ee8fdcdffd87f44695d410cae5349bf010ba6768cff" exitCode=0 Jan 28 18:45:52 crc kubenswrapper[4721]: I0128 18:45:52.619467 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-b42n2" event={"ID":"6c3bbd44-e901-4e2a-b48c-f1e1ddb965dc","Type":"ContainerDied","Data":"1033192df353e832de0c4ee8fdcdffd87f44695d410cae5349bf010ba6768cff"} Jan 28 18:45:53 crc kubenswrapper[4721]: I0128 18:45:53.094815 4721 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-b42n2" Jan 28 18:45:53 crc kubenswrapper[4721]: I0128 18:45:53.203746 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kq6zs\" (UniqueName: \"kubernetes.io/projected/6c3bbd44-e901-4e2a-b48c-f1e1ddb965dc-kube-api-access-kq6zs\") pod \"6c3bbd44-e901-4e2a-b48c-f1e1ddb965dc\" (UID: \"6c3bbd44-e901-4e2a-b48c-f1e1ddb965dc\") " Jan 28 18:45:53 crc kubenswrapper[4721]: I0128 18:45:53.203840 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/6c3bbd44-e901-4e2a-b48c-f1e1ddb965dc-bound-sa-token\") pod \"6c3bbd44-e901-4e2a-b48c-f1e1ddb965dc\" (UID: \"6c3bbd44-e901-4e2a-b48c-f1e1ddb965dc\") " Jan 28 18:45:53 crc kubenswrapper[4721]: I0128 18:45:53.203899 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/6c3bbd44-e901-4e2a-b48c-f1e1ddb965dc-ca-trust-extracted\") pod \"6c3bbd44-e901-4e2a-b48c-f1e1ddb965dc\" (UID: \"6c3bbd44-e901-4e2a-b48c-f1e1ddb965dc\") " Jan 28 18:45:53 crc kubenswrapper[4721]: I0128 18:45:53.203937 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/6c3bbd44-e901-4e2a-b48c-f1e1ddb965dc-registry-tls\") pod \"6c3bbd44-e901-4e2a-b48c-f1e1ddb965dc\" (UID: \"6c3bbd44-e901-4e2a-b48c-f1e1ddb965dc\") " Jan 28 18:45:53 crc kubenswrapper[4721]: I0128 18:45:53.204080 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"6c3bbd44-e901-4e2a-b48c-f1e1ddb965dc\" (UID: \"6c3bbd44-e901-4e2a-b48c-f1e1ddb965dc\") " Jan 28 18:45:53 crc kubenswrapper[4721]: I0128 18:45:53.204156 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/6c3bbd44-e901-4e2a-b48c-f1e1ddb965dc-trusted-ca\") pod \"6c3bbd44-e901-4e2a-b48c-f1e1ddb965dc\" (UID: \"6c3bbd44-e901-4e2a-b48c-f1e1ddb965dc\") " Jan 28 18:45:53 crc kubenswrapper[4721]: I0128 18:45:53.205051 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/6c3bbd44-e901-4e2a-b48c-f1e1ddb965dc-registry-certificates\") pod \"6c3bbd44-e901-4e2a-b48c-f1e1ddb965dc\" (UID: \"6c3bbd44-e901-4e2a-b48c-f1e1ddb965dc\") " Jan 28 18:45:53 crc kubenswrapper[4721]: I0128 18:45:53.205538 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6c3bbd44-e901-4e2a-b48c-f1e1ddb965dc-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "6c3bbd44-e901-4e2a-b48c-f1e1ddb965dc" (UID: "6c3bbd44-e901-4e2a-b48c-f1e1ddb965dc"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:45:53 crc kubenswrapper[4721]: I0128 18:45:53.205889 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6c3bbd44-e901-4e2a-b48c-f1e1ddb965dc-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "6c3bbd44-e901-4e2a-b48c-f1e1ddb965dc" (UID: "6c3bbd44-e901-4e2a-b48c-f1e1ddb965dc"). InnerVolumeSpecName "registry-certificates". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:45:53 crc kubenswrapper[4721]: I0128 18:45:53.205918 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/6c3bbd44-e901-4e2a-b48c-f1e1ddb965dc-installation-pull-secrets\") pod \"6c3bbd44-e901-4e2a-b48c-f1e1ddb965dc\" (UID: \"6c3bbd44-e901-4e2a-b48c-f1e1ddb965dc\") " Jan 28 18:45:53 crc kubenswrapper[4721]: I0128 18:45:53.206806 4721 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/6c3bbd44-e901-4e2a-b48c-f1e1ddb965dc-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 28 18:45:53 crc kubenswrapper[4721]: I0128 18:45:53.207028 4721 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/6c3bbd44-e901-4e2a-b48c-f1e1ddb965dc-registry-certificates\") on node \"crc\" DevicePath \"\"" Jan 28 18:45:53 crc kubenswrapper[4721]: I0128 18:45:53.209935 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6c3bbd44-e901-4e2a-b48c-f1e1ddb965dc-kube-api-access-kq6zs" (OuterVolumeSpecName: "kube-api-access-kq6zs") pod "6c3bbd44-e901-4e2a-b48c-f1e1ddb965dc" (UID: "6c3bbd44-e901-4e2a-b48c-f1e1ddb965dc"). InnerVolumeSpecName "kube-api-access-kq6zs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:45:53 crc kubenswrapper[4721]: I0128 18:45:53.213947 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6c3bbd44-e901-4e2a-b48c-f1e1ddb965dc-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "6c3bbd44-e901-4e2a-b48c-f1e1ddb965dc" (UID: "6c3bbd44-e901-4e2a-b48c-f1e1ddb965dc"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:45:53 crc kubenswrapper[4721]: I0128 18:45:53.217905 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "6c3bbd44-e901-4e2a-b48c-f1e1ddb965dc" (UID: "6c3bbd44-e901-4e2a-b48c-f1e1ddb965dc"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 28 18:45:53 crc kubenswrapper[4721]: I0128 18:45:53.220369 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6c3bbd44-e901-4e2a-b48c-f1e1ddb965dc-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "6c3bbd44-e901-4e2a-b48c-f1e1ddb965dc" (UID: "6c3bbd44-e901-4e2a-b48c-f1e1ddb965dc"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:45:53 crc kubenswrapper[4721]: I0128 18:45:53.222267 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6c3bbd44-e901-4e2a-b48c-f1e1ddb965dc-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "6c3bbd44-e901-4e2a-b48c-f1e1ddb965dc" (UID: "6c3bbd44-e901-4e2a-b48c-f1e1ddb965dc"). InnerVolumeSpecName "bound-sa-token". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:45:53 crc kubenswrapper[4721]: I0128 18:45:53.234798 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6c3bbd44-e901-4e2a-b48c-f1e1ddb965dc-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "6c3bbd44-e901-4e2a-b48c-f1e1ddb965dc" (UID: "6c3bbd44-e901-4e2a-b48c-f1e1ddb965dc"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 18:45:53 crc kubenswrapper[4721]: I0128 18:45:53.309625 4721 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/6c3bbd44-e901-4e2a-b48c-f1e1ddb965dc-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Jan 28 18:45:53 crc kubenswrapper[4721]: I0128 18:45:53.309666 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kq6zs\" (UniqueName: \"kubernetes.io/projected/6c3bbd44-e901-4e2a-b48c-f1e1ddb965dc-kube-api-access-kq6zs\") on node \"crc\" DevicePath \"\"" Jan 28 18:45:53 crc kubenswrapper[4721]: I0128 18:45:53.309711 4721 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/6c3bbd44-e901-4e2a-b48c-f1e1ddb965dc-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 28 18:45:53 crc kubenswrapper[4721]: I0128 18:45:53.309722 4721 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/6c3bbd44-e901-4e2a-b48c-f1e1ddb965dc-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Jan 28 18:45:53 crc kubenswrapper[4721]: I0128 18:45:53.309731 4721 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/6c3bbd44-e901-4e2a-b48c-f1e1ddb965dc-registry-tls\") on node \"crc\" DevicePath \"\"" Jan 28 18:45:53 crc kubenswrapper[4721]: I0128 18:45:53.626991 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-b42n2" event={"ID":"6c3bbd44-e901-4e2a-b48c-f1e1ddb965dc","Type":"ContainerDied","Data":"1e5191bf91b999db32044005770d0297159cb8d6ad09dc038d9377e841fc49d0"} Jan 28 18:45:53 crc kubenswrapper[4721]: I0128 18:45:53.627059 4721 scope.go:117] "RemoveContainer" containerID="1033192df353e832de0c4ee8fdcdffd87f44695d410cae5349bf010ba6768cff" Jan 28 18:45:53 crc kubenswrapper[4721]: I0128 18:45:53.627063 4721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-b42n2" Jan 28 18:45:53 crc kubenswrapper[4721]: I0128 18:45:53.654938 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-b42n2"] Jan 28 18:45:53 crc kubenswrapper[4721]: I0128 18:45:53.658136 4721 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-b42n2"] Jan 28 18:45:54 crc kubenswrapper[4721]: I0128 18:45:54.528271 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-65d8d94d66-sbgj9" Jan 28 18:45:54 crc kubenswrapper[4721]: I0128 18:45:54.528331 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-65d8d94d66-h2rfr" Jan 28 18:45:54 crc kubenswrapper[4721]: I0128 18:45:54.528294 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-fqs7q" Jan 28 18:45:54 crc kubenswrapper[4721]: I0128 18:45:54.528641 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-65d8d94d66-sbgj9" Jan 28 18:45:54 crc kubenswrapper[4721]: I0128 18:45:54.528795 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-fqs7q" Jan 28 18:45:54 crc kubenswrapper[4721]: I0128 18:45:54.528853 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-65d8d94d66-h2rfr" Jan 28 18:45:54 crc kubenswrapper[4721]: E0128 18:45:54.575213 4721 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-fqs7q_openshift-operators_ed6d2e9b-bba7-4ef6-9f36-f9b77dd19117_0(251bfcaacc61868105d1d4ba5cd4b1e45667184818fb44a8d87623d2523d8bfc): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 28 18:45:54 crc kubenswrapper[4721]: E0128 18:45:54.575627 4721 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-fqs7q_openshift-operators_ed6d2e9b-bba7-4ef6-9f36-f9b77dd19117_0(251bfcaacc61868105d1d4ba5cd4b1e45667184818fb44a8d87623d2523d8bfc): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/perses-operator-5bf474d74f-fqs7q" Jan 28 18:45:54 crc kubenswrapper[4721]: E0128 18:45:54.575652 4721 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-fqs7q_openshift-operators_ed6d2e9b-bba7-4ef6-9f36-f9b77dd19117_0(251bfcaacc61868105d1d4ba5cd4b1e45667184818fb44a8d87623d2523d8bfc): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/perses-operator-5bf474d74f-fqs7q" Jan 28 18:45:54 crc kubenswrapper[4721]: E0128 18:45:54.575703 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"perses-operator-5bf474d74f-fqs7q_openshift-operators(ed6d2e9b-bba7-4ef6-9f36-f9b77dd19117)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"perses-operator-5bf474d74f-fqs7q_openshift-operators(ed6d2e9b-bba7-4ef6-9f36-f9b77dd19117)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-fqs7q_openshift-operators_ed6d2e9b-bba7-4ef6-9f36-f9b77dd19117_0(251bfcaacc61868105d1d4ba5cd4b1e45667184818fb44a8d87623d2523d8bfc): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/perses-operator-5bf474d74f-fqs7q" podUID="ed6d2e9b-bba7-4ef6-9f36-f9b77dd19117" Jan 28 18:45:54 crc kubenswrapper[4721]: E0128 18:45:54.584409 4721 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-65d8d94d66-h2rfr_openshift-operators_8b291a65-1dc7-4312-a429-60bb0a86800d_0(c42accbf144c65c66477cc97b5e540eb96110330209f0c8b0c1c122fddc4a8b4): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Jan 28 18:45:54 crc kubenswrapper[4721]: E0128 18:45:54.584469 4721 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-65d8d94d66-h2rfr_openshift-operators_8b291a65-1dc7-4312-a429-60bb0a86800d_0(c42accbf144c65c66477cc97b5e540eb96110330209f0c8b0c1c122fddc4a8b4): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-65d8d94d66-h2rfr" Jan 28 18:45:54 crc kubenswrapper[4721]: E0128 18:45:54.584487 4721 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-65d8d94d66-h2rfr_openshift-operators_8b291a65-1dc7-4312-a429-60bb0a86800d_0(c42accbf144c65c66477cc97b5e540eb96110330209f0c8b0c1c122fddc4a8b4): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-65d8d94d66-h2rfr" Jan 28 18:45:54 crc kubenswrapper[4721]: E0128 18:45:54.584534 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-65d8d94d66-h2rfr_openshift-operators(8b291a65-1dc7-4312-a429-60bb0a86800d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-65d8d94d66-h2rfr_openshift-operators(8b291a65-1dc7-4312-a429-60bb0a86800d)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-65d8d94d66-h2rfr_openshift-operators_8b291a65-1dc7-4312-a429-60bb0a86800d_0(c42accbf144c65c66477cc97b5e540eb96110330209f0c8b0c1c122fddc4a8b4): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-65d8d94d66-h2rfr" podUID="8b291a65-1dc7-4312-a429-60bb0a86800d" Jan 28 18:45:54 crc kubenswrapper[4721]: E0128 18:45:54.593869 4721 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-65d8d94d66-sbgj9_openshift-operators_e3cb407f-4a19-4f81-b388-4db383b55701_0(447ac267283fca0c462f2ebbf6526196f69b6c820a8410a3c45e838c3b1b6bc2): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 28 18:45:54 crc kubenswrapper[4721]: E0128 18:45:54.593929 4721 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-65d8d94d66-sbgj9_openshift-operators_e3cb407f-4a19-4f81-b388-4db383b55701_0(447ac267283fca0c462f2ebbf6526196f69b6c820a8410a3c45e838c3b1b6bc2): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/obo-prometheus-operator-admission-webhook-65d8d94d66-sbgj9" Jan 28 18:45:54 crc kubenswrapper[4721]: E0128 18:45:54.593962 4721 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-65d8d94d66-sbgj9_openshift-operators_e3cb407f-4a19-4f81-b388-4db383b55701_0(447ac267283fca0c462f2ebbf6526196f69b6c820a8410a3c45e838c3b1b6bc2): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-65d8d94d66-sbgj9" Jan 28 18:45:54 crc kubenswrapper[4721]: E0128 18:45:54.594008 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-65d8d94d66-sbgj9_openshift-operators(e3cb407f-4a19-4f81-b388-4db383b55701)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-65d8d94d66-sbgj9_openshift-operators(e3cb407f-4a19-4f81-b388-4db383b55701)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-65d8d94d66-sbgj9_openshift-operators_e3cb407f-4a19-4f81-b388-4db383b55701_0(447ac267283fca0c462f2ebbf6526196f69b6c820a8410a3c45e838c3b1b6bc2): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-65d8d94d66-sbgj9" podUID="e3cb407f-4a19-4f81-b388-4db383b55701" Jan 28 18:45:55 crc kubenswrapper[4721]: I0128 18:45:55.545974 4721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6c3bbd44-e901-4e2a-b48c-f1e1ddb965dc" path="/var/lib/kubelet/pods/6c3bbd44-e901-4e2a-b48c-f1e1ddb965dc/volumes" Jan 28 18:45:56 crc kubenswrapper[4721]: I0128 18:45:56.528531 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-bdm2v" Jan 28 18:45:56 crc kubenswrapper[4721]: I0128 18:45:56.529399 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-bdm2v" Jan 28 18:45:56 crc kubenswrapper[4721]: E0128 18:45:56.551590 4721 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-bdm2v_openshift-operators_ab955356-2884-4e1b-9dfc-966a662c4095_0(a7fb8b16a4c6b418b46d72bb631037986d446c2229e4ece9df9a502012fa70ce): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 28 18:45:56 crc kubenswrapper[4721]: E0128 18:45:56.551677 4721 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-bdm2v_openshift-operators_ab955356-2884-4e1b-9dfc-966a662c4095_0(a7fb8b16a4c6b418b46d72bb631037986d446c2229e4ece9df9a502012fa70ce): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/observability-operator-59bdc8b94-bdm2v" Jan 28 18:45:56 crc kubenswrapper[4721]: E0128 18:45:56.551700 4721 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-bdm2v_openshift-operators_ab955356-2884-4e1b-9dfc-966a662c4095_0(a7fb8b16a4c6b418b46d72bb631037986d446c2229e4ece9df9a502012fa70ce): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/observability-operator-59bdc8b94-bdm2v" Jan 28 18:45:56 crc kubenswrapper[4721]: E0128 18:45:56.551800 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"observability-operator-59bdc8b94-bdm2v_openshift-operators(ab955356-2884-4e1b-9dfc-966a662c4095)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"observability-operator-59bdc8b94-bdm2v_openshift-operators(ab955356-2884-4e1b-9dfc-966a662c4095)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-bdm2v_openshift-operators_ab955356-2884-4e1b-9dfc-966a662c4095_0(a7fb8b16a4c6b418b46d72bb631037986d446c2229e4ece9df9a502012fa70ce): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/observability-operator-59bdc8b94-bdm2v" podUID="ab955356-2884-4e1b-9dfc-966a662c4095" Jan 28 18:45:57 crc kubenswrapper[4721]: I0128 18:45:57.527842 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-424xn" Jan 28 18:45:57 crc kubenswrapper[4721]: I0128 18:45:57.528855 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-424xn" Jan 28 18:45:57 crc kubenswrapper[4721]: E0128 18:45:57.553282 4721 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-424xn_openshift-operators_cd50289b-aa27-438d-89a2-405552dbadf7_0(d76f5c030e1d37cb2f1c5a0388fe46a0ff5a473077746ab8ba16cad0dc4f3857): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 28 18:45:57 crc kubenswrapper[4721]: E0128 18:45:57.553663 4721 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-424xn_openshift-operators_cd50289b-aa27-438d-89a2-405552dbadf7_0(d76f5c030e1d37cb2f1c5a0388fe46a0ff5a473077746ab8ba16cad0dc4f3857): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-424xn" Jan 28 18:45:57 crc kubenswrapper[4721]: E0128 18:45:57.553692 4721 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-424xn_openshift-operators_cd50289b-aa27-438d-89a2-405552dbadf7_0(d76f5c030e1d37cb2f1c5a0388fe46a0ff5a473077746ab8ba16cad0dc4f3857): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/obo-prometheus-operator-68bc856cb9-424xn" Jan 28 18:45:57 crc kubenswrapper[4721]: E0128 18:45:57.553759 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-68bc856cb9-424xn_openshift-operators(cd50289b-aa27-438d-89a2-405552dbadf7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-68bc856cb9-424xn_openshift-operators(cd50289b-aa27-438d-89a2-405552dbadf7)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-424xn_openshift-operators_cd50289b-aa27-438d-89a2-405552dbadf7_0(d76f5c030e1d37cb2f1c5a0388fe46a0ff5a473077746ab8ba16cad0dc4f3857): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-424xn" podUID="cd50289b-aa27-438d-89a2-405552dbadf7" Jan 28 18:45:59 crc kubenswrapper[4721]: I0128 18:45:59.528495 4721 scope.go:117] "RemoveContainer" containerID="09078904e276a9f5eb4aafabbe371ff67e22dd1b352aa67825ea2de56709d503" Jan 28 18:46:00 crc kubenswrapper[4721]: I0128 18:46:00.671478 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-rgqdt_c0a22020-3f34-4895-beec-2ed5d829ea79/kube-multus/2.log" Jan 28 18:46:00 crc kubenswrapper[4721]: I0128 18:46:00.671806 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-rgqdt" event={"ID":"c0a22020-3f34-4895-beec-2ed5d829ea79","Type":"ContainerStarted","Data":"dcc7599df8cdab85bd72fb6e78de16052dec0318864900cbef0d9d7797e3d030"} Jan 28 18:46:01 crc kubenswrapper[4721]: I0128 18:46:01.225414 4721 patch_prober.go:28] interesting pod/machine-config-daemon-76rx2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 18:46:01 crc kubenswrapper[4721]: I0128 18:46:01.225489 4721 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-76rx2" podUID="6e3427a4-9a03-4a08-bf7f-7a5e96290ad6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 18:46:04 crc kubenswrapper[4721]: I0128 18:46:04.764010 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-vhgbz" Jan 28 18:46:05 crc kubenswrapper[4721]: I0128 18:46:05.528437 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-fqs7q" Jan 28 18:46:05 crc kubenswrapper[4721]: I0128 18:46:05.531821 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-fqs7q" Jan 28 18:46:05 crc kubenswrapper[4721]: I0128 18:46:05.802104 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-fqs7q"] Jan 28 18:46:06 crc kubenswrapper[4721]: I0128 18:46:06.528394 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-65d8d94d66-sbgj9" Jan 28 18:46:06 crc kubenswrapper[4721]: I0128 18:46:06.528979 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-65d8d94d66-sbgj9" Jan 28 18:46:06 crc kubenswrapper[4721]: I0128 18:46:06.700541 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5bf474d74f-fqs7q" event={"ID":"ed6d2e9b-bba7-4ef6-9f36-f9b77dd19117","Type":"ContainerStarted","Data":"af4fdc7f10896013ad4e50b43dc877343d41dcdcf806c7f6e7fc06d536876d93"} Jan 28 18:46:06 crc kubenswrapper[4721]: I0128 18:46:06.912460 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-65d8d94d66-sbgj9"] Jan 28 18:46:06 crc kubenswrapper[4721]: W0128 18:46:06.917825 4721 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode3cb407f_4a19_4f81_b388_4db383b55701.slice/crio-65a106fb03f0deab1b5a5d602079bed33a61a836ed4954af7bfbd4908ac6e754 WatchSource:0}: Error finding container 65a106fb03f0deab1b5a5d602079bed33a61a836ed4954af7bfbd4908ac6e754: Status 404 returned error can't find the container with id 65a106fb03f0deab1b5a5d602079bed33a61a836ed4954af7bfbd4908ac6e754 Jan 28 18:46:07 crc kubenswrapper[4721]: I0128 18:46:07.707254 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-65d8d94d66-sbgj9" event={"ID":"e3cb407f-4a19-4f81-b388-4db383b55701","Type":"ContainerStarted","Data":"65a106fb03f0deab1b5a5d602079bed33a61a836ed4954af7bfbd4908ac6e754"} Jan 28 18:46:09 crc kubenswrapper[4721]: I0128 18:46:09.528809 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-65d8d94d66-h2rfr" Jan 28 18:46:09 crc kubenswrapper[4721]: I0128 18:46:09.529572 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-65d8d94d66-h2rfr" Jan 28 18:46:10 crc kubenswrapper[4721]: I0128 18:46:10.528588 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-424xn" Jan 28 18:46:10 crc kubenswrapper[4721]: I0128 18:46:10.529027 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-424xn" Jan 28 18:46:11 crc kubenswrapper[4721]: I0128 18:46:11.299402 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-65d8d94d66-h2rfr"] Jan 28 18:46:11 crc kubenswrapper[4721]: W0128 18:46:11.320241 4721 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8b291a65_1dc7_4312_a429_60bb0a86800d.slice/crio-88f9fe6ca864f20d13c6c7879ba38e1c7713c98e9bfde5ca889213dfe00e1803 WatchSource:0}: Error finding container 88f9fe6ca864f20d13c6c7879ba38e1c7713c98e9bfde5ca889213dfe00e1803: Status 404 returned error can't find the container with id 88f9fe6ca864f20d13c6c7879ba38e1c7713c98e9bfde5ca889213dfe00e1803 Jan 28 18:46:11 crc kubenswrapper[4721]: I0128 18:46:11.396402 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-424xn"] Jan 28 18:46:11 crc kubenswrapper[4721]: I0128 18:46:11.528717 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-bdm2v" Jan 28 18:46:11 crc kubenswrapper[4721]: I0128 18:46:11.529290 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-bdm2v" Jan 28 18:46:11 crc kubenswrapper[4721]: I0128 18:46:11.757327 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-65d8d94d66-sbgj9" event={"ID":"e3cb407f-4a19-4f81-b388-4db383b55701","Type":"ContainerStarted","Data":"a112188a760b36dad1a9f46240d9247a4619672804106456ae6cf92954d70b57"} Jan 28 18:46:11 crc kubenswrapper[4721]: I0128 18:46:11.772284 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-bdm2v"] Jan 28 18:46:11 crc kubenswrapper[4721]: I0128 18:46:11.772315 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5bf474d74f-fqs7q" event={"ID":"ed6d2e9b-bba7-4ef6-9f36-f9b77dd19117","Type":"ContainerStarted","Data":"bbd1d9451ed5a8cc705400e244c7b0dcb1df9f77d806d7148f544f6a4eb9ec0c"} Jan 28 18:46:11 crc kubenswrapper[4721]: I0128 18:46:11.772340 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/perses-operator-5bf474d74f-fqs7q" Jan 28 18:46:11 crc kubenswrapper[4721]: I0128 18:46:11.772350 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-65d8d94d66-h2rfr" event={"ID":"8b291a65-1dc7-4312-a429-60bb0a86800d","Type":"ContainerStarted","Data":"bef000c754d993698c6f9d71819b02f25c058d1bbd4103cb232a6ca8783f3d2b"} Jan 28 18:46:11 crc kubenswrapper[4721]: I0128 18:46:11.772360 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-65d8d94d66-h2rfr" event={"ID":"8b291a65-1dc7-4312-a429-60bb0a86800d","Type":"ContainerStarted","Data":"88f9fe6ca864f20d13c6c7879ba38e1c7713c98e9bfde5ca889213dfe00e1803"} Jan 28 18:46:11 crc kubenswrapper[4721]: I0128 18:46:11.772369 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-424xn" event={"ID":"cd50289b-aa27-438d-89a2-405552dbadf7","Type":"ContainerStarted","Data":"e95e8359c5bfef3bfe88838ff3355a3a174e4bed5b7d4d2a446e7e94b989aa66"} Jan 28 18:46:11 crc kubenswrapper[4721]: I0128 18:46:11.818995 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-65d8d94d66-sbgj9" podStartSLOduration=27.821731046 podStartE2EDuration="31.818976376s" podCreationTimestamp="2026-01-28 18:45:40 +0000 UTC" firstStartedPulling="2026-01-28 18:46:06.919994575 +0000 UTC m=+732.645300135" lastFinishedPulling="2026-01-28 18:46:10.917239905 +0000 UTC m=+736.642545465" observedRunningTime="2026-01-28 18:46:11.797074688 +0000 UTC m=+737.522380248" watchObservedRunningTime="2026-01-28 18:46:11.818976376 +0000 UTC m=+737.544281936" Jan 28 18:46:11 crc kubenswrapper[4721]: I0128 18:46:11.820429 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/perses-operator-5bf474d74f-fqs7q" podStartSLOduration=25.715756411 podStartE2EDuration="30.820419831s" podCreationTimestamp="2026-01-28 18:45:41 +0000 UTC" firstStartedPulling="2026-01-28 18:46:05.81147975 +0000 UTC m=+731.536785310" lastFinishedPulling="2026-01-28 18:46:10.91614318 +0000 UTC m=+736.641448730" 
observedRunningTime="2026-01-28 18:46:11.816944933 +0000 UTC m=+737.542250493" watchObservedRunningTime="2026-01-28 18:46:11.820419831 +0000 UTC m=+737.545725391" Jan 28 18:46:12 crc kubenswrapper[4721]: I0128 18:46:12.785733 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-59bdc8b94-bdm2v" event={"ID":"ab955356-2884-4e1b-9dfc-966a662c4095","Type":"ContainerStarted","Data":"ea64ea84a60c31c2749dcc3c2686b5833ef4484ada9983775de72acbe9f01e02"} Jan 28 18:46:14 crc kubenswrapper[4721]: I0128 18:46:14.807529 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-424xn" event={"ID":"cd50289b-aa27-438d-89a2-405552dbadf7","Type":"ContainerStarted","Data":"fcbe4155b5c4fbd830cc75790cc04c7fdba1cad5aa3f3a0fd9c0bba50ba3235b"} Jan 28 18:46:14 crc kubenswrapper[4721]: I0128 18:46:14.832358 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-65d8d94d66-h2rfr" podStartSLOduration=34.831012755 podStartE2EDuration="34.831012755s" podCreationTimestamp="2026-01-28 18:45:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:46:11.844442196 +0000 UTC m=+737.569747756" watchObservedRunningTime="2026-01-28 18:46:14.831012755 +0000 UTC m=+740.556318315" Jan 28 18:46:14 crc kubenswrapper[4721]: I0128 18:46:14.835094 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-424xn" podStartSLOduration=31.883231043 podStartE2EDuration="34.835072673s" podCreationTimestamp="2026-01-28 18:45:40 +0000 UTC" firstStartedPulling="2026-01-28 18:46:11.404494508 +0000 UTC m=+737.129800068" lastFinishedPulling="2026-01-28 18:46:14.356336138 +0000 UTC m=+740.081641698" observedRunningTime="2026-01-28 18:46:14.827137103 +0000 UTC m=+740.552442663" watchObservedRunningTime="2026-01-28 18:46:14.835072673 +0000 UTC m=+740.560378233" Jan 28 18:46:18 crc kubenswrapper[4721]: I0128 18:46:18.834207 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-59bdc8b94-bdm2v" event={"ID":"ab955356-2884-4e1b-9dfc-966a662c4095","Type":"ContainerStarted","Data":"8b0ab5a554ca6ffe9e7f71ca1cf7243baf841883e0fb5dda728a9b053ec7d272"} Jan 28 18:46:18 crc kubenswrapper[4721]: I0128 18:46:18.834634 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/observability-operator-59bdc8b94-bdm2v" Jan 28 18:46:18 crc kubenswrapper[4721]: I0128 18:46:18.837163 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/observability-operator-59bdc8b94-bdm2v" Jan 28 18:46:18 crc kubenswrapper[4721]: I0128 18:46:18.859225 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/observability-operator-59bdc8b94-bdm2v" podStartSLOduration=32.915076771 podStartE2EDuration="38.859207337s" podCreationTimestamp="2026-01-28 18:45:40 +0000 UTC" firstStartedPulling="2026-01-28 18:46:11.777683379 +0000 UTC m=+737.502988939" lastFinishedPulling="2026-01-28 18:46:17.721813945 +0000 UTC m=+743.447119505" observedRunningTime="2026-01-28 18:46:18.852852208 +0000 UTC m=+744.578157778" watchObservedRunningTime="2026-01-28 18:46:18.859207337 +0000 UTC m=+744.584512897" Jan 28 18:46:21 crc kubenswrapper[4721]: I0128 18:46:21.540080 4721 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/perses-operator-5bf474d74f-fqs7q" Jan 28 18:46:27 crc kubenswrapper[4721]: I0128 18:46:27.138479 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-f66kh"] Jan 28 18:46:27 crc kubenswrapper[4721]: E0128 18:46:27.139398 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6c3bbd44-e901-4e2a-b48c-f1e1ddb965dc" containerName="registry" Jan 28 18:46:27 crc kubenswrapper[4721]: I0128 18:46:27.139436 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="6c3bbd44-e901-4e2a-b48c-f1e1ddb965dc" containerName="registry" Jan 28 18:46:27 crc kubenswrapper[4721]: I0128 18:46:27.139585 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="6c3bbd44-e901-4e2a-b48c-f1e1ddb965dc" containerName="registry" Jan 28 18:46:27 crc kubenswrapper[4721]: I0128 18:46:27.140090 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-f66kh" Jan 28 18:46:27 crc kubenswrapper[4721]: I0128 18:46:27.148329 4721 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-n62kl" Jan 28 18:46:27 crc kubenswrapper[4721]: I0128 18:46:27.148560 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt" Jan 28 18:46:27 crc kubenswrapper[4721]: I0128 18:46:27.153383 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt" Jan 28 18:46:27 crc kubenswrapper[4721]: I0128 18:46:27.158707 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-f66kh"] Jan 28 18:46:27 crc kubenswrapper[4721]: I0128 18:46:27.164498 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-858654f9db-xxzt6"] Jan 28 18:46:27 crc kubenswrapper[4721]: I0128 18:46:27.165450 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-xxzt6" Jan 28 18:46:27 crc kubenswrapper[4721]: I0128 18:46:27.167316 4721 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-2247d" Jan 28 18:46:27 crc kubenswrapper[4721]: I0128 18:46:27.181330 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-xxzt6"] Jan 28 18:46:27 crc kubenswrapper[4721]: I0128 18:46:27.195025 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-l5dj9"] Jan 28 18:46:27 crc kubenswrapper[4721]: I0128 18:46:27.197010 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-l5dj9" Jan 28 18:46:27 crc kubenswrapper[4721]: I0128 18:46:27.200854 4721 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-c9h48" Jan 28 18:46:27 crc kubenswrapper[4721]: I0128 18:46:27.222475 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-l5dj9"] Jan 28 18:46:27 crc kubenswrapper[4721]: I0128 18:46:27.301259 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-96wb7\" (UniqueName: \"kubernetes.io/projected/f637f152-a40b-45ff-989f-f82ad65b2066-kube-api-access-96wb7\") pod \"cert-manager-cainjector-cf98fcc89-f66kh\" (UID: \"f637f152-a40b-45ff-989f-f82ad65b2066\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-f66kh" Jan 28 18:46:27 crc kubenswrapper[4721]: I0128 18:46:27.301307 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g74s4\" (UniqueName: \"kubernetes.io/projected/12d309c4-9049-41c8-be1f-8f0e422ab186-kube-api-access-g74s4\") pod \"cert-manager-webhook-687f57d79b-l5dj9\" (UID: \"12d309c4-9049-41c8-be1f-8f0e422ab186\") " pod="cert-manager/cert-manager-webhook-687f57d79b-l5dj9" Jan 28 18:46:27 crc kubenswrapper[4721]: I0128 18:46:27.301429 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hrqsk\" (UniqueName: \"kubernetes.io/projected/c68c41d8-39c1-417b-a4ba-dafeb3762c32-kube-api-access-hrqsk\") pod \"cert-manager-858654f9db-xxzt6\" (UID: \"c68c41d8-39c1-417b-a4ba-dafeb3762c32\") " pod="cert-manager/cert-manager-858654f9db-xxzt6" Jan 28 18:46:27 crc kubenswrapper[4721]: I0128 18:46:27.403193 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-96wb7\" (UniqueName: \"kubernetes.io/projected/f637f152-a40b-45ff-989f-f82ad65b2066-kube-api-access-96wb7\") pod \"cert-manager-cainjector-cf98fcc89-f66kh\" (UID: \"f637f152-a40b-45ff-989f-f82ad65b2066\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-f66kh" Jan 28 18:46:27 crc kubenswrapper[4721]: I0128 18:46:27.403587 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g74s4\" (UniqueName: \"kubernetes.io/projected/12d309c4-9049-41c8-be1f-8f0e422ab186-kube-api-access-g74s4\") pod \"cert-manager-webhook-687f57d79b-l5dj9\" (UID: \"12d309c4-9049-41c8-be1f-8f0e422ab186\") " pod="cert-manager/cert-manager-webhook-687f57d79b-l5dj9" Jan 28 18:46:27 crc kubenswrapper[4721]: I0128 18:46:27.403772 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hrqsk\" (UniqueName: \"kubernetes.io/projected/c68c41d8-39c1-417b-a4ba-dafeb3762c32-kube-api-access-hrqsk\") pod \"cert-manager-858654f9db-xxzt6\" (UID: \"c68c41d8-39c1-417b-a4ba-dafeb3762c32\") " pod="cert-manager/cert-manager-858654f9db-xxzt6" Jan 28 18:46:27 crc kubenswrapper[4721]: I0128 18:46:27.422603 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-96wb7\" (UniqueName: \"kubernetes.io/projected/f637f152-a40b-45ff-989f-f82ad65b2066-kube-api-access-96wb7\") pod \"cert-manager-cainjector-cf98fcc89-f66kh\" (UID: \"f637f152-a40b-45ff-989f-f82ad65b2066\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-f66kh" Jan 28 18:46:27 crc kubenswrapper[4721]: I0128 18:46:27.423649 4721 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-hrqsk\" (UniqueName: \"kubernetes.io/projected/c68c41d8-39c1-417b-a4ba-dafeb3762c32-kube-api-access-hrqsk\") pod \"cert-manager-858654f9db-xxzt6\" (UID: \"c68c41d8-39c1-417b-a4ba-dafeb3762c32\") " pod="cert-manager/cert-manager-858654f9db-xxzt6" Jan 28 18:46:27 crc kubenswrapper[4721]: I0128 18:46:27.425740 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g74s4\" (UniqueName: \"kubernetes.io/projected/12d309c4-9049-41c8-be1f-8f0e422ab186-kube-api-access-g74s4\") pod \"cert-manager-webhook-687f57d79b-l5dj9\" (UID: \"12d309c4-9049-41c8-be1f-8f0e422ab186\") " pod="cert-manager/cert-manager-webhook-687f57d79b-l5dj9" Jan 28 18:46:27 crc kubenswrapper[4721]: I0128 18:46:27.465699 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-f66kh" Jan 28 18:46:27 crc kubenswrapper[4721]: I0128 18:46:27.480783 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-xxzt6" Jan 28 18:46:27 crc kubenswrapper[4721]: I0128 18:46:27.526769 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-l5dj9" Jan 28 18:46:27 crc kubenswrapper[4721]: I0128 18:46:27.771075 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-f66kh"] Jan 28 18:46:27 crc kubenswrapper[4721]: W0128 18:46:27.779758 4721 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf637f152_a40b_45ff_989f_f82ad65b2066.slice/crio-d772fa689d68fff8d6757896db93c5466b69ccfa7eaa1954bdcebd9f26b656e5 WatchSource:0}: Error finding container d772fa689d68fff8d6757896db93c5466b69ccfa7eaa1954bdcebd9f26b656e5: Status 404 returned error can't find the container with id d772fa689d68fff8d6757896db93c5466b69ccfa7eaa1954bdcebd9f26b656e5 Jan 28 18:46:27 crc kubenswrapper[4721]: I0128 18:46:27.880983 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-f66kh" event={"ID":"f637f152-a40b-45ff-989f-f82ad65b2066","Type":"ContainerStarted","Data":"d772fa689d68fff8d6757896db93c5466b69ccfa7eaa1954bdcebd9f26b656e5"} Jan 28 18:46:28 crc kubenswrapper[4721]: I0128 18:46:28.045616 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-l5dj9"] Jan 28 18:46:28 crc kubenswrapper[4721]: W0128 18:46:28.049530 4721 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod12d309c4_9049_41c8_be1f_8f0e422ab186.slice/crio-1b4ee88134ac06deeedc9cc51e75be539845bacf7fcdaa7f0c6c7386341ff9c9 WatchSource:0}: Error finding container 1b4ee88134ac06deeedc9cc51e75be539845bacf7fcdaa7f0c6c7386341ff9c9: Status 404 returned error can't find the container with id 1b4ee88134ac06deeedc9cc51e75be539845bacf7fcdaa7f0c6c7386341ff9c9 Jan 28 18:46:28 crc kubenswrapper[4721]: I0128 18:46:28.072500 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-xxzt6"] Jan 28 18:46:28 crc kubenswrapper[4721]: I0128 18:46:28.903451 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-xxzt6" event={"ID":"c68c41d8-39c1-417b-a4ba-dafeb3762c32","Type":"ContainerStarted","Data":"8b333408f5688bd2474d446d24031cb10b85edf122473d4a3de29674caeff8cb"} Jan 28 18:46:28 crc 
kubenswrapper[4721]: I0128 18:46:28.904631 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-l5dj9" event={"ID":"12d309c4-9049-41c8-be1f-8f0e422ab186","Type":"ContainerStarted","Data":"1b4ee88134ac06deeedc9cc51e75be539845bacf7fcdaa7f0c6c7386341ff9c9"}
Jan 28 18:46:30 crc kubenswrapper[4721]: I0128 18:46:30.931572 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-f66kh" event={"ID":"f637f152-a40b-45ff-989f-f82ad65b2066","Type":"ContainerStarted","Data":"b07ea0a97f0ebd1514d29ec6fdd9358223df50d31e26752939a813f0c1a63991"}
Jan 28 18:46:30 crc kubenswrapper[4721]: I0128 18:46:30.951273 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-cf98fcc89-f66kh" podStartSLOduration=1.662844108 podStartE2EDuration="3.95125384s" podCreationTimestamp="2026-01-28 18:46:27 +0000 UTC" firstStartedPulling="2026-01-28 18:46:27.784678538 +0000 UTC m=+753.509984098" lastFinishedPulling="2026-01-28 18:46:30.07308827 +0000 UTC m=+755.798393830" observedRunningTime="2026-01-28 18:46:30.949771493 +0000 UTC m=+756.675077063" watchObservedRunningTime="2026-01-28 18:46:30.95125384 +0000 UTC m=+756.676559400"
Jan 28 18:46:31 crc kubenswrapper[4721]: I0128 18:46:31.225019 4721 patch_prober.go:28] interesting pod/machine-config-daemon-76rx2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 28 18:46:31 crc kubenswrapper[4721]: I0128 18:46:31.225083 4721 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-76rx2" podUID="6e3427a4-9a03-4a08-bf7f-7a5e96290ad6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 28 18:46:31 crc kubenswrapper[4721]: I0128 18:46:31.225129 4721 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-76rx2"
Jan 28 18:46:31 crc kubenswrapper[4721]: I0128 18:46:31.225736 4721 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"1d9cb44706b2f5923bc65487fc2d438c7475d17f3368442164e195f17c4693d2"} pod="openshift-machine-config-operator/machine-config-daemon-76rx2" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 28 18:46:31 crc kubenswrapper[4721]: I0128 18:46:31.225794 4721 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-76rx2" podUID="6e3427a4-9a03-4a08-bf7f-7a5e96290ad6" containerName="machine-config-daemon" containerID="cri-o://1d9cb44706b2f5923bc65487fc2d438c7475d17f3368442164e195f17c4693d2" gracePeriod=600
Jan 28 18:46:31 crc kubenswrapper[4721]: I0128 18:46:31.938901 4721 generic.go:334] "Generic (PLEG): container finished" podID="6e3427a4-9a03-4a08-bf7f-7a5e96290ad6" containerID="1d9cb44706b2f5923bc65487fc2d438c7475d17f3368442164e195f17c4693d2" exitCode=0
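The run above is the kubelet's liveness-failure path in order: patch_prober records the HTTP GET against http://127.0.0.1:8798/health refusing the connection, prober.go marks the probe failed, the SyncLoop flags the container unhealthy, and kuberuntime_container kills it with the pod's termination grace period (600s here); the PLEG then reports the container finished and, just below, its replacement starting. A rough sketch of the check in Go, assuming an illustrative failureThreshold of 3 (the real value comes from the pod spec, and the real prober runs on a periodSeconds timer and resets its counter on success):

```go
package main

import (
	"fmt"
	"net/http"
	"os"
	"time"
)

// liveness mirrors an HTTP GET probe: it reports failure once
// failureThreshold consecutive attempts error or return a bad status.
func liveness(url string, failureThreshold int) bool {
	client := http.Client{Timeout: time.Second}
	for failures := 0; failures < failureThreshold; failures++ {
		resp, err := client.Get(url)
		if err == nil {
			healthy := resp.StatusCode < 400
			resp.Body.Close()
			if healthy {
				return true
			}
		}
	}
	return false
}

func main() {
	// Address taken from the log lines above; threshold is illustrative.
	if !liveness("http://127.0.0.1:8798/health", 3) {
		fmt.Println("failed liveness probe, will be restarted")
		os.Exit(1) // the kubelet would kill and restart the container here
	}
}
```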
event={"ID":"6e3427a4-9a03-4a08-bf7f-7a5e96290ad6","Type":"ContainerDied","Data":"1d9cb44706b2f5923bc65487fc2d438c7475d17f3368442164e195f17c4693d2"} Jan 28 18:46:31 crc kubenswrapper[4721]: I0128 18:46:31.939067 4721 scope.go:117] "RemoveContainer" containerID="760f911b45297553f15fd5d7594848accfaf0eb2624491e25ca92b5519181df7" Jan 28 18:46:32 crc kubenswrapper[4721]: I0128 18:46:32.946709 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-76rx2" event={"ID":"6e3427a4-9a03-4a08-bf7f-7a5e96290ad6","Type":"ContainerStarted","Data":"05b5a08257768ab03feca7d9732c3a599d23c36babbadf35cb5007f36020b414"} Jan 28 18:46:32 crc kubenswrapper[4721]: I0128 18:46:32.949185 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-xxzt6" event={"ID":"c68c41d8-39c1-417b-a4ba-dafeb3762c32","Type":"ContainerStarted","Data":"5efc1227eb587f7f7504acde02ca67bedc099cba4263a8db0e0434f84ecef239"} Jan 28 18:46:32 crc kubenswrapper[4721]: I0128 18:46:32.950837 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-l5dj9" event={"ID":"12d309c4-9049-41c8-be1f-8f0e422ab186","Type":"ContainerStarted","Data":"2ad814a967bf6b64ceaddb08df29c0084c26555507465b69006c087b9e3fc22e"} Jan 28 18:46:32 crc kubenswrapper[4721]: I0128 18:46:32.950997 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-687f57d79b-l5dj9" Jan 28 18:46:33 crc kubenswrapper[4721]: I0128 18:46:33.017607 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-687f57d79b-l5dj9" podStartSLOduration=1.838805565 podStartE2EDuration="6.017589677s" podCreationTimestamp="2026-01-28 18:46:27 +0000 UTC" firstStartedPulling="2026-01-28 18:46:28.052634634 +0000 UTC m=+753.777940194" lastFinishedPulling="2026-01-28 18:46:32.231418736 +0000 UTC m=+757.956724306" observedRunningTime="2026-01-28 18:46:33.01418111 +0000 UTC m=+758.739486670" watchObservedRunningTime="2026-01-28 18:46:33.017589677 +0000 UTC m=+758.742895247" Jan 28 18:46:37 crc kubenswrapper[4721]: I0128 18:46:37.535551 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-687f57d79b-l5dj9" Jan 28 18:46:37 crc kubenswrapper[4721]: I0128 18:46:37.554819 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-858654f9db-xxzt6" podStartSLOduration=6.319547201 podStartE2EDuration="10.554798016s" podCreationTimestamp="2026-01-28 18:46:27 +0000 UTC" firstStartedPulling="2026-01-28 18:46:28.079540599 +0000 UTC m=+753.804846159" lastFinishedPulling="2026-01-28 18:46:32.314791414 +0000 UTC m=+758.040096974" observedRunningTime="2026-01-28 18:46:33.039066501 +0000 UTC m=+758.764372061" watchObservedRunningTime="2026-01-28 18:46:37.554798016 +0000 UTC m=+763.280103586" Jan 28 18:46:46 crc kubenswrapper[4721]: I0128 18:46:46.450766 4721 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Jan 28 18:47:01 crc kubenswrapper[4721]: I0128 18:47:01.645142 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/3e572a74f8b8ca2bcfe04329d4f26bd9689911be5d166a7403bd6ae773w2zgc"] Jan 28 18:47:01 crc kubenswrapper[4721]: I0128 18:47:01.648213 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/3e572a74f8b8ca2bcfe04329d4f26bd9689911be5d166a7403bd6ae773w2zgc" Jan 28 18:47:01 crc kubenswrapper[4721]: I0128 18:47:01.650950 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Jan 28 18:47:01 crc kubenswrapper[4721]: I0128 18:47:01.662571 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/3e572a74f8b8ca2bcfe04329d4f26bd9689911be5d166a7403bd6ae773w2zgc"] Jan 28 18:47:01 crc kubenswrapper[4721]: I0128 18:47:01.700360 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/28e082c3-f662-4caa-be33-4bf2cc234ca7-util\") pod \"3e572a74f8b8ca2bcfe04329d4f26bd9689911be5d166a7403bd6ae773w2zgc\" (UID: \"28e082c3-f662-4caa-be33-4bf2cc234ca7\") " pod="openshift-marketplace/3e572a74f8b8ca2bcfe04329d4f26bd9689911be5d166a7403bd6ae773w2zgc" Jan 28 18:47:01 crc kubenswrapper[4721]: I0128 18:47:01.700467 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/28e082c3-f662-4caa-be33-4bf2cc234ca7-bundle\") pod \"3e572a74f8b8ca2bcfe04329d4f26bd9689911be5d166a7403bd6ae773w2zgc\" (UID: \"28e082c3-f662-4caa-be33-4bf2cc234ca7\") " pod="openshift-marketplace/3e572a74f8b8ca2bcfe04329d4f26bd9689911be5d166a7403bd6ae773w2zgc" Jan 28 18:47:01 crc kubenswrapper[4721]: I0128 18:47:01.700498 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m7sgp\" (UniqueName: \"kubernetes.io/projected/28e082c3-f662-4caa-be33-4bf2cc234ca7-kube-api-access-m7sgp\") pod \"3e572a74f8b8ca2bcfe04329d4f26bd9689911be5d166a7403bd6ae773w2zgc\" (UID: \"28e082c3-f662-4caa-be33-4bf2cc234ca7\") " pod="openshift-marketplace/3e572a74f8b8ca2bcfe04329d4f26bd9689911be5d166a7403bd6ae773w2zgc" Jan 28 18:47:01 crc kubenswrapper[4721]: I0128 18:47:01.802157 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/28e082c3-f662-4caa-be33-4bf2cc234ca7-bundle\") pod \"3e572a74f8b8ca2bcfe04329d4f26bd9689911be5d166a7403bd6ae773w2zgc\" (UID: \"28e082c3-f662-4caa-be33-4bf2cc234ca7\") " pod="openshift-marketplace/3e572a74f8b8ca2bcfe04329d4f26bd9689911be5d166a7403bd6ae773w2zgc" Jan 28 18:47:01 crc kubenswrapper[4721]: I0128 18:47:01.802237 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m7sgp\" (UniqueName: \"kubernetes.io/projected/28e082c3-f662-4caa-be33-4bf2cc234ca7-kube-api-access-m7sgp\") pod \"3e572a74f8b8ca2bcfe04329d4f26bd9689911be5d166a7403bd6ae773w2zgc\" (UID: \"28e082c3-f662-4caa-be33-4bf2cc234ca7\") " pod="openshift-marketplace/3e572a74f8b8ca2bcfe04329d4f26bd9689911be5d166a7403bd6ae773w2zgc" Jan 28 18:47:01 crc kubenswrapper[4721]: I0128 18:47:01.802293 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/28e082c3-f662-4caa-be33-4bf2cc234ca7-util\") pod \"3e572a74f8b8ca2bcfe04329d4f26bd9689911be5d166a7403bd6ae773w2zgc\" (UID: \"28e082c3-f662-4caa-be33-4bf2cc234ca7\") " pod="openshift-marketplace/3e572a74f8b8ca2bcfe04329d4f26bd9689911be5d166a7403bd6ae773w2zgc" Jan 28 18:47:01 crc kubenswrapper[4721]: I0128 18:47:01.802908 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/28e082c3-f662-4caa-be33-4bf2cc234ca7-bundle\") pod \"3e572a74f8b8ca2bcfe04329d4f26bd9689911be5d166a7403bd6ae773w2zgc\" (UID: \"28e082c3-f662-4caa-be33-4bf2cc234ca7\") " pod="openshift-marketplace/3e572a74f8b8ca2bcfe04329d4f26bd9689911be5d166a7403bd6ae773w2zgc" Jan 28 18:47:01 crc kubenswrapper[4721]: I0128 18:47:01.802961 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/28e082c3-f662-4caa-be33-4bf2cc234ca7-util\") pod \"3e572a74f8b8ca2bcfe04329d4f26bd9689911be5d166a7403bd6ae773w2zgc\" (UID: \"28e082c3-f662-4caa-be33-4bf2cc234ca7\") " pod="openshift-marketplace/3e572a74f8b8ca2bcfe04329d4f26bd9689911be5d166a7403bd6ae773w2zgc" Jan 28 18:47:01 crc kubenswrapper[4721]: I0128 18:47:01.824783 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m7sgp\" (UniqueName: \"kubernetes.io/projected/28e082c3-f662-4caa-be33-4bf2cc234ca7-kube-api-access-m7sgp\") pod \"3e572a74f8b8ca2bcfe04329d4f26bd9689911be5d166a7403bd6ae773w2zgc\" (UID: \"28e082c3-f662-4caa-be33-4bf2cc234ca7\") " pod="openshift-marketplace/3e572a74f8b8ca2bcfe04329d4f26bd9689911be5d166a7403bd6ae773w2zgc" Jan 28 18:47:01 crc kubenswrapper[4721]: I0128 18:47:01.990991 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/3e572a74f8b8ca2bcfe04329d4f26bd9689911be5d166a7403bd6ae773w2zgc" Jan 28 18:47:02 crc kubenswrapper[4721]: I0128 18:47:02.251573 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/3e572a74f8b8ca2bcfe04329d4f26bd9689911be5d166a7403bd6ae773w2zgc"] Jan 28 18:47:03 crc kubenswrapper[4721]: I0128 18:47:03.114728 4721 generic.go:334] "Generic (PLEG): container finished" podID="28e082c3-f662-4caa-be33-4bf2cc234ca7" containerID="ec7dd77ba6a00a5ebabfe1fdd9e162c5749b9f4728ffd11ab098b4ebccce3136" exitCode=0 Jan 28 18:47:03 crc kubenswrapper[4721]: I0128 18:47:03.114815 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/3e572a74f8b8ca2bcfe04329d4f26bd9689911be5d166a7403bd6ae773w2zgc" event={"ID":"28e082c3-f662-4caa-be33-4bf2cc234ca7","Type":"ContainerDied","Data":"ec7dd77ba6a00a5ebabfe1fdd9e162c5749b9f4728ffd11ab098b4ebccce3136"} Jan 28 18:47:03 crc kubenswrapper[4721]: I0128 18:47:03.114866 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/3e572a74f8b8ca2bcfe04329d4f26bd9689911be5d166a7403bd6ae773w2zgc" event={"ID":"28e082c3-f662-4caa-be33-4bf2cc234ca7","Type":"ContainerStarted","Data":"c6787d3e991463bacba423c00fe6b76b9a4c5c062ed23f4d349dc8bfece673bf"} Jan 28 18:47:03 crc kubenswrapper[4721]: I0128 18:47:03.593553 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["minio-dev/minio"] Jan 28 18:47:03 crc kubenswrapper[4721]: I0128 18:47:03.597805 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="minio-dev/minio" Jan 28 18:47:03 crc kubenswrapper[4721]: I0128 18:47:03.604628 4721 reflector.go:368] Caches populated for *v1.Secret from object-"minio-dev"/"default-dockercfg-6598p" Jan 28 18:47:03 crc kubenswrapper[4721]: I0128 18:47:03.604866 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"minio-dev"/"kube-root-ca.crt" Jan 28 18:47:03 crc kubenswrapper[4721]: I0128 18:47:03.606431 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"minio-dev"/"openshift-service-ca.crt" Jan 28 18:47:03 crc kubenswrapper[4721]: I0128 18:47:03.621036 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["minio-dev/minio"] Jan 28 18:47:03 crc kubenswrapper[4721]: I0128 18:47:03.738248 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-7535dd8a-93a1-476a-a18d-e796974e0df0\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7535dd8a-93a1-476a-a18d-e796974e0df0\") pod \"minio\" (UID: \"572683e0-6774-4286-82c7-9cade3187d6c\") " pod="minio-dev/minio" Jan 28 18:47:03 crc kubenswrapper[4721]: I0128 18:47:03.738306 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h2jfb\" (UniqueName: \"kubernetes.io/projected/572683e0-6774-4286-82c7-9cade3187d6c-kube-api-access-h2jfb\") pod \"minio\" (UID: \"572683e0-6774-4286-82c7-9cade3187d6c\") " pod="minio-dev/minio" Jan 28 18:47:03 crc kubenswrapper[4721]: I0128 18:47:03.839517 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-7535dd8a-93a1-476a-a18d-e796974e0df0\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7535dd8a-93a1-476a-a18d-e796974e0df0\") pod \"minio\" (UID: \"572683e0-6774-4286-82c7-9cade3187d6c\") " pod="minio-dev/minio" Jan 28 18:47:03 crc kubenswrapper[4721]: I0128 18:47:03.839577 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h2jfb\" (UniqueName: \"kubernetes.io/projected/572683e0-6774-4286-82c7-9cade3187d6c-kube-api-access-h2jfb\") pod \"minio\" (UID: \"572683e0-6774-4286-82c7-9cade3187d6c\") " pod="minio-dev/minio" Jan 28 18:47:03 crc kubenswrapper[4721]: I0128 18:47:03.842591 4721 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 28 18:47:03 crc kubenswrapper[4721]: I0128 18:47:03.842852 4721 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-7535dd8a-93a1-476a-a18d-e796974e0df0\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7535dd8a-93a1-476a-a18d-e796974e0df0\") pod \"minio\" (UID: \"572683e0-6774-4286-82c7-9cade3187d6c\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/11610036103567c7dc24ea42666a5bf76d2a5d69f844bcfa07858274208502c3/globalmount\"" pod="minio-dev/minio" Jan 28 18:47:03 crc kubenswrapper[4721]: I0128 18:47:03.866603 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h2jfb\" (UniqueName: \"kubernetes.io/projected/572683e0-6774-4286-82c7-9cade3187d6c-kube-api-access-h2jfb\") pod \"minio\" (UID: \"572683e0-6774-4286-82c7-9cade3187d6c\") " pod="minio-dev/minio" Jan 28 18:47:03 crc kubenswrapper[4721]: I0128 18:47:03.871936 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-7535dd8a-93a1-476a-a18d-e796974e0df0\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7535dd8a-93a1-476a-a18d-e796974e0df0\") pod \"minio\" (UID: \"572683e0-6774-4286-82c7-9cade3187d6c\") " pod="minio-dev/minio" Jan 28 18:47:03 crc kubenswrapper[4721]: I0128 18:47:03.927967 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="minio-dev/minio" Jan 28 18:47:03 crc kubenswrapper[4721]: I0128 18:47:03.992685 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-9xbv6"] Jan 28 18:47:03 crc kubenswrapper[4721]: I0128 18:47:03.997796 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-9xbv6" Jan 28 18:47:04 crc kubenswrapper[4721]: I0128 18:47:04.017154 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-9xbv6"] Jan 28 18:47:04 crc kubenswrapper[4721]: I0128 18:47:04.143371 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/36413909-776d-45ba-852d-a5d654e92970-catalog-content\") pod \"redhat-operators-9xbv6\" (UID: \"36413909-776d-45ba-852d-a5d654e92970\") " pod="openshift-marketplace/redhat-operators-9xbv6" Jan 28 18:47:04 crc kubenswrapper[4721]: I0128 18:47:04.143496 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/36413909-776d-45ba-852d-a5d654e92970-utilities\") pod \"redhat-operators-9xbv6\" (UID: \"36413909-776d-45ba-852d-a5d654e92970\") " pod="openshift-marketplace/redhat-operators-9xbv6" Jan 28 18:47:04 crc kubenswrapper[4721]: I0128 18:47:04.143523 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-grd5b\" (UniqueName: \"kubernetes.io/projected/36413909-776d-45ba-852d-a5d654e92970-kube-api-access-grd5b\") pod \"redhat-operators-9xbv6\" (UID: \"36413909-776d-45ba-852d-a5d654e92970\") " pod="openshift-marketplace/redhat-operators-9xbv6" Jan 28 18:47:04 crc kubenswrapper[4721]: I0128 18:47:04.245377 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/36413909-776d-45ba-852d-a5d654e92970-catalog-content\") pod \"redhat-operators-9xbv6\" (UID: \"36413909-776d-45ba-852d-a5d654e92970\") " 
pod="openshift-marketplace/redhat-operators-9xbv6" Jan 28 18:47:04 crc kubenswrapper[4721]: I0128 18:47:04.246608 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/36413909-776d-45ba-852d-a5d654e92970-utilities\") pod \"redhat-operators-9xbv6\" (UID: \"36413909-776d-45ba-852d-a5d654e92970\") " pod="openshift-marketplace/redhat-operators-9xbv6" Jan 28 18:47:04 crc kubenswrapper[4721]: I0128 18:47:04.246627 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-grd5b\" (UniqueName: \"kubernetes.io/projected/36413909-776d-45ba-852d-a5d654e92970-kube-api-access-grd5b\") pod \"redhat-operators-9xbv6\" (UID: \"36413909-776d-45ba-852d-a5d654e92970\") " pod="openshift-marketplace/redhat-operators-9xbv6" Jan 28 18:47:04 crc kubenswrapper[4721]: I0128 18:47:04.246530 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/36413909-776d-45ba-852d-a5d654e92970-catalog-content\") pod \"redhat-operators-9xbv6\" (UID: \"36413909-776d-45ba-852d-a5d654e92970\") " pod="openshift-marketplace/redhat-operators-9xbv6" Jan 28 18:47:04 crc kubenswrapper[4721]: I0128 18:47:04.247320 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/36413909-776d-45ba-852d-a5d654e92970-utilities\") pod \"redhat-operators-9xbv6\" (UID: \"36413909-776d-45ba-852d-a5d654e92970\") " pod="openshift-marketplace/redhat-operators-9xbv6" Jan 28 18:47:04 crc kubenswrapper[4721]: I0128 18:47:04.270675 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-grd5b\" (UniqueName: \"kubernetes.io/projected/36413909-776d-45ba-852d-a5d654e92970-kube-api-access-grd5b\") pod \"redhat-operators-9xbv6\" (UID: \"36413909-776d-45ba-852d-a5d654e92970\") " pod="openshift-marketplace/redhat-operators-9xbv6" Jan 28 18:47:04 crc kubenswrapper[4721]: I0128 18:47:04.372646 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-9xbv6" Jan 28 18:47:04 crc kubenswrapper[4721]: I0128 18:47:04.564102 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["minio-dev/minio"] Jan 28 18:47:04 crc kubenswrapper[4721]: I0128 18:47:04.631525 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-9xbv6"] Jan 28 18:47:05 crc kubenswrapper[4721]: I0128 18:47:05.132599 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/3e572a74f8b8ca2bcfe04329d4f26bd9689911be5d166a7403bd6ae773w2zgc" event={"ID":"28e082c3-f662-4caa-be33-4bf2cc234ca7","Type":"ContainerStarted","Data":"1508cd9773be207788832af355f6d8f80a74ed4904701a153129f3cb05b1f05f"} Jan 28 18:47:05 crc kubenswrapper[4721]: I0128 18:47:05.134717 4721 generic.go:334] "Generic (PLEG): container finished" podID="36413909-776d-45ba-852d-a5d654e92970" containerID="e35381d0171a49ed5e60f1302da8cf9fee1288b8500854e3c19264dad4e3ab69" exitCode=0 Jan 28 18:47:05 crc kubenswrapper[4721]: I0128 18:47:05.134773 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9xbv6" event={"ID":"36413909-776d-45ba-852d-a5d654e92970","Type":"ContainerDied","Data":"e35381d0171a49ed5e60f1302da8cf9fee1288b8500854e3c19264dad4e3ab69"} Jan 28 18:47:05 crc kubenswrapper[4721]: I0128 18:47:05.134801 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9xbv6" event={"ID":"36413909-776d-45ba-852d-a5d654e92970","Type":"ContainerStarted","Data":"cdf8066a39f11f4128d51572b0208300f9cac38fb819bd1e82c536088112d67d"} Jan 28 18:47:05 crc kubenswrapper[4721]: I0128 18:47:05.136785 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="minio-dev/minio" event={"ID":"572683e0-6774-4286-82c7-9cade3187d6c","Type":"ContainerStarted","Data":"34fa66bf7d9e23225c57897177b6630a73e502e4807ce502849cda5d8f274789"} Jan 28 18:47:06 crc kubenswrapper[4721]: I0128 18:47:06.146671 4721 generic.go:334] "Generic (PLEG): container finished" podID="28e082c3-f662-4caa-be33-4bf2cc234ca7" containerID="1508cd9773be207788832af355f6d8f80a74ed4904701a153129f3cb05b1f05f" exitCode=0 Jan 28 18:47:06 crc kubenswrapper[4721]: I0128 18:47:06.146737 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/3e572a74f8b8ca2bcfe04329d4f26bd9689911be5d166a7403bd6ae773w2zgc" event={"ID":"28e082c3-f662-4caa-be33-4bf2cc234ca7","Type":"ContainerDied","Data":"1508cd9773be207788832af355f6d8f80a74ed4904701a153129f3cb05b1f05f"} Jan 28 18:47:07 crc kubenswrapper[4721]: I0128 18:47:07.154988 4721 generic.go:334] "Generic (PLEG): container finished" podID="28e082c3-f662-4caa-be33-4bf2cc234ca7" containerID="47b079161efec2d54851419627e3e7425313cf1354122cdcffe5289451ec60d6" exitCode=0 Jan 28 18:47:07 crc kubenswrapper[4721]: I0128 18:47:07.155200 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/3e572a74f8b8ca2bcfe04329d4f26bd9689911be5d166a7403bd6ae773w2zgc" event={"ID":"28e082c3-f662-4caa-be33-4bf2cc234ca7","Type":"ContainerDied","Data":"47b079161efec2d54851419627e3e7425313cf1354122cdcffe5289451ec60d6"} Jan 28 18:47:07 crc kubenswrapper[4721]: I0128 18:47:07.159352 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9xbv6" event={"ID":"36413909-776d-45ba-852d-a5d654e92970","Type":"ContainerStarted","Data":"f0527a9f54e179c91db3cc0a2df5f193fceaf4c64d96a9595d3e40e708a56990"} Jan 28 18:47:08 crc kubenswrapper[4721]: I0128 18:47:08.649347 4721 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/3e572a74f8b8ca2bcfe04329d4f26bd9689911be5d166a7403bd6ae773w2zgc" Jan 28 18:47:08 crc kubenswrapper[4721]: I0128 18:47:08.821279 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/28e082c3-f662-4caa-be33-4bf2cc234ca7-bundle\") pod \"28e082c3-f662-4caa-be33-4bf2cc234ca7\" (UID: \"28e082c3-f662-4caa-be33-4bf2cc234ca7\") " Jan 28 18:47:08 crc kubenswrapper[4721]: I0128 18:47:08.822596 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m7sgp\" (UniqueName: \"kubernetes.io/projected/28e082c3-f662-4caa-be33-4bf2cc234ca7-kube-api-access-m7sgp\") pod \"28e082c3-f662-4caa-be33-4bf2cc234ca7\" (UID: \"28e082c3-f662-4caa-be33-4bf2cc234ca7\") " Jan 28 18:47:08 crc kubenswrapper[4721]: I0128 18:47:08.822640 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/28e082c3-f662-4caa-be33-4bf2cc234ca7-util\") pod \"28e082c3-f662-4caa-be33-4bf2cc234ca7\" (UID: \"28e082c3-f662-4caa-be33-4bf2cc234ca7\") " Jan 28 18:47:08 crc kubenswrapper[4721]: I0128 18:47:08.822915 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/28e082c3-f662-4caa-be33-4bf2cc234ca7-bundle" (OuterVolumeSpecName: "bundle") pod "28e082c3-f662-4caa-be33-4bf2cc234ca7" (UID: "28e082c3-f662-4caa-be33-4bf2cc234ca7"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 18:47:08 crc kubenswrapper[4721]: I0128 18:47:08.823648 4721 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/28e082c3-f662-4caa-be33-4bf2cc234ca7-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 18:47:08 crc kubenswrapper[4721]: I0128 18:47:08.835785 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/28e082c3-f662-4caa-be33-4bf2cc234ca7-util" (OuterVolumeSpecName: "util") pod "28e082c3-f662-4caa-be33-4bf2cc234ca7" (UID: "28e082c3-f662-4caa-be33-4bf2cc234ca7"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 18:47:08 crc kubenswrapper[4721]: I0128 18:47:08.844667 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/28e082c3-f662-4caa-be33-4bf2cc234ca7-kube-api-access-m7sgp" (OuterVolumeSpecName: "kube-api-access-m7sgp") pod "28e082c3-f662-4caa-be33-4bf2cc234ca7" (UID: "28e082c3-f662-4caa-be33-4bf2cc234ca7"). InnerVolumeSpecName "kube-api-access-m7sgp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:47:08 crc kubenswrapper[4721]: I0128 18:47:08.929555 4721 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/28e082c3-f662-4caa-be33-4bf2cc234ca7-util\") on node \"crc\" DevicePath \"\"" Jan 28 18:47:08 crc kubenswrapper[4721]: I0128 18:47:08.929599 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m7sgp\" (UniqueName: \"kubernetes.io/projected/28e082c3-f662-4caa-be33-4bf2cc234ca7-kube-api-access-m7sgp\") on node \"crc\" DevicePath \"\"" Jan 28 18:47:09 crc kubenswrapper[4721]: I0128 18:47:09.176462 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/3e572a74f8b8ca2bcfe04329d4f26bd9689911be5d166a7403bd6ae773w2zgc" event={"ID":"28e082c3-f662-4caa-be33-4bf2cc234ca7","Type":"ContainerDied","Data":"c6787d3e991463bacba423c00fe6b76b9a4c5c062ed23f4d349dc8bfece673bf"} Jan 28 18:47:09 crc kubenswrapper[4721]: I0128 18:47:09.176523 4721 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c6787d3e991463bacba423c00fe6b76b9a4c5c062ed23f4d349dc8bfece673bf" Jan 28 18:47:09 crc kubenswrapper[4721]: I0128 18:47:09.176542 4721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/3e572a74f8b8ca2bcfe04329d4f26bd9689911be5d166a7403bd6ae773w2zgc" Jan 28 18:47:09 crc kubenswrapper[4721]: I0128 18:47:09.178799 4721 generic.go:334] "Generic (PLEG): container finished" podID="36413909-776d-45ba-852d-a5d654e92970" containerID="f0527a9f54e179c91db3cc0a2df5f193fceaf4c64d96a9595d3e40e708a56990" exitCode=0 Jan 28 18:47:09 crc kubenswrapper[4721]: I0128 18:47:09.178860 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9xbv6" event={"ID":"36413909-776d-45ba-852d-a5d654e92970","Type":"ContainerDied","Data":"f0527a9f54e179c91db3cc0a2df5f193fceaf4c64d96a9595d3e40e708a56990"} Jan 28 18:47:12 crc kubenswrapper[4721]: I0128 18:47:12.210277 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9xbv6" event={"ID":"36413909-776d-45ba-852d-a5d654e92970","Type":"ContainerStarted","Data":"c0448865bdac85677126ebeabc1ba5a609c642fef77d5ad480cc418452caa7f2"} Jan 28 18:47:12 crc kubenswrapper[4721]: I0128 18:47:12.212549 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="minio-dev/minio" event={"ID":"572683e0-6774-4286-82c7-9cade3187d6c","Type":"ContainerStarted","Data":"033fdaf4fc85bd6ef4e5ba04a04d74c45e17157b519ec289a17b4960cf581fd5"} Jan 28 18:47:12 crc kubenswrapper[4721]: I0128 18:47:12.230133 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-9xbv6" podStartSLOduration=3.343892782 podStartE2EDuration="9.230114251s" podCreationTimestamp="2026-01-28 18:47:03 +0000 UTC" firstStartedPulling="2026-01-28 18:47:05.1363032 +0000 UTC m=+790.861608760" lastFinishedPulling="2026-01-28 18:47:11.022524669 +0000 UTC m=+796.747830229" observedRunningTime="2026-01-28 18:47:12.228856092 +0000 UTC m=+797.954161672" watchObservedRunningTime="2026-01-28 18:47:12.230114251 +0000 UTC m=+797.955419821" Jan 28 18:47:12 crc kubenswrapper[4721]: I0128 18:47:12.250049 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="minio-dev/minio" podStartSLOduration=4.785103757 podStartE2EDuration="11.250028893s" podCreationTimestamp="2026-01-28 18:47:01 +0000 UTC" firstStartedPulling="2026-01-28 18:47:04.576460282 +0000 UTC 
m=+790.301765842" lastFinishedPulling="2026-01-28 18:47:11.041385418 +0000 UTC m=+796.766690978" observedRunningTime="2026-01-28 18:47:12.247084171 +0000 UTC m=+797.972389741" watchObservedRunningTime="2026-01-28 18:47:12.250028893 +0000 UTC m=+797.975334453" Jan 28 18:47:14 crc kubenswrapper[4721]: I0128 18:47:14.373984 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-9xbv6" Jan 28 18:47:14 crc kubenswrapper[4721]: I0128 18:47:14.374862 4721 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-9xbv6" Jan 28 18:47:14 crc kubenswrapper[4721]: I0128 18:47:14.659143 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators-redhat/loki-operator-controller-manager-5bfcb79b6d-cd47c"] Jan 28 18:47:14 crc kubenswrapper[4721]: E0128 18:47:14.659828 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="28e082c3-f662-4caa-be33-4bf2cc234ca7" containerName="pull" Jan 28 18:47:14 crc kubenswrapper[4721]: I0128 18:47:14.659844 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="28e082c3-f662-4caa-be33-4bf2cc234ca7" containerName="pull" Jan 28 18:47:14 crc kubenswrapper[4721]: E0128 18:47:14.659861 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="28e082c3-f662-4caa-be33-4bf2cc234ca7" containerName="util" Jan 28 18:47:14 crc kubenswrapper[4721]: I0128 18:47:14.659867 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="28e082c3-f662-4caa-be33-4bf2cc234ca7" containerName="util" Jan 28 18:47:14 crc kubenswrapper[4721]: E0128 18:47:14.659879 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="28e082c3-f662-4caa-be33-4bf2cc234ca7" containerName="extract" Jan 28 18:47:14 crc kubenswrapper[4721]: I0128 18:47:14.659885 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="28e082c3-f662-4caa-be33-4bf2cc234ca7" containerName="extract" Jan 28 18:47:14 crc kubenswrapper[4721]: I0128 18:47:14.660093 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="28e082c3-f662-4caa-be33-4bf2cc234ca7" containerName="extract" Jan 28 18:47:14 crc kubenswrapper[4721]: I0128 18:47:14.660823 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators-redhat/loki-operator-controller-manager-5bfcb79b6d-cd47c" Jan 28 18:47:14 crc kubenswrapper[4721]: I0128 18:47:14.666811 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators-redhat"/"loki-operator-manager-config" Jan 28 18:47:14 crc kubenswrapper[4721]: I0128 18:47:14.666982 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators-redhat"/"openshift-service-ca.crt" Jan 28 18:47:14 crc kubenswrapper[4721]: I0128 18:47:14.667098 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators-redhat"/"kube-root-ca.crt" Jan 28 18:47:14 crc kubenswrapper[4721]: I0128 18:47:14.667459 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators-redhat"/"loki-operator-controller-manager-dockercfg-xrszd" Jan 28 18:47:14 crc kubenswrapper[4721]: I0128 18:47:14.673711 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators-redhat"/"loki-operator-metrics" Jan 28 18:47:14 crc kubenswrapper[4721]: I0128 18:47:14.673806 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators-redhat"/"loki-operator-controller-manager-service-cert" Jan 28 18:47:14 crc kubenswrapper[4721]: I0128 18:47:14.687765 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators-redhat/loki-operator-controller-manager-5bfcb79b6d-cd47c"] Jan 28 18:47:14 crc kubenswrapper[4721]: I0128 18:47:14.838947 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/8d99024b-2cf7-4372-98d3-2c282e9d7530-webhook-cert\") pod \"loki-operator-controller-manager-5bfcb79b6d-cd47c\" (UID: \"8d99024b-2cf7-4372-98d3-2c282e9d7530\") " pod="openshift-operators-redhat/loki-operator-controller-manager-5bfcb79b6d-cd47c" Jan 28 18:47:14 crc kubenswrapper[4721]: I0128 18:47:14.839017 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"loki-operator-metrics-cert\" (UniqueName: \"kubernetes.io/secret/8d99024b-2cf7-4372-98d3-2c282e9d7530-loki-operator-metrics-cert\") pod \"loki-operator-controller-manager-5bfcb79b6d-cd47c\" (UID: \"8d99024b-2cf7-4372-98d3-2c282e9d7530\") " pod="openshift-operators-redhat/loki-operator-controller-manager-5bfcb79b6d-cd47c" Jan 28 18:47:14 crc kubenswrapper[4721]: I0128 18:47:14.839082 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s4pmn\" (UniqueName: \"kubernetes.io/projected/8d99024b-2cf7-4372-98d3-2c282e9d7530-kube-api-access-s4pmn\") pod \"loki-operator-controller-manager-5bfcb79b6d-cd47c\" (UID: \"8d99024b-2cf7-4372-98d3-2c282e9d7530\") " pod="openshift-operators-redhat/loki-operator-controller-manager-5bfcb79b6d-cd47c" Jan 28 18:47:14 crc kubenswrapper[4721]: I0128 18:47:14.839117 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/8d99024b-2cf7-4372-98d3-2c282e9d7530-apiservice-cert\") pod \"loki-operator-controller-manager-5bfcb79b6d-cd47c\" (UID: \"8d99024b-2cf7-4372-98d3-2c282e9d7530\") " pod="openshift-operators-redhat/loki-operator-controller-manager-5bfcb79b6d-cd47c" Jan 28 18:47:14 crc kubenswrapper[4721]: I0128 18:47:14.839148 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manager-config\" 
(UniqueName: \"kubernetes.io/configmap/8d99024b-2cf7-4372-98d3-2c282e9d7530-manager-config\") pod \"loki-operator-controller-manager-5bfcb79b6d-cd47c\" (UID: \"8d99024b-2cf7-4372-98d3-2c282e9d7530\") " pod="openshift-operators-redhat/loki-operator-controller-manager-5bfcb79b6d-cd47c" Jan 28 18:47:14 crc kubenswrapper[4721]: I0128 18:47:14.941050 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/8d99024b-2cf7-4372-98d3-2c282e9d7530-webhook-cert\") pod \"loki-operator-controller-manager-5bfcb79b6d-cd47c\" (UID: \"8d99024b-2cf7-4372-98d3-2c282e9d7530\") " pod="openshift-operators-redhat/loki-operator-controller-manager-5bfcb79b6d-cd47c" Jan 28 18:47:14 crc kubenswrapper[4721]: I0128 18:47:14.941113 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"loki-operator-metrics-cert\" (UniqueName: \"kubernetes.io/secret/8d99024b-2cf7-4372-98d3-2c282e9d7530-loki-operator-metrics-cert\") pod \"loki-operator-controller-manager-5bfcb79b6d-cd47c\" (UID: \"8d99024b-2cf7-4372-98d3-2c282e9d7530\") " pod="openshift-operators-redhat/loki-operator-controller-manager-5bfcb79b6d-cd47c" Jan 28 18:47:14 crc kubenswrapper[4721]: I0128 18:47:14.941251 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s4pmn\" (UniqueName: \"kubernetes.io/projected/8d99024b-2cf7-4372-98d3-2c282e9d7530-kube-api-access-s4pmn\") pod \"loki-operator-controller-manager-5bfcb79b6d-cd47c\" (UID: \"8d99024b-2cf7-4372-98d3-2c282e9d7530\") " pod="openshift-operators-redhat/loki-operator-controller-manager-5bfcb79b6d-cd47c" Jan 28 18:47:14 crc kubenswrapper[4721]: I0128 18:47:14.941289 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/8d99024b-2cf7-4372-98d3-2c282e9d7530-apiservice-cert\") pod \"loki-operator-controller-manager-5bfcb79b6d-cd47c\" (UID: \"8d99024b-2cf7-4372-98d3-2c282e9d7530\") " pod="openshift-operators-redhat/loki-operator-controller-manager-5bfcb79b6d-cd47c" Jan 28 18:47:14 crc kubenswrapper[4721]: I0128 18:47:14.941321 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manager-config\" (UniqueName: \"kubernetes.io/configmap/8d99024b-2cf7-4372-98d3-2c282e9d7530-manager-config\") pod \"loki-operator-controller-manager-5bfcb79b6d-cd47c\" (UID: \"8d99024b-2cf7-4372-98d3-2c282e9d7530\") " pod="openshift-operators-redhat/loki-operator-controller-manager-5bfcb79b6d-cd47c" Jan 28 18:47:14 crc kubenswrapper[4721]: I0128 18:47:14.942531 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manager-config\" (UniqueName: \"kubernetes.io/configmap/8d99024b-2cf7-4372-98d3-2c282e9d7530-manager-config\") pod \"loki-operator-controller-manager-5bfcb79b6d-cd47c\" (UID: \"8d99024b-2cf7-4372-98d3-2c282e9d7530\") " pod="openshift-operators-redhat/loki-operator-controller-manager-5bfcb79b6d-cd47c" Jan 28 18:47:14 crc kubenswrapper[4721]: I0128 18:47:14.952436 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/8d99024b-2cf7-4372-98d3-2c282e9d7530-webhook-cert\") pod \"loki-operator-controller-manager-5bfcb79b6d-cd47c\" (UID: \"8d99024b-2cf7-4372-98d3-2c282e9d7530\") " pod="openshift-operators-redhat/loki-operator-controller-manager-5bfcb79b6d-cd47c" Jan 28 18:47:14 crc kubenswrapper[4721]: I0128 18:47:14.963073 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"loki-operator-metrics-cert\" (UniqueName: \"kubernetes.io/secret/8d99024b-2cf7-4372-98d3-2c282e9d7530-loki-operator-metrics-cert\") pod \"loki-operator-controller-manager-5bfcb79b6d-cd47c\" (UID: \"8d99024b-2cf7-4372-98d3-2c282e9d7530\") " pod="openshift-operators-redhat/loki-operator-controller-manager-5bfcb79b6d-cd47c" Jan 28 18:47:14 crc kubenswrapper[4721]: I0128 18:47:14.967918 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/8d99024b-2cf7-4372-98d3-2c282e9d7530-apiservice-cert\") pod \"loki-operator-controller-manager-5bfcb79b6d-cd47c\" (UID: \"8d99024b-2cf7-4372-98d3-2c282e9d7530\") " pod="openshift-operators-redhat/loki-operator-controller-manager-5bfcb79b6d-cd47c" Jan 28 18:47:14 crc kubenswrapper[4721]: I0128 18:47:14.988457 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s4pmn\" (UniqueName: \"kubernetes.io/projected/8d99024b-2cf7-4372-98d3-2c282e9d7530-kube-api-access-s4pmn\") pod \"loki-operator-controller-manager-5bfcb79b6d-cd47c\" (UID: \"8d99024b-2cf7-4372-98d3-2c282e9d7530\") " pod="openshift-operators-redhat/loki-operator-controller-manager-5bfcb79b6d-cd47c" Jan 28 18:47:15 crc kubenswrapper[4721]: I0128 18:47:15.278291 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators-redhat/loki-operator-controller-manager-5bfcb79b6d-cd47c" Jan 28 18:47:15 crc kubenswrapper[4721]: I0128 18:47:15.418446 4721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-9xbv6" podUID="36413909-776d-45ba-852d-a5d654e92970" containerName="registry-server" probeResult="failure" output=< Jan 28 18:47:15 crc kubenswrapper[4721]: timeout: failed to connect service ":50051" within 1s Jan 28 18:47:15 crc kubenswrapper[4721]: > Jan 28 18:47:15 crc kubenswrapper[4721]: I0128 18:47:15.791863 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators-redhat/loki-operator-controller-manager-5bfcb79b6d-cd47c"] Jan 28 18:47:15 crc kubenswrapper[4721]: W0128 18:47:15.798582 4721 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8d99024b_2cf7_4372_98d3_2c282e9d7530.slice/crio-39c998b9f19be243d147cbe852e75cb2ba556035354630e022db87696b694286 WatchSource:0}: Error finding container 39c998b9f19be243d147cbe852e75cb2ba556035354630e022db87696b694286: Status 404 returned error can't find the container with id 39c998b9f19be243d147cbe852e75cb2ba556035354630e022db87696b694286 Jan 28 18:47:16 crc kubenswrapper[4721]: I0128 18:47:16.240249 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators-redhat/loki-operator-controller-manager-5bfcb79b6d-cd47c" event={"ID":"8d99024b-2cf7-4372-98d3-2c282e9d7530","Type":"ContainerStarted","Data":"39c998b9f19be243d147cbe852e75cb2ba556035354630e022db87696b694286"} Jan 28 18:47:23 crc kubenswrapper[4721]: I0128 18:47:23.297388 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators-redhat/loki-operator-controller-manager-5bfcb79b6d-cd47c" event={"ID":"8d99024b-2cf7-4372-98d3-2c282e9d7530","Type":"ContainerStarted","Data":"9c10d67ff544fca3af20b04d8da346197f74b53ba26b8514772ae84ecd8a4357"} Jan 28 18:47:24 crc kubenswrapper[4721]: I0128 18:47:24.428347 4721 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-9xbv6" Jan 28 18:47:24 crc kubenswrapper[4721]: I0128 
Jan 28 18:47:24 crc kubenswrapper[4721]: I0128 18:47:24.485046 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-9xbv6"
Jan 28 18:47:25 crc kubenswrapper[4721]: I0128 18:47:25.376040 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-9xbv6"]
Jan 28 18:47:26 crc kubenswrapper[4721]: I0128 18:47:26.318716 4721 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-9xbv6" podUID="36413909-776d-45ba-852d-a5d654e92970" containerName="registry-server" containerID="cri-o://c0448865bdac85677126ebeabc1ba5a609c642fef77d5ad480cc418452caa7f2" gracePeriod=2
Jan 28 18:47:27 crc kubenswrapper[4721]: I0128 18:47:27.328367 4721 generic.go:334] "Generic (PLEG): container finished" podID="36413909-776d-45ba-852d-a5d654e92970" containerID="c0448865bdac85677126ebeabc1ba5a609c642fef77d5ad480cc418452caa7f2" exitCode=0
Jan 28 18:47:27 crc kubenswrapper[4721]: I0128 18:47:27.328419 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9xbv6" event={"ID":"36413909-776d-45ba-852d-a5d654e92970","Type":"ContainerDied","Data":"c0448865bdac85677126ebeabc1ba5a609c642fef77d5ad480cc418452caa7f2"}
Jan 28 18:47:30 crc kubenswrapper[4721]: I0128 18:47:30.178325 4721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-9xbv6"
Jan 28 18:47:30 crc kubenswrapper[4721]: I0128 18:47:30.311022 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-grd5b\" (UniqueName: \"kubernetes.io/projected/36413909-776d-45ba-852d-a5d654e92970-kube-api-access-grd5b\") pod \"36413909-776d-45ba-852d-a5d654e92970\" (UID: \"36413909-776d-45ba-852d-a5d654e92970\") "
Jan 28 18:47:30 crc kubenswrapper[4721]: I0128 18:47:30.311098 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/36413909-776d-45ba-852d-a5d654e92970-catalog-content\") pod \"36413909-776d-45ba-852d-a5d654e92970\" (UID: \"36413909-776d-45ba-852d-a5d654e92970\") "
Jan 28 18:47:30 crc kubenswrapper[4721]: I0128 18:47:30.311221 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/36413909-776d-45ba-852d-a5d654e92970-utilities\") pod \"36413909-776d-45ba-852d-a5d654e92970\" (UID: \"36413909-776d-45ba-852d-a5d654e92970\") "
Jan 28 18:47:30 crc kubenswrapper[4721]: I0128 18:47:30.312156 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/36413909-776d-45ba-852d-a5d654e92970-utilities" (OuterVolumeSpecName: "utilities") pod "36413909-776d-45ba-852d-a5d654e92970" (UID: "36413909-776d-45ba-852d-a5d654e92970"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 28 18:47:30 crc kubenswrapper[4721]: I0128 18:47:30.338506 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/36413909-776d-45ba-852d-a5d654e92970-kube-api-access-grd5b" (OuterVolumeSpecName: "kube-api-access-grd5b") pod "36413909-776d-45ba-852d-a5d654e92970" (UID: "36413909-776d-45ba-852d-a5d654e92970"). InnerVolumeSpecName "kube-api-access-grd5b".
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:47:30 crc kubenswrapper[4721]: I0128 18:47:30.368220 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9xbv6" event={"ID":"36413909-776d-45ba-852d-a5d654e92970","Type":"ContainerDied","Data":"cdf8066a39f11f4128d51572b0208300f9cac38fb819bd1e82c536088112d67d"} Jan 28 18:47:30 crc kubenswrapper[4721]: I0128 18:47:30.368324 4721 scope.go:117] "RemoveContainer" containerID="c0448865bdac85677126ebeabc1ba5a609c642fef77d5ad480cc418452caa7f2" Jan 28 18:47:30 crc kubenswrapper[4721]: I0128 18:47:30.368566 4721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-9xbv6" Jan 28 18:47:30 crc kubenswrapper[4721]: I0128 18:47:30.391924 4721 scope.go:117] "RemoveContainer" containerID="f0527a9f54e179c91db3cc0a2df5f193fceaf4c64d96a9595d3e40e708a56990" Jan 28 18:47:30 crc kubenswrapper[4721]: I0128 18:47:30.408277 4721 scope.go:117] "RemoveContainer" containerID="e35381d0171a49ed5e60f1302da8cf9fee1288b8500854e3c19264dad4e3ab69" Jan 28 18:47:30 crc kubenswrapper[4721]: I0128 18:47:30.412618 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-grd5b\" (UniqueName: \"kubernetes.io/projected/36413909-776d-45ba-852d-a5d654e92970-kube-api-access-grd5b\") on node \"crc\" DevicePath \"\"" Jan 28 18:47:30 crc kubenswrapper[4721]: I0128 18:47:30.412640 4721 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/36413909-776d-45ba-852d-a5d654e92970-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 18:47:30 crc kubenswrapper[4721]: I0128 18:47:30.431782 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/36413909-776d-45ba-852d-a5d654e92970-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "36413909-776d-45ba-852d-a5d654e92970" (UID: "36413909-776d-45ba-852d-a5d654e92970"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 18:47:30 crc kubenswrapper[4721]: I0128 18:47:30.515308 4721 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/36413909-776d-45ba-852d-a5d654e92970-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 18:47:30 crc kubenswrapper[4721]: I0128 18:47:30.698083 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-9xbv6"] Jan 28 18:47:30 crc kubenswrapper[4721]: I0128 18:47:30.702090 4721 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-9xbv6"] Jan 28 18:47:31 crc kubenswrapper[4721]: I0128 18:47:31.538410 4721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="36413909-776d-45ba-852d-a5d654e92970" path="/var/lib/kubelet/pods/36413909-776d-45ba-852d-a5d654e92970/volumes" Jan 28 18:47:32 crc kubenswrapper[4721]: I0128 18:47:32.384305 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators-redhat/loki-operator-controller-manager-5bfcb79b6d-cd47c" event={"ID":"8d99024b-2cf7-4372-98d3-2c282e9d7530","Type":"ContainerStarted","Data":"94d737ca35f9c6f62d64d7fb5240f2407672c30803144d686deac226ff23715c"} Jan 28 18:47:32 crc kubenswrapper[4721]: I0128 18:47:32.384797 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators-redhat/loki-operator-controller-manager-5bfcb79b6d-cd47c" Jan 28 18:47:32 crc kubenswrapper[4721]: I0128 18:47:32.386962 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators-redhat/loki-operator-controller-manager-5bfcb79b6d-cd47c" Jan 28 18:47:32 crc kubenswrapper[4721]: I0128 18:47:32.411447 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators-redhat/loki-operator-controller-manager-5bfcb79b6d-cd47c" podStartSLOduration=3.045655204 podStartE2EDuration="18.411421357s" podCreationTimestamp="2026-01-28 18:47:14 +0000 UTC" firstStartedPulling="2026-01-28 18:47:15.802356788 +0000 UTC m=+801.527662348" lastFinishedPulling="2026-01-28 18:47:31.168122941 +0000 UTC m=+816.893428501" observedRunningTime="2026-01-28 18:47:32.405482483 +0000 UTC m=+818.130788063" watchObservedRunningTime="2026-01-28 18:47:32.411421357 +0000 UTC m=+818.136726907" Jan 28 18:47:59 crc kubenswrapper[4721]: I0128 18:47:59.527845 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713gz9z8"] Jan 28 18:47:59 crc kubenswrapper[4721]: E0128 18:47:59.529713 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="36413909-776d-45ba-852d-a5d654e92970" containerName="extract-content" Jan 28 18:47:59 crc kubenswrapper[4721]: I0128 18:47:59.529786 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="36413909-776d-45ba-852d-a5d654e92970" containerName="extract-content" Jan 28 18:47:59 crc kubenswrapper[4721]: E0128 18:47:59.529853 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="36413909-776d-45ba-852d-a5d654e92970" containerName="registry-server" Jan 28 18:47:59 crc kubenswrapper[4721]: I0128 18:47:59.529905 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="36413909-776d-45ba-852d-a5d654e92970" containerName="registry-server" Jan 28 18:47:59 crc kubenswrapper[4721]: E0128 18:47:59.529965 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="36413909-776d-45ba-852d-a5d654e92970" 
containerName="extract-utilities" Jan 28 18:47:59 crc kubenswrapper[4721]: I0128 18:47:59.530023 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="36413909-776d-45ba-852d-a5d654e92970" containerName="extract-utilities" Jan 28 18:47:59 crc kubenswrapper[4721]: I0128 18:47:59.530215 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="36413909-776d-45ba-852d-a5d654e92970" containerName="registry-server" Jan 28 18:47:59 crc kubenswrapper[4721]: I0128 18:47:59.531231 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713gz9z8" Jan 28 18:47:59 crc kubenswrapper[4721]: I0128 18:47:59.533967 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Jan 28 18:47:59 crc kubenswrapper[4721]: I0128 18:47:59.557684 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713gz9z8"] Jan 28 18:47:59 crc kubenswrapper[4721]: I0128 18:47:59.626074 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-77zjw\" (UniqueName: \"kubernetes.io/projected/f0d234c7-c326-453d-aef0-f50829390a73-kube-api-access-77zjw\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713gz9z8\" (UID: \"f0d234c7-c326-453d-aef0-f50829390a73\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713gz9z8" Jan 28 18:47:59 crc kubenswrapper[4721]: I0128 18:47:59.626137 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f0d234c7-c326-453d-aef0-f50829390a73-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713gz9z8\" (UID: \"f0d234c7-c326-453d-aef0-f50829390a73\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713gz9z8" Jan 28 18:47:59 crc kubenswrapper[4721]: I0128 18:47:59.626197 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f0d234c7-c326-453d-aef0-f50829390a73-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713gz9z8\" (UID: \"f0d234c7-c326-453d-aef0-f50829390a73\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713gz9z8" Jan 28 18:47:59 crc kubenswrapper[4721]: I0128 18:47:59.727027 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-77zjw\" (UniqueName: \"kubernetes.io/projected/f0d234c7-c326-453d-aef0-f50829390a73-kube-api-access-77zjw\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713gz9z8\" (UID: \"f0d234c7-c326-453d-aef0-f50829390a73\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713gz9z8" Jan 28 18:47:59 crc kubenswrapper[4721]: I0128 18:47:59.727083 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f0d234c7-c326-453d-aef0-f50829390a73-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713gz9z8\" (UID: \"f0d234c7-c326-453d-aef0-f50829390a73\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713gz9z8" Jan 28 18:47:59 crc kubenswrapper[4721]: I0128 18:47:59.727122 4721 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f0d234c7-c326-453d-aef0-f50829390a73-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713gz9z8\" (UID: \"f0d234c7-c326-453d-aef0-f50829390a73\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713gz9z8" Jan 28 18:47:59 crc kubenswrapper[4721]: I0128 18:47:59.727736 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f0d234c7-c326-453d-aef0-f50829390a73-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713gz9z8\" (UID: \"f0d234c7-c326-453d-aef0-f50829390a73\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713gz9z8" Jan 28 18:47:59 crc kubenswrapper[4721]: I0128 18:47:59.727872 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f0d234c7-c326-453d-aef0-f50829390a73-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713gz9z8\" (UID: \"f0d234c7-c326-453d-aef0-f50829390a73\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713gz9z8" Jan 28 18:47:59 crc kubenswrapper[4721]: I0128 18:47:59.748310 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-77zjw\" (UniqueName: \"kubernetes.io/projected/f0d234c7-c326-453d-aef0-f50829390a73-kube-api-access-77zjw\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713gz9z8\" (UID: \"f0d234c7-c326-453d-aef0-f50829390a73\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713gz9z8" Jan 28 18:47:59 crc kubenswrapper[4721]: I0128 18:47:59.852518 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713gz9z8" Jan 28 18:48:00 crc kubenswrapper[4721]: I0128 18:48:00.135927 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713gz9z8"] Jan 28 18:48:00 crc kubenswrapper[4721]: W0128 18:48:00.150450 4721 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf0d234c7_c326_453d_aef0_f50829390a73.slice/crio-3df865c59f26e24662594c288f10a997a5d250093b21cf003858f8b4cc06c0f8 WatchSource:0}: Error finding container 3df865c59f26e24662594c288f10a997a5d250093b21cf003858f8b4cc06c0f8: Status 404 returned error can't find the container with id 3df865c59f26e24662594c288f10a997a5d250093b21cf003858f8b4cc06c0f8 Jan 28 18:48:00 crc kubenswrapper[4721]: I0128 18:48:00.558683 4721 generic.go:334] "Generic (PLEG): container finished" podID="f0d234c7-c326-453d-aef0-f50829390a73" containerID="13ffef24c2703e8875148a702e512e13b5b8c24ccfa625ee5436fb71756ccbdb" exitCode=0 Jan 28 18:48:00 crc kubenswrapper[4721]: I0128 18:48:00.558739 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713gz9z8" event={"ID":"f0d234c7-c326-453d-aef0-f50829390a73","Type":"ContainerDied","Data":"13ffef24c2703e8875148a702e512e13b5b8c24ccfa625ee5436fb71756ccbdb"} Jan 28 18:48:00 crc kubenswrapper[4721]: I0128 18:48:00.559023 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713gz9z8" event={"ID":"f0d234c7-c326-453d-aef0-f50829390a73","Type":"ContainerStarted","Data":"3df865c59f26e24662594c288f10a997a5d250093b21cf003858f8b4cc06c0f8"} Jan 28 18:48:03 crc kubenswrapper[4721]: I0128 18:48:03.577511 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713gz9z8" event={"ID":"f0d234c7-c326-453d-aef0-f50829390a73","Type":"ContainerStarted","Data":"ba6fd589dc1344aa053028b437e54e1cdc1bea6b0f27d283902873cd1701bd8c"} Jan 28 18:48:04 crc kubenswrapper[4721]: I0128 18:48:04.585873 4721 generic.go:334] "Generic (PLEG): container finished" podID="f0d234c7-c326-453d-aef0-f50829390a73" containerID="ba6fd589dc1344aa053028b437e54e1cdc1bea6b0f27d283902873cd1701bd8c" exitCode=0 Jan 28 18:48:04 crc kubenswrapper[4721]: I0128 18:48:04.586274 4721 generic.go:334] "Generic (PLEG): container finished" podID="f0d234c7-c326-453d-aef0-f50829390a73" containerID="efe4d11ffbb46b5240691caa821d2bb6518cc0deff8756fad6f13d890445f105" exitCode=0 Jan 28 18:48:04 crc kubenswrapper[4721]: I0128 18:48:04.585988 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713gz9z8" event={"ID":"f0d234c7-c326-453d-aef0-f50829390a73","Type":"ContainerDied","Data":"ba6fd589dc1344aa053028b437e54e1cdc1bea6b0f27d283902873cd1701bd8c"} Jan 28 18:48:04 crc kubenswrapper[4721]: I0128 18:48:04.586324 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713gz9z8" event={"ID":"f0d234c7-c326-453d-aef0-f50829390a73","Type":"ContainerDied","Data":"efe4d11ffbb46b5240691caa821d2bb6518cc0deff8756fad6f13d890445f105"} Jan 28 18:48:05 crc kubenswrapper[4721]: I0128 18:48:05.861115 4721 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713gz9z8" Jan 28 18:48:06 crc kubenswrapper[4721]: I0128 18:48:06.018003 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-77zjw\" (UniqueName: \"kubernetes.io/projected/f0d234c7-c326-453d-aef0-f50829390a73-kube-api-access-77zjw\") pod \"f0d234c7-c326-453d-aef0-f50829390a73\" (UID: \"f0d234c7-c326-453d-aef0-f50829390a73\") " Jan 28 18:48:06 crc kubenswrapper[4721]: I0128 18:48:06.018070 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f0d234c7-c326-453d-aef0-f50829390a73-util\") pod \"f0d234c7-c326-453d-aef0-f50829390a73\" (UID: \"f0d234c7-c326-453d-aef0-f50829390a73\") " Jan 28 18:48:06 crc kubenswrapper[4721]: I0128 18:48:06.018269 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f0d234c7-c326-453d-aef0-f50829390a73-bundle\") pod \"f0d234c7-c326-453d-aef0-f50829390a73\" (UID: \"f0d234c7-c326-453d-aef0-f50829390a73\") " Jan 28 18:48:06 crc kubenswrapper[4721]: I0128 18:48:06.019129 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f0d234c7-c326-453d-aef0-f50829390a73-bundle" (OuterVolumeSpecName: "bundle") pod "f0d234c7-c326-453d-aef0-f50829390a73" (UID: "f0d234c7-c326-453d-aef0-f50829390a73"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 18:48:06 crc kubenswrapper[4721]: I0128 18:48:06.026749 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f0d234c7-c326-453d-aef0-f50829390a73-kube-api-access-77zjw" (OuterVolumeSpecName: "kube-api-access-77zjw") pod "f0d234c7-c326-453d-aef0-f50829390a73" (UID: "f0d234c7-c326-453d-aef0-f50829390a73"). InnerVolumeSpecName "kube-api-access-77zjw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:48:06 crc kubenswrapper[4721]: I0128 18:48:06.059713 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f0d234c7-c326-453d-aef0-f50829390a73-util" (OuterVolumeSpecName: "util") pod "f0d234c7-c326-453d-aef0-f50829390a73" (UID: "f0d234c7-c326-453d-aef0-f50829390a73"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 18:48:06 crc kubenswrapper[4721]: I0128 18:48:06.119603 4721 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f0d234c7-c326-453d-aef0-f50829390a73-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 18:48:06 crc kubenswrapper[4721]: I0128 18:48:06.119725 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-77zjw\" (UniqueName: \"kubernetes.io/projected/f0d234c7-c326-453d-aef0-f50829390a73-kube-api-access-77zjw\") on node \"crc\" DevicePath \"\"" Jan 28 18:48:06 crc kubenswrapper[4721]: I0128 18:48:06.119748 4721 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f0d234c7-c326-453d-aef0-f50829390a73-util\") on node \"crc\" DevicePath \"\"" Jan 28 18:48:06 crc kubenswrapper[4721]: I0128 18:48:06.612147 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713gz9z8" event={"ID":"f0d234c7-c326-453d-aef0-f50829390a73","Type":"ContainerDied","Data":"3df865c59f26e24662594c288f10a997a5d250093b21cf003858f8b4cc06c0f8"} Jan 28 18:48:06 crc kubenswrapper[4721]: I0128 18:48:06.612206 4721 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3df865c59f26e24662594c288f10a997a5d250093b21cf003858f8b4cc06c0f8" Jan 28 18:48:06 crc kubenswrapper[4721]: I0128 18:48:06.612265 4721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713gz9z8" Jan 28 18:48:08 crc kubenswrapper[4721]: I0128 18:48:08.611460 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-26llr"] Jan 28 18:48:08 crc kubenswrapper[4721]: E0128 18:48:08.612284 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f0d234c7-c326-453d-aef0-f50829390a73" containerName="pull" Jan 28 18:48:08 crc kubenswrapper[4721]: I0128 18:48:08.612299 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="f0d234c7-c326-453d-aef0-f50829390a73" containerName="pull" Jan 28 18:48:08 crc kubenswrapper[4721]: E0128 18:48:08.612311 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f0d234c7-c326-453d-aef0-f50829390a73" containerName="extract" Jan 28 18:48:08 crc kubenswrapper[4721]: I0128 18:48:08.612317 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="f0d234c7-c326-453d-aef0-f50829390a73" containerName="extract" Jan 28 18:48:08 crc kubenswrapper[4721]: E0128 18:48:08.612338 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f0d234c7-c326-453d-aef0-f50829390a73" containerName="util" Jan 28 18:48:08 crc kubenswrapper[4721]: I0128 18:48:08.612345 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="f0d234c7-c326-453d-aef0-f50829390a73" containerName="util" Jan 28 18:48:08 crc kubenswrapper[4721]: I0128 18:48:08.612505 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="f0d234c7-c326-453d-aef0-f50829390a73" containerName="extract" Jan 28 18:48:08 crc kubenswrapper[4721]: I0128 18:48:08.613080 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-operator-646758c888-26llr" Jan 28 18:48:08 crc kubenswrapper[4721]: I0128 18:48:08.616264 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt" Jan 28 18:48:08 crc kubenswrapper[4721]: I0128 18:48:08.616267 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-operator-dockercfg-bxd6c" Jan 28 18:48:08 crc kubenswrapper[4721]: I0128 18:48:08.616315 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt" Jan 28 18:48:08 crc kubenswrapper[4721]: I0128 18:48:08.627729 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-26llr"] Jan 28 18:48:08 crc kubenswrapper[4721]: I0128 18:48:08.756779 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dcj7q\" (UniqueName: \"kubernetes.io/projected/2498df5a-d126-45bd-b53b-9beeedc256b7-kube-api-access-dcj7q\") pod \"nmstate-operator-646758c888-26llr\" (UID: \"2498df5a-d126-45bd-b53b-9beeedc256b7\") " pod="openshift-nmstate/nmstate-operator-646758c888-26llr" Jan 28 18:48:08 crc kubenswrapper[4721]: I0128 18:48:08.858054 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dcj7q\" (UniqueName: \"kubernetes.io/projected/2498df5a-d126-45bd-b53b-9beeedc256b7-kube-api-access-dcj7q\") pod \"nmstate-operator-646758c888-26llr\" (UID: \"2498df5a-d126-45bd-b53b-9beeedc256b7\") " pod="openshift-nmstate/nmstate-operator-646758c888-26llr" Jan 28 18:48:08 crc kubenswrapper[4721]: I0128 18:48:08.897280 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dcj7q\" (UniqueName: \"kubernetes.io/projected/2498df5a-d126-45bd-b53b-9beeedc256b7-kube-api-access-dcj7q\") pod \"nmstate-operator-646758c888-26llr\" (UID: \"2498df5a-d126-45bd-b53b-9beeedc256b7\") " pod="openshift-nmstate/nmstate-operator-646758c888-26llr" Jan 28 18:48:08 crc kubenswrapper[4721]: I0128 18:48:08.948822 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-operator-646758c888-26llr" Jan 28 18:48:09 crc kubenswrapper[4721]: I0128 18:48:09.242656 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-26llr"] Jan 28 18:48:09 crc kubenswrapper[4721]: I0128 18:48:09.634746 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-646758c888-26llr" event={"ID":"2498df5a-d126-45bd-b53b-9beeedc256b7","Type":"ContainerStarted","Data":"72f29ae80648bb0388daf85d86219322518ab95ce12ee087d088e0f671186e23"} Jan 28 18:48:12 crc kubenswrapper[4721]: I0128 18:48:12.656960 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-646758c888-26llr" event={"ID":"2498df5a-d126-45bd-b53b-9beeedc256b7","Type":"ContainerStarted","Data":"f814d62346eb2f62e0c0371af9228d5c028af1df96f0a354c50ab8ed9665c067"} Jan 28 18:48:12 crc kubenswrapper[4721]: I0128 18:48:12.678989 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-646758c888-26llr" podStartSLOduration=2.43238472 podStartE2EDuration="4.678963184s" podCreationTimestamp="2026-01-28 18:48:08 +0000 UTC" firstStartedPulling="2026-01-28 18:48:09.269433613 +0000 UTC m=+854.994739173" lastFinishedPulling="2026-01-28 18:48:11.516012077 +0000 UTC m=+857.241317637" observedRunningTime="2026-01-28 18:48:12.67406582 +0000 UTC m=+858.399371410" watchObservedRunningTime="2026-01-28 18:48:12.678963184 +0000 UTC m=+858.404268744" Jan 28 18:48:13 crc kubenswrapper[4721]: I0128 18:48:13.576968 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-rwjnr"] Jan 28 18:48:13 crc kubenswrapper[4721]: I0128 18:48:13.580994 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-54757c584b-rwjnr" Jan 28 18:48:13 crc kubenswrapper[4721]: I0128 18:48:13.587522 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-handler-dockercfg-jgcbs" Jan 28 18:48:13 crc kubenswrapper[4721]: I0128 18:48:13.594972 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-9rp4b"] Jan 28 18:48:13 crc kubenswrapper[4721]: I0128 18:48:13.596238 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-9rp4b" Jan 28 18:48:13 crc kubenswrapper[4721]: I0128 18:48:13.599927 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook" Jan 28 18:48:13 crc kubenswrapper[4721]: I0128 18:48:13.613317 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-rwjnr"] Jan 28 18:48:13 crc kubenswrapper[4721]: I0128 18:48:13.650390 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-9rp4b"] Jan 28 18:48:13 crc kubenswrapper[4721]: I0128 18:48:13.655582 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-4wqcf"] Jan 28 18:48:13 crc kubenswrapper[4721]: I0128 18:48:13.656628 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-handler-4wqcf" Jan 28 18:48:13 crc kubenswrapper[4721]: I0128 18:48:13.737191 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9x4v2\" (UniqueName: \"kubernetes.io/projected/fda999b5-6a00-4137-817e-b7d5417a2d2e-kube-api-access-9x4v2\") pod \"nmstate-metrics-54757c584b-rwjnr\" (UID: \"fda999b5-6a00-4137-817e-b7d5417a2d2e\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-rwjnr" Jan 28 18:48:13 crc kubenswrapper[4721]: I0128 18:48:13.737264 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/a8abeaa3-e685-4caa-b32c-cc0a40dfdb8b-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-9rp4b\" (UID: \"a8abeaa3-e685-4caa-b32c-cc0a40dfdb8b\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-9rp4b" Jan 28 18:48:13 crc kubenswrapper[4721]: I0128 18:48:13.737562 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rh9pk\" (UniqueName: \"kubernetes.io/projected/a8abeaa3-e685-4caa-b32c-cc0a40dfdb8b-kube-api-access-rh9pk\") pod \"nmstate-webhook-8474b5b9d8-9rp4b\" (UID: \"a8abeaa3-e685-4caa-b32c-cc0a40dfdb8b\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-9rp4b" Jan 28 18:48:13 crc kubenswrapper[4721]: I0128 18:48:13.827128 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-qxhd9"] Jan 28 18:48:13 crc kubenswrapper[4721]: I0128 18:48:13.828414 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-qxhd9" Jan 28 18:48:13 crc kubenswrapper[4721]: I0128 18:48:13.831942 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf" Jan 28 18:48:13 crc kubenswrapper[4721]: I0128 18:48:13.833715 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-nklgp" Jan 28 18:48:13 crc kubenswrapper[4721]: I0128 18:48:13.833722 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert" Jan 28 18:48:13 crc kubenswrapper[4721]: I0128 18:48:13.839366 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/cf95e16e-0533-4d53-a185-3c62adb9e573-dbus-socket\") pod \"nmstate-handler-4wqcf\" (UID: \"cf95e16e-0533-4d53-a185-3c62adb9e573\") " pod="openshift-nmstate/nmstate-handler-4wqcf" Jan 28 18:48:13 crc kubenswrapper[4721]: I0128 18:48:13.839441 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/cf95e16e-0533-4d53-a185-3c62adb9e573-nmstate-lock\") pod \"nmstate-handler-4wqcf\" (UID: \"cf95e16e-0533-4d53-a185-3c62adb9e573\") " pod="openshift-nmstate/nmstate-handler-4wqcf" Jan 28 18:48:13 crc kubenswrapper[4721]: I0128 18:48:13.839476 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/cf95e16e-0533-4d53-a185-3c62adb9e573-ovs-socket\") pod \"nmstate-handler-4wqcf\" (UID: \"cf95e16e-0533-4d53-a185-3c62adb9e573\") " pod="openshift-nmstate/nmstate-handler-4wqcf" Jan 28 18:48:13 crc kubenswrapper[4721]: I0128 18:48:13.839540 4721 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-rh9pk\" (UniqueName: \"kubernetes.io/projected/a8abeaa3-e685-4caa-b32c-cc0a40dfdb8b-kube-api-access-rh9pk\") pod \"nmstate-webhook-8474b5b9d8-9rp4b\" (UID: \"a8abeaa3-e685-4caa-b32c-cc0a40dfdb8b\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-9rp4b" Jan 28 18:48:13 crc kubenswrapper[4721]: I0128 18:48:13.839584 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9x4v2\" (UniqueName: \"kubernetes.io/projected/fda999b5-6a00-4137-817e-b7d5417a2d2e-kube-api-access-9x4v2\") pod \"nmstate-metrics-54757c584b-rwjnr\" (UID: \"fda999b5-6a00-4137-817e-b7d5417a2d2e\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-rwjnr" Jan 28 18:48:13 crc kubenswrapper[4721]: I0128 18:48:13.839613 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/a8abeaa3-e685-4caa-b32c-cc0a40dfdb8b-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-9rp4b\" (UID: \"a8abeaa3-e685-4caa-b32c-cc0a40dfdb8b\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-9rp4b" Jan 28 18:48:13 crc kubenswrapper[4721]: I0128 18:48:13.839646 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-szxbw\" (UniqueName: \"kubernetes.io/projected/cf95e16e-0533-4d53-a185-3c62adb9e573-kube-api-access-szxbw\") pod \"nmstate-handler-4wqcf\" (UID: \"cf95e16e-0533-4d53-a185-3c62adb9e573\") " pod="openshift-nmstate/nmstate-handler-4wqcf" Jan 28 18:48:13 crc kubenswrapper[4721]: E0128 18:48:13.840065 4721 secret.go:188] Couldn't get secret openshift-nmstate/openshift-nmstate-webhook: secret "openshift-nmstate-webhook" not found Jan 28 18:48:13 crc kubenswrapper[4721]: E0128 18:48:13.840158 4721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a8abeaa3-e685-4caa-b32c-cc0a40dfdb8b-tls-key-pair podName:a8abeaa3-e685-4caa-b32c-cc0a40dfdb8b nodeName:}" failed. No retries permitted until 2026-01-28 18:48:14.340132194 +0000 UTC m=+860.065437754 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "tls-key-pair" (UniqueName: "kubernetes.io/secret/a8abeaa3-e685-4caa-b32c-cc0a40dfdb8b-tls-key-pair") pod "nmstate-webhook-8474b5b9d8-9rp4b" (UID: "a8abeaa3-e685-4caa-b32c-cc0a40dfdb8b") : secret "openshift-nmstate-webhook" not found Jan 28 18:48:13 crc kubenswrapper[4721]: I0128 18:48:13.853037 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-qxhd9"] Jan 28 18:48:13 crc kubenswrapper[4721]: I0128 18:48:13.868889 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rh9pk\" (UniqueName: \"kubernetes.io/projected/a8abeaa3-e685-4caa-b32c-cc0a40dfdb8b-kube-api-access-rh9pk\") pod \"nmstate-webhook-8474b5b9d8-9rp4b\" (UID: \"a8abeaa3-e685-4caa-b32c-cc0a40dfdb8b\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-9rp4b" Jan 28 18:48:13 crc kubenswrapper[4721]: I0128 18:48:13.888905 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9x4v2\" (UniqueName: \"kubernetes.io/projected/fda999b5-6a00-4137-817e-b7d5417a2d2e-kube-api-access-9x4v2\") pod \"nmstate-metrics-54757c584b-rwjnr\" (UID: \"fda999b5-6a00-4137-817e-b7d5417a2d2e\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-rwjnr" Jan 28 18:48:13 crc kubenswrapper[4721]: I0128 18:48:13.908253 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-metrics-54757c584b-rwjnr" Jan 28 18:48:13 crc kubenswrapper[4721]: I0128 18:48:13.943375 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/c7b54106-b20d-4911-a9e2-90d5539bb4d7-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-qxhd9\" (UID: \"c7b54106-b20d-4911-a9e2-90d5539bb4d7\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-qxhd9" Jan 28 18:48:13 crc kubenswrapper[4721]: I0128 18:48:13.943480 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-szxbw\" (UniqueName: \"kubernetes.io/projected/cf95e16e-0533-4d53-a185-3c62adb9e573-kube-api-access-szxbw\") pod \"nmstate-handler-4wqcf\" (UID: \"cf95e16e-0533-4d53-a185-3c62adb9e573\") " pod="openshift-nmstate/nmstate-handler-4wqcf" Jan 28 18:48:13 crc kubenswrapper[4721]: I0128 18:48:13.943521 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vhlcd\" (UniqueName: \"kubernetes.io/projected/c7b54106-b20d-4911-a9e2-90d5539bb4d7-kube-api-access-vhlcd\") pod \"nmstate-console-plugin-7754f76f8b-qxhd9\" (UID: \"c7b54106-b20d-4911-a9e2-90d5539bb4d7\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-qxhd9" Jan 28 18:48:13 crc kubenswrapper[4721]: I0128 18:48:13.943572 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/cf95e16e-0533-4d53-a185-3c62adb9e573-dbus-socket\") pod \"nmstate-handler-4wqcf\" (UID: \"cf95e16e-0533-4d53-a185-3c62adb9e573\") " pod="openshift-nmstate/nmstate-handler-4wqcf" Jan 28 18:48:13 crc kubenswrapper[4721]: I0128 18:48:13.943593 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/c7b54106-b20d-4911-a9e2-90d5539bb4d7-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-qxhd9\" (UID: \"c7b54106-b20d-4911-a9e2-90d5539bb4d7\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-qxhd9" Jan 28 18:48:13 crc kubenswrapper[4721]: I0128 18:48:13.943617 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/cf95e16e-0533-4d53-a185-3c62adb9e573-nmstate-lock\") pod \"nmstate-handler-4wqcf\" (UID: \"cf95e16e-0533-4d53-a185-3c62adb9e573\") " pod="openshift-nmstate/nmstate-handler-4wqcf" Jan 28 18:48:13 crc kubenswrapper[4721]: I0128 18:48:13.943647 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/cf95e16e-0533-4d53-a185-3c62adb9e573-ovs-socket\") pod \"nmstate-handler-4wqcf\" (UID: \"cf95e16e-0533-4d53-a185-3c62adb9e573\") " pod="openshift-nmstate/nmstate-handler-4wqcf" Jan 28 18:48:13 crc kubenswrapper[4721]: I0128 18:48:13.943755 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/cf95e16e-0533-4d53-a185-3c62adb9e573-ovs-socket\") pod \"nmstate-handler-4wqcf\" (UID: \"cf95e16e-0533-4d53-a185-3c62adb9e573\") " pod="openshift-nmstate/nmstate-handler-4wqcf" Jan 28 18:48:13 crc kubenswrapper[4721]: I0128 18:48:13.944476 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/cf95e16e-0533-4d53-a185-3c62adb9e573-nmstate-lock\") pod 
\"nmstate-handler-4wqcf\" (UID: \"cf95e16e-0533-4d53-a185-3c62adb9e573\") " pod="openshift-nmstate/nmstate-handler-4wqcf" Jan 28 18:48:13 crc kubenswrapper[4721]: I0128 18:48:13.944537 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/cf95e16e-0533-4d53-a185-3c62adb9e573-dbus-socket\") pod \"nmstate-handler-4wqcf\" (UID: \"cf95e16e-0533-4d53-a185-3c62adb9e573\") " pod="openshift-nmstate/nmstate-handler-4wqcf" Jan 28 18:48:13 crc kubenswrapper[4721]: I0128 18:48:13.975211 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-szxbw\" (UniqueName: \"kubernetes.io/projected/cf95e16e-0533-4d53-a185-3c62adb9e573-kube-api-access-szxbw\") pod \"nmstate-handler-4wqcf\" (UID: \"cf95e16e-0533-4d53-a185-3c62adb9e573\") " pod="openshift-nmstate/nmstate-handler-4wqcf" Jan 28 18:48:14 crc kubenswrapper[4721]: I0128 18:48:14.048282 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/c7b54106-b20d-4911-a9e2-90d5539bb4d7-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-qxhd9\" (UID: \"c7b54106-b20d-4911-a9e2-90d5539bb4d7\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-qxhd9" Jan 28 18:48:14 crc kubenswrapper[4721]: I0128 18:48:14.048376 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vhlcd\" (UniqueName: \"kubernetes.io/projected/c7b54106-b20d-4911-a9e2-90d5539bb4d7-kube-api-access-vhlcd\") pod \"nmstate-console-plugin-7754f76f8b-qxhd9\" (UID: \"c7b54106-b20d-4911-a9e2-90d5539bb4d7\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-qxhd9" Jan 28 18:48:14 crc kubenswrapper[4721]: I0128 18:48:14.048418 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/c7b54106-b20d-4911-a9e2-90d5539bb4d7-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-qxhd9\" (UID: \"c7b54106-b20d-4911-a9e2-90d5539bb4d7\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-qxhd9" Jan 28 18:48:14 crc kubenswrapper[4721]: E0128 18:48:14.048937 4721 secret.go:188] Couldn't get secret openshift-nmstate/plugin-serving-cert: secret "plugin-serving-cert" not found Jan 28 18:48:14 crc kubenswrapper[4721]: E0128 18:48:14.049025 4721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c7b54106-b20d-4911-a9e2-90d5539bb4d7-plugin-serving-cert podName:c7b54106-b20d-4911-a9e2-90d5539bb4d7 nodeName:}" failed. No retries permitted until 2026-01-28 18:48:14.549002824 +0000 UTC m=+860.274308384 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "plugin-serving-cert" (UniqueName: "kubernetes.io/secret/c7b54106-b20d-4911-a9e2-90d5539bb4d7-plugin-serving-cert") pod "nmstate-console-plugin-7754f76f8b-qxhd9" (UID: "c7b54106-b20d-4911-a9e2-90d5539bb4d7") : secret "plugin-serving-cert" not found Jan 28 18:48:14 crc kubenswrapper[4721]: I0128 18:48:14.054680 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/c7b54106-b20d-4911-a9e2-90d5539bb4d7-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-qxhd9\" (UID: \"c7b54106-b20d-4911-a9e2-90d5539bb4d7\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-qxhd9" Jan 28 18:48:14 crc kubenswrapper[4721]: I0128 18:48:14.089961 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-845d8b57db-lrhn4"] Jan 28 18:48:14 crc kubenswrapper[4721]: I0128 18:48:14.097830 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-845d8b57db-lrhn4" Jan 28 18:48:14 crc kubenswrapper[4721]: I0128 18:48:14.105525 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-845d8b57db-lrhn4"] Jan 28 18:48:14 crc kubenswrapper[4721]: I0128 18:48:14.124498 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vhlcd\" (UniqueName: \"kubernetes.io/projected/c7b54106-b20d-4911-a9e2-90d5539bb4d7-kube-api-access-vhlcd\") pod \"nmstate-console-plugin-7754f76f8b-qxhd9\" (UID: \"c7b54106-b20d-4911-a9e2-90d5539bb4d7\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-qxhd9" Jan 28 18:48:14 crc kubenswrapper[4721]: I0128 18:48:14.241293 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-rwjnr"] Jan 28 18:48:14 crc kubenswrapper[4721]: W0128 18:48:14.250741 4721 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfda999b5_6a00_4137_817e_b7d5417a2d2e.slice/crio-accceee1bcf979d1caf265f3e94b116bc9b49e73abd405984a6b6fac5042dac5 WatchSource:0}: Error finding container accceee1bcf979d1caf265f3e94b116bc9b49e73abd405984a6b6fac5042dac5: Status 404 returned error can't find the container with id accceee1bcf979d1caf265f3e94b116bc9b49e73abd405984a6b6fac5042dac5 Jan 28 18:48:14 crc kubenswrapper[4721]: I0128 18:48:14.255491 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/c5c09220-f732-4fb2-aad6-9e6c522c3ba8-service-ca\") pod \"console-845d8b57db-lrhn4\" (UID: \"c5c09220-f732-4fb2-aad6-9e6c522c3ba8\") " pod="openshift-console/console-845d8b57db-lrhn4" Jan 28 18:48:14 crc kubenswrapper[4721]: I0128 18:48:14.255646 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c5c09220-f732-4fb2-aad6-9e6c522c3ba8-trusted-ca-bundle\") pod \"console-845d8b57db-lrhn4\" (UID: \"c5c09220-f732-4fb2-aad6-9e6c522c3ba8\") " pod="openshift-console/console-845d8b57db-lrhn4" Jan 28 18:48:14 crc kubenswrapper[4721]: I0128 18:48:14.259798 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/c5c09220-f732-4fb2-aad6-9e6c522c3ba8-console-serving-cert\") pod \"console-845d8b57db-lrhn4\" (UID: \"c5c09220-f732-4fb2-aad6-9e6c522c3ba8\") " 
pod="openshift-console/console-845d8b57db-lrhn4" Jan 28 18:48:14 crc kubenswrapper[4721]: I0128 18:48:14.260114 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/c5c09220-f732-4fb2-aad6-9e6c522c3ba8-console-oauth-config\") pod \"console-845d8b57db-lrhn4\" (UID: \"c5c09220-f732-4fb2-aad6-9e6c522c3ba8\") " pod="openshift-console/console-845d8b57db-lrhn4" Jan 28 18:48:14 crc kubenswrapper[4721]: I0128 18:48:14.260316 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/c5c09220-f732-4fb2-aad6-9e6c522c3ba8-oauth-serving-cert\") pod \"console-845d8b57db-lrhn4\" (UID: \"c5c09220-f732-4fb2-aad6-9e6c522c3ba8\") " pod="openshift-console/console-845d8b57db-lrhn4" Jan 28 18:48:14 crc kubenswrapper[4721]: I0128 18:48:14.260373 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xqmhr\" (UniqueName: \"kubernetes.io/projected/c5c09220-f732-4fb2-aad6-9e6c522c3ba8-kube-api-access-xqmhr\") pod \"console-845d8b57db-lrhn4\" (UID: \"c5c09220-f732-4fb2-aad6-9e6c522c3ba8\") " pod="openshift-console/console-845d8b57db-lrhn4" Jan 28 18:48:14 crc kubenswrapper[4721]: I0128 18:48:14.260438 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/c5c09220-f732-4fb2-aad6-9e6c522c3ba8-console-config\") pod \"console-845d8b57db-lrhn4\" (UID: \"c5c09220-f732-4fb2-aad6-9e6c522c3ba8\") " pod="openshift-console/console-845d8b57db-lrhn4" Jan 28 18:48:14 crc kubenswrapper[4721]: I0128 18:48:14.279090 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-handler-4wqcf" Jan 28 18:48:14 crc kubenswrapper[4721]: I0128 18:48:14.364523 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/c5c09220-f732-4fb2-aad6-9e6c522c3ba8-console-oauth-config\") pod \"console-845d8b57db-lrhn4\" (UID: \"c5c09220-f732-4fb2-aad6-9e6c522c3ba8\") " pod="openshift-console/console-845d8b57db-lrhn4" Jan 28 18:48:14 crc kubenswrapper[4721]: I0128 18:48:14.365342 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/c5c09220-f732-4fb2-aad6-9e6c522c3ba8-oauth-serving-cert\") pod \"console-845d8b57db-lrhn4\" (UID: \"c5c09220-f732-4fb2-aad6-9e6c522c3ba8\") " pod="openshift-console/console-845d8b57db-lrhn4" Jan 28 18:48:14 crc kubenswrapper[4721]: I0128 18:48:14.365436 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xqmhr\" (UniqueName: \"kubernetes.io/projected/c5c09220-f732-4fb2-aad6-9e6c522c3ba8-kube-api-access-xqmhr\") pod \"console-845d8b57db-lrhn4\" (UID: \"c5c09220-f732-4fb2-aad6-9e6c522c3ba8\") " pod="openshift-console/console-845d8b57db-lrhn4" Jan 28 18:48:14 crc kubenswrapper[4721]: I0128 18:48:14.365518 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/c5c09220-f732-4fb2-aad6-9e6c522c3ba8-console-config\") pod \"console-845d8b57db-lrhn4\" (UID: \"c5c09220-f732-4fb2-aad6-9e6c522c3ba8\") " pod="openshift-console/console-845d8b57db-lrhn4" Jan 28 18:48:14 crc kubenswrapper[4721]: I0128 18:48:14.365689 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/c5c09220-f732-4fb2-aad6-9e6c522c3ba8-service-ca\") pod \"console-845d8b57db-lrhn4\" (UID: \"c5c09220-f732-4fb2-aad6-9e6c522c3ba8\") " pod="openshift-console/console-845d8b57db-lrhn4" Jan 28 18:48:14 crc kubenswrapper[4721]: I0128 18:48:14.365768 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/a8abeaa3-e685-4caa-b32c-cc0a40dfdb8b-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-9rp4b\" (UID: \"a8abeaa3-e685-4caa-b32c-cc0a40dfdb8b\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-9rp4b" Jan 28 18:48:14 crc kubenswrapper[4721]: I0128 18:48:14.365845 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c5c09220-f732-4fb2-aad6-9e6c522c3ba8-trusted-ca-bundle\") pod \"console-845d8b57db-lrhn4\" (UID: \"c5c09220-f732-4fb2-aad6-9e6c522c3ba8\") " pod="openshift-console/console-845d8b57db-lrhn4" Jan 28 18:48:14 crc kubenswrapper[4721]: I0128 18:48:14.365922 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/c5c09220-f732-4fb2-aad6-9e6c522c3ba8-console-serving-cert\") pod \"console-845d8b57db-lrhn4\" (UID: \"c5c09220-f732-4fb2-aad6-9e6c522c3ba8\") " pod="openshift-console/console-845d8b57db-lrhn4" Jan 28 18:48:14 crc kubenswrapper[4721]: I0128 18:48:14.370114 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/c5c09220-f732-4fb2-aad6-9e6c522c3ba8-console-config\") pod \"console-845d8b57db-lrhn4\" (UID: 
\"c5c09220-f732-4fb2-aad6-9e6c522c3ba8\") " pod="openshift-console/console-845d8b57db-lrhn4" Jan 28 18:48:14 crc kubenswrapper[4721]: I0128 18:48:14.370114 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/c5c09220-f732-4fb2-aad6-9e6c522c3ba8-oauth-serving-cert\") pod \"console-845d8b57db-lrhn4\" (UID: \"c5c09220-f732-4fb2-aad6-9e6c522c3ba8\") " pod="openshift-console/console-845d8b57db-lrhn4" Jan 28 18:48:14 crc kubenswrapper[4721]: I0128 18:48:14.373700 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/c5c09220-f732-4fb2-aad6-9e6c522c3ba8-console-oauth-config\") pod \"console-845d8b57db-lrhn4\" (UID: \"c5c09220-f732-4fb2-aad6-9e6c522c3ba8\") " pod="openshift-console/console-845d8b57db-lrhn4" Jan 28 18:48:14 crc kubenswrapper[4721]: I0128 18:48:14.377407 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/c5c09220-f732-4fb2-aad6-9e6c522c3ba8-service-ca\") pod \"console-845d8b57db-lrhn4\" (UID: \"c5c09220-f732-4fb2-aad6-9e6c522c3ba8\") " pod="openshift-console/console-845d8b57db-lrhn4" Jan 28 18:48:14 crc kubenswrapper[4721]: I0128 18:48:14.378075 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/a8abeaa3-e685-4caa-b32c-cc0a40dfdb8b-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-9rp4b\" (UID: \"a8abeaa3-e685-4caa-b32c-cc0a40dfdb8b\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-9rp4b" Jan 28 18:48:14 crc kubenswrapper[4721]: I0128 18:48:14.378278 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c5c09220-f732-4fb2-aad6-9e6c522c3ba8-trusted-ca-bundle\") pod \"console-845d8b57db-lrhn4\" (UID: \"c5c09220-f732-4fb2-aad6-9e6c522c3ba8\") " pod="openshift-console/console-845d8b57db-lrhn4" Jan 28 18:48:14 crc kubenswrapper[4721]: I0128 18:48:14.379116 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/c5c09220-f732-4fb2-aad6-9e6c522c3ba8-console-serving-cert\") pod \"console-845d8b57db-lrhn4\" (UID: \"c5c09220-f732-4fb2-aad6-9e6c522c3ba8\") " pod="openshift-console/console-845d8b57db-lrhn4" Jan 28 18:48:14 crc kubenswrapper[4721]: I0128 18:48:14.414600 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xqmhr\" (UniqueName: \"kubernetes.io/projected/c5c09220-f732-4fb2-aad6-9e6c522c3ba8-kube-api-access-xqmhr\") pod \"console-845d8b57db-lrhn4\" (UID: \"c5c09220-f732-4fb2-aad6-9e6c522c3ba8\") " pod="openshift-console/console-845d8b57db-lrhn4" Jan 28 18:48:14 crc kubenswrapper[4721]: I0128 18:48:14.434339 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-845d8b57db-lrhn4" Jan 28 18:48:14 crc kubenswrapper[4721]: I0128 18:48:14.517921 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-9rp4b" Jan 28 18:48:14 crc kubenswrapper[4721]: I0128 18:48:14.570989 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/c7b54106-b20d-4911-a9e2-90d5539bb4d7-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-qxhd9\" (UID: \"c7b54106-b20d-4911-a9e2-90d5539bb4d7\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-qxhd9" Jan 28 18:48:14 crc kubenswrapper[4721]: I0128 18:48:14.577764 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/c7b54106-b20d-4911-a9e2-90d5539bb4d7-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-qxhd9\" (UID: \"c7b54106-b20d-4911-a9e2-90d5539bb4d7\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-qxhd9" Jan 28 18:48:14 crc kubenswrapper[4721]: I0128 18:48:14.678109 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-4wqcf" event={"ID":"cf95e16e-0533-4d53-a185-3c62adb9e573","Type":"ContainerStarted","Data":"f34102a86497e05a96221bb8c22282d16015e64ae593ca4b0666e17037fde449"} Jan 28 18:48:14 crc kubenswrapper[4721]: I0128 18:48:14.682556 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-rwjnr" event={"ID":"fda999b5-6a00-4137-817e-b7d5417a2d2e","Type":"ContainerStarted","Data":"accceee1bcf979d1caf265f3e94b116bc9b49e73abd405984a6b6fac5042dac5"} Jan 28 18:48:14 crc kubenswrapper[4721]: I0128 18:48:14.717434 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-845d8b57db-lrhn4"] Jan 28 18:48:14 crc kubenswrapper[4721]: W0128 18:48:14.721527 4721 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc5c09220_f732_4fb2_aad6_9e6c522c3ba8.slice/crio-9208b708451dbd5626ffbe63ee140d8c4b460cb03264524bc561ff7305bfd4ef WatchSource:0}: Error finding container 9208b708451dbd5626ffbe63ee140d8c4b460cb03264524bc561ff7305bfd4ef: Status 404 returned error can't find the container with id 9208b708451dbd5626ffbe63ee140d8c4b460cb03264524bc561ff7305bfd4ef Jan 28 18:48:14 crc kubenswrapper[4721]: I0128 18:48:14.759150 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-qxhd9" Jan 28 18:48:14 crc kubenswrapper[4721]: I0128 18:48:14.809767 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-9rp4b"] Jan 28 18:48:15 crc kubenswrapper[4721]: I0128 18:48:15.036712 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-qxhd9"] Jan 28 18:48:15 crc kubenswrapper[4721]: I0128 18:48:15.690979 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-845d8b57db-lrhn4" event={"ID":"c5c09220-f732-4fb2-aad6-9e6c522c3ba8","Type":"ContainerStarted","Data":"f3b0b34897f29b75c913e3b1544430e5ef0c191f0a60d321e3ede905da743d94"} Jan 28 18:48:15 crc kubenswrapper[4721]: I0128 18:48:15.691601 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-845d8b57db-lrhn4" event={"ID":"c5c09220-f732-4fb2-aad6-9e6c522c3ba8","Type":"ContainerStarted","Data":"9208b708451dbd5626ffbe63ee140d8c4b460cb03264524bc561ff7305bfd4ef"} Jan 28 18:48:15 crc kubenswrapper[4721]: I0128 18:48:15.693984 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-9rp4b" event={"ID":"a8abeaa3-e685-4caa-b32c-cc0a40dfdb8b","Type":"ContainerStarted","Data":"9d32e0870ba3650cb7be34e27fe57442b643cf43559030f28679be9793512fec"} Jan 28 18:48:15 crc kubenswrapper[4721]: I0128 18:48:15.694948 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-qxhd9" event={"ID":"c7b54106-b20d-4911-a9e2-90d5539bb4d7","Type":"ContainerStarted","Data":"2be514e23b62eb73130bc59eeb2b83d59f11972b5c5106c1163216cf786bf5cd"} Jan 28 18:48:15 crc kubenswrapper[4721]: I0128 18:48:15.723863 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-845d8b57db-lrhn4" podStartSLOduration=1.72383754 podStartE2EDuration="1.72383754s" podCreationTimestamp="2026-01-28 18:48:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:48:15.718189453 +0000 UTC m=+861.443495043" watchObservedRunningTime="2026-01-28 18:48:15.72383754 +0000 UTC m=+861.449143100" Jan 28 18:48:18 crc kubenswrapper[4721]: I0128 18:48:18.728043 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-rwjnr" event={"ID":"fda999b5-6a00-4137-817e-b7d5417a2d2e","Type":"ContainerStarted","Data":"467d2c905c10d987ab2d286b332db18e6e589ea2fbc27ccdd6b5e13285fe259c"} Jan 28 18:48:18 crc kubenswrapper[4721]: I0128 18:48:18.732078 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-9rp4b" event={"ID":"a8abeaa3-e685-4caa-b32c-cc0a40dfdb8b","Type":"ContainerStarted","Data":"aafd0597c55bd661e95200adf7824751ee48ce54e2683978952340ea295dbca9"} Jan 28 18:48:18 crc kubenswrapper[4721]: I0128 18:48:18.732180 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-9rp4b" Jan 28 18:48:18 crc kubenswrapper[4721]: I0128 18:48:18.738525 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-4wqcf" event={"ID":"cf95e16e-0533-4d53-a185-3c62adb9e573","Type":"ContainerStarted","Data":"03651e30e455f0f4aacbe5f719681c791137dc7c8a2fe51e291062701f4128bd"} Jan 28 18:48:18 crc kubenswrapper[4721]: I0128 18:48:18.739062 4721 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-4wqcf" Jan 28 18:48:18 crc kubenswrapper[4721]: I0128 18:48:18.754729 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-9rp4b" podStartSLOduration=2.7135432379999997 podStartE2EDuration="5.754704879s" podCreationTimestamp="2026-01-28 18:48:13 +0000 UTC" firstStartedPulling="2026-01-28 18:48:14.847796191 +0000 UTC m=+860.573101751" lastFinishedPulling="2026-01-28 18:48:17.888957832 +0000 UTC m=+863.614263392" observedRunningTime="2026-01-28 18:48:18.748136453 +0000 UTC m=+864.473442023" watchObservedRunningTime="2026-01-28 18:48:18.754704879 +0000 UTC m=+864.480010439" Jan 28 18:48:18 crc kubenswrapper[4721]: I0128 18:48:18.774802 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-4wqcf" podStartSLOduration=2.345930492 podStartE2EDuration="5.774782969s" podCreationTimestamp="2026-01-28 18:48:13 +0000 UTC" firstStartedPulling="2026-01-28 18:48:14.336569131 +0000 UTC m=+860.061874691" lastFinishedPulling="2026-01-28 18:48:17.765421608 +0000 UTC m=+863.490727168" observedRunningTime="2026-01-28 18:48:18.768285385 +0000 UTC m=+864.493590955" watchObservedRunningTime="2026-01-28 18:48:18.774782969 +0000 UTC m=+864.500088529" Jan 28 18:48:19 crc kubenswrapper[4721]: I0128 18:48:19.745966 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-qxhd9" event={"ID":"c7b54106-b20d-4911-a9e2-90d5539bb4d7","Type":"ContainerStarted","Data":"b47a1d3908fbaf1aac1d8e4b3ecc2edba0d7664cbbaea450f3bda4d448a85cf4"} Jan 28 18:48:19 crc kubenswrapper[4721]: I0128 18:48:19.770000 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-qxhd9" podStartSLOduration=2.782969705 podStartE2EDuration="6.769978584s" podCreationTimestamp="2026-01-28 18:48:13 +0000 UTC" firstStartedPulling="2026-01-28 18:48:15.048967729 +0000 UTC m=+860.774273289" lastFinishedPulling="2026-01-28 18:48:19.035976608 +0000 UTC m=+864.761282168" observedRunningTime="2026-01-28 18:48:19.763054908 +0000 UTC m=+865.488360478" watchObservedRunningTime="2026-01-28 18:48:19.769978584 +0000 UTC m=+865.495284144" Jan 28 18:48:20 crc kubenswrapper[4721]: I0128 18:48:20.758469 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-rwjnr" event={"ID":"fda999b5-6a00-4137-817e-b7d5417a2d2e","Type":"ContainerStarted","Data":"607d8150b4dafb0a810d7a634a062e56c9df97d674c7ce0d4d2d049a759b0eda"} Jan 28 18:48:20 crc kubenswrapper[4721]: I0128 18:48:20.780267 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-54757c584b-rwjnr" podStartSLOduration=1.570253459 podStartE2EDuration="7.780245923s" podCreationTimestamp="2026-01-28 18:48:13 +0000 UTC" firstStartedPulling="2026-01-28 18:48:14.256316405 +0000 UTC m=+859.981621965" lastFinishedPulling="2026-01-28 18:48:20.466308869 +0000 UTC m=+866.191614429" observedRunningTime="2026-01-28 18:48:20.778454537 +0000 UTC m=+866.503760107" watchObservedRunningTime="2026-01-28 18:48:20.780245923 +0000 UTC m=+866.505551483" Jan 28 18:48:24 crc kubenswrapper[4721]: I0128 18:48:24.310658 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-4wqcf" Jan 28 18:48:24 crc kubenswrapper[4721]: I0128 
18:48:24.435452 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-845d8b57db-lrhn4" Jan 28 18:48:24 crc kubenswrapper[4721]: I0128 18:48:24.435694 4721 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-845d8b57db-lrhn4" Jan 28 18:48:24 crc kubenswrapper[4721]: I0128 18:48:24.440228 4721 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-845d8b57db-lrhn4" Jan 28 18:48:24 crc kubenswrapper[4721]: I0128 18:48:24.816067 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-845d8b57db-lrhn4" Jan 28 18:48:24 crc kubenswrapper[4721]: I0128 18:48:24.891643 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-ct2hz"] Jan 28 18:48:34 crc kubenswrapper[4721]: I0128 18:48:34.525219 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-9rp4b" Jan 28 18:48:48 crc kubenswrapper[4721]: I0128 18:48:48.448367 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dczk9b6"] Jan 28 18:48:48 crc kubenswrapper[4721]: I0128 18:48:48.450613 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dczk9b6" Jan 28 18:48:48 crc kubenswrapper[4721]: I0128 18:48:48.452675 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Jan 28 18:48:48 crc kubenswrapper[4721]: I0128 18:48:48.477264 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dczk9b6"] Jan 28 18:48:48 crc kubenswrapper[4721]: I0128 18:48:48.504553 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/2c37d643-cddf-40c7-ad82-e999634e0151-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dczk9b6\" (UID: \"2c37d643-cddf-40c7-ad82-e999634e0151\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dczk9b6" Jan 28 18:48:48 crc kubenswrapper[4721]: I0128 18:48:48.504631 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/2c37d643-cddf-40c7-ad82-e999634e0151-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dczk9b6\" (UID: \"2c37d643-cddf-40c7-ad82-e999634e0151\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dczk9b6" Jan 28 18:48:48 crc kubenswrapper[4721]: I0128 18:48:48.504708 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kjlrd\" (UniqueName: \"kubernetes.io/projected/2c37d643-cddf-40c7-ad82-e999634e0151-kube-api-access-kjlrd\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dczk9b6\" (UID: \"2c37d643-cddf-40c7-ad82-e999634e0151\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dczk9b6" Jan 28 18:48:48 crc kubenswrapper[4721]: I0128 18:48:48.606253 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kjlrd\" (UniqueName: 
\"kubernetes.io/projected/2c37d643-cddf-40c7-ad82-e999634e0151-kube-api-access-kjlrd\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dczk9b6\" (UID: \"2c37d643-cddf-40c7-ad82-e999634e0151\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dczk9b6" Jan 28 18:48:48 crc kubenswrapper[4721]: I0128 18:48:48.606637 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/2c37d643-cddf-40c7-ad82-e999634e0151-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dczk9b6\" (UID: \"2c37d643-cddf-40c7-ad82-e999634e0151\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dczk9b6" Jan 28 18:48:48 crc kubenswrapper[4721]: I0128 18:48:48.606697 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/2c37d643-cddf-40c7-ad82-e999634e0151-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dczk9b6\" (UID: \"2c37d643-cddf-40c7-ad82-e999634e0151\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dczk9b6" Jan 28 18:48:48 crc kubenswrapper[4721]: I0128 18:48:48.607763 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/2c37d643-cddf-40c7-ad82-e999634e0151-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dczk9b6\" (UID: \"2c37d643-cddf-40c7-ad82-e999634e0151\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dczk9b6" Jan 28 18:48:48 crc kubenswrapper[4721]: I0128 18:48:48.608011 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/2c37d643-cddf-40c7-ad82-e999634e0151-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dczk9b6\" (UID: \"2c37d643-cddf-40c7-ad82-e999634e0151\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dczk9b6" Jan 28 18:48:48 crc kubenswrapper[4721]: I0128 18:48:48.629153 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kjlrd\" (UniqueName: \"kubernetes.io/projected/2c37d643-cddf-40c7-ad82-e999634e0151-kube-api-access-kjlrd\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dczk9b6\" (UID: \"2c37d643-cddf-40c7-ad82-e999634e0151\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dczk9b6" Jan 28 18:48:48 crc kubenswrapper[4721]: I0128 18:48:48.780306 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dczk9b6" Jan 28 18:48:49 crc kubenswrapper[4721]: I0128 18:48:49.319815 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dczk9b6"] Jan 28 18:48:49 crc kubenswrapper[4721]: I0128 18:48:49.956506 4721 generic.go:334] "Generic (PLEG): container finished" podID="2c37d643-cddf-40c7-ad82-e999634e0151" containerID="c7d1243378aae41b346e1b497a4355aac1e12b2daa59429f56bea8df5abfc54f" exitCode=0 Jan 28 18:48:49 crc kubenswrapper[4721]: I0128 18:48:49.956580 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dczk9b6" event={"ID":"2c37d643-cddf-40c7-ad82-e999634e0151","Type":"ContainerDied","Data":"c7d1243378aae41b346e1b497a4355aac1e12b2daa59429f56bea8df5abfc54f"} Jan 28 18:48:49 crc kubenswrapper[4721]: I0128 18:48:49.956890 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dczk9b6" event={"ID":"2c37d643-cddf-40c7-ad82-e999634e0151","Type":"ContainerStarted","Data":"cf04163390076b49a3eb1b0500449dc846c4711c5f5592f71d68d79cdfd1338f"} Jan 28 18:48:50 crc kubenswrapper[4721]: I0128 18:48:50.021540 4721 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-f9d7485db-ct2hz" podUID="52b4f91f-7c7b-401a-82b0-8907f6880677" containerName="console" containerID="cri-o://bd95f1b18fd86907975a8dfb48da6dd4616b684110232787612d240fd73a2050" gracePeriod=15 Jan 28 18:48:50 crc kubenswrapper[4721]: I0128 18:48:50.581000 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-ct2hz_52b4f91f-7c7b-401a-82b0-8907f6880677/console/0.log" Jan 28 18:48:50 crc kubenswrapper[4721]: I0128 18:48:50.581083 4721 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-ct2hz" Jan 28 18:48:50 crc kubenswrapper[4721]: I0128 18:48:50.635382 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/52b4f91f-7c7b-401a-82b0-8907f6880677-console-serving-cert\") pod \"52b4f91f-7c7b-401a-82b0-8907f6880677\" (UID: \"52b4f91f-7c7b-401a-82b0-8907f6880677\") " Jan 28 18:48:50 crc kubenswrapper[4721]: I0128 18:48:50.635447 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8wf27\" (UniqueName: \"kubernetes.io/projected/52b4f91f-7c7b-401a-82b0-8907f6880677-kube-api-access-8wf27\") pod \"52b4f91f-7c7b-401a-82b0-8907f6880677\" (UID: \"52b4f91f-7c7b-401a-82b0-8907f6880677\") " Jan 28 18:48:50 crc kubenswrapper[4721]: I0128 18:48:50.635511 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/52b4f91f-7c7b-401a-82b0-8907f6880677-service-ca\") pod \"52b4f91f-7c7b-401a-82b0-8907f6880677\" (UID: \"52b4f91f-7c7b-401a-82b0-8907f6880677\") " Jan 28 18:48:50 crc kubenswrapper[4721]: I0128 18:48:50.635534 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/52b4f91f-7c7b-401a-82b0-8907f6880677-console-config\") pod \"52b4f91f-7c7b-401a-82b0-8907f6880677\" (UID: \"52b4f91f-7c7b-401a-82b0-8907f6880677\") " Jan 28 18:48:50 crc kubenswrapper[4721]: I0128 18:48:50.635558 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/52b4f91f-7c7b-401a-82b0-8907f6880677-trusted-ca-bundle\") pod \"52b4f91f-7c7b-401a-82b0-8907f6880677\" (UID: \"52b4f91f-7c7b-401a-82b0-8907f6880677\") " Jan 28 18:48:50 crc kubenswrapper[4721]: I0128 18:48:50.635579 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/52b4f91f-7c7b-401a-82b0-8907f6880677-console-oauth-config\") pod \"52b4f91f-7c7b-401a-82b0-8907f6880677\" (UID: \"52b4f91f-7c7b-401a-82b0-8907f6880677\") " Jan 28 18:48:50 crc kubenswrapper[4721]: I0128 18:48:50.635606 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/52b4f91f-7c7b-401a-82b0-8907f6880677-oauth-serving-cert\") pod \"52b4f91f-7c7b-401a-82b0-8907f6880677\" (UID: \"52b4f91f-7c7b-401a-82b0-8907f6880677\") " Jan 28 18:48:50 crc kubenswrapper[4721]: I0128 18:48:50.636608 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/52b4f91f-7c7b-401a-82b0-8907f6880677-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "52b4f91f-7c7b-401a-82b0-8907f6880677" (UID: "52b4f91f-7c7b-401a-82b0-8907f6880677"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:48:50 crc kubenswrapper[4721]: I0128 18:48:50.636650 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/52b4f91f-7c7b-401a-82b0-8907f6880677-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "52b4f91f-7c7b-401a-82b0-8907f6880677" (UID: "52b4f91f-7c7b-401a-82b0-8907f6880677"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:48:50 crc kubenswrapper[4721]: I0128 18:48:50.636691 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/52b4f91f-7c7b-401a-82b0-8907f6880677-console-config" (OuterVolumeSpecName: "console-config") pod "52b4f91f-7c7b-401a-82b0-8907f6880677" (UID: "52b4f91f-7c7b-401a-82b0-8907f6880677"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:48:50 crc kubenswrapper[4721]: I0128 18:48:50.637358 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/52b4f91f-7c7b-401a-82b0-8907f6880677-service-ca" (OuterVolumeSpecName: "service-ca") pod "52b4f91f-7c7b-401a-82b0-8907f6880677" (UID: "52b4f91f-7c7b-401a-82b0-8907f6880677"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:48:50 crc kubenswrapper[4721]: I0128 18:48:50.641442 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/52b4f91f-7c7b-401a-82b0-8907f6880677-kube-api-access-8wf27" (OuterVolumeSpecName: "kube-api-access-8wf27") pod "52b4f91f-7c7b-401a-82b0-8907f6880677" (UID: "52b4f91f-7c7b-401a-82b0-8907f6880677"). InnerVolumeSpecName "kube-api-access-8wf27". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:48:50 crc kubenswrapper[4721]: I0128 18:48:50.641696 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/52b4f91f-7c7b-401a-82b0-8907f6880677-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "52b4f91f-7c7b-401a-82b0-8907f6880677" (UID: "52b4f91f-7c7b-401a-82b0-8907f6880677"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:48:50 crc kubenswrapper[4721]: I0128 18:48:50.641825 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/52b4f91f-7c7b-401a-82b0-8907f6880677-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "52b4f91f-7c7b-401a-82b0-8907f6880677" (UID: "52b4f91f-7c7b-401a-82b0-8907f6880677"). InnerVolumeSpecName "console-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:48:50 crc kubenswrapper[4721]: I0128 18:48:50.736491 4721 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/52b4f91f-7c7b-401a-82b0-8907f6880677-console-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 28 18:48:50 crc kubenswrapper[4721]: I0128 18:48:50.736533 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8wf27\" (UniqueName: \"kubernetes.io/projected/52b4f91f-7c7b-401a-82b0-8907f6880677-kube-api-access-8wf27\") on node \"crc\" DevicePath \"\"" Jan 28 18:48:50 crc kubenswrapper[4721]: I0128 18:48:50.736545 4721 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/52b4f91f-7c7b-401a-82b0-8907f6880677-service-ca\") on node \"crc\" DevicePath \"\"" Jan 28 18:48:50 crc kubenswrapper[4721]: I0128 18:48:50.736553 4721 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/52b4f91f-7c7b-401a-82b0-8907f6880677-console-config\") on node \"crc\" DevicePath \"\"" Jan 28 18:48:50 crc kubenswrapper[4721]: I0128 18:48:50.736564 4721 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/52b4f91f-7c7b-401a-82b0-8907f6880677-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 18:48:50 crc kubenswrapper[4721]: I0128 18:48:50.736573 4721 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/52b4f91f-7c7b-401a-82b0-8907f6880677-console-oauth-config\") on node \"crc\" DevicePath \"\"" Jan 28 18:48:50 crc kubenswrapper[4721]: I0128 18:48:50.736582 4721 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/52b4f91f-7c7b-401a-82b0-8907f6880677-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 28 18:48:50 crc kubenswrapper[4721]: I0128 18:48:50.965884 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-ct2hz_52b4f91f-7c7b-401a-82b0-8907f6880677/console/0.log" Jan 28 18:48:50 crc kubenswrapper[4721]: I0128 18:48:50.965936 4721 generic.go:334] "Generic (PLEG): container finished" podID="52b4f91f-7c7b-401a-82b0-8907f6880677" containerID="bd95f1b18fd86907975a8dfb48da6dd4616b684110232787612d240fd73a2050" exitCode=2 Jan 28 18:48:50 crc kubenswrapper[4721]: I0128 18:48:50.965971 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-ct2hz" event={"ID":"52b4f91f-7c7b-401a-82b0-8907f6880677","Type":"ContainerDied","Data":"bd95f1b18fd86907975a8dfb48da6dd4616b684110232787612d240fd73a2050"} Jan 28 18:48:50 crc kubenswrapper[4721]: I0128 18:48:50.966011 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-ct2hz" event={"ID":"52b4f91f-7c7b-401a-82b0-8907f6880677","Type":"ContainerDied","Data":"45cb2ef595adc47daf34972d7b4752a67370dc132a08c20dbe619e3365c51846"} Jan 28 18:48:50 crc kubenswrapper[4721]: I0128 18:48:50.966036 4721 scope.go:117] "RemoveContainer" containerID="bd95f1b18fd86907975a8dfb48da6dd4616b684110232787612d240fd73a2050" Jan 28 18:48:50 crc kubenswrapper[4721]: I0128 18:48:50.966075 4721 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-ct2hz" Jan 28 18:48:50 crc kubenswrapper[4721]: I0128 18:48:50.988481 4721 scope.go:117] "RemoveContainer" containerID="bd95f1b18fd86907975a8dfb48da6dd4616b684110232787612d240fd73a2050" Jan 28 18:48:50 crc kubenswrapper[4721]: E0128 18:48:50.989139 4721 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bd95f1b18fd86907975a8dfb48da6dd4616b684110232787612d240fd73a2050\": container with ID starting with bd95f1b18fd86907975a8dfb48da6dd4616b684110232787612d240fd73a2050 not found: ID does not exist" containerID="bd95f1b18fd86907975a8dfb48da6dd4616b684110232787612d240fd73a2050" Jan 28 18:48:50 crc kubenswrapper[4721]: I0128 18:48:50.989189 4721 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bd95f1b18fd86907975a8dfb48da6dd4616b684110232787612d240fd73a2050"} err="failed to get container status \"bd95f1b18fd86907975a8dfb48da6dd4616b684110232787612d240fd73a2050\": rpc error: code = NotFound desc = could not find container \"bd95f1b18fd86907975a8dfb48da6dd4616b684110232787612d240fd73a2050\": container with ID starting with bd95f1b18fd86907975a8dfb48da6dd4616b684110232787612d240fd73a2050 not found: ID does not exist" Jan 28 18:48:51 crc kubenswrapper[4721]: I0128 18:48:51.002858 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-ct2hz"] Jan 28 18:48:51 crc kubenswrapper[4721]: I0128 18:48:51.008845 4721 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-f9d7485db-ct2hz"] Jan 28 18:48:51 crc kubenswrapper[4721]: I0128 18:48:51.537405 4721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="52b4f91f-7c7b-401a-82b0-8907f6880677" path="/var/lib/kubelet/pods/52b4f91f-7c7b-401a-82b0-8907f6880677/volumes" Jan 28 18:48:51 crc kubenswrapper[4721]: I0128 18:48:51.975616 4721 generic.go:334] "Generic (PLEG): container finished" podID="2c37d643-cddf-40c7-ad82-e999634e0151" containerID="2466c3c7e462b199562c93880dedc1fe819630ceee438ea7f0eb3be6c7f8fc13" exitCode=0 Jan 28 18:48:51 crc kubenswrapper[4721]: I0128 18:48:51.975669 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dczk9b6" event={"ID":"2c37d643-cddf-40c7-ad82-e999634e0151","Type":"ContainerDied","Data":"2466c3c7e462b199562c93880dedc1fe819630ceee438ea7f0eb3be6c7f8fc13"} Jan 28 18:48:52 crc kubenswrapper[4721]: I0128 18:48:52.986776 4721 generic.go:334] "Generic (PLEG): container finished" podID="2c37d643-cddf-40c7-ad82-e999634e0151" containerID="0e4ccc0ad12ac3d1dbc052de103f7b1cb9385707bd0570b39b98aad42337d455" exitCode=0 Jan 28 18:48:52 crc kubenswrapper[4721]: I0128 18:48:52.986893 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dczk9b6" event={"ID":"2c37d643-cddf-40c7-ad82-e999634e0151","Type":"ContainerDied","Data":"0e4ccc0ad12ac3d1dbc052de103f7b1cb9385707bd0570b39b98aad42337d455"} Jan 28 18:48:54 crc kubenswrapper[4721]: I0128 18:48:54.247440 4721 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dczk9b6" Jan 28 18:48:54 crc kubenswrapper[4721]: I0128 18:48:54.383868 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/2c37d643-cddf-40c7-ad82-e999634e0151-util\") pod \"2c37d643-cddf-40c7-ad82-e999634e0151\" (UID: \"2c37d643-cddf-40c7-ad82-e999634e0151\") " Jan 28 18:48:54 crc kubenswrapper[4721]: I0128 18:48:54.383999 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/2c37d643-cddf-40c7-ad82-e999634e0151-bundle\") pod \"2c37d643-cddf-40c7-ad82-e999634e0151\" (UID: \"2c37d643-cddf-40c7-ad82-e999634e0151\") " Jan 28 18:48:54 crc kubenswrapper[4721]: I0128 18:48:54.384076 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kjlrd\" (UniqueName: \"kubernetes.io/projected/2c37d643-cddf-40c7-ad82-e999634e0151-kube-api-access-kjlrd\") pod \"2c37d643-cddf-40c7-ad82-e999634e0151\" (UID: \"2c37d643-cddf-40c7-ad82-e999634e0151\") " Jan 28 18:48:54 crc kubenswrapper[4721]: I0128 18:48:54.384900 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2c37d643-cddf-40c7-ad82-e999634e0151-bundle" (OuterVolumeSpecName: "bundle") pod "2c37d643-cddf-40c7-ad82-e999634e0151" (UID: "2c37d643-cddf-40c7-ad82-e999634e0151"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 18:48:54 crc kubenswrapper[4721]: I0128 18:48:54.389226 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2c37d643-cddf-40c7-ad82-e999634e0151-kube-api-access-kjlrd" (OuterVolumeSpecName: "kube-api-access-kjlrd") pod "2c37d643-cddf-40c7-ad82-e999634e0151" (UID: "2c37d643-cddf-40c7-ad82-e999634e0151"). InnerVolumeSpecName "kube-api-access-kjlrd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:48:54 crc kubenswrapper[4721]: I0128 18:48:54.402634 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2c37d643-cddf-40c7-ad82-e999634e0151-util" (OuterVolumeSpecName: "util") pod "2c37d643-cddf-40c7-ad82-e999634e0151" (UID: "2c37d643-cddf-40c7-ad82-e999634e0151"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 18:48:54 crc kubenswrapper[4721]: I0128 18:48:54.485802 4721 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/2c37d643-cddf-40c7-ad82-e999634e0151-util\") on node \"crc\" DevicePath \"\"" Jan 28 18:48:54 crc kubenswrapper[4721]: I0128 18:48:54.485852 4721 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/2c37d643-cddf-40c7-ad82-e999634e0151-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 18:48:54 crc kubenswrapper[4721]: I0128 18:48:54.485909 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kjlrd\" (UniqueName: \"kubernetes.io/projected/2c37d643-cddf-40c7-ad82-e999634e0151-kube-api-access-kjlrd\") on node \"crc\" DevicePath \"\"" Jan 28 18:48:55 crc kubenswrapper[4721]: I0128 18:48:55.002126 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dczk9b6" event={"ID":"2c37d643-cddf-40c7-ad82-e999634e0151","Type":"ContainerDied","Data":"cf04163390076b49a3eb1b0500449dc846c4711c5f5592f71d68d79cdfd1338f"} Jan 28 18:48:55 crc kubenswrapper[4721]: I0128 18:48:55.002195 4721 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cf04163390076b49a3eb1b0500449dc846c4711c5f5592f71d68d79cdfd1338f" Jan 28 18:48:55 crc kubenswrapper[4721]: I0128 18:48:55.002270 4721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dczk9b6" Jan 28 18:48:56 crc kubenswrapper[4721]: I0128 18:48:56.340399 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-v9zlq"] Jan 28 18:48:56 crc kubenswrapper[4721]: E0128 18:48:56.341045 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2c37d643-cddf-40c7-ad82-e999634e0151" containerName="pull" Jan 28 18:48:56 crc kubenswrapper[4721]: I0128 18:48:56.341061 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="2c37d643-cddf-40c7-ad82-e999634e0151" containerName="pull" Jan 28 18:48:56 crc kubenswrapper[4721]: E0128 18:48:56.341072 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2c37d643-cddf-40c7-ad82-e999634e0151" containerName="util" Jan 28 18:48:56 crc kubenswrapper[4721]: I0128 18:48:56.341079 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="2c37d643-cddf-40c7-ad82-e999634e0151" containerName="util" Jan 28 18:48:56 crc kubenswrapper[4721]: E0128 18:48:56.341088 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2c37d643-cddf-40c7-ad82-e999634e0151" containerName="extract" Jan 28 18:48:56 crc kubenswrapper[4721]: I0128 18:48:56.341096 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="2c37d643-cddf-40c7-ad82-e999634e0151" containerName="extract" Jan 28 18:48:56 crc kubenswrapper[4721]: E0128 18:48:56.341118 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="52b4f91f-7c7b-401a-82b0-8907f6880677" containerName="console" Jan 28 18:48:56 crc kubenswrapper[4721]: I0128 18:48:56.341126 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="52b4f91f-7c7b-401a-82b0-8907f6880677" containerName="console" Jan 28 18:48:56 crc kubenswrapper[4721]: I0128 18:48:56.341854 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="52b4f91f-7c7b-401a-82b0-8907f6880677" containerName="console" Jan 28 18:48:56 crc 
kubenswrapper[4721]: I0128 18:48:56.341870 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="2c37d643-cddf-40c7-ad82-e999634e0151" containerName="extract" Jan 28 18:48:56 crc kubenswrapper[4721]: I0128 18:48:56.342754 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-v9zlq" Jan 28 18:48:56 crc kubenswrapper[4721]: I0128 18:48:56.373596 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-v9zlq"] Jan 28 18:48:56 crc kubenswrapper[4721]: I0128 18:48:56.422094 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ks4wf\" (UniqueName: \"kubernetes.io/projected/5ae6829d-14f8-4181-b1d1-39778adc7a0e-kube-api-access-ks4wf\") pod \"certified-operators-v9zlq\" (UID: \"5ae6829d-14f8-4181-b1d1-39778adc7a0e\") " pod="openshift-marketplace/certified-operators-v9zlq" Jan 28 18:48:56 crc kubenswrapper[4721]: I0128 18:48:56.422145 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5ae6829d-14f8-4181-b1d1-39778adc7a0e-utilities\") pod \"certified-operators-v9zlq\" (UID: \"5ae6829d-14f8-4181-b1d1-39778adc7a0e\") " pod="openshift-marketplace/certified-operators-v9zlq" Jan 28 18:48:56 crc kubenswrapper[4721]: I0128 18:48:56.422194 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5ae6829d-14f8-4181-b1d1-39778adc7a0e-catalog-content\") pod \"certified-operators-v9zlq\" (UID: \"5ae6829d-14f8-4181-b1d1-39778adc7a0e\") " pod="openshift-marketplace/certified-operators-v9zlq" Jan 28 18:48:56 crc kubenswrapper[4721]: I0128 18:48:56.523474 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ks4wf\" (UniqueName: \"kubernetes.io/projected/5ae6829d-14f8-4181-b1d1-39778adc7a0e-kube-api-access-ks4wf\") pod \"certified-operators-v9zlq\" (UID: \"5ae6829d-14f8-4181-b1d1-39778adc7a0e\") " pod="openshift-marketplace/certified-operators-v9zlq" Jan 28 18:48:56 crc kubenswrapper[4721]: I0128 18:48:56.523564 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5ae6829d-14f8-4181-b1d1-39778adc7a0e-utilities\") pod \"certified-operators-v9zlq\" (UID: \"5ae6829d-14f8-4181-b1d1-39778adc7a0e\") " pod="openshift-marketplace/certified-operators-v9zlq" Jan 28 18:48:56 crc kubenswrapper[4721]: I0128 18:48:56.523606 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5ae6829d-14f8-4181-b1d1-39778adc7a0e-catalog-content\") pod \"certified-operators-v9zlq\" (UID: \"5ae6829d-14f8-4181-b1d1-39778adc7a0e\") " pod="openshift-marketplace/certified-operators-v9zlq" Jan 28 18:48:56 crc kubenswrapper[4721]: I0128 18:48:56.524379 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5ae6829d-14f8-4181-b1d1-39778adc7a0e-catalog-content\") pod \"certified-operators-v9zlq\" (UID: \"5ae6829d-14f8-4181-b1d1-39778adc7a0e\") " pod="openshift-marketplace/certified-operators-v9zlq" Jan 28 18:48:56 crc kubenswrapper[4721]: I0128 18:48:56.524468 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/5ae6829d-14f8-4181-b1d1-39778adc7a0e-utilities\") pod \"certified-operators-v9zlq\" (UID: \"5ae6829d-14f8-4181-b1d1-39778adc7a0e\") " pod="openshift-marketplace/certified-operators-v9zlq" Jan 28 18:48:56 crc kubenswrapper[4721]: I0128 18:48:56.549381 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ks4wf\" (UniqueName: \"kubernetes.io/projected/5ae6829d-14f8-4181-b1d1-39778adc7a0e-kube-api-access-ks4wf\") pod \"certified-operators-v9zlq\" (UID: \"5ae6829d-14f8-4181-b1d1-39778adc7a0e\") " pod="openshift-marketplace/certified-operators-v9zlq" Jan 28 18:48:56 crc kubenswrapper[4721]: I0128 18:48:56.672735 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-v9zlq" Jan 28 18:48:57 crc kubenswrapper[4721]: I0128 18:48:57.193052 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-v9zlq"] Jan 28 18:48:57 crc kubenswrapper[4721]: W0128 18:48:57.196610 4721 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5ae6829d_14f8_4181_b1d1_39778adc7a0e.slice/crio-3dd652c818075e35016ec0359b95d07385631546622bc496b488ff67bf3cd402 WatchSource:0}: Error finding container 3dd652c818075e35016ec0359b95d07385631546622bc496b488ff67bf3cd402: Status 404 returned error can't find the container with id 3dd652c818075e35016ec0359b95d07385631546622bc496b488ff67bf3cd402 Jan 28 18:48:58 crc kubenswrapper[4721]: I0128 18:48:58.023145 4721 generic.go:334] "Generic (PLEG): container finished" podID="5ae6829d-14f8-4181-b1d1-39778adc7a0e" containerID="80f8ce8ac2ad558abe8e0bf9f16a9db621584fc33da95fe4225fd23a9438d355" exitCode=0 Jan 28 18:48:58 crc kubenswrapper[4721]: I0128 18:48:58.023220 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-v9zlq" event={"ID":"5ae6829d-14f8-4181-b1d1-39778adc7a0e","Type":"ContainerDied","Data":"80f8ce8ac2ad558abe8e0bf9f16a9db621584fc33da95fe4225fd23a9438d355"} Jan 28 18:48:58 crc kubenswrapper[4721]: I0128 18:48:58.023508 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-v9zlq" event={"ID":"5ae6829d-14f8-4181-b1d1-39778adc7a0e","Type":"ContainerStarted","Data":"3dd652c818075e35016ec0359b95d07385631546622bc496b488ff67bf3cd402"} Jan 28 18:48:59 crc kubenswrapper[4721]: I0128 18:48:59.342104 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-cd7ss"] Jan 28 18:48:59 crc kubenswrapper[4721]: I0128 18:48:59.343736 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-cd7ss" Jan 28 18:48:59 crc kubenswrapper[4721]: I0128 18:48:59.357451 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-cd7ss"] Jan 28 18:48:59 crc kubenswrapper[4721]: I0128 18:48:59.367616 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cfd4e46b-ee1a-43e0-a0fc-14513d81daed-utilities\") pod \"community-operators-cd7ss\" (UID: \"cfd4e46b-ee1a-43e0-a0fc-14513d81daed\") " pod="openshift-marketplace/community-operators-cd7ss" Jan 28 18:48:59 crc kubenswrapper[4721]: I0128 18:48:59.367665 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cfd4e46b-ee1a-43e0-a0fc-14513d81daed-catalog-content\") pod \"community-operators-cd7ss\" (UID: \"cfd4e46b-ee1a-43e0-a0fc-14513d81daed\") " pod="openshift-marketplace/community-operators-cd7ss" Jan 28 18:48:59 crc kubenswrapper[4721]: I0128 18:48:59.367783 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g859r\" (UniqueName: \"kubernetes.io/projected/cfd4e46b-ee1a-43e0-a0fc-14513d81daed-kube-api-access-g859r\") pod \"community-operators-cd7ss\" (UID: \"cfd4e46b-ee1a-43e0-a0fc-14513d81daed\") " pod="openshift-marketplace/community-operators-cd7ss" Jan 28 18:48:59 crc kubenswrapper[4721]: I0128 18:48:59.469273 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g859r\" (UniqueName: \"kubernetes.io/projected/cfd4e46b-ee1a-43e0-a0fc-14513d81daed-kube-api-access-g859r\") pod \"community-operators-cd7ss\" (UID: \"cfd4e46b-ee1a-43e0-a0fc-14513d81daed\") " pod="openshift-marketplace/community-operators-cd7ss" Jan 28 18:48:59 crc kubenswrapper[4721]: I0128 18:48:59.469446 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cfd4e46b-ee1a-43e0-a0fc-14513d81daed-utilities\") pod \"community-operators-cd7ss\" (UID: \"cfd4e46b-ee1a-43e0-a0fc-14513d81daed\") " pod="openshift-marketplace/community-operators-cd7ss" Jan 28 18:48:59 crc kubenswrapper[4721]: I0128 18:48:59.469483 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cfd4e46b-ee1a-43e0-a0fc-14513d81daed-catalog-content\") pod \"community-operators-cd7ss\" (UID: \"cfd4e46b-ee1a-43e0-a0fc-14513d81daed\") " pod="openshift-marketplace/community-operators-cd7ss" Jan 28 18:48:59 crc kubenswrapper[4721]: I0128 18:48:59.470054 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cfd4e46b-ee1a-43e0-a0fc-14513d81daed-utilities\") pod \"community-operators-cd7ss\" (UID: \"cfd4e46b-ee1a-43e0-a0fc-14513d81daed\") " pod="openshift-marketplace/community-operators-cd7ss" Jan 28 18:48:59 crc kubenswrapper[4721]: I0128 18:48:59.470238 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cfd4e46b-ee1a-43e0-a0fc-14513d81daed-catalog-content\") pod \"community-operators-cd7ss\" (UID: \"cfd4e46b-ee1a-43e0-a0fc-14513d81daed\") " pod="openshift-marketplace/community-operators-cd7ss" Jan 28 18:48:59 crc kubenswrapper[4721]: I0128 18:48:59.499626 4721 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-g859r\" (UniqueName: \"kubernetes.io/projected/cfd4e46b-ee1a-43e0-a0fc-14513d81daed-kube-api-access-g859r\") pod \"community-operators-cd7ss\" (UID: \"cfd4e46b-ee1a-43e0-a0fc-14513d81daed\") " pod="openshift-marketplace/community-operators-cd7ss" Jan 28 18:48:59 crc kubenswrapper[4721]: I0128 18:48:59.665141 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-cd7ss" Jan 28 18:49:00 crc kubenswrapper[4721]: I0128 18:49:00.048010 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-v9zlq" event={"ID":"5ae6829d-14f8-4181-b1d1-39778adc7a0e","Type":"ContainerStarted","Data":"752068d496038d5baf31328cdf04957f4c642cfaedf4c78ca5d089429f71d09d"} Jan 28 18:49:00 crc kubenswrapper[4721]: I0128 18:49:00.135614 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-cd7ss"] Jan 28 18:49:00 crc kubenswrapper[4721]: W0128 18:49:00.182560 4721 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcfd4e46b_ee1a_43e0_a0fc_14513d81daed.slice/crio-a8debf5c189a4949cbc4aabdc75a3aa74403b5a74e4c346ca4e580616eed1d0c WatchSource:0}: Error finding container a8debf5c189a4949cbc4aabdc75a3aa74403b5a74e4c346ca4e580616eed1d0c: Status 404 returned error can't find the container with id a8debf5c189a4949cbc4aabdc75a3aa74403b5a74e4c346ca4e580616eed1d0c Jan 28 18:49:01 crc kubenswrapper[4721]: I0128 18:49:01.055984 4721 generic.go:334] "Generic (PLEG): container finished" podID="cfd4e46b-ee1a-43e0-a0fc-14513d81daed" containerID="a035cdf9da75f95331bda131f99f8ecec4f31df64411e4f7853e19cb1d300aaf" exitCode=0 Jan 28 18:49:01 crc kubenswrapper[4721]: I0128 18:49:01.056050 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cd7ss" event={"ID":"cfd4e46b-ee1a-43e0-a0fc-14513d81daed","Type":"ContainerDied","Data":"a035cdf9da75f95331bda131f99f8ecec4f31df64411e4f7853e19cb1d300aaf"} Jan 28 18:49:01 crc kubenswrapper[4721]: I0128 18:49:01.056448 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cd7ss" event={"ID":"cfd4e46b-ee1a-43e0-a0fc-14513d81daed","Type":"ContainerStarted","Data":"a8debf5c189a4949cbc4aabdc75a3aa74403b5a74e4c346ca4e580616eed1d0c"} Jan 28 18:49:01 crc kubenswrapper[4721]: I0128 18:49:01.058856 4721 generic.go:334] "Generic (PLEG): container finished" podID="5ae6829d-14f8-4181-b1d1-39778adc7a0e" containerID="752068d496038d5baf31328cdf04957f4c642cfaedf4c78ca5d089429f71d09d" exitCode=0 Jan 28 18:49:01 crc kubenswrapper[4721]: I0128 18:49:01.058897 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-v9zlq" event={"ID":"5ae6829d-14f8-4181-b1d1-39778adc7a0e","Type":"ContainerDied","Data":"752068d496038d5baf31328cdf04957f4c642cfaedf4c78ca5d089429f71d09d"} Jan 28 18:49:01 crc kubenswrapper[4721]: I0128 18:49:01.230655 4721 patch_prober.go:28] interesting pod/machine-config-daemon-76rx2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 18:49:01 crc kubenswrapper[4721]: I0128 18:49:01.230744 4721 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-76rx2" 
podUID="6e3427a4-9a03-4a08-bf7f-7a5e96290ad6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 18:49:02 crc kubenswrapper[4721]: I0128 18:49:02.067615 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cd7ss" event={"ID":"cfd4e46b-ee1a-43e0-a0fc-14513d81daed","Type":"ContainerStarted","Data":"ba7263f955049d2f856ccc2fa4139f505a8d8e545119aa7e13a847f76f8016e5"} Jan 28 18:49:02 crc kubenswrapper[4721]: I0128 18:49:02.070431 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-v9zlq" event={"ID":"5ae6829d-14f8-4181-b1d1-39778adc7a0e","Type":"ContainerStarted","Data":"87516e6d781caa37bc3f5d1c86b6d4e5bb699441419d2caf20833fa45ec5ac6d"} Jan 28 18:49:02 crc kubenswrapper[4721]: I0128 18:49:02.108777 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-v9zlq" podStartSLOduration=2.617675688 podStartE2EDuration="6.108759987s" podCreationTimestamp="2026-01-28 18:48:56 +0000 UTC" firstStartedPulling="2026-01-28 18:48:58.024504958 +0000 UTC m=+903.749810518" lastFinishedPulling="2026-01-28 18:49:01.515589257 +0000 UTC m=+907.240894817" observedRunningTime="2026-01-28 18:49:02.107720654 +0000 UTC m=+907.833026224" watchObservedRunningTime="2026-01-28 18:49:02.108759987 +0000 UTC m=+907.834065547" Jan 28 18:49:02 crc kubenswrapper[4721]: I0128 18:49:02.334400 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-pvl9n"] Jan 28 18:49:02 crc kubenswrapper[4721]: I0128 18:49:02.336989 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-pvl9n" Jan 28 18:49:02 crc kubenswrapper[4721]: I0128 18:49:02.351845 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-pvl9n"] Jan 28 18:49:02 crc kubenswrapper[4721]: I0128 18:49:02.522221 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0802c6f3-4d8d-4a8a-a3fb-a9bb1438e3d4-utilities\") pod \"redhat-marketplace-pvl9n\" (UID: \"0802c6f3-4d8d-4a8a-a3fb-a9bb1438e3d4\") " pod="openshift-marketplace/redhat-marketplace-pvl9n" Jan 28 18:49:02 crc kubenswrapper[4721]: I0128 18:49:02.522303 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0802c6f3-4d8d-4a8a-a3fb-a9bb1438e3d4-catalog-content\") pod \"redhat-marketplace-pvl9n\" (UID: \"0802c6f3-4d8d-4a8a-a3fb-a9bb1438e3d4\") " pod="openshift-marketplace/redhat-marketplace-pvl9n" Jan 28 18:49:02 crc kubenswrapper[4721]: I0128 18:49:02.522342 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d4b7s\" (UniqueName: \"kubernetes.io/projected/0802c6f3-4d8d-4a8a-a3fb-a9bb1438e3d4-kube-api-access-d4b7s\") pod \"redhat-marketplace-pvl9n\" (UID: \"0802c6f3-4d8d-4a8a-a3fb-a9bb1438e3d4\") " pod="openshift-marketplace/redhat-marketplace-pvl9n" Jan 28 18:49:02 crc kubenswrapper[4721]: I0128 18:49:02.624011 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0802c6f3-4d8d-4a8a-a3fb-a9bb1438e3d4-catalog-content\") pod \"redhat-marketplace-pvl9n\" 
(UID: \"0802c6f3-4d8d-4a8a-a3fb-a9bb1438e3d4\") " pod="openshift-marketplace/redhat-marketplace-pvl9n" Jan 28 18:49:02 crc kubenswrapper[4721]: I0128 18:49:02.624096 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d4b7s\" (UniqueName: \"kubernetes.io/projected/0802c6f3-4d8d-4a8a-a3fb-a9bb1438e3d4-kube-api-access-d4b7s\") pod \"redhat-marketplace-pvl9n\" (UID: \"0802c6f3-4d8d-4a8a-a3fb-a9bb1438e3d4\") " pod="openshift-marketplace/redhat-marketplace-pvl9n" Jan 28 18:49:02 crc kubenswrapper[4721]: I0128 18:49:02.624253 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0802c6f3-4d8d-4a8a-a3fb-a9bb1438e3d4-utilities\") pod \"redhat-marketplace-pvl9n\" (UID: \"0802c6f3-4d8d-4a8a-a3fb-a9bb1438e3d4\") " pod="openshift-marketplace/redhat-marketplace-pvl9n" Jan 28 18:49:02 crc kubenswrapper[4721]: I0128 18:49:02.624691 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0802c6f3-4d8d-4a8a-a3fb-a9bb1438e3d4-catalog-content\") pod \"redhat-marketplace-pvl9n\" (UID: \"0802c6f3-4d8d-4a8a-a3fb-a9bb1438e3d4\") " pod="openshift-marketplace/redhat-marketplace-pvl9n" Jan 28 18:49:02 crc kubenswrapper[4721]: I0128 18:49:02.624723 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0802c6f3-4d8d-4a8a-a3fb-a9bb1438e3d4-utilities\") pod \"redhat-marketplace-pvl9n\" (UID: \"0802c6f3-4d8d-4a8a-a3fb-a9bb1438e3d4\") " pod="openshift-marketplace/redhat-marketplace-pvl9n" Jan 28 18:49:02 crc kubenswrapper[4721]: I0128 18:49:02.648245 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d4b7s\" (UniqueName: \"kubernetes.io/projected/0802c6f3-4d8d-4a8a-a3fb-a9bb1438e3d4-kube-api-access-d4b7s\") pod \"redhat-marketplace-pvl9n\" (UID: \"0802c6f3-4d8d-4a8a-a3fb-a9bb1438e3d4\") " pod="openshift-marketplace/redhat-marketplace-pvl9n" Jan 28 18:49:02 crc kubenswrapper[4721]: I0128 18:49:02.696932 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-pvl9n" Jan 28 18:49:03 crc kubenswrapper[4721]: I0128 18:49:03.004780 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-pvl9n"] Jan 28 18:49:03 crc kubenswrapper[4721]: I0128 18:49:03.087164 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pvl9n" event={"ID":"0802c6f3-4d8d-4a8a-a3fb-a9bb1438e3d4","Type":"ContainerStarted","Data":"aec07df1487c5b3a2f9fad69151faf33e3470a1f5331d0fdc429d57a38277ae2"} Jan 28 18:49:03 crc kubenswrapper[4721]: I0128 18:49:03.091980 4721 generic.go:334] "Generic (PLEG): container finished" podID="cfd4e46b-ee1a-43e0-a0fc-14513d81daed" containerID="ba7263f955049d2f856ccc2fa4139f505a8d8e545119aa7e13a847f76f8016e5" exitCode=0 Jan 28 18:49:03 crc kubenswrapper[4721]: I0128 18:49:03.092155 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cd7ss" event={"ID":"cfd4e46b-ee1a-43e0-a0fc-14513d81daed","Type":"ContainerDied","Data":"ba7263f955049d2f856ccc2fa4139f505a8d8e545119aa7e13a847f76f8016e5"} Jan 28 18:49:03 crc kubenswrapper[4721]: I0128 18:49:03.689792 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-79d44b6d7b-q852t"] Jan 28 18:49:03 crc kubenswrapper[4721]: I0128 18:49:03.691528 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-79d44b6d7b-q852t" Jan 28 18:49:03 crc kubenswrapper[4721]: I0128 18:49:03.695767 4721 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert" Jan 28 18:49:03 crc kubenswrapper[4721]: I0128 18:49:03.696951 4721 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert" Jan 28 18:49:03 crc kubenswrapper[4721]: I0128 18:49:03.697108 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt" Jan 28 18:49:03 crc kubenswrapper[4721]: I0128 18:49:03.697260 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt" Jan 28 18:49:03 crc kubenswrapper[4721]: I0128 18:49:03.697632 4721 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-wstjz" Jan 28 18:49:03 crc kubenswrapper[4721]: I0128 18:49:03.787904 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-79d44b6d7b-q852t"] Jan 28 18:49:03 crc kubenswrapper[4721]: I0128 18:49:03.841584 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-77tk4\" (UniqueName: \"kubernetes.io/projected/fbde7afa-5af9-462b-b402-352513fb9655-kube-api-access-77tk4\") pod \"metallb-operator-controller-manager-79d44b6d7b-q852t\" (UID: \"fbde7afa-5af9-462b-b402-352513fb9655\") " pod="metallb-system/metallb-operator-controller-manager-79d44b6d7b-q852t" Jan 28 18:49:03 crc kubenswrapper[4721]: I0128 18:49:03.841643 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/fbde7afa-5af9-462b-b402-352513fb9655-webhook-cert\") pod \"metallb-operator-controller-manager-79d44b6d7b-q852t\" (UID: \"fbde7afa-5af9-462b-b402-352513fb9655\") " 
pod="metallb-system/metallb-operator-controller-manager-79d44b6d7b-q852t" Jan 28 18:49:03 crc kubenswrapper[4721]: I0128 18:49:03.841703 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/fbde7afa-5af9-462b-b402-352513fb9655-apiservice-cert\") pod \"metallb-operator-controller-manager-79d44b6d7b-q852t\" (UID: \"fbde7afa-5af9-462b-b402-352513fb9655\") " pod="metallb-system/metallb-operator-controller-manager-79d44b6d7b-q852t" Jan 28 18:49:03 crc kubenswrapper[4721]: I0128 18:49:03.943907 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/fbde7afa-5af9-462b-b402-352513fb9655-apiservice-cert\") pod \"metallb-operator-controller-manager-79d44b6d7b-q852t\" (UID: \"fbde7afa-5af9-462b-b402-352513fb9655\") " pod="metallb-system/metallb-operator-controller-manager-79d44b6d7b-q852t" Jan 28 18:49:03 crc kubenswrapper[4721]: I0128 18:49:03.943994 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-77tk4\" (UniqueName: \"kubernetes.io/projected/fbde7afa-5af9-462b-b402-352513fb9655-kube-api-access-77tk4\") pod \"metallb-operator-controller-manager-79d44b6d7b-q852t\" (UID: \"fbde7afa-5af9-462b-b402-352513fb9655\") " pod="metallb-system/metallb-operator-controller-manager-79d44b6d7b-q852t" Jan 28 18:49:03 crc kubenswrapper[4721]: I0128 18:49:03.944019 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/fbde7afa-5af9-462b-b402-352513fb9655-webhook-cert\") pod \"metallb-operator-controller-manager-79d44b6d7b-q852t\" (UID: \"fbde7afa-5af9-462b-b402-352513fb9655\") " pod="metallb-system/metallb-operator-controller-manager-79d44b6d7b-q852t" Jan 28 18:49:03 crc kubenswrapper[4721]: I0128 18:49:03.954229 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/fbde7afa-5af9-462b-b402-352513fb9655-webhook-cert\") pod \"metallb-operator-controller-manager-79d44b6d7b-q852t\" (UID: \"fbde7afa-5af9-462b-b402-352513fb9655\") " pod="metallb-system/metallb-operator-controller-manager-79d44b6d7b-q852t" Jan 28 18:49:03 crc kubenswrapper[4721]: I0128 18:49:03.966512 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/fbde7afa-5af9-462b-b402-352513fb9655-apiservice-cert\") pod \"metallb-operator-controller-manager-79d44b6d7b-q852t\" (UID: \"fbde7afa-5af9-462b-b402-352513fb9655\") " pod="metallb-system/metallb-operator-controller-manager-79d44b6d7b-q852t" Jan 28 18:49:03 crc kubenswrapper[4721]: I0128 18:49:03.976279 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-77tk4\" (UniqueName: \"kubernetes.io/projected/fbde7afa-5af9-462b-b402-352513fb9655-kube-api-access-77tk4\") pod \"metallb-operator-controller-manager-79d44b6d7b-q852t\" (UID: \"fbde7afa-5af9-462b-b402-352513fb9655\") " pod="metallb-system/metallb-operator-controller-manager-79d44b6d7b-q852t" Jan 28 18:49:04 crc kubenswrapper[4721]: I0128 18:49:04.012698 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-79d44b6d7b-q852t" Jan 28 18:49:04 crc kubenswrapper[4721]: I0128 18:49:04.110829 4721 generic.go:334] "Generic (PLEG): container finished" podID="0802c6f3-4d8d-4a8a-a3fb-a9bb1438e3d4" containerID="7ff939b339b791bde948c87d534430efc9510ad5600b2927744914daa6a9b274" exitCode=0 Jan 28 18:49:04 crc kubenswrapper[4721]: I0128 18:49:04.110951 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pvl9n" event={"ID":"0802c6f3-4d8d-4a8a-a3fb-a9bb1438e3d4","Type":"ContainerDied","Data":"7ff939b339b791bde948c87d534430efc9510ad5600b2927744914daa6a9b274"} Jan 28 18:49:04 crc kubenswrapper[4721]: I0128 18:49:04.137216 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cd7ss" event={"ID":"cfd4e46b-ee1a-43e0-a0fc-14513d81daed","Type":"ContainerStarted","Data":"ecf68eee0ac402399beca0928cba5b06446aee57f4c3a3842ee6ea4664db0e68"} Jan 28 18:49:04 crc kubenswrapper[4721]: I0128 18:49:04.172034 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-7689b8f645-b5mcc"] Jan 28 18:49:04 crc kubenswrapper[4721]: I0128 18:49:04.174702 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-7689b8f645-b5mcc" Jan 28 18:49:04 crc kubenswrapper[4721]: I0128 18:49:04.182205 4721 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert" Jan 28 18:49:04 crc kubenswrapper[4721]: I0128 18:49:04.182309 4721 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Jan 28 18:49:04 crc kubenswrapper[4721]: I0128 18:49:04.182446 4721 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-ks7np" Jan 28 18:49:04 crc kubenswrapper[4721]: I0128 18:49:04.197328 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-cd7ss" podStartSLOduration=2.603365398 podStartE2EDuration="5.19730849s" podCreationTimestamp="2026-01-28 18:48:59 +0000 UTC" firstStartedPulling="2026-01-28 18:49:01.057535443 +0000 UTC m=+906.782841003" lastFinishedPulling="2026-01-28 18:49:03.651478525 +0000 UTC m=+909.376784095" observedRunningTime="2026-01-28 18:49:04.19573714 +0000 UTC m=+909.921042720" watchObservedRunningTime="2026-01-28 18:49:04.19730849 +0000 UTC m=+909.922614050" Jan 28 18:49:04 crc kubenswrapper[4721]: I0128 18:49:04.204189 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-7689b8f645-b5mcc"] Jan 28 18:49:04 crc kubenswrapper[4721]: I0128 18:49:04.250481 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/690709f2-5507-45e6-8897-380890c19e6f-apiservice-cert\") pod \"metallb-operator-webhook-server-7689b8f645-b5mcc\" (UID: \"690709f2-5507-45e6-8897-380890c19e6f\") " pod="metallb-system/metallb-operator-webhook-server-7689b8f645-b5mcc" Jan 28 18:49:04 crc kubenswrapper[4721]: I0128 18:49:04.250821 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/690709f2-5507-45e6-8897-380890c19e6f-webhook-cert\") pod \"metallb-operator-webhook-server-7689b8f645-b5mcc\" (UID: 
\"690709f2-5507-45e6-8897-380890c19e6f\") " pod="metallb-system/metallb-operator-webhook-server-7689b8f645-b5mcc" Jan 28 18:49:04 crc kubenswrapper[4721]: I0128 18:49:04.250925 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8gcbs\" (UniqueName: \"kubernetes.io/projected/690709f2-5507-45e6-8897-380890c19e6f-kube-api-access-8gcbs\") pod \"metallb-operator-webhook-server-7689b8f645-b5mcc\" (UID: \"690709f2-5507-45e6-8897-380890c19e6f\") " pod="metallb-system/metallb-operator-webhook-server-7689b8f645-b5mcc" Jan 28 18:49:04 crc kubenswrapper[4721]: I0128 18:49:04.352807 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8gcbs\" (UniqueName: \"kubernetes.io/projected/690709f2-5507-45e6-8897-380890c19e6f-kube-api-access-8gcbs\") pod \"metallb-operator-webhook-server-7689b8f645-b5mcc\" (UID: \"690709f2-5507-45e6-8897-380890c19e6f\") " pod="metallb-system/metallb-operator-webhook-server-7689b8f645-b5mcc" Jan 28 18:49:04 crc kubenswrapper[4721]: I0128 18:49:04.352891 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/690709f2-5507-45e6-8897-380890c19e6f-apiservice-cert\") pod \"metallb-operator-webhook-server-7689b8f645-b5mcc\" (UID: \"690709f2-5507-45e6-8897-380890c19e6f\") " pod="metallb-system/metallb-operator-webhook-server-7689b8f645-b5mcc" Jan 28 18:49:04 crc kubenswrapper[4721]: I0128 18:49:04.352939 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/690709f2-5507-45e6-8897-380890c19e6f-webhook-cert\") pod \"metallb-operator-webhook-server-7689b8f645-b5mcc\" (UID: \"690709f2-5507-45e6-8897-380890c19e6f\") " pod="metallb-system/metallb-operator-webhook-server-7689b8f645-b5mcc" Jan 28 18:49:04 crc kubenswrapper[4721]: I0128 18:49:04.358204 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/690709f2-5507-45e6-8897-380890c19e6f-apiservice-cert\") pod \"metallb-operator-webhook-server-7689b8f645-b5mcc\" (UID: \"690709f2-5507-45e6-8897-380890c19e6f\") " pod="metallb-system/metallb-operator-webhook-server-7689b8f645-b5mcc" Jan 28 18:49:04 crc kubenswrapper[4721]: I0128 18:49:04.358242 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/690709f2-5507-45e6-8897-380890c19e6f-webhook-cert\") pod \"metallb-operator-webhook-server-7689b8f645-b5mcc\" (UID: \"690709f2-5507-45e6-8897-380890c19e6f\") " pod="metallb-system/metallb-operator-webhook-server-7689b8f645-b5mcc" Jan 28 18:49:04 crc kubenswrapper[4721]: I0128 18:49:04.369477 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8gcbs\" (UniqueName: \"kubernetes.io/projected/690709f2-5507-45e6-8897-380890c19e6f-kube-api-access-8gcbs\") pod \"metallb-operator-webhook-server-7689b8f645-b5mcc\" (UID: \"690709f2-5507-45e6-8897-380890c19e6f\") " pod="metallb-system/metallb-operator-webhook-server-7689b8f645-b5mcc" Jan 28 18:49:04 crc kubenswrapper[4721]: I0128 18:49:04.526512 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-7689b8f645-b5mcc" Jan 28 18:49:04 crc kubenswrapper[4721]: I0128 18:49:04.587210 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-79d44b6d7b-q852t"] Jan 28 18:49:04 crc kubenswrapper[4721]: I0128 18:49:04.895718 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-7689b8f645-b5mcc"] Jan 28 18:49:04 crc kubenswrapper[4721]: W0128 18:49:04.906394 4721 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod690709f2_5507_45e6_8897_380890c19e6f.slice/crio-c3fa8b16266ed99711958544394dcba54e32c46b6fac5514b445c39ea38e79d7 WatchSource:0}: Error finding container c3fa8b16266ed99711958544394dcba54e32c46b6fac5514b445c39ea38e79d7: Status 404 returned error can't find the container with id c3fa8b16266ed99711958544394dcba54e32c46b6fac5514b445c39ea38e79d7 Jan 28 18:49:05 crc kubenswrapper[4721]: I0128 18:49:05.145923 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-79d44b6d7b-q852t" event={"ID":"fbde7afa-5af9-462b-b402-352513fb9655","Type":"ContainerStarted","Data":"65fef81801c961d1f088a39dd7f616c7a0c48d17a87e866af160423e45c0fe11"} Jan 28 18:49:05 crc kubenswrapper[4721]: I0128 18:49:05.147391 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-7689b8f645-b5mcc" event={"ID":"690709f2-5507-45e6-8897-380890c19e6f","Type":"ContainerStarted","Data":"c3fa8b16266ed99711958544394dcba54e32c46b6fac5514b445c39ea38e79d7"} Jan 28 18:49:06 crc kubenswrapper[4721]: I0128 18:49:06.158951 4721 generic.go:334] "Generic (PLEG): container finished" podID="0802c6f3-4d8d-4a8a-a3fb-a9bb1438e3d4" containerID="aabe6c7d1929c42ae38c60d0426d3aca7eda8c501e8e7f16efb24507bb496240" exitCode=0 Jan 28 18:49:06 crc kubenswrapper[4721]: I0128 18:49:06.159014 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pvl9n" event={"ID":"0802c6f3-4d8d-4a8a-a3fb-a9bb1438e3d4","Type":"ContainerDied","Data":"aabe6c7d1929c42ae38c60d0426d3aca7eda8c501e8e7f16efb24507bb496240"} Jan 28 18:49:06 crc kubenswrapper[4721]: I0128 18:49:06.673730 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-v9zlq" Jan 28 18:49:06 crc kubenswrapper[4721]: I0128 18:49:06.675021 4721 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-v9zlq" Jan 28 18:49:06 crc kubenswrapper[4721]: I0128 18:49:06.749334 4721 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-v9zlq" Jan 28 18:49:07 crc kubenswrapper[4721]: I0128 18:49:07.175526 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pvl9n" event={"ID":"0802c6f3-4d8d-4a8a-a3fb-a9bb1438e3d4","Type":"ContainerStarted","Data":"83e1cf2fc365af0149ee40b134bbc7448a592dac40c26693020f057cbf575d62"} Jan 28 18:49:07 crc kubenswrapper[4721]: I0128 18:49:07.218249 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-pvl9n" podStartSLOduration=2.77395028 podStartE2EDuration="5.218229248s" podCreationTimestamp="2026-01-28 18:49:02 +0000 UTC" firstStartedPulling="2026-01-28 18:49:04.115098593 +0000 UTC 
m=+909.840404153" lastFinishedPulling="2026-01-28 18:49:06.559377571 +0000 UTC m=+912.284683121" observedRunningTime="2026-01-28 18:49:07.21637453 +0000 UTC m=+912.941680110" watchObservedRunningTime="2026-01-28 18:49:07.218229248 +0000 UTC m=+912.943534808" Jan 28 18:49:07 crc kubenswrapper[4721]: I0128 18:49:07.267936 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-v9zlq" Jan 28 18:49:09 crc kubenswrapper[4721]: I0128 18:49:09.665783 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-cd7ss" Jan 28 18:49:09 crc kubenswrapper[4721]: I0128 18:49:09.666266 4721 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-cd7ss" Jan 28 18:49:09 crc kubenswrapper[4721]: I0128 18:49:09.717142 4721 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-cd7ss" Jan 28 18:49:10 crc kubenswrapper[4721]: I0128 18:49:10.262656 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-cd7ss" Jan 28 18:49:12 crc kubenswrapper[4721]: I0128 18:49:12.223649 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-7689b8f645-b5mcc" event={"ID":"690709f2-5507-45e6-8897-380890c19e6f","Type":"ContainerStarted","Data":"071100404fe9ea4af47eb1316821a8f6b6bef3ce3a052cc9f5129c4b6572f441"} Jan 28 18:49:12 crc kubenswrapper[4721]: I0128 18:49:12.224227 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-7689b8f645-b5mcc" Jan 28 18:49:12 crc kubenswrapper[4721]: I0128 18:49:12.225389 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-79d44b6d7b-q852t" event={"ID":"fbde7afa-5af9-462b-b402-352513fb9655","Type":"ContainerStarted","Data":"9733f863253c905105515c1ee7094ccca6aa2570c809832f37bcd96378359afd"} Jan 28 18:49:12 crc kubenswrapper[4721]: I0128 18:49:12.225550 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-79d44b6d7b-q852t" Jan 28 18:49:12 crc kubenswrapper[4721]: I0128 18:49:12.242112 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-7689b8f645-b5mcc" podStartSLOduration=1.19425862 podStartE2EDuration="8.242093374s" podCreationTimestamp="2026-01-28 18:49:04 +0000 UTC" firstStartedPulling="2026-01-28 18:49:04.910628823 +0000 UTC m=+910.635934383" lastFinishedPulling="2026-01-28 18:49:11.958463577 +0000 UTC m=+917.683769137" observedRunningTime="2026-01-28 18:49:12.241220728 +0000 UTC m=+917.966526288" watchObservedRunningTime="2026-01-28 18:49:12.242093374 +0000 UTC m=+917.967398924" Jan 28 18:49:12 crc kubenswrapper[4721]: I0128 18:49:12.270195 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-79d44b6d7b-q852t" podStartSLOduration=2.016284074 podStartE2EDuration="9.270152164s" podCreationTimestamp="2026-01-28 18:49:03 +0000 UTC" firstStartedPulling="2026-01-28 18:49:04.635950015 +0000 UTC m=+910.361255575" lastFinishedPulling="2026-01-28 18:49:11.889818105 +0000 UTC m=+917.615123665" observedRunningTime="2026-01-28 18:49:12.261035068 +0000 UTC m=+917.986340628" watchObservedRunningTime="2026-01-28 
18:49:12.270152164 +0000 UTC m=+917.995457724" Jan 28 18:49:12 crc kubenswrapper[4721]: I0128 18:49:12.334237 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-v9zlq"] Jan 28 18:49:12 crc kubenswrapper[4721]: I0128 18:49:12.334990 4721 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-v9zlq" podUID="5ae6829d-14f8-4181-b1d1-39778adc7a0e" containerName="registry-server" containerID="cri-o://87516e6d781caa37bc3f5d1c86b6d4e5bb699441419d2caf20833fa45ec5ac6d" gracePeriod=2 Jan 28 18:49:12 crc kubenswrapper[4721]: I0128 18:49:12.698041 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-pvl9n" Jan 28 18:49:12 crc kubenswrapper[4721]: I0128 18:49:12.698212 4721 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-pvl9n" Jan 28 18:49:12 crc kubenswrapper[4721]: I0128 18:49:12.744721 4721 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-pvl9n" Jan 28 18:49:13 crc kubenswrapper[4721]: I0128 18:49:13.234424 4721 generic.go:334] "Generic (PLEG): container finished" podID="5ae6829d-14f8-4181-b1d1-39778adc7a0e" containerID="87516e6d781caa37bc3f5d1c86b6d4e5bb699441419d2caf20833fa45ec5ac6d" exitCode=0 Jan 28 18:49:13 crc kubenswrapper[4721]: I0128 18:49:13.234556 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-v9zlq" event={"ID":"5ae6829d-14f8-4181-b1d1-39778adc7a0e","Type":"ContainerDied","Data":"87516e6d781caa37bc3f5d1c86b6d4e5bb699441419d2caf20833fa45ec5ac6d"} Jan 28 18:49:13 crc kubenswrapper[4721]: I0128 18:49:13.279336 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-pvl9n" Jan 28 18:49:13 crc kubenswrapper[4721]: I0128 18:49:13.818392 4721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-v9zlq" Jan 28 18:49:13 crc kubenswrapper[4721]: I0128 18:49:13.918875 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5ae6829d-14f8-4181-b1d1-39778adc7a0e-utilities\") pod \"5ae6829d-14f8-4181-b1d1-39778adc7a0e\" (UID: \"5ae6829d-14f8-4181-b1d1-39778adc7a0e\") " Jan 28 18:49:13 crc kubenswrapper[4721]: I0128 18:49:13.918996 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5ae6829d-14f8-4181-b1d1-39778adc7a0e-catalog-content\") pod \"5ae6829d-14f8-4181-b1d1-39778adc7a0e\" (UID: \"5ae6829d-14f8-4181-b1d1-39778adc7a0e\") " Jan 28 18:49:13 crc kubenswrapper[4721]: I0128 18:49:13.919029 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ks4wf\" (UniqueName: \"kubernetes.io/projected/5ae6829d-14f8-4181-b1d1-39778adc7a0e-kube-api-access-ks4wf\") pod \"5ae6829d-14f8-4181-b1d1-39778adc7a0e\" (UID: \"5ae6829d-14f8-4181-b1d1-39778adc7a0e\") " Jan 28 18:49:13 crc kubenswrapper[4721]: I0128 18:49:13.919662 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5ae6829d-14f8-4181-b1d1-39778adc7a0e-utilities" (OuterVolumeSpecName: "utilities") pod "5ae6829d-14f8-4181-b1d1-39778adc7a0e" (UID: "5ae6829d-14f8-4181-b1d1-39778adc7a0e"). 
InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 18:49:13 crc kubenswrapper[4721]: I0128 18:49:13.925683 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5ae6829d-14f8-4181-b1d1-39778adc7a0e-kube-api-access-ks4wf" (OuterVolumeSpecName: "kube-api-access-ks4wf") pod "5ae6829d-14f8-4181-b1d1-39778adc7a0e" (UID: "5ae6829d-14f8-4181-b1d1-39778adc7a0e"). InnerVolumeSpecName "kube-api-access-ks4wf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:49:13 crc kubenswrapper[4721]: I0128 18:49:13.968992 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5ae6829d-14f8-4181-b1d1-39778adc7a0e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5ae6829d-14f8-4181-b1d1-39778adc7a0e" (UID: "5ae6829d-14f8-4181-b1d1-39778adc7a0e"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 18:49:14 crc kubenswrapper[4721]: I0128 18:49:14.020742 4721 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5ae6829d-14f8-4181-b1d1-39778adc7a0e-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 18:49:14 crc kubenswrapper[4721]: I0128 18:49:14.020804 4721 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5ae6829d-14f8-4181-b1d1-39778adc7a0e-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 18:49:14 crc kubenswrapper[4721]: I0128 18:49:14.020819 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ks4wf\" (UniqueName: \"kubernetes.io/projected/5ae6829d-14f8-4181-b1d1-39778adc7a0e-kube-api-access-ks4wf\") on node \"crc\" DevicePath \"\"" Jan 28 18:49:14 crc kubenswrapper[4721]: I0128 18:49:14.243290 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-v9zlq" event={"ID":"5ae6829d-14f8-4181-b1d1-39778adc7a0e","Type":"ContainerDied","Data":"3dd652c818075e35016ec0359b95d07385631546622bc496b488ff67bf3cd402"} Jan 28 18:49:14 crc kubenswrapper[4721]: I0128 18:49:14.243364 4721 scope.go:117] "RemoveContainer" containerID="87516e6d781caa37bc3f5d1c86b6d4e5bb699441419d2caf20833fa45ec5ac6d" Jan 28 18:49:14 crc kubenswrapper[4721]: I0128 18:49:14.243322 4721 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-v9zlq" Jan 28 18:49:14 crc kubenswrapper[4721]: I0128 18:49:14.270209 4721 scope.go:117] "RemoveContainer" containerID="752068d496038d5baf31328cdf04957f4c642cfaedf4c78ca5d089429f71d09d" Jan 28 18:49:14 crc kubenswrapper[4721]: I0128 18:49:14.271412 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-v9zlq"] Jan 28 18:49:14 crc kubenswrapper[4721]: I0128 18:49:14.280698 4721 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-v9zlq"] Jan 28 18:49:14 crc kubenswrapper[4721]: I0128 18:49:14.306228 4721 scope.go:117] "RemoveContainer" containerID="80f8ce8ac2ad558abe8e0bf9f16a9db621584fc33da95fe4225fd23a9438d355" Jan 28 18:49:15 crc kubenswrapper[4721]: I0128 18:49:15.536746 4721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5ae6829d-14f8-4181-b1d1-39778adc7a0e" path="/var/lib/kubelet/pods/5ae6829d-14f8-4181-b1d1-39778adc7a0e/volumes" Jan 28 18:49:15 crc kubenswrapper[4721]: I0128 18:49:15.537645 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-cd7ss"] Jan 28 18:49:15 crc kubenswrapper[4721]: I0128 18:49:15.537860 4721 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-cd7ss" podUID="cfd4e46b-ee1a-43e0-a0fc-14513d81daed" containerName="registry-server" containerID="cri-o://ecf68eee0ac402399beca0928cba5b06446aee57f4c3a3842ee6ea4664db0e68" gracePeriod=2 Jan 28 18:49:15 crc kubenswrapper[4721]: I0128 18:49:15.963162 4721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-cd7ss" Jan 28 18:49:16 crc kubenswrapper[4721]: I0128 18:49:16.065688 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g859r\" (UniqueName: \"kubernetes.io/projected/cfd4e46b-ee1a-43e0-a0fc-14513d81daed-kube-api-access-g859r\") pod \"cfd4e46b-ee1a-43e0-a0fc-14513d81daed\" (UID: \"cfd4e46b-ee1a-43e0-a0fc-14513d81daed\") " Jan 28 18:49:16 crc kubenswrapper[4721]: I0128 18:49:16.065859 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cfd4e46b-ee1a-43e0-a0fc-14513d81daed-catalog-content\") pod \"cfd4e46b-ee1a-43e0-a0fc-14513d81daed\" (UID: \"cfd4e46b-ee1a-43e0-a0fc-14513d81daed\") " Jan 28 18:49:16 crc kubenswrapper[4721]: I0128 18:49:16.065964 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cfd4e46b-ee1a-43e0-a0fc-14513d81daed-utilities\") pod \"cfd4e46b-ee1a-43e0-a0fc-14513d81daed\" (UID: \"cfd4e46b-ee1a-43e0-a0fc-14513d81daed\") " Jan 28 18:49:16 crc kubenswrapper[4721]: I0128 18:49:16.067072 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cfd4e46b-ee1a-43e0-a0fc-14513d81daed-utilities" (OuterVolumeSpecName: "utilities") pod "cfd4e46b-ee1a-43e0-a0fc-14513d81daed" (UID: "cfd4e46b-ee1a-43e0-a0fc-14513d81daed"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 18:49:16 crc kubenswrapper[4721]: I0128 18:49:16.074511 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cfd4e46b-ee1a-43e0-a0fc-14513d81daed-kube-api-access-g859r" (OuterVolumeSpecName: "kube-api-access-g859r") pod "cfd4e46b-ee1a-43e0-a0fc-14513d81daed" (UID: "cfd4e46b-ee1a-43e0-a0fc-14513d81daed"). InnerVolumeSpecName "kube-api-access-g859r". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:49:16 crc kubenswrapper[4721]: I0128 18:49:16.120436 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cfd4e46b-ee1a-43e0-a0fc-14513d81daed-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "cfd4e46b-ee1a-43e0-a0fc-14513d81daed" (UID: "cfd4e46b-ee1a-43e0-a0fc-14513d81daed"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 18:49:16 crc kubenswrapper[4721]: I0128 18:49:16.168188 4721 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cfd4e46b-ee1a-43e0-a0fc-14513d81daed-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 18:49:16 crc kubenswrapper[4721]: I0128 18:49:16.168250 4721 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cfd4e46b-ee1a-43e0-a0fc-14513d81daed-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 18:49:16 crc kubenswrapper[4721]: I0128 18:49:16.168263 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g859r\" (UniqueName: \"kubernetes.io/projected/cfd4e46b-ee1a-43e0-a0fc-14513d81daed-kube-api-access-g859r\") on node \"crc\" DevicePath \"\"" Jan 28 18:49:16 crc kubenswrapper[4721]: I0128 18:49:16.261015 4721 generic.go:334] "Generic (PLEG): container finished" podID="cfd4e46b-ee1a-43e0-a0fc-14513d81daed" containerID="ecf68eee0ac402399beca0928cba5b06446aee57f4c3a3842ee6ea4664db0e68" exitCode=0 Jan 28 18:49:16 crc kubenswrapper[4721]: I0128 18:49:16.261072 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cd7ss" event={"ID":"cfd4e46b-ee1a-43e0-a0fc-14513d81daed","Type":"ContainerDied","Data":"ecf68eee0ac402399beca0928cba5b06446aee57f4c3a3842ee6ea4664db0e68"} Jan 28 18:49:16 crc kubenswrapper[4721]: I0128 18:49:16.261112 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cd7ss" event={"ID":"cfd4e46b-ee1a-43e0-a0fc-14513d81daed","Type":"ContainerDied","Data":"a8debf5c189a4949cbc4aabdc75a3aa74403b5a74e4c346ca4e580616eed1d0c"} Jan 28 18:49:16 crc kubenswrapper[4721]: I0128 18:49:16.261111 4721 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-cd7ss" Jan 28 18:49:16 crc kubenswrapper[4721]: I0128 18:49:16.261131 4721 scope.go:117] "RemoveContainer" containerID="ecf68eee0ac402399beca0928cba5b06446aee57f4c3a3842ee6ea4664db0e68" Jan 28 18:49:16 crc kubenswrapper[4721]: I0128 18:49:16.282923 4721 scope.go:117] "RemoveContainer" containerID="ba7263f955049d2f856ccc2fa4139f505a8d8e545119aa7e13a847f76f8016e5" Jan 28 18:49:16 crc kubenswrapper[4721]: I0128 18:49:16.299880 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-cd7ss"] Jan 28 18:49:16 crc kubenswrapper[4721]: I0128 18:49:16.312194 4721 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-cd7ss"] Jan 28 18:49:16 crc kubenswrapper[4721]: I0128 18:49:16.316588 4721 scope.go:117] "RemoveContainer" containerID="a035cdf9da75f95331bda131f99f8ecec4f31df64411e4f7853e19cb1d300aaf" Jan 28 18:49:16 crc kubenswrapper[4721]: I0128 18:49:16.333586 4721 scope.go:117] "RemoveContainer" containerID="ecf68eee0ac402399beca0928cba5b06446aee57f4c3a3842ee6ea4664db0e68" Jan 28 18:49:16 crc kubenswrapper[4721]: E0128 18:49:16.333946 4721 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ecf68eee0ac402399beca0928cba5b06446aee57f4c3a3842ee6ea4664db0e68\": container with ID starting with ecf68eee0ac402399beca0928cba5b06446aee57f4c3a3842ee6ea4664db0e68 not found: ID does not exist" containerID="ecf68eee0ac402399beca0928cba5b06446aee57f4c3a3842ee6ea4664db0e68" Jan 28 18:49:16 crc kubenswrapper[4721]: I0128 18:49:16.334024 4721 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ecf68eee0ac402399beca0928cba5b06446aee57f4c3a3842ee6ea4664db0e68"} err="failed to get container status \"ecf68eee0ac402399beca0928cba5b06446aee57f4c3a3842ee6ea4664db0e68\": rpc error: code = NotFound desc = could not find container \"ecf68eee0ac402399beca0928cba5b06446aee57f4c3a3842ee6ea4664db0e68\": container with ID starting with ecf68eee0ac402399beca0928cba5b06446aee57f4c3a3842ee6ea4664db0e68 not found: ID does not exist" Jan 28 18:49:16 crc kubenswrapper[4721]: I0128 18:49:16.334066 4721 scope.go:117] "RemoveContainer" containerID="ba7263f955049d2f856ccc2fa4139f505a8d8e545119aa7e13a847f76f8016e5" Jan 28 18:49:16 crc kubenswrapper[4721]: E0128 18:49:16.334330 4721 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ba7263f955049d2f856ccc2fa4139f505a8d8e545119aa7e13a847f76f8016e5\": container with ID starting with ba7263f955049d2f856ccc2fa4139f505a8d8e545119aa7e13a847f76f8016e5 not found: ID does not exist" containerID="ba7263f955049d2f856ccc2fa4139f505a8d8e545119aa7e13a847f76f8016e5" Jan 28 18:49:16 crc kubenswrapper[4721]: I0128 18:49:16.334388 4721 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ba7263f955049d2f856ccc2fa4139f505a8d8e545119aa7e13a847f76f8016e5"} err="failed to get container status \"ba7263f955049d2f856ccc2fa4139f505a8d8e545119aa7e13a847f76f8016e5\": rpc error: code = NotFound desc = could not find container \"ba7263f955049d2f856ccc2fa4139f505a8d8e545119aa7e13a847f76f8016e5\": container with ID starting with ba7263f955049d2f856ccc2fa4139f505a8d8e545119aa7e13a847f76f8016e5 not found: ID does not exist" Jan 28 18:49:16 crc kubenswrapper[4721]: I0128 18:49:16.334424 4721 scope.go:117] "RemoveContainer" 
containerID="a035cdf9da75f95331bda131f99f8ecec4f31df64411e4f7853e19cb1d300aaf" Jan 28 18:49:16 crc kubenswrapper[4721]: E0128 18:49:16.334803 4721 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a035cdf9da75f95331bda131f99f8ecec4f31df64411e4f7853e19cb1d300aaf\": container with ID starting with a035cdf9da75f95331bda131f99f8ecec4f31df64411e4f7853e19cb1d300aaf not found: ID does not exist" containerID="a035cdf9da75f95331bda131f99f8ecec4f31df64411e4f7853e19cb1d300aaf" Jan 28 18:49:16 crc kubenswrapper[4721]: I0128 18:49:16.334832 4721 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a035cdf9da75f95331bda131f99f8ecec4f31df64411e4f7853e19cb1d300aaf"} err="failed to get container status \"a035cdf9da75f95331bda131f99f8ecec4f31df64411e4f7853e19cb1d300aaf\": rpc error: code = NotFound desc = could not find container \"a035cdf9da75f95331bda131f99f8ecec4f31df64411e4f7853e19cb1d300aaf\": container with ID starting with a035cdf9da75f95331bda131f99f8ecec4f31df64411e4f7853e19cb1d300aaf not found: ID does not exist" Jan 28 18:49:17 crc kubenswrapper[4721]: I0128 18:49:17.536768 4721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cfd4e46b-ee1a-43e0-a0fc-14513d81daed" path="/var/lib/kubelet/pods/cfd4e46b-ee1a-43e0-a0fc-14513d81daed/volumes" Jan 28 18:49:19 crc kubenswrapper[4721]: I0128 18:49:19.127939 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-pvl9n"] Jan 28 18:49:19 crc kubenswrapper[4721]: I0128 18:49:19.128368 4721 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-pvl9n" podUID="0802c6f3-4d8d-4a8a-a3fb-a9bb1438e3d4" containerName="registry-server" containerID="cri-o://83e1cf2fc365af0149ee40b134bbc7448a592dac40c26693020f057cbf575d62" gracePeriod=2 Jan 28 18:49:20 crc kubenswrapper[4721]: I0128 18:49:20.089061 4721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-pvl9n" Jan 28 18:49:20 crc kubenswrapper[4721]: I0128 18:49:20.223590 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0802c6f3-4d8d-4a8a-a3fb-a9bb1438e3d4-catalog-content\") pod \"0802c6f3-4d8d-4a8a-a3fb-a9bb1438e3d4\" (UID: \"0802c6f3-4d8d-4a8a-a3fb-a9bb1438e3d4\") " Jan 28 18:49:20 crc kubenswrapper[4721]: I0128 18:49:20.223647 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0802c6f3-4d8d-4a8a-a3fb-a9bb1438e3d4-utilities\") pod \"0802c6f3-4d8d-4a8a-a3fb-a9bb1438e3d4\" (UID: \"0802c6f3-4d8d-4a8a-a3fb-a9bb1438e3d4\") " Jan 28 18:49:20 crc kubenswrapper[4721]: I0128 18:49:20.223680 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4b7s\" (UniqueName: \"kubernetes.io/projected/0802c6f3-4d8d-4a8a-a3fb-a9bb1438e3d4-kube-api-access-d4b7s\") pod \"0802c6f3-4d8d-4a8a-a3fb-a9bb1438e3d4\" (UID: \"0802c6f3-4d8d-4a8a-a3fb-a9bb1438e3d4\") " Jan 28 18:49:20 crc kubenswrapper[4721]: I0128 18:49:20.224616 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0802c6f3-4d8d-4a8a-a3fb-a9bb1438e3d4-utilities" (OuterVolumeSpecName: "utilities") pod "0802c6f3-4d8d-4a8a-a3fb-a9bb1438e3d4" (UID: "0802c6f3-4d8d-4a8a-a3fb-a9bb1438e3d4"). 
InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 18:49:20 crc kubenswrapper[4721]: I0128 18:49:20.225480 4721 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0802c6f3-4d8d-4a8a-a3fb-a9bb1438e3d4-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 18:49:20 crc kubenswrapper[4721]: I0128 18:49:20.229878 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0802c6f3-4d8d-4a8a-a3fb-a9bb1438e3d4-kube-api-access-d4b7s" (OuterVolumeSpecName: "kube-api-access-d4b7s") pod "0802c6f3-4d8d-4a8a-a3fb-a9bb1438e3d4" (UID: "0802c6f3-4d8d-4a8a-a3fb-a9bb1438e3d4"). InnerVolumeSpecName "kube-api-access-d4b7s". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:49:20 crc kubenswrapper[4721]: I0128 18:49:20.246133 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0802c6f3-4d8d-4a8a-a3fb-a9bb1438e3d4-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0802c6f3-4d8d-4a8a-a3fb-a9bb1438e3d4" (UID: "0802c6f3-4d8d-4a8a-a3fb-a9bb1438e3d4"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 18:49:20 crc kubenswrapper[4721]: I0128 18:49:20.291986 4721 generic.go:334] "Generic (PLEG): container finished" podID="0802c6f3-4d8d-4a8a-a3fb-a9bb1438e3d4" containerID="83e1cf2fc365af0149ee40b134bbc7448a592dac40c26693020f057cbf575d62" exitCode=0 Jan 28 18:49:20 crc kubenswrapper[4721]: I0128 18:49:20.292032 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pvl9n" event={"ID":"0802c6f3-4d8d-4a8a-a3fb-a9bb1438e3d4","Type":"ContainerDied","Data":"83e1cf2fc365af0149ee40b134bbc7448a592dac40c26693020f057cbf575d62"} Jan 28 18:49:20 crc kubenswrapper[4721]: I0128 18:49:20.292061 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pvl9n" event={"ID":"0802c6f3-4d8d-4a8a-a3fb-a9bb1438e3d4","Type":"ContainerDied","Data":"aec07df1487c5b3a2f9fad69151faf33e3470a1f5331d0fdc429d57a38277ae2"} Jan 28 18:49:20 crc kubenswrapper[4721]: I0128 18:49:20.292078 4721 scope.go:117] "RemoveContainer" containerID="83e1cf2fc365af0149ee40b134bbc7448a592dac40c26693020f057cbf575d62" Jan 28 18:49:20 crc kubenswrapper[4721]: I0128 18:49:20.292252 4721 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-pvl9n" Jan 28 18:49:20 crc kubenswrapper[4721]: I0128 18:49:20.313937 4721 scope.go:117] "RemoveContainer" containerID="aabe6c7d1929c42ae38c60d0426d3aca7eda8c501e8e7f16efb24507bb496240" Jan 28 18:49:20 crc kubenswrapper[4721]: I0128 18:49:20.325423 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-pvl9n"] Jan 28 18:49:20 crc kubenswrapper[4721]: I0128 18:49:20.326535 4721 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0802c6f3-4d8d-4a8a-a3fb-a9bb1438e3d4-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 18:49:20 crc kubenswrapper[4721]: I0128 18:49:20.326579 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4b7s\" (UniqueName: \"kubernetes.io/projected/0802c6f3-4d8d-4a8a-a3fb-a9bb1438e3d4-kube-api-access-d4b7s\") on node \"crc\" DevicePath \"\"" Jan 28 18:49:20 crc kubenswrapper[4721]: I0128 18:49:20.334222 4721 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-pvl9n"] Jan 28 18:49:20 crc kubenswrapper[4721]: I0128 18:49:20.353034 4721 scope.go:117] "RemoveContainer" containerID="7ff939b339b791bde948c87d534430efc9510ad5600b2927744914daa6a9b274" Jan 28 18:49:20 crc kubenswrapper[4721]: I0128 18:49:20.378065 4721 scope.go:117] "RemoveContainer" containerID="83e1cf2fc365af0149ee40b134bbc7448a592dac40c26693020f057cbf575d62" Jan 28 18:49:20 crc kubenswrapper[4721]: E0128 18:49:20.378833 4721 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"83e1cf2fc365af0149ee40b134bbc7448a592dac40c26693020f057cbf575d62\": container with ID starting with 83e1cf2fc365af0149ee40b134bbc7448a592dac40c26693020f057cbf575d62 not found: ID does not exist" containerID="83e1cf2fc365af0149ee40b134bbc7448a592dac40c26693020f057cbf575d62" Jan 28 18:49:20 crc kubenswrapper[4721]: I0128 18:49:20.378875 4721 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"83e1cf2fc365af0149ee40b134bbc7448a592dac40c26693020f057cbf575d62"} err="failed to get container status \"83e1cf2fc365af0149ee40b134bbc7448a592dac40c26693020f057cbf575d62\": rpc error: code = NotFound desc = could not find container \"83e1cf2fc365af0149ee40b134bbc7448a592dac40c26693020f057cbf575d62\": container with ID starting with 83e1cf2fc365af0149ee40b134bbc7448a592dac40c26693020f057cbf575d62 not found: ID does not exist" Jan 28 18:49:20 crc kubenswrapper[4721]: I0128 18:49:20.378902 4721 scope.go:117] "RemoveContainer" containerID="aabe6c7d1929c42ae38c60d0426d3aca7eda8c501e8e7f16efb24507bb496240" Jan 28 18:49:20 crc kubenswrapper[4721]: E0128 18:49:20.379352 4721 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"aabe6c7d1929c42ae38c60d0426d3aca7eda8c501e8e7f16efb24507bb496240\": container with ID starting with aabe6c7d1929c42ae38c60d0426d3aca7eda8c501e8e7f16efb24507bb496240 not found: ID does not exist" containerID="aabe6c7d1929c42ae38c60d0426d3aca7eda8c501e8e7f16efb24507bb496240" Jan 28 18:49:20 crc kubenswrapper[4721]: I0128 18:49:20.379379 4721 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aabe6c7d1929c42ae38c60d0426d3aca7eda8c501e8e7f16efb24507bb496240"} err="failed to get container status \"aabe6c7d1929c42ae38c60d0426d3aca7eda8c501e8e7f16efb24507bb496240\": rpc 
error: code = NotFound desc = could not find container \"aabe6c7d1929c42ae38c60d0426d3aca7eda8c501e8e7f16efb24507bb496240\": container with ID starting with aabe6c7d1929c42ae38c60d0426d3aca7eda8c501e8e7f16efb24507bb496240 not found: ID does not exist" Jan 28 18:49:20 crc kubenswrapper[4721]: I0128 18:49:20.379394 4721 scope.go:117] "RemoveContainer" containerID="7ff939b339b791bde948c87d534430efc9510ad5600b2927744914daa6a9b274" Jan 28 18:49:20 crc kubenswrapper[4721]: E0128 18:49:20.379667 4721 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7ff939b339b791bde948c87d534430efc9510ad5600b2927744914daa6a9b274\": container with ID starting with 7ff939b339b791bde948c87d534430efc9510ad5600b2927744914daa6a9b274 not found: ID does not exist" containerID="7ff939b339b791bde948c87d534430efc9510ad5600b2927744914daa6a9b274" Jan 28 18:49:20 crc kubenswrapper[4721]: I0128 18:49:20.379688 4721 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7ff939b339b791bde948c87d534430efc9510ad5600b2927744914daa6a9b274"} err="failed to get container status \"7ff939b339b791bde948c87d534430efc9510ad5600b2927744914daa6a9b274\": rpc error: code = NotFound desc = could not find container \"7ff939b339b791bde948c87d534430efc9510ad5600b2927744914daa6a9b274\": container with ID starting with 7ff939b339b791bde948c87d534430efc9510ad5600b2927744914daa6a9b274 not found: ID does not exist" Jan 28 18:49:21 crc kubenswrapper[4721]: I0128 18:49:21.537663 4721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0802c6f3-4d8d-4a8a-a3fb-a9bb1438e3d4" path="/var/lib/kubelet/pods/0802c6f3-4d8d-4a8a-a3fb-a9bb1438e3d4/volumes" Jan 28 18:49:24 crc kubenswrapper[4721]: I0128 18:49:24.531083 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-7689b8f645-b5mcc" Jan 28 18:49:31 crc kubenswrapper[4721]: I0128 18:49:31.224833 4721 patch_prober.go:28] interesting pod/machine-config-daemon-76rx2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 18:49:31 crc kubenswrapper[4721]: I0128 18:49:31.225502 4721 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-76rx2" podUID="6e3427a4-9a03-4a08-bf7f-7a5e96290ad6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 18:49:44 crc kubenswrapper[4721]: I0128 18:49:44.016080 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-79d44b6d7b-q852t" Jan 28 18:49:44 crc kubenswrapper[4721]: I0128 18:49:44.763432 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-94kms"] Jan 28 18:49:44 crc kubenswrapper[4721]: E0128 18:49:44.763725 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0802c6f3-4d8d-4a8a-a3fb-a9bb1438e3d4" containerName="extract-utilities" Jan 28 18:49:44 crc kubenswrapper[4721]: I0128 18:49:44.763740 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="0802c6f3-4d8d-4a8a-a3fb-a9bb1438e3d4" containerName="extract-utilities" Jan 28 18:49:44 crc kubenswrapper[4721]: E0128 18:49:44.763753 4721 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="cfd4e46b-ee1a-43e0-a0fc-14513d81daed" containerName="extract-content" Jan 28 18:49:44 crc kubenswrapper[4721]: I0128 18:49:44.763759 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="cfd4e46b-ee1a-43e0-a0fc-14513d81daed" containerName="extract-content" Jan 28 18:49:44 crc kubenswrapper[4721]: E0128 18:49:44.763769 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5ae6829d-14f8-4181-b1d1-39778adc7a0e" containerName="extract-content" Jan 28 18:49:44 crc kubenswrapper[4721]: I0128 18:49:44.763778 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="5ae6829d-14f8-4181-b1d1-39778adc7a0e" containerName="extract-content" Jan 28 18:49:44 crc kubenswrapper[4721]: E0128 18:49:44.763797 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5ae6829d-14f8-4181-b1d1-39778adc7a0e" containerName="extract-utilities" Jan 28 18:49:44 crc kubenswrapper[4721]: I0128 18:49:44.763804 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="5ae6829d-14f8-4181-b1d1-39778adc7a0e" containerName="extract-utilities" Jan 28 18:49:44 crc kubenswrapper[4721]: E0128 18:49:44.763816 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cfd4e46b-ee1a-43e0-a0fc-14513d81daed" containerName="registry-server" Jan 28 18:49:44 crc kubenswrapper[4721]: I0128 18:49:44.763824 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="cfd4e46b-ee1a-43e0-a0fc-14513d81daed" containerName="registry-server" Jan 28 18:49:44 crc kubenswrapper[4721]: E0128 18:49:44.763833 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5ae6829d-14f8-4181-b1d1-39778adc7a0e" containerName="registry-server" Jan 28 18:49:44 crc kubenswrapper[4721]: I0128 18:49:44.763841 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="5ae6829d-14f8-4181-b1d1-39778adc7a0e" containerName="registry-server" Jan 28 18:49:44 crc kubenswrapper[4721]: E0128 18:49:44.763854 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0802c6f3-4d8d-4a8a-a3fb-a9bb1438e3d4" containerName="registry-server" Jan 28 18:49:44 crc kubenswrapper[4721]: I0128 18:49:44.763863 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="0802c6f3-4d8d-4a8a-a3fb-a9bb1438e3d4" containerName="registry-server" Jan 28 18:49:44 crc kubenswrapper[4721]: E0128 18:49:44.763874 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cfd4e46b-ee1a-43e0-a0fc-14513d81daed" containerName="extract-utilities" Jan 28 18:49:44 crc kubenswrapper[4721]: I0128 18:49:44.763883 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="cfd4e46b-ee1a-43e0-a0fc-14513d81daed" containerName="extract-utilities" Jan 28 18:49:44 crc kubenswrapper[4721]: E0128 18:49:44.763893 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0802c6f3-4d8d-4a8a-a3fb-a9bb1438e3d4" containerName="extract-content" Jan 28 18:49:44 crc kubenswrapper[4721]: I0128 18:49:44.763901 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="0802c6f3-4d8d-4a8a-a3fb-a9bb1438e3d4" containerName="extract-content" Jan 28 18:49:44 crc kubenswrapper[4721]: I0128 18:49:44.764039 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="0802c6f3-4d8d-4a8a-a3fb-a9bb1438e3d4" containerName="registry-server" Jan 28 18:49:44 crc kubenswrapper[4721]: I0128 18:49:44.764056 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="5ae6829d-14f8-4181-b1d1-39778adc7a0e" containerName="registry-server" Jan 28 18:49:44 crc kubenswrapper[4721]: I0128 
18:49:44.764068 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="cfd4e46b-ee1a-43e0-a0fc-14513d81daed" containerName="registry-server" Jan 28 18:49:44 crc kubenswrapper[4721]: I0128 18:49:44.766567 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-94kms" Jan 28 18:49:44 crc kubenswrapper[4721]: I0128 18:49:44.769582 4721 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-wwvs9" Jan 28 18:49:44 crc kubenswrapper[4721]: I0128 18:49:44.769916 4721 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret" Jan 28 18:49:44 crc kubenswrapper[4721]: I0128 18:49:44.770098 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup" Jan 28 18:49:44 crc kubenswrapper[4721]: I0128 18:49:44.775391 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-9xvzd"] Jan 28 18:49:44 crc kubenswrapper[4721]: I0128 18:49:44.781878 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-9xvzd" Jan 28 18:49:44 crc kubenswrapper[4721]: I0128 18:49:44.784556 4721 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert" Jan 28 18:49:44 crc kubenswrapper[4721]: I0128 18:49:44.831122 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-9xvzd"] Jan 28 18:49:44 crc kubenswrapper[4721]: I0128 18:49:44.882632 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/514e6881-7399-4848-bb65-7851e1e3b079-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-9xvzd\" (UID: \"514e6881-7399-4848-bb65-7851e1e3b079\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-9xvzd" Jan 28 18:49:44 crc kubenswrapper[4721]: I0128 18:49:44.883000 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/ef7fdb4c-b45c-44c4-bde8-d1bd9ae3c6cb-metrics\") pod \"frr-k8s-94kms\" (UID: \"ef7fdb4c-b45c-44c4-bde8-d1bd9ae3c6cb\") " pod="metallb-system/frr-k8s-94kms" Jan 28 18:49:44 crc kubenswrapper[4721]: I0128 18:49:44.883029 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m8l57\" (UniqueName: \"kubernetes.io/projected/514e6881-7399-4848-bb65-7851e1e3b079-kube-api-access-m8l57\") pod \"frr-k8s-webhook-server-7df86c4f6c-9xvzd\" (UID: \"514e6881-7399-4848-bb65-7851e1e3b079\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-9xvzd" Jan 28 18:49:44 crc kubenswrapper[4721]: I0128 18:49:44.883083 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/ef7fdb4c-b45c-44c4-bde8-d1bd9ae3c6cb-frr-conf\") pod \"frr-k8s-94kms\" (UID: \"ef7fdb4c-b45c-44c4-bde8-d1bd9ae3c6cb\") " pod="metallb-system/frr-k8s-94kms" Jan 28 18:49:44 crc kubenswrapper[4721]: I0128 18:49:44.883123 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-btdvw\" (UniqueName: \"kubernetes.io/projected/ef7fdb4c-b45c-44c4-bde8-d1bd9ae3c6cb-kube-api-access-btdvw\") pod \"frr-k8s-94kms\" (UID: \"ef7fdb4c-b45c-44c4-bde8-d1bd9ae3c6cb\") " 
pod="metallb-system/frr-k8s-94kms" Jan 28 18:49:44 crc kubenswrapper[4721]: I0128 18:49:44.883146 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/ef7fdb4c-b45c-44c4-bde8-d1bd9ae3c6cb-frr-startup\") pod \"frr-k8s-94kms\" (UID: \"ef7fdb4c-b45c-44c4-bde8-d1bd9ae3c6cb\") " pod="metallb-system/frr-k8s-94kms" Jan 28 18:49:44 crc kubenswrapper[4721]: I0128 18:49:44.883199 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ef7fdb4c-b45c-44c4-bde8-d1bd9ae3c6cb-metrics-certs\") pod \"frr-k8s-94kms\" (UID: \"ef7fdb4c-b45c-44c4-bde8-d1bd9ae3c6cb\") " pod="metallb-system/frr-k8s-94kms" Jan 28 18:49:44 crc kubenswrapper[4721]: I0128 18:49:44.883222 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/ef7fdb4c-b45c-44c4-bde8-d1bd9ae3c6cb-reloader\") pod \"frr-k8s-94kms\" (UID: \"ef7fdb4c-b45c-44c4-bde8-d1bd9ae3c6cb\") " pod="metallb-system/frr-k8s-94kms" Jan 28 18:49:44 crc kubenswrapper[4721]: I0128 18:49:44.883251 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/ef7fdb4c-b45c-44c4-bde8-d1bd9ae3c6cb-frr-sockets\") pod \"frr-k8s-94kms\" (UID: \"ef7fdb4c-b45c-44c4-bde8-d1bd9ae3c6cb\") " pod="metallb-system/frr-k8s-94kms" Jan 28 18:49:44 crc kubenswrapper[4721]: I0128 18:49:44.898991 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-k5dbx"] Jan 28 18:49:44 crc kubenswrapper[4721]: I0128 18:49:44.900238 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-k5dbx" Jan 28 18:49:44 crc kubenswrapper[4721]: I0128 18:49:44.905475 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2" Jan 28 18:49:44 crc kubenswrapper[4721]: I0128 18:49:44.905788 4721 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret" Jan 28 18:49:44 crc kubenswrapper[4721]: I0128 18:49:44.906842 4721 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist" Jan 28 18:49:44 crc kubenswrapper[4721]: I0128 18:49:44.907002 4721 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-r84vh" Jan 28 18:49:44 crc kubenswrapper[4721]: I0128 18:49:44.910643 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-6968d8fdc4-7rcs7"] Jan 28 18:49:44 crc kubenswrapper[4721]: I0128 18:49:44.911755 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-6968d8fdc4-7rcs7" Jan 28 18:49:44 crc kubenswrapper[4721]: I0128 18:49:44.917625 4721 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret" Jan 28 18:49:44 crc kubenswrapper[4721]: I0128 18:49:44.921591 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6968d8fdc4-7rcs7"] Jan 28 18:49:44 crc kubenswrapper[4721]: I0128 18:49:44.984541 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ef7fdb4c-b45c-44c4-bde8-d1bd9ae3c6cb-metrics-certs\") pod \"frr-k8s-94kms\" (UID: \"ef7fdb4c-b45c-44c4-bde8-d1bd9ae3c6cb\") " pod="metallb-system/frr-k8s-94kms" Jan 28 18:49:44 crc kubenswrapper[4721]: I0128 18:49:44.984593 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/ef7fdb4c-b45c-44c4-bde8-d1bd9ae3c6cb-reloader\") pod \"frr-k8s-94kms\" (UID: \"ef7fdb4c-b45c-44c4-bde8-d1bd9ae3c6cb\") " pod="metallb-system/frr-k8s-94kms" Jan 28 18:49:44 crc kubenswrapper[4721]: I0128 18:49:44.984638 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/4d13a423-7c09-4fae-b239-e376e8487d85-memberlist\") pod \"speaker-k5dbx\" (UID: \"4d13a423-7c09-4fae-b239-e376e8487d85\") " pod="metallb-system/speaker-k5dbx" Jan 28 18:49:44 crc kubenswrapper[4721]: E0128 18:49:44.985047 4721 secret.go:188] Couldn't get secret metallb-system/frr-k8s-certs-secret: secret "frr-k8s-certs-secret" not found Jan 28 18:49:44 crc kubenswrapper[4721]: E0128 18:49:44.985136 4721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ef7fdb4c-b45c-44c4-bde8-d1bd9ae3c6cb-metrics-certs podName:ef7fdb4c-b45c-44c4-bde8-d1bd9ae3c6cb nodeName:}" failed. No retries permitted until 2026-01-28 18:49:45.485114102 +0000 UTC m=+951.210419722 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/ef7fdb4c-b45c-44c4-bde8-d1bd9ae3c6cb-metrics-certs") pod "frr-k8s-94kms" (UID: "ef7fdb4c-b45c-44c4-bde8-d1bd9ae3c6cb") : secret "frr-k8s-certs-secret" not found Jan 28 18:49:44 crc kubenswrapper[4721]: I0128 18:49:44.985262 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/ef7fdb4c-b45c-44c4-bde8-d1bd9ae3c6cb-frr-sockets\") pod \"frr-k8s-94kms\" (UID: \"ef7fdb4c-b45c-44c4-bde8-d1bd9ae3c6cb\") " pod="metallb-system/frr-k8s-94kms" Jan 28 18:49:44 crc kubenswrapper[4721]: I0128 18:49:44.985366 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/4d13a423-7c09-4fae-b239-e376e8487d85-metallb-excludel2\") pod \"speaker-k5dbx\" (UID: \"4d13a423-7c09-4fae-b239-e376e8487d85\") " pod="metallb-system/speaker-k5dbx" Jan 28 18:49:44 crc kubenswrapper[4721]: I0128 18:49:44.985424 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/ef7fdb4c-b45c-44c4-bde8-d1bd9ae3c6cb-reloader\") pod \"frr-k8s-94kms\" (UID: \"ef7fdb4c-b45c-44c4-bde8-d1bd9ae3c6cb\") " pod="metallb-system/frr-k8s-94kms" Jan 28 18:49:44 crc kubenswrapper[4721]: I0128 18:49:44.985444 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/ef7fdb4c-b45c-44c4-bde8-d1bd9ae3c6cb-metrics\") pod \"frr-k8s-94kms\" (UID: \"ef7fdb4c-b45c-44c4-bde8-d1bd9ae3c6cb\") " pod="metallb-system/frr-k8s-94kms" Jan 28 18:49:44 crc kubenswrapper[4721]: I0128 18:49:44.985493 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/514e6881-7399-4848-bb65-7851e1e3b079-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-9xvzd\" (UID: \"514e6881-7399-4848-bb65-7851e1e3b079\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-9xvzd" Jan 28 18:49:44 crc kubenswrapper[4721]: I0128 18:49:44.985519 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m8l57\" (UniqueName: \"kubernetes.io/projected/514e6881-7399-4848-bb65-7851e1e3b079-kube-api-access-m8l57\") pod \"frr-k8s-webhook-server-7df86c4f6c-9xvzd\" (UID: \"514e6881-7399-4848-bb65-7851e1e3b079\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-9xvzd" Jan 28 18:49:44 crc kubenswrapper[4721]: I0128 18:49:44.985519 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/ef7fdb4c-b45c-44c4-bde8-d1bd9ae3c6cb-frr-sockets\") pod \"frr-k8s-94kms\" (UID: \"ef7fdb4c-b45c-44c4-bde8-d1bd9ae3c6cb\") " pod="metallb-system/frr-k8s-94kms" Jan 28 18:49:44 crc kubenswrapper[4721]: I0128 18:49:44.985557 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tpw8d\" (UniqueName: \"kubernetes.io/projected/4d13a423-7c09-4fae-b239-e376e8487d85-kube-api-access-tpw8d\") pod \"speaker-k5dbx\" (UID: \"4d13a423-7c09-4fae-b239-e376e8487d85\") " pod="metallb-system/speaker-k5dbx" Jan 28 18:49:44 crc kubenswrapper[4721]: I0128 18:49:44.985725 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/ef7fdb4c-b45c-44c4-bde8-d1bd9ae3c6cb-metrics\") pod \"frr-k8s-94kms\" (UID: 
\"ef7fdb4c-b45c-44c4-bde8-d1bd9ae3c6cb\") " pod="metallb-system/frr-k8s-94kms" Jan 28 18:49:44 crc kubenswrapper[4721]: I0128 18:49:44.985732 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/ef7fdb4c-b45c-44c4-bde8-d1bd9ae3c6cb-frr-conf\") pod \"frr-k8s-94kms\" (UID: \"ef7fdb4c-b45c-44c4-bde8-d1bd9ae3c6cb\") " pod="metallb-system/frr-k8s-94kms" Jan 28 18:49:44 crc kubenswrapper[4721]: I0128 18:49:44.985832 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-btdvw\" (UniqueName: \"kubernetes.io/projected/ef7fdb4c-b45c-44c4-bde8-d1bd9ae3c6cb-kube-api-access-btdvw\") pod \"frr-k8s-94kms\" (UID: \"ef7fdb4c-b45c-44c4-bde8-d1bd9ae3c6cb\") " pod="metallb-system/frr-k8s-94kms" Jan 28 18:49:44 crc kubenswrapper[4721]: I0128 18:49:44.985871 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/ef7fdb4c-b45c-44c4-bde8-d1bd9ae3c6cb-frr-startup\") pod \"frr-k8s-94kms\" (UID: \"ef7fdb4c-b45c-44c4-bde8-d1bd9ae3c6cb\") " pod="metallb-system/frr-k8s-94kms" Jan 28 18:49:44 crc kubenswrapper[4721]: I0128 18:49:44.985913 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4d13a423-7c09-4fae-b239-e376e8487d85-metrics-certs\") pod \"speaker-k5dbx\" (UID: \"4d13a423-7c09-4fae-b239-e376e8487d85\") " pod="metallb-system/speaker-k5dbx" Jan 28 18:49:44 crc kubenswrapper[4721]: I0128 18:49:44.985999 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/ef7fdb4c-b45c-44c4-bde8-d1bd9ae3c6cb-frr-conf\") pod \"frr-k8s-94kms\" (UID: \"ef7fdb4c-b45c-44c4-bde8-d1bd9ae3c6cb\") " pod="metallb-system/frr-k8s-94kms" Jan 28 18:49:44 crc kubenswrapper[4721]: I0128 18:49:44.987183 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/ef7fdb4c-b45c-44c4-bde8-d1bd9ae3c6cb-frr-startup\") pod \"frr-k8s-94kms\" (UID: \"ef7fdb4c-b45c-44c4-bde8-d1bd9ae3c6cb\") " pod="metallb-system/frr-k8s-94kms" Jan 28 18:49:45 crc kubenswrapper[4721]: I0128 18:49:45.016293 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m8l57\" (UniqueName: \"kubernetes.io/projected/514e6881-7399-4848-bb65-7851e1e3b079-kube-api-access-m8l57\") pod \"frr-k8s-webhook-server-7df86c4f6c-9xvzd\" (UID: \"514e6881-7399-4848-bb65-7851e1e3b079\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-9xvzd" Jan 28 18:49:45 crc kubenswrapper[4721]: I0128 18:49:45.016963 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-btdvw\" (UniqueName: \"kubernetes.io/projected/ef7fdb4c-b45c-44c4-bde8-d1bd9ae3c6cb-kube-api-access-btdvw\") pod \"frr-k8s-94kms\" (UID: \"ef7fdb4c-b45c-44c4-bde8-d1bd9ae3c6cb\") " pod="metallb-system/frr-k8s-94kms" Jan 28 18:49:45 crc kubenswrapper[4721]: I0128 18:49:45.018290 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/514e6881-7399-4848-bb65-7851e1e3b079-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-9xvzd\" (UID: \"514e6881-7399-4848-bb65-7851e1e3b079\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-9xvzd" Jan 28 18:49:45 crc kubenswrapper[4721]: I0128 18:49:45.087151 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"kube-api-access-tpw8d\" (UniqueName: \"kubernetes.io/projected/4d13a423-7c09-4fae-b239-e376e8487d85-kube-api-access-tpw8d\") pod \"speaker-k5dbx\" (UID: \"4d13a423-7c09-4fae-b239-e376e8487d85\") " pod="metallb-system/speaker-k5dbx" Jan 28 18:49:45 crc kubenswrapper[4721]: I0128 18:49:45.087251 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c251c48b-fe6b-484b-9ff7-60faab8d13b5-metrics-certs\") pod \"controller-6968d8fdc4-7rcs7\" (UID: \"c251c48b-fe6b-484b-9ff7-60faab8d13b5\") " pod="metallb-system/controller-6968d8fdc4-7rcs7" Jan 28 18:49:45 crc kubenswrapper[4721]: I0128 18:49:45.087296 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c251c48b-fe6b-484b-9ff7-60faab8d13b5-cert\") pod \"controller-6968d8fdc4-7rcs7\" (UID: \"c251c48b-fe6b-484b-9ff7-60faab8d13b5\") " pod="metallb-system/controller-6968d8fdc4-7rcs7" Jan 28 18:49:45 crc kubenswrapper[4721]: I0128 18:49:45.087324 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2gpvw\" (UniqueName: \"kubernetes.io/projected/c251c48b-fe6b-484b-9ff7-60faab8d13b5-kube-api-access-2gpvw\") pod \"controller-6968d8fdc4-7rcs7\" (UID: \"c251c48b-fe6b-484b-9ff7-60faab8d13b5\") " pod="metallb-system/controller-6968d8fdc4-7rcs7" Jan 28 18:49:45 crc kubenswrapper[4721]: I0128 18:49:45.087369 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4d13a423-7c09-4fae-b239-e376e8487d85-metrics-certs\") pod \"speaker-k5dbx\" (UID: \"4d13a423-7c09-4fae-b239-e376e8487d85\") " pod="metallb-system/speaker-k5dbx" Jan 28 18:49:45 crc kubenswrapper[4721]: I0128 18:49:45.087417 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/4d13a423-7c09-4fae-b239-e376e8487d85-memberlist\") pod \"speaker-k5dbx\" (UID: \"4d13a423-7c09-4fae-b239-e376e8487d85\") " pod="metallb-system/speaker-k5dbx" Jan 28 18:49:45 crc kubenswrapper[4721]: I0128 18:49:45.087444 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/4d13a423-7c09-4fae-b239-e376e8487d85-metallb-excludel2\") pod \"speaker-k5dbx\" (UID: \"4d13a423-7c09-4fae-b239-e376e8487d85\") " pod="metallb-system/speaker-k5dbx" Jan 28 18:49:45 crc kubenswrapper[4721]: E0128 18:49:45.088218 4721 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Jan 28 18:49:45 crc kubenswrapper[4721]: I0128 18:49:45.088230 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/4d13a423-7c09-4fae-b239-e376e8487d85-metallb-excludel2\") pod \"speaker-k5dbx\" (UID: \"4d13a423-7c09-4fae-b239-e376e8487d85\") " pod="metallb-system/speaker-k5dbx" Jan 28 18:49:45 crc kubenswrapper[4721]: E0128 18:49:45.088274 4721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4d13a423-7c09-4fae-b239-e376e8487d85-memberlist podName:4d13a423-7c09-4fae-b239-e376e8487d85 nodeName:}" failed. No retries permitted until 2026-01-28 18:49:45.588260185 +0000 UTC m=+951.313565755 (durationBeforeRetry 500ms). 
Jan 28 18:49:45 crc kubenswrapper[4721]: I0128 18:49:45.102783 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4d13a423-7c09-4fae-b239-e376e8487d85-metrics-certs\") pod \"speaker-k5dbx\" (UID: \"4d13a423-7c09-4fae-b239-e376e8487d85\") " pod="metallb-system/speaker-k5dbx"
Jan 28 18:49:45 crc kubenswrapper[4721]: I0128 18:49:45.122574 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-9xvzd"
Jan 28 18:49:45 crc kubenswrapper[4721]: I0128 18:49:45.124935 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tpw8d\" (UniqueName: \"kubernetes.io/projected/4d13a423-7c09-4fae-b239-e376e8487d85-kube-api-access-tpw8d\") pod \"speaker-k5dbx\" (UID: \"4d13a423-7c09-4fae-b239-e376e8487d85\") " pod="metallb-system/speaker-k5dbx"
Jan 28 18:49:45 crc kubenswrapper[4721]: I0128 18:49:45.189085 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c251c48b-fe6b-484b-9ff7-60faab8d13b5-cert\") pod \"controller-6968d8fdc4-7rcs7\" (UID: \"c251c48b-fe6b-484b-9ff7-60faab8d13b5\") " pod="metallb-system/controller-6968d8fdc4-7rcs7"
Jan 28 18:49:45 crc kubenswrapper[4721]: I0128 18:49:45.189146 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2gpvw\" (UniqueName: \"kubernetes.io/projected/c251c48b-fe6b-484b-9ff7-60faab8d13b5-kube-api-access-2gpvw\") pod \"controller-6968d8fdc4-7rcs7\" (UID: \"c251c48b-fe6b-484b-9ff7-60faab8d13b5\") " pod="metallb-system/controller-6968d8fdc4-7rcs7"
Jan 28 18:49:45 crc kubenswrapper[4721]: I0128 18:49:45.189284 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c251c48b-fe6b-484b-9ff7-60faab8d13b5-metrics-certs\") pod \"controller-6968d8fdc4-7rcs7\" (UID: \"c251c48b-fe6b-484b-9ff7-60faab8d13b5\") " pod="metallb-system/controller-6968d8fdc4-7rcs7"
Jan 28 18:49:45 crc kubenswrapper[4721]: I0128 18:49:45.193608 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c251c48b-fe6b-484b-9ff7-60faab8d13b5-cert\") pod \"controller-6968d8fdc4-7rcs7\" (UID: \"c251c48b-fe6b-484b-9ff7-60faab8d13b5\") " pod="metallb-system/controller-6968d8fdc4-7rcs7"
Jan 28 18:49:45 crc kubenswrapper[4721]: I0128 18:49:45.194278 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c251c48b-fe6b-484b-9ff7-60faab8d13b5-metrics-certs\") pod \"controller-6968d8fdc4-7rcs7\" (UID: \"c251c48b-fe6b-484b-9ff7-60faab8d13b5\") " pod="metallb-system/controller-6968d8fdc4-7rcs7"
Jan 28 18:49:45 crc kubenswrapper[4721]: I0128 18:49:45.214094 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2gpvw\" (UniqueName: \"kubernetes.io/projected/c251c48b-fe6b-484b-9ff7-60faab8d13b5-kube-api-access-2gpvw\") pod \"controller-6968d8fdc4-7rcs7\" (UID: \"c251c48b-fe6b-484b-9ff7-60faab8d13b5\") " pod="metallb-system/controller-6968d8fdc4-7rcs7"
Jan 28 18:49:45 crc kubenswrapper[4721]: I0128 18:49:45.234738 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-6968d8fdc4-7rcs7"
Jan 28 18:49:45 crc kubenswrapper[4721]: I0128 18:49:45.492887 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ef7fdb4c-b45c-44c4-bde8-d1bd9ae3c6cb-metrics-certs\") pod \"frr-k8s-94kms\" (UID: \"ef7fdb4c-b45c-44c4-bde8-d1bd9ae3c6cb\") " pod="metallb-system/frr-k8s-94kms"
Jan 28 18:49:45 crc kubenswrapper[4721]: I0128 18:49:45.498150 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ef7fdb4c-b45c-44c4-bde8-d1bd9ae3c6cb-metrics-certs\") pod \"frr-k8s-94kms\" (UID: \"ef7fdb4c-b45c-44c4-bde8-d1bd9ae3c6cb\") " pod="metallb-system/frr-k8s-94kms"
Jan 28 18:49:45 crc kubenswrapper[4721]: I0128 18:49:45.575844 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-9xvzd"]
Jan 28 18:49:45 crc kubenswrapper[4721]: I0128 18:49:45.595079 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/4d13a423-7c09-4fae-b239-e376e8487d85-memberlist\") pod \"speaker-k5dbx\" (UID: \"4d13a423-7c09-4fae-b239-e376e8487d85\") " pod="metallb-system/speaker-k5dbx"
Jan 28 18:49:45 crc kubenswrapper[4721]: E0128 18:49:45.596020 4721 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found
Jan 28 18:49:45 crc kubenswrapper[4721]: E0128 18:49:45.596162 4721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4d13a423-7c09-4fae-b239-e376e8487d85-memberlist podName:4d13a423-7c09-4fae-b239-e376e8487d85 nodeName:}" failed. No retries permitted until 2026-01-28 18:49:46.5961306 +0000 UTC m=+952.321436160 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/4d13a423-7c09-4fae-b239-e376e8487d85-memberlist") pod "speaker-k5dbx" (UID: "4d13a423-7c09-4fae-b239-e376e8487d85") : secret "metallb-memberlist" not found
Jan 28 18:49:45 crc kubenswrapper[4721]: W0128 18:49:45.686355 4721 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc251c48b_fe6b_484b_9ff7_60faab8d13b5.slice/crio-34f5a0dae266095c368303f583efb862f35bd85a4c611c239cc9962ab418bb66 WatchSource:0}: Error finding container 34f5a0dae266095c368303f583efb862f35bd85a4c611c239cc9962ab418bb66: Status 404 returned error can't find the container with id 34f5a0dae266095c368303f583efb862f35bd85a4c611c239cc9962ab418bb66
Jan 28 18:49:45 crc kubenswrapper[4721]: I0128 18:49:45.686494 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6968d8fdc4-7rcs7"]
Jan 28 18:49:45 crc kubenswrapper[4721]: I0128 18:49:45.693518 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-94kms"
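Note the retry cadence across the two memberlist failures: the first scheduled a retry after 500ms, this second one after 1s. The kubelet backs off exponentially per volume, doubling the wait on each consecutive failure up to a cap; the cap is not visible in this log, so the value below is an assumption:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        delay := 500 * time.Millisecond // first durationBeforeRetry seen in the log
        maxDelay := 2 * time.Minute     // assumed cap; not visible in this log
        for attempt := 1; attempt <= 5; attempt++ {
            fmt.Printf("failure %d: retry in %v\n", attempt, delay)
            delay *= 2
            if delay > maxDelay {
                delay = maxDelay
            }
        }
    }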
Need to start a new one" pod="metallb-system/frr-k8s-94kms" Jan 28 18:49:46 crc kubenswrapper[4721]: I0128 18:49:46.490305 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-7rcs7" event={"ID":"c251c48b-fe6b-484b-9ff7-60faab8d13b5","Type":"ContainerStarted","Data":"e9dee53f2436fa71c48327114bc396043249ff296a1b8a905727f8ea4cdaaac7"} Jan 28 18:49:46 crc kubenswrapper[4721]: I0128 18:49:46.490692 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-7rcs7" event={"ID":"c251c48b-fe6b-484b-9ff7-60faab8d13b5","Type":"ContainerStarted","Data":"124c08e44bf5d9978def159f8cdb152ba73ac0850fbff99b34e79c4dff8e59f2"} Jan 28 18:49:46 crc kubenswrapper[4721]: I0128 18:49:46.490708 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-7rcs7" event={"ID":"c251c48b-fe6b-484b-9ff7-60faab8d13b5","Type":"ContainerStarted","Data":"34f5a0dae266095c368303f583efb862f35bd85a4c611c239cc9962ab418bb66"} Jan 28 18:49:46 crc kubenswrapper[4721]: I0128 18:49:46.490728 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-6968d8fdc4-7rcs7" Jan 28 18:49:46 crc kubenswrapper[4721]: I0128 18:49:46.491720 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-9xvzd" event={"ID":"514e6881-7399-4848-bb65-7851e1e3b079","Type":"ContainerStarted","Data":"294ec7c00b098c34e8f91fbde155da1b4963c89c36e16775805efb5a9957319e"} Jan 28 18:49:46 crc kubenswrapper[4721]: I0128 18:49:46.493099 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-94kms" event={"ID":"ef7fdb4c-b45c-44c4-bde8-d1bd9ae3c6cb","Type":"ContainerStarted","Data":"02e7dc9c9b5ce23eec5eeac7991274b24f1b1a22509267c7995bc253f35ab9ef"} Jan 28 18:49:46 crc kubenswrapper[4721]: I0128 18:49:46.515122 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-6968d8fdc4-7rcs7" podStartSLOduration=2.515099019 podStartE2EDuration="2.515099019s" podCreationTimestamp="2026-01-28 18:49:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:49:46.507804421 +0000 UTC m=+952.233109991" watchObservedRunningTime="2026-01-28 18:49:46.515099019 +0000 UTC m=+952.240404579" Jan 28 18:49:46 crc kubenswrapper[4721]: I0128 18:49:46.609798 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/4d13a423-7c09-4fae-b239-e376e8487d85-memberlist\") pod \"speaker-k5dbx\" (UID: \"4d13a423-7c09-4fae-b239-e376e8487d85\") " pod="metallb-system/speaker-k5dbx" Jan 28 18:49:46 crc kubenswrapper[4721]: I0128 18:49:46.613752 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/4d13a423-7c09-4fae-b239-e376e8487d85-memberlist\") pod \"speaker-k5dbx\" (UID: \"4d13a423-7c09-4fae-b239-e376e8487d85\") " pod="metallb-system/speaker-k5dbx" Jan 28 18:49:46 crc kubenswrapper[4721]: I0128 18:49:46.725418 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/speaker-k5dbx" Jan 28 18:49:46 crc kubenswrapper[4721]: W0128 18:49:46.747592 4721 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4d13a423_7c09_4fae_b239_e376e8487d85.slice/crio-222e64f342cb9109e2fe06827f5ef7217370bdf5d66f1adfd9508c3a9c3fc998 WatchSource:0}: Error finding container 222e64f342cb9109e2fe06827f5ef7217370bdf5d66f1adfd9508c3a9c3fc998: Status 404 returned error can't find the container with id 222e64f342cb9109e2fe06827f5ef7217370bdf5d66f1adfd9508c3a9c3fc998 Jan 28 18:49:47 crc kubenswrapper[4721]: I0128 18:49:47.506484 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-k5dbx" event={"ID":"4d13a423-7c09-4fae-b239-e376e8487d85","Type":"ContainerStarted","Data":"01938c87883dd3dd070956b74091d39cff9a927dee79bbd185fc7db99135ebde"} Jan 28 18:49:47 crc kubenswrapper[4721]: I0128 18:49:47.506895 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-k5dbx" event={"ID":"4d13a423-7c09-4fae-b239-e376e8487d85","Type":"ContainerStarted","Data":"dcecda29a3310a34aa381c1db92c7b10e3a9e271b3a0f5618fd7592835c27ab9"} Jan 28 18:49:47 crc kubenswrapper[4721]: I0128 18:49:47.506914 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-k5dbx" event={"ID":"4d13a423-7c09-4fae-b239-e376e8487d85","Type":"ContainerStarted","Data":"222e64f342cb9109e2fe06827f5ef7217370bdf5d66f1adfd9508c3a9c3fc998"} Jan 28 18:49:47 crc kubenswrapper[4721]: I0128 18:49:47.507132 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-k5dbx" Jan 28 18:49:47 crc kubenswrapper[4721]: I0128 18:49:47.527698 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-k5dbx" podStartSLOduration=3.52767116 podStartE2EDuration="3.52767116s" podCreationTimestamp="2026-01-28 18:49:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:49:47.526316268 +0000 UTC m=+953.251621828" watchObservedRunningTime="2026-01-28 18:49:47.52767116 +0000 UTC m=+953.252976720" Jan 28 18:49:55 crc kubenswrapper[4721]: I0128 18:49:55.239053 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-6968d8fdc4-7rcs7" Jan 28 18:49:55 crc kubenswrapper[4721]: I0128 18:49:55.592901 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-9xvzd" event={"ID":"514e6881-7399-4848-bb65-7851e1e3b079","Type":"ContainerStarted","Data":"595584a16cff85608568504cbfe0776531a3b865c7dc2cc4afc6b87072f374b3"} Jan 28 18:49:55 crc kubenswrapper[4721]: I0128 18:49:55.593565 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-9xvzd" Jan 28 18:49:55 crc kubenswrapper[4721]: I0128 18:49:55.596804 4721 generic.go:334] "Generic (PLEG): container finished" podID="ef7fdb4c-b45c-44c4-bde8-d1bd9ae3c6cb" containerID="f33ab0816c811c33c67ad2df8fc34192305a8a54adee7c103851424ee0e085fc" exitCode=0 Jan 28 18:49:55 crc kubenswrapper[4721]: I0128 18:49:55.596882 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-94kms" event={"ID":"ef7fdb4c-b45c-44c4-bde8-d1bd9ae3c6cb","Type":"ContainerDied","Data":"f33ab0816c811c33c67ad2df8fc34192305a8a54adee7c103851424ee0e085fc"} Jan 28 18:49:55 crc kubenswrapper[4721]: I0128 
Jan 28 18:49:56 crc kubenswrapper[4721]: I0128 18:49:56.607443 4721 generic.go:334] "Generic (PLEG): container finished" podID="ef7fdb4c-b45c-44c4-bde8-d1bd9ae3c6cb" containerID="28467ab5802adee5356068cb7dc60c8194778b96450d477a395de8b993e4e519" exitCode=0
Jan 28 18:49:56 crc kubenswrapper[4721]: I0128 18:49:56.607547 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-94kms" event={"ID":"ef7fdb4c-b45c-44c4-bde8-d1bd9ae3c6cb","Type":"ContainerDied","Data":"28467ab5802adee5356068cb7dc60c8194778b96450d477a395de8b993e4e519"}
Jan 28 18:49:57 crc kubenswrapper[4721]: I0128 18:49:57.617770 4721 generic.go:334] "Generic (PLEG): container finished" podID="ef7fdb4c-b45c-44c4-bde8-d1bd9ae3c6cb" containerID="429b84dcc93f46b607feab8e4bcb82643178a02e98fcf930694eb9a09180d1de" exitCode=0
Jan 28 18:49:57 crc kubenswrapper[4721]: I0128 18:49:57.617836 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-94kms" event={"ID":"ef7fdb4c-b45c-44c4-bde8-d1bd9ae3c6cb","Type":"ContainerDied","Data":"429b84dcc93f46b607feab8e4bcb82643178a02e98fcf930694eb9a09180d1de"}
Jan 28 18:49:58 crc kubenswrapper[4721]: I0128 18:49:58.628303 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-94kms" event={"ID":"ef7fdb4c-b45c-44c4-bde8-d1bd9ae3c6cb","Type":"ContainerStarted","Data":"e5f894670b0ee6f2378be88f7687ea7d359cdfaa13fe8b2ffdfaccbc479092f3"}
Jan 28 18:49:58 crc kubenswrapper[4721]: I0128 18:49:58.628848 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-94kms" event={"ID":"ef7fdb4c-b45c-44c4-bde8-d1bd9ae3c6cb","Type":"ContainerStarted","Data":"d2d20303cc4703ff3ae16cf93a9051bb662fb3924471aba4b9a6dcb72ae8e126"}
Jan 28 18:49:59 crc kubenswrapper[4721]: I0128 18:49:59.637316 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-94kms" event={"ID":"ef7fdb4c-b45c-44c4-bde8-d1bd9ae3c6cb","Type":"ContainerStarted","Data":"e24a5c2e8319fa84be4118d19563b27d8cbf467435d5e1611541f999266bf1c5"}
Jan 28 18:49:59 crc kubenswrapper[4721]: I0128 18:49:59.637720 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-94kms" event={"ID":"ef7fdb4c-b45c-44c4-bde8-d1bd9ae3c6cb","Type":"ContainerStarted","Data":"c9ce7b28e4eaec3b76bc836c9fd63f1ca7d333690535859cf0d77b6577fa51d5"}
Jan 28 18:50:00 crc kubenswrapper[4721]: I0128 18:50:00.648950 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-94kms" event={"ID":"ef7fdb4c-b45c-44c4-bde8-d1bd9ae3c6cb","Type":"ContainerStarted","Data":"2507c0b05d062cf9bebc7d950a2f192e3516ca135e9d399460267d4bbf75619d"}
Jan 28 18:50:00 crc kubenswrapper[4721]: I0128 18:50:00.649003 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-94kms" event={"ID":"ef7fdb4c-b45c-44c4-bde8-d1bd9ae3c6cb","Type":"ContainerStarted","Data":"1db1ea9edc76f8ae7667cfaf13be5f36ecb7703007e3eb17084f413882a8d527"}
Jan 28 18:50:00 crc kubenswrapper[4721]: I0128 18:50:00.649268 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-94kms"
Jan 28 18:50:00 crc kubenswrapper[4721]: I0128 18:50:00.681970 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-94kms" podStartSLOduration=8.02035965 podStartE2EDuration="16.681951934s" podCreationTimestamp="2026-01-28 18:49:44 +0000 UTC" firstStartedPulling="2026-01-28 18:49:45.831490586 +0000 UTC m=+951.556796146" lastFinishedPulling="2026-01-28 18:49:54.49308287 +0000 UTC m=+960.218388430" observedRunningTime="2026-01-28 18:50:00.674845021 +0000 UTC m=+966.400150591" watchObservedRunningTime="2026-01-28 18:50:00.681951934 +0000 UTC m=+966.407257494"
Jan 28 18:50:00 crc kubenswrapper[4721]: I0128 18:50:00.694396 4721 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-94kms"
Jan 28 18:50:00 crc kubenswrapper[4721]: I0128 18:50:00.737355 4721 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-94kms"
Jan 28 18:50:01 crc kubenswrapper[4721]: I0128 18:50:01.225107 4721 patch_prober.go:28] interesting pod/machine-config-daemon-76rx2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 28 18:50:01 crc kubenswrapper[4721]: I0128 18:50:01.225220 4721 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-76rx2" podUID="6e3427a4-9a03-4a08-bf7f-7a5e96290ad6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 28 18:50:01 crc kubenswrapper[4721]: I0128 18:50:01.225277 4721 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-76rx2"
Jan 28 18:50:01 crc kubenswrapper[4721]: I0128 18:50:01.225969 4721 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"05b5a08257768ab03feca7d9732c3a599d23c36babbadf35cb5007f36020b414"} pod="openshift-machine-config-operator/machine-config-daemon-76rx2" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 28 18:50:01 crc kubenswrapper[4721]: I0128 18:50:01.226036 4721 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-76rx2" podUID="6e3427a4-9a03-4a08-bf7f-7a5e96290ad6" containerName="machine-config-daemon" containerID="cri-o://05b5a08257768ab03feca7d9732c3a599d23c36babbadf35cb5007f36020b414" gracePeriod=600
Jan 28 18:50:01 crc kubenswrapper[4721]: I0128 18:50:01.671070 4721 generic.go:334] "Generic (PLEG): container finished" podID="6e3427a4-9a03-4a08-bf7f-7a5e96290ad6" containerID="05b5a08257768ab03feca7d9732c3a599d23c36babbadf35cb5007f36020b414" exitCode=0
Jan 28 18:50:01 crc kubenswrapper[4721]: I0128 18:50:01.671475 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-76rx2" event={"ID":"6e3427a4-9a03-4a08-bf7f-7a5e96290ad6","Type":"ContainerDied","Data":"05b5a08257768ab03feca7d9732c3a599d23c36babbadf35cb5007f36020b414"}
Jan 28 18:50:01 crc kubenswrapper[4721]: I0128 18:50:01.671590 4721 scope.go:117] "RemoveContainer" containerID="1d9cb44706b2f5923bc65487fc2d438c7475d17f3368442164e195f17c4693d2"
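The machine-config-daemon entries above are the standard liveness-failure path: the probe GET to http://127.0.0.1:8798/health is refused, the kubelet marks the container unhealthy, kills it with the pod's 600s grace period, and a replacement container starts at 18:50:02 below. A liveness probe of this kind passes on any 2xx response; a minimal sketch of such an endpoint (the real daemon's handler is of course more involved):

    package main

    import "net/http"

    func main() {
        http.HandleFunc("/health", func(w http.ResponseWriter, r *http.Request) {
            w.WriteHeader(http.StatusOK) // any 2xx keeps the liveness probe green
        })
        // Listen on the same loopback address the probe in the log targets.
        http.ListenAndServe("127.0.0.1:8798", nil)
    }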
scope.go:117] "RemoveContainer" containerID="1d9cb44706b2f5923bc65487fc2d438c7475d17f3368442164e195f17c4693d2" Jan 28 18:50:02 crc kubenswrapper[4721]: I0128 18:50:02.681325 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-76rx2" event={"ID":"6e3427a4-9a03-4a08-bf7f-7a5e96290ad6","Type":"ContainerStarted","Data":"cf577cfdc0b7c29bec411ba83a64318b81b8ea16d7ec474c8974a1dbea166b1d"} Jan 28 18:50:05 crc kubenswrapper[4721]: I0128 18:50:05.127935 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-9xvzd" Jan 28 18:50:06 crc kubenswrapper[4721]: I0128 18:50:06.731093 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-k5dbx" Jan 28 18:50:09 crc kubenswrapper[4721]: I0128 18:50:09.651113 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-q4498"] Jan 28 18:50:09 crc kubenswrapper[4721]: I0128 18:50:09.661404 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-q4498" Jan 28 18:50:09 crc kubenswrapper[4721]: I0128 18:50:09.675145 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-q4498"] Jan 28 18:50:09 crc kubenswrapper[4721]: I0128 18:50:09.694361 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt" Jan 28 18:50:09 crc kubenswrapper[4721]: I0128 18:50:09.694611 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-cvxwk" Jan 28 18:50:09 crc kubenswrapper[4721]: I0128 18:50:09.694759 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt" Jan 28 18:50:09 crc kubenswrapper[4721]: I0128 18:50:09.800236 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gn2qp\" (UniqueName: \"kubernetes.io/projected/58a678ec-dc2e-4d11-9987-8a2901aaea38-kube-api-access-gn2qp\") pod \"openstack-operator-index-q4498\" (UID: \"58a678ec-dc2e-4d11-9987-8a2901aaea38\") " pod="openstack-operators/openstack-operator-index-q4498" Jan 28 18:50:09 crc kubenswrapper[4721]: I0128 18:50:09.901673 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gn2qp\" (UniqueName: \"kubernetes.io/projected/58a678ec-dc2e-4d11-9987-8a2901aaea38-kube-api-access-gn2qp\") pod \"openstack-operator-index-q4498\" (UID: \"58a678ec-dc2e-4d11-9987-8a2901aaea38\") " pod="openstack-operators/openstack-operator-index-q4498" Jan 28 18:50:09 crc kubenswrapper[4721]: I0128 18:50:09.924025 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gn2qp\" (UniqueName: \"kubernetes.io/projected/58a678ec-dc2e-4d11-9987-8a2901aaea38-kube-api-access-gn2qp\") pod \"openstack-operator-index-q4498\" (UID: \"58a678ec-dc2e-4d11-9987-8a2901aaea38\") " pod="openstack-operators/openstack-operator-index-q4498" Jan 28 18:50:10 crc kubenswrapper[4721]: I0128 18:50:10.014352 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-q4498" Jan 28 18:50:10 crc kubenswrapper[4721]: I0128 18:50:10.442295 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-q4498"] Jan 28 18:50:10 crc kubenswrapper[4721]: W0128 18:50:10.447019 4721 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod58a678ec_dc2e_4d11_9987_8a2901aaea38.slice/crio-733c1e96f9d0d18ecb357bfcb0a3df81e4f3b952fd9fc9e78a91e95e7391a1c7 WatchSource:0}: Error finding container 733c1e96f9d0d18ecb357bfcb0a3df81e4f3b952fd9fc9e78a91e95e7391a1c7: Status 404 returned error can't find the container with id 733c1e96f9d0d18ecb357bfcb0a3df81e4f3b952fd9fc9e78a91e95e7391a1c7 Jan 28 18:50:10 crc kubenswrapper[4721]: I0128 18:50:10.749984 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-q4498" event={"ID":"58a678ec-dc2e-4d11-9987-8a2901aaea38","Type":"ContainerStarted","Data":"733c1e96f9d0d18ecb357bfcb0a3df81e4f3b952fd9fc9e78a91e95e7391a1c7"} Jan 28 18:50:13 crc kubenswrapper[4721]: I0128 18:50:13.004903 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-q4498"] Jan 28 18:50:13 crc kubenswrapper[4721]: I0128 18:50:13.612340 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-ckq4p"] Jan 28 18:50:13 crc kubenswrapper[4721]: I0128 18:50:13.613777 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-ckq4p" Jan 28 18:50:13 crc kubenswrapper[4721]: I0128 18:50:13.622997 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-ckq4p"] Jan 28 18:50:13 crc kubenswrapper[4721]: I0128 18:50:13.756512 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cwqc6\" (UniqueName: \"kubernetes.io/projected/7e87d639-6eae-44a0-9005-9e5fb2b60b0c-kube-api-access-cwqc6\") pod \"openstack-operator-index-ckq4p\" (UID: \"7e87d639-6eae-44a0-9005-9e5fb2b60b0c\") " pod="openstack-operators/openstack-operator-index-ckq4p" Jan 28 18:50:13 crc kubenswrapper[4721]: I0128 18:50:13.858731 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cwqc6\" (UniqueName: \"kubernetes.io/projected/7e87d639-6eae-44a0-9005-9e5fb2b60b0c-kube-api-access-cwqc6\") pod \"openstack-operator-index-ckq4p\" (UID: \"7e87d639-6eae-44a0-9005-9e5fb2b60b0c\") " pod="openstack-operators/openstack-operator-index-ckq4p" Jan 28 18:50:13 crc kubenswrapper[4721]: I0128 18:50:13.878366 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cwqc6\" (UniqueName: \"kubernetes.io/projected/7e87d639-6eae-44a0-9005-9e5fb2b60b0c-kube-api-access-cwqc6\") pod \"openstack-operator-index-ckq4p\" (UID: \"7e87d639-6eae-44a0-9005-9e5fb2b60b0c\") " pod="openstack-operators/openstack-operator-index-ckq4p" Jan 28 18:50:13 crc kubenswrapper[4721]: I0128 18:50:13.943733 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-ckq4p" Jan 28 18:50:14 crc kubenswrapper[4721]: I0128 18:50:14.336099 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-ckq4p"] Jan 28 18:50:14 crc kubenswrapper[4721]: W0128 18:50:14.338124 4721 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7e87d639_6eae_44a0_9005_9e5fb2b60b0c.slice/crio-8c259934a2c108fcb8037f2e2875821d3bd9ac31e89343cb1e9da58c889657ec WatchSource:0}: Error finding container 8c259934a2c108fcb8037f2e2875821d3bd9ac31e89343cb1e9da58c889657ec: Status 404 returned error can't find the container with id 8c259934a2c108fcb8037f2e2875821d3bd9ac31e89343cb1e9da58c889657ec Jan 28 18:50:14 crc kubenswrapper[4721]: I0128 18:50:14.786907 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-ckq4p" event={"ID":"7e87d639-6eae-44a0-9005-9e5fb2b60b0c","Type":"ContainerStarted","Data":"8c259934a2c108fcb8037f2e2875821d3bd9ac31e89343cb1e9da58c889657ec"} Jan 28 18:50:15 crc kubenswrapper[4721]: I0128 18:50:15.699223 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-94kms" Jan 28 18:50:22 crc kubenswrapper[4721]: I0128 18:50:22.842403 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-ckq4p" event={"ID":"7e87d639-6eae-44a0-9005-9e5fb2b60b0c","Type":"ContainerStarted","Data":"9c9c713994ab5c63aed0b15c7b0f7f9ddb72e0375fa0c367fab95ed3c9f2c7a5"} Jan 28 18:50:22 crc kubenswrapper[4721]: I0128 18:50:22.844340 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-q4498" event={"ID":"58a678ec-dc2e-4d11-9987-8a2901aaea38","Type":"ContainerStarted","Data":"27b6850e7fbfea0767f6b6ac46dfb66dcef8ff49485cdaad25e61dca400e5ccf"} Jan 28 18:50:22 crc kubenswrapper[4721]: I0128 18:50:22.844474 4721 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/openstack-operator-index-q4498" podUID="58a678ec-dc2e-4d11-9987-8a2901aaea38" containerName="registry-server" containerID="cri-o://27b6850e7fbfea0767f6b6ac46dfb66dcef8ff49485cdaad25e61dca400e5ccf" gracePeriod=2 Jan 28 18:50:22 crc kubenswrapper[4721]: I0128 18:50:22.865458 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-ckq4p" podStartSLOduration=2.543430916 podStartE2EDuration="9.865436508s" podCreationTimestamp="2026-01-28 18:50:13 +0000 UTC" firstStartedPulling="2026-01-28 18:50:14.340421237 +0000 UTC m=+980.065726797" lastFinishedPulling="2026-01-28 18:50:21.662426829 +0000 UTC m=+987.387732389" observedRunningTime="2026-01-28 18:50:22.86199902 +0000 UTC m=+988.587304590" watchObservedRunningTime="2026-01-28 18:50:22.865436508 +0000 UTC m=+988.590742068" Jan 28 18:50:22 crc kubenswrapper[4721]: I0128 18:50:22.880641 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-q4498" podStartSLOduration=2.6771278499999998 podStartE2EDuration="13.880616723s" podCreationTimestamp="2026-01-28 18:50:09 +0000 UTC" firstStartedPulling="2026-01-28 18:50:10.448737077 +0000 UTC m=+976.174042637" lastFinishedPulling="2026-01-28 18:50:21.65222595 +0000 UTC m=+987.377531510" observedRunningTime="2026-01-28 18:50:22.880108757 +0000 UTC m=+988.605414347" watchObservedRunningTime="2026-01-28 
Jan 28 18:50:23 crc kubenswrapper[4721]: I0128 18:50:23.238334 4721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-q4498"
Jan 28 18:50:23 crc kubenswrapper[4721]: I0128 18:50:23.405844 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gn2qp\" (UniqueName: \"kubernetes.io/projected/58a678ec-dc2e-4d11-9987-8a2901aaea38-kube-api-access-gn2qp\") pod \"58a678ec-dc2e-4d11-9987-8a2901aaea38\" (UID: \"58a678ec-dc2e-4d11-9987-8a2901aaea38\") "
Jan 28 18:50:23 crc kubenswrapper[4721]: I0128 18:50:23.411672 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/58a678ec-dc2e-4d11-9987-8a2901aaea38-kube-api-access-gn2qp" (OuterVolumeSpecName: "kube-api-access-gn2qp") pod "58a678ec-dc2e-4d11-9987-8a2901aaea38" (UID: "58a678ec-dc2e-4d11-9987-8a2901aaea38"). InnerVolumeSpecName "kube-api-access-gn2qp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 28 18:50:23 crc kubenswrapper[4721]: I0128 18:50:23.507371 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gn2qp\" (UniqueName: \"kubernetes.io/projected/58a678ec-dc2e-4d11-9987-8a2901aaea38-kube-api-access-gn2qp\") on node \"crc\" DevicePath \"\""
Jan 28 18:50:23 crc kubenswrapper[4721]: I0128 18:50:23.851562 4721 generic.go:334] "Generic (PLEG): container finished" podID="58a678ec-dc2e-4d11-9987-8a2901aaea38" containerID="27b6850e7fbfea0767f6b6ac46dfb66dcef8ff49485cdaad25e61dca400e5ccf" exitCode=0
Jan 28 18:50:23 crc kubenswrapper[4721]: I0128 18:50:23.851610 4721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-q4498"
Jan 28 18:50:23 crc kubenswrapper[4721]: I0128 18:50:23.851657 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-q4498" event={"ID":"58a678ec-dc2e-4d11-9987-8a2901aaea38","Type":"ContainerDied","Data":"27b6850e7fbfea0767f6b6ac46dfb66dcef8ff49485cdaad25e61dca400e5ccf"}
Jan 28 18:50:23 crc kubenswrapper[4721]: I0128 18:50:23.851705 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-q4498" event={"ID":"58a678ec-dc2e-4d11-9987-8a2901aaea38","Type":"ContainerDied","Data":"733c1e96f9d0d18ecb357bfcb0a3df81e4f3b952fd9fc9e78a91e95e7391a1c7"}
Jan 28 18:50:23 crc kubenswrapper[4721]: I0128 18:50:23.851727 4721 scope.go:117] "RemoveContainer" containerID="27b6850e7fbfea0767f6b6ac46dfb66dcef8ff49485cdaad25e61dca400e5ccf"
Jan 28 18:50:23 crc kubenswrapper[4721]: I0128 18:50:23.867933 4721 scope.go:117] "RemoveContainer" containerID="27b6850e7fbfea0767f6b6ac46dfb66dcef8ff49485cdaad25e61dca400e5ccf"
Jan 28 18:50:23 crc kubenswrapper[4721]: E0128 18:50:23.868417 4721 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"27b6850e7fbfea0767f6b6ac46dfb66dcef8ff49485cdaad25e61dca400e5ccf\": container with ID starting with 27b6850e7fbfea0767f6b6ac46dfb66dcef8ff49485cdaad25e61dca400e5ccf not found: ID does not exist" containerID="27b6850e7fbfea0767f6b6ac46dfb66dcef8ff49485cdaad25e61dca400e5ccf"
Jan 28 18:50:23 crc kubenswrapper[4721]: I0128 18:50:23.868449 4721 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"27b6850e7fbfea0767f6b6ac46dfb66dcef8ff49485cdaad25e61dca400e5ccf"} err="failed to get container status \"27b6850e7fbfea0767f6b6ac46dfb66dcef8ff49485cdaad25e61dca400e5ccf\": rpc error: code = NotFound desc = could not find container \"27b6850e7fbfea0767f6b6ac46dfb66dcef8ff49485cdaad25e61dca400e5ccf\": container with ID starting with 27b6850e7fbfea0767f6b6ac46dfb66dcef8ff49485cdaad25e61dca400e5ccf not found: ID does not exist"
Jan 28 18:50:23 crc kubenswrapper[4721]: I0128 18:50:23.875819 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-q4498"]
Jan 28 18:50:23 crc kubenswrapper[4721]: I0128 18:50:23.880894 4721 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/openstack-operator-index-q4498"]
Jan 28 18:50:23 crc kubenswrapper[4721]: I0128 18:50:23.944593 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-ckq4p"
Jan 28 18:50:23 crc kubenswrapper[4721]: I0128 18:50:23.944649 4721 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-ckq4p"
Jan 28 18:50:23 crc kubenswrapper[4721]: I0128 18:50:23.974875 4721 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-ckq4p"
Jan 28 18:50:25 crc kubenswrapper[4721]: I0128 18:50:25.538336 4721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="58a678ec-dc2e-4d11-9987-8a2901aaea38" path="/var/lib/kubelet/pods/58a678ec-dc2e-4d11-9987-8a2901aaea38/volumes"
Jan 28 18:50:33 crc kubenswrapper[4721]: I0128 18:50:33.979303 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-ckq4p"
Jan 28 18:50:40 crc kubenswrapper[4721]: I0128 18:50:40.441802 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/5ff8d4f1d1a2b9f947c76cb0859a1bcccc65b2094fb24a26aaef048927k7bl2"]
Jan 28 18:50:40 crc kubenswrapper[4721]: E0128 18:50:40.442804 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="58a678ec-dc2e-4d11-9987-8a2901aaea38" containerName="registry-server"
Jan 28 18:50:40 crc kubenswrapper[4721]: I0128 18:50:40.442819 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="58a678ec-dc2e-4d11-9987-8a2901aaea38" containerName="registry-server"
Jan 28 18:50:40 crc kubenswrapper[4721]: I0128 18:50:40.442967 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="58a678ec-dc2e-4d11-9987-8a2901aaea38" containerName="registry-server"
Jan 28 18:50:40 crc kubenswrapper[4721]: I0128 18:50:40.444123 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/5ff8d4f1d1a2b9f947c76cb0859a1bcccc65b2094fb24a26aaef048927k7bl2"
Need to start a new one" pod="openstack-operators/5ff8d4f1d1a2b9f947c76cb0859a1bcccc65b2094fb24a26aaef048927k7bl2" Jan 28 18:50:40 crc kubenswrapper[4721]: I0128 18:50:40.447325 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-42mwq" Jan 28 18:50:40 crc kubenswrapper[4721]: I0128 18:50:40.497350 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/5ff8d4f1d1a2b9f947c76cb0859a1bcccc65b2094fb24a26aaef048927k7bl2"] Jan 28 18:50:40 crc kubenswrapper[4721]: I0128 18:50:40.566643 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ab608a64-70fd-498e-9aa6-d2dd87a017b9-util\") pod \"5ff8d4f1d1a2b9f947c76cb0859a1bcccc65b2094fb24a26aaef048927k7bl2\" (UID: \"ab608a64-70fd-498e-9aa6-d2dd87a017b9\") " pod="openstack-operators/5ff8d4f1d1a2b9f947c76cb0859a1bcccc65b2094fb24a26aaef048927k7bl2" Jan 28 18:50:40 crc kubenswrapper[4721]: I0128 18:50:40.566786 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xwwnr\" (UniqueName: \"kubernetes.io/projected/ab608a64-70fd-498e-9aa6-d2dd87a017b9-kube-api-access-xwwnr\") pod \"5ff8d4f1d1a2b9f947c76cb0859a1bcccc65b2094fb24a26aaef048927k7bl2\" (UID: \"ab608a64-70fd-498e-9aa6-d2dd87a017b9\") " pod="openstack-operators/5ff8d4f1d1a2b9f947c76cb0859a1bcccc65b2094fb24a26aaef048927k7bl2" Jan 28 18:50:40 crc kubenswrapper[4721]: I0128 18:50:40.566828 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ab608a64-70fd-498e-9aa6-d2dd87a017b9-bundle\") pod \"5ff8d4f1d1a2b9f947c76cb0859a1bcccc65b2094fb24a26aaef048927k7bl2\" (UID: \"ab608a64-70fd-498e-9aa6-d2dd87a017b9\") " pod="openstack-operators/5ff8d4f1d1a2b9f947c76cb0859a1bcccc65b2094fb24a26aaef048927k7bl2" Jan 28 18:50:40 crc kubenswrapper[4721]: I0128 18:50:40.668116 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ab608a64-70fd-498e-9aa6-d2dd87a017b9-bundle\") pod \"5ff8d4f1d1a2b9f947c76cb0859a1bcccc65b2094fb24a26aaef048927k7bl2\" (UID: \"ab608a64-70fd-498e-9aa6-d2dd87a017b9\") " pod="openstack-operators/5ff8d4f1d1a2b9f947c76cb0859a1bcccc65b2094fb24a26aaef048927k7bl2" Jan 28 18:50:40 crc kubenswrapper[4721]: I0128 18:50:40.668307 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ab608a64-70fd-498e-9aa6-d2dd87a017b9-util\") pod \"5ff8d4f1d1a2b9f947c76cb0859a1bcccc65b2094fb24a26aaef048927k7bl2\" (UID: \"ab608a64-70fd-498e-9aa6-d2dd87a017b9\") " pod="openstack-operators/5ff8d4f1d1a2b9f947c76cb0859a1bcccc65b2094fb24a26aaef048927k7bl2" Jan 28 18:50:40 crc kubenswrapper[4721]: I0128 18:50:40.668363 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xwwnr\" (UniqueName: \"kubernetes.io/projected/ab608a64-70fd-498e-9aa6-d2dd87a017b9-kube-api-access-xwwnr\") pod \"5ff8d4f1d1a2b9f947c76cb0859a1bcccc65b2094fb24a26aaef048927k7bl2\" (UID: \"ab608a64-70fd-498e-9aa6-d2dd87a017b9\") " pod="openstack-operators/5ff8d4f1d1a2b9f947c76cb0859a1bcccc65b2094fb24a26aaef048927k7bl2" Jan 28 18:50:40 crc kubenswrapper[4721]: I0128 18:50:40.669284 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/ab608a64-70fd-498e-9aa6-d2dd87a017b9-bundle\") pod \"5ff8d4f1d1a2b9f947c76cb0859a1bcccc65b2094fb24a26aaef048927k7bl2\" (UID: \"ab608a64-70fd-498e-9aa6-d2dd87a017b9\") " pod="openstack-operators/5ff8d4f1d1a2b9f947c76cb0859a1bcccc65b2094fb24a26aaef048927k7bl2" Jan 28 18:50:40 crc kubenswrapper[4721]: I0128 18:50:40.669318 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ab608a64-70fd-498e-9aa6-d2dd87a017b9-util\") pod \"5ff8d4f1d1a2b9f947c76cb0859a1bcccc65b2094fb24a26aaef048927k7bl2\" (UID: \"ab608a64-70fd-498e-9aa6-d2dd87a017b9\") " pod="openstack-operators/5ff8d4f1d1a2b9f947c76cb0859a1bcccc65b2094fb24a26aaef048927k7bl2" Jan 28 18:50:40 crc kubenswrapper[4721]: I0128 18:50:40.702895 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xwwnr\" (UniqueName: \"kubernetes.io/projected/ab608a64-70fd-498e-9aa6-d2dd87a017b9-kube-api-access-xwwnr\") pod \"5ff8d4f1d1a2b9f947c76cb0859a1bcccc65b2094fb24a26aaef048927k7bl2\" (UID: \"ab608a64-70fd-498e-9aa6-d2dd87a017b9\") " pod="openstack-operators/5ff8d4f1d1a2b9f947c76cb0859a1bcccc65b2094fb24a26aaef048927k7bl2" Jan 28 18:50:40 crc kubenswrapper[4721]: I0128 18:50:40.786386 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/5ff8d4f1d1a2b9f947c76cb0859a1bcccc65b2094fb24a26aaef048927k7bl2" Jan 28 18:50:41 crc kubenswrapper[4721]: I0128 18:50:41.268962 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/5ff8d4f1d1a2b9f947c76cb0859a1bcccc65b2094fb24a26aaef048927k7bl2"] Jan 28 18:50:41 crc kubenswrapper[4721]: I0128 18:50:41.980947 4721 generic.go:334] "Generic (PLEG): container finished" podID="ab608a64-70fd-498e-9aa6-d2dd87a017b9" containerID="114564bee845f897e52d435413e5f1d96212b80130436efc7be163d4bac46611" exitCode=0 Jan 28 18:50:41 crc kubenswrapper[4721]: I0128 18:50:41.981038 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/5ff8d4f1d1a2b9f947c76cb0859a1bcccc65b2094fb24a26aaef048927k7bl2" event={"ID":"ab608a64-70fd-498e-9aa6-d2dd87a017b9","Type":"ContainerDied","Data":"114564bee845f897e52d435413e5f1d96212b80130436efc7be163d4bac46611"} Jan 28 18:50:41 crc kubenswrapper[4721]: I0128 18:50:41.981337 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/5ff8d4f1d1a2b9f947c76cb0859a1bcccc65b2094fb24a26aaef048927k7bl2" event={"ID":"ab608a64-70fd-498e-9aa6-d2dd87a017b9","Type":"ContainerStarted","Data":"8ebcdc197294bbc9260fb7fef9ac2ac7317a030d6ac713079862aaa6bd183475"} Jan 28 18:50:41 crc kubenswrapper[4721]: I0128 18:50:41.982842 4721 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 28 18:50:42 crc kubenswrapper[4721]: I0128 18:50:42.989533 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/5ff8d4f1d1a2b9f947c76cb0859a1bcccc65b2094fb24a26aaef048927k7bl2" event={"ID":"ab608a64-70fd-498e-9aa6-d2dd87a017b9","Type":"ContainerStarted","Data":"e4f2c20a64e4c0e1c2fd5dbedec4efff48b1f05ad739b8093234fbd80a831c1d"} Jan 28 18:50:43 crc kubenswrapper[4721]: I0128 18:50:43.999774 4721 generic.go:334] "Generic (PLEG): container finished" podID="ab608a64-70fd-498e-9aa6-d2dd87a017b9" containerID="e4f2c20a64e4c0e1c2fd5dbedec4efff48b1f05ad739b8093234fbd80a831c1d" exitCode=0 Jan 28 18:50:43 crc kubenswrapper[4721]: I0128 18:50:43.999817 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/5ff8d4f1d1a2b9f947c76cb0859a1bcccc65b2094fb24a26aaef048927k7bl2" event={"ID":"ab608a64-70fd-498e-9aa6-d2dd87a017b9","Type":"ContainerDied","Data":"e4f2c20a64e4c0e1c2fd5dbedec4efff48b1f05ad739b8093234fbd80a831c1d"} Jan 28 18:50:45 crc kubenswrapper[4721]: I0128 18:50:45.009232 4721 generic.go:334] "Generic (PLEG): container finished" podID="ab608a64-70fd-498e-9aa6-d2dd87a017b9" containerID="de5f2b595015acd9ef4def337ce1ed254032a8520f0357535d3b4e8cb237ab29" exitCode=0 Jan 28 18:50:45 crc kubenswrapper[4721]: I0128 18:50:45.009293 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/5ff8d4f1d1a2b9f947c76cb0859a1bcccc65b2094fb24a26aaef048927k7bl2" event={"ID":"ab608a64-70fd-498e-9aa6-d2dd87a017b9","Type":"ContainerDied","Data":"de5f2b595015acd9ef4def337ce1ed254032a8520f0357535d3b4e8cb237ab29"} Jan 28 18:50:46 crc kubenswrapper[4721]: I0128 18:50:46.309839 4721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/5ff8d4f1d1a2b9f947c76cb0859a1bcccc65b2094fb24a26aaef048927k7bl2" Jan 28 18:50:46 crc kubenswrapper[4721]: I0128 18:50:46.451001 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xwwnr\" (UniqueName: \"kubernetes.io/projected/ab608a64-70fd-498e-9aa6-d2dd87a017b9-kube-api-access-xwwnr\") pod \"ab608a64-70fd-498e-9aa6-d2dd87a017b9\" (UID: \"ab608a64-70fd-498e-9aa6-d2dd87a017b9\") " Jan 28 18:50:46 crc kubenswrapper[4721]: I0128 18:50:46.451248 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ab608a64-70fd-498e-9aa6-d2dd87a017b9-bundle\") pod \"ab608a64-70fd-498e-9aa6-d2dd87a017b9\" (UID: \"ab608a64-70fd-498e-9aa6-d2dd87a017b9\") " Jan 28 18:50:46 crc kubenswrapper[4721]: I0128 18:50:46.451308 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ab608a64-70fd-498e-9aa6-d2dd87a017b9-util\") pod \"ab608a64-70fd-498e-9aa6-d2dd87a017b9\" (UID: \"ab608a64-70fd-498e-9aa6-d2dd87a017b9\") " Jan 28 18:50:46 crc kubenswrapper[4721]: I0128 18:50:46.451998 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ab608a64-70fd-498e-9aa6-d2dd87a017b9-bundle" (OuterVolumeSpecName: "bundle") pod "ab608a64-70fd-498e-9aa6-d2dd87a017b9" (UID: "ab608a64-70fd-498e-9aa6-d2dd87a017b9"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 18:50:46 crc kubenswrapper[4721]: I0128 18:50:46.457995 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ab608a64-70fd-498e-9aa6-d2dd87a017b9-kube-api-access-xwwnr" (OuterVolumeSpecName: "kube-api-access-xwwnr") pod "ab608a64-70fd-498e-9aa6-d2dd87a017b9" (UID: "ab608a64-70fd-498e-9aa6-d2dd87a017b9"). InnerVolumeSpecName "kube-api-access-xwwnr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:50:46 crc kubenswrapper[4721]: I0128 18:50:46.465628 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ab608a64-70fd-498e-9aa6-d2dd87a017b9-util" (OuterVolumeSpecName: "util") pod "ab608a64-70fd-498e-9aa6-d2dd87a017b9" (UID: "ab608a64-70fd-498e-9aa6-d2dd87a017b9"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 18:50:46 crc kubenswrapper[4721]: I0128 18:50:46.553539 4721 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ab608a64-70fd-498e-9aa6-d2dd87a017b9-util\") on node \"crc\" DevicePath \"\"" Jan 28 18:50:46 crc kubenswrapper[4721]: I0128 18:50:46.553597 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xwwnr\" (UniqueName: \"kubernetes.io/projected/ab608a64-70fd-498e-9aa6-d2dd87a017b9-kube-api-access-xwwnr\") on node \"crc\" DevicePath \"\"" Jan 28 18:50:46 crc kubenswrapper[4721]: I0128 18:50:46.553612 4721 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ab608a64-70fd-498e-9aa6-d2dd87a017b9-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 18:50:47 crc kubenswrapper[4721]: I0128 18:50:47.025528 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/5ff8d4f1d1a2b9f947c76cb0859a1bcccc65b2094fb24a26aaef048927k7bl2" event={"ID":"ab608a64-70fd-498e-9aa6-d2dd87a017b9","Type":"ContainerDied","Data":"8ebcdc197294bbc9260fb7fef9ac2ac7317a030d6ac713079862aaa6bd183475"} Jan 28 18:50:47 crc kubenswrapper[4721]: I0128 18:50:47.025569 4721 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8ebcdc197294bbc9260fb7fef9ac2ac7317a030d6ac713079862aaa6bd183475" Jan 28 18:50:47 crc kubenswrapper[4721]: I0128 18:50:47.025587 4721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/5ff8d4f1d1a2b9f947c76cb0859a1bcccc65b2094fb24a26aaef048927k7bl2" Jan 28 18:50:52 crc kubenswrapper[4721]: I0128 18:50:52.625505 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-init-858cbdb9cd-v7bpd"] Jan 28 18:50:52 crc kubenswrapper[4721]: E0128 18:50:52.626327 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ab608a64-70fd-498e-9aa6-d2dd87a017b9" containerName="util" Jan 28 18:50:52 crc kubenswrapper[4721]: I0128 18:50:52.626340 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="ab608a64-70fd-498e-9aa6-d2dd87a017b9" containerName="util" Jan 28 18:50:52 crc kubenswrapper[4721]: E0128 18:50:52.626351 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ab608a64-70fd-498e-9aa6-d2dd87a017b9" containerName="extract" Jan 28 18:50:52 crc kubenswrapper[4721]: I0128 18:50:52.626357 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="ab608a64-70fd-498e-9aa6-d2dd87a017b9" containerName="extract" Jan 28 18:50:52 crc kubenswrapper[4721]: E0128 18:50:52.626375 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ab608a64-70fd-498e-9aa6-d2dd87a017b9" containerName="pull" Jan 28 18:50:52 crc kubenswrapper[4721]: I0128 18:50:52.626381 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="ab608a64-70fd-498e-9aa6-d2dd87a017b9" containerName="pull" Jan 28 18:50:52 crc kubenswrapper[4721]: I0128 18:50:52.626493 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="ab608a64-70fd-498e-9aa6-d2dd87a017b9" containerName="extract" Jan 28 18:50:52 crc kubenswrapper[4721]: I0128 18:50:52.626951 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-858cbdb9cd-v7bpd" Jan 28 18:50:52 crc kubenswrapper[4721]: I0128 18:50:52.629734 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-init-dockercfg-jd28p" Jan 28 18:50:52 crc kubenswrapper[4721]: I0128 18:50:52.719111 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-858cbdb9cd-v7bpd"] Jan 28 18:50:52 crc kubenswrapper[4721]: I0128 18:50:52.747573 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t6h5s\" (UniqueName: \"kubernetes.io/projected/d2642d34-9e91-460a-a889-42776f2201cc-kube-api-access-t6h5s\") pod \"openstack-operator-controller-init-858cbdb9cd-v7bpd\" (UID: \"d2642d34-9e91-460a-a889-42776f2201cc\") " pod="openstack-operators/openstack-operator-controller-init-858cbdb9cd-v7bpd" Jan 28 18:50:52 crc kubenswrapper[4721]: I0128 18:50:52.848767 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t6h5s\" (UniqueName: \"kubernetes.io/projected/d2642d34-9e91-460a-a889-42776f2201cc-kube-api-access-t6h5s\") pod \"openstack-operator-controller-init-858cbdb9cd-v7bpd\" (UID: \"d2642d34-9e91-460a-a889-42776f2201cc\") " pod="openstack-operators/openstack-operator-controller-init-858cbdb9cd-v7bpd" Jan 28 18:50:52 crc kubenswrapper[4721]: I0128 18:50:52.888608 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t6h5s\" (UniqueName: \"kubernetes.io/projected/d2642d34-9e91-460a-a889-42776f2201cc-kube-api-access-t6h5s\") pod \"openstack-operator-controller-init-858cbdb9cd-v7bpd\" (UID: \"d2642d34-9e91-460a-a889-42776f2201cc\") " pod="openstack-operators/openstack-operator-controller-init-858cbdb9cd-v7bpd" Jan 28 18:50:52 crc kubenswrapper[4721]: I0128 18:50:52.947330 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-858cbdb9cd-v7bpd" Jan 28 18:50:53 crc kubenswrapper[4721]: I0128 18:50:53.442274 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-858cbdb9cd-v7bpd"] Jan 28 18:50:54 crc kubenswrapper[4721]: I0128 18:50:54.085252 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-858cbdb9cd-v7bpd" event={"ID":"d2642d34-9e91-460a-a889-42776f2201cc","Type":"ContainerStarted","Data":"6c21d6d4fef99cf7a8c883779daaf26210b01303a485ccb9b27828fcf352b334"} Jan 28 18:50:59 crc kubenswrapper[4721]: I0128 18:50:59.126935 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-858cbdb9cd-v7bpd" event={"ID":"d2642d34-9e91-460a-a889-42776f2201cc","Type":"ContainerStarted","Data":"86cfb6ecfeb8baf8799971167c6b8343a8887aabdfd7499534d063bcfe6bb117"} Jan 28 18:50:59 crc kubenswrapper[4721]: I0128 18:50:59.127621 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-init-858cbdb9cd-v7bpd" Jan 28 18:50:59 crc kubenswrapper[4721]: I0128 18:50:59.159564 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-init-858cbdb9cd-v7bpd" podStartSLOduration=1.774625167 podStartE2EDuration="7.159545959s" podCreationTimestamp="2026-01-28 18:50:52 +0000 UTC" firstStartedPulling="2026-01-28 18:50:53.446137223 +0000 UTC m=+1019.171442783" lastFinishedPulling="2026-01-28 18:50:58.831058015 +0000 UTC m=+1024.556363575" observedRunningTime="2026-01-28 18:50:59.156797123 +0000 UTC m=+1024.882102693" watchObservedRunningTime="2026-01-28 18:50:59.159545959 +0000 UTC m=+1024.884851519" Jan 28 18:51:12 crc kubenswrapper[4721]: I0128 18:51:12.950022 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-init-858cbdb9cd-v7bpd" Jan 28 18:51:31 crc kubenswrapper[4721]: I0128 18:51:31.508533 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-6bc7f4f4cf-pv6ph"] Jan 28 18:51:31 crc kubenswrapper[4721]: I0128 18:51:31.509856 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-6bc7f4f4cf-pv6ph" Jan 28 18:51:31 crc kubenswrapper[4721]: I0128 18:51:31.515090 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-8cvbd" Jan 28 18:51:31 crc kubenswrapper[4721]: I0128 18:51:31.524380 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-f6487bd57-c9pmg"] Jan 28 18:51:31 crc kubenswrapper[4721]: I0128 18:51:31.526920 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-f6487bd57-c9pmg" Jan 28 18:51:31 crc kubenswrapper[4721]: I0128 18:51:31.541858 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-6bc7f4f4cf-pv6ph"] Jan 28 18:51:31 crc kubenswrapper[4721]: I0128 18:51:31.547753 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-66dfbd6f5d-dbf9z"] Jan 28 18:51:31 crc kubenswrapper[4721]: I0128 18:51:31.548756 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-mtjjv" Jan 28 18:51:31 crc kubenswrapper[4721]: I0128 18:51:31.552306 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-66dfbd6f5d-dbf9z" Jan 28 18:51:31 crc kubenswrapper[4721]: I0128 18:51:31.555688 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-c794c" Jan 28 18:51:31 crc kubenswrapper[4721]: I0128 18:51:31.568338 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-f6487bd57-c9pmg"] Jan 28 18:51:31 crc kubenswrapper[4721]: I0128 18:51:31.593051 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-66dfbd6f5d-dbf9z"] Jan 28 18:51:31 crc kubenswrapper[4721]: I0128 18:51:31.599613 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-6db5dbd896-7brt7"] Jan 28 18:51:31 crc kubenswrapper[4721]: I0128 18:51:31.600801 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-6db5dbd896-7brt7" Jan 28 18:51:31 crc kubenswrapper[4721]: I0128 18:51:31.603200 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-rhjp2" Jan 28 18:51:31 crc kubenswrapper[4721]: I0128 18:51:31.618128 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-6db5dbd896-7brt7"] Jan 28 18:51:31 crc kubenswrapper[4721]: I0128 18:51:31.636866 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-587c6bfdcf-r46mm"] Jan 28 18:51:31 crc kubenswrapper[4721]: I0128 18:51:31.639998 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-587c6bfdcf-r46mm" Jan 28 18:51:31 crc kubenswrapper[4721]: I0128 18:51:31.641993 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-5t8fb" Jan 28 18:51:31 crc kubenswrapper[4721]: I0128 18:51:31.666141 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-587c6bfdcf-r46mm"] Jan 28 18:51:31 crc kubenswrapper[4721]: I0128 18:51:31.666937 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sjvc6\" (UniqueName: \"kubernetes.io/projected/d258bf47-a441-49ad-a3ad-d5c04c615c9c-kube-api-access-sjvc6\") pod \"cinder-operator-controller-manager-f6487bd57-c9pmg\" (UID: \"d258bf47-a441-49ad-a3ad-d5c04c615c9c\") " pod="openstack-operators/cinder-operator-controller-manager-f6487bd57-c9pmg" Jan 28 18:51:31 crc kubenswrapper[4721]: I0128 18:51:31.666980 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4tq2q\" (UniqueName: \"kubernetes.io/projected/5f5dbe82-6a18-47da-98e6-00d10a32d1eb-kube-api-access-4tq2q\") pod \"designate-operator-controller-manager-66dfbd6f5d-dbf9z\" (UID: \"5f5dbe82-6a18-47da-98e6-00d10a32d1eb\") " pod="openstack-operators/designate-operator-controller-manager-66dfbd6f5d-dbf9z" Jan 28 18:51:31 crc kubenswrapper[4721]: I0128 18:51:31.667070 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lf88l\" (UniqueName: \"kubernetes.io/projected/99e08199-2cc8-4f41-8310-f63c0a021a98-kube-api-access-lf88l\") pod \"barbican-operator-controller-manager-6bc7f4f4cf-pv6ph\" (UID: \"99e08199-2cc8-4f41-8310-f63c0a021a98\") " pod="openstack-operators/barbican-operator-controller-manager-6bc7f4f4cf-pv6ph" Jan 28 18:51:31 crc kubenswrapper[4721]: I0128 18:51:31.688434 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5fb775575f-6m2fr"] Jan 28 18:51:31 crc kubenswrapper[4721]: I0128 18:51:31.689948 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-6m2fr" Jan 28 18:51:31 crc kubenswrapper[4721]: I0128 18:51:31.697859 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-wprf4" Jan 28 18:51:31 crc kubenswrapper[4721]: I0128 18:51:31.723952 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-79955696d6-fd75h"] Jan 28 18:51:31 crc kubenswrapper[4721]: I0128 18:51:31.725405 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-79955696d6-fd75h" Jan 28 18:51:31 crc kubenswrapper[4721]: I0128 18:51:31.728855 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-9gf5q" Jan 28 18:51:31 crc kubenswrapper[4721]: I0128 18:51:31.728856 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert" Jan 28 18:51:31 crc kubenswrapper[4721]: I0128 18:51:31.739847 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5fb775575f-6m2fr"] Jan 28 18:51:31 crc kubenswrapper[4721]: I0128 18:51:31.772506 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-79955696d6-fd75h"] Jan 28 18:51:31 crc kubenswrapper[4721]: I0128 18:51:31.773559 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lf88l\" (UniqueName: \"kubernetes.io/projected/99e08199-2cc8-4f41-8310-f63c0a021a98-kube-api-access-lf88l\") pod \"barbican-operator-controller-manager-6bc7f4f4cf-pv6ph\" (UID: \"99e08199-2cc8-4f41-8310-f63c0a021a98\") " pod="openstack-operators/barbican-operator-controller-manager-6bc7f4f4cf-pv6ph" Jan 28 18:51:31 crc kubenswrapper[4721]: I0128 18:51:31.773633 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xhnkt\" (UniqueName: \"kubernetes.io/projected/6ec8e4f3-a711-43af-81da-91be5695e927-kube-api-access-xhnkt\") pod \"heat-operator-controller-manager-587c6bfdcf-r46mm\" (UID: \"6ec8e4f3-a711-43af-81da-91be5695e927\") " pod="openstack-operators/heat-operator-controller-manager-587c6bfdcf-r46mm" Jan 28 18:51:31 crc kubenswrapper[4721]: I0128 18:51:31.773697 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sjvc6\" (UniqueName: \"kubernetes.io/projected/d258bf47-a441-49ad-a3ad-d5c04c615c9c-kube-api-access-sjvc6\") pod \"cinder-operator-controller-manager-f6487bd57-c9pmg\" (UID: \"d258bf47-a441-49ad-a3ad-d5c04c615c9c\") " pod="openstack-operators/cinder-operator-controller-manager-f6487bd57-c9pmg" Jan 28 18:51:31 crc kubenswrapper[4721]: I0128 18:51:31.773732 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4tq2q\" (UniqueName: \"kubernetes.io/projected/5f5dbe82-6a18-47da-98e6-00d10a32d1eb-kube-api-access-4tq2q\") pod \"designate-operator-controller-manager-66dfbd6f5d-dbf9z\" (UID: \"5f5dbe82-6a18-47da-98e6-00d10a32d1eb\") " pod="openstack-operators/designate-operator-controller-manager-66dfbd6f5d-dbf9z" Jan 28 18:51:31 crc kubenswrapper[4721]: I0128 18:51:31.773760 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rjv29\" (UniqueName: \"kubernetes.io/projected/6e4d4bd0-d6ac-4268-bc08-86d74adfc33b-kube-api-access-rjv29\") pod \"glance-operator-controller-manager-6db5dbd896-7brt7\" (UID: \"6e4d4bd0-d6ac-4268-bc08-86d74adfc33b\") " pod="openstack-operators/glance-operator-controller-manager-6db5dbd896-7brt7" Jan 28 18:51:31 crc kubenswrapper[4721]: I0128 18:51:31.787045 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ironic-operator-controller-manager-958664b5-wrzbl"] Jan 28 18:51:31 crc kubenswrapper[4721]: I0128 18:51:31.788022 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-958664b5-wrzbl" Jan 28 18:51:31 crc kubenswrapper[4721]: I0128 18:51:31.791419 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-t84qf" Jan 28 18:51:31 crc kubenswrapper[4721]: I0128 18:51:31.802244 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-6978b79747-vc75z"] Jan 28 18:51:31 crc kubenswrapper[4721]: I0128 18:51:31.803403 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-6978b79747-vc75z" Jan 28 18:51:31 crc kubenswrapper[4721]: I0128 18:51:31.805880 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-q572x" Jan 28 18:51:31 crc kubenswrapper[4721]: I0128 18:51:31.811584 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4tq2q\" (UniqueName: \"kubernetes.io/projected/5f5dbe82-6a18-47da-98e6-00d10a32d1eb-kube-api-access-4tq2q\") pod \"designate-operator-controller-manager-66dfbd6f5d-dbf9z\" (UID: \"5f5dbe82-6a18-47da-98e6-00d10a32d1eb\") " pod="openstack-operators/designate-operator-controller-manager-66dfbd6f5d-dbf9z" Jan 28 18:51:31 crc kubenswrapper[4721]: I0128 18:51:31.812710 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-958664b5-wrzbl"] Jan 28 18:51:31 crc kubenswrapper[4721]: I0128 18:51:31.817897 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sjvc6\" (UniqueName: \"kubernetes.io/projected/d258bf47-a441-49ad-a3ad-d5c04c615c9c-kube-api-access-sjvc6\") pod \"cinder-operator-controller-manager-f6487bd57-c9pmg\" (UID: \"d258bf47-a441-49ad-a3ad-d5c04c615c9c\") " pod="openstack-operators/cinder-operator-controller-manager-f6487bd57-c9pmg" Jan 28 18:51:31 crc kubenswrapper[4721]: I0128 18:51:31.824279 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-6978b79747-vc75z"] Jan 28 18:51:31 crc kubenswrapper[4721]: I0128 18:51:31.828888 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lf88l\" (UniqueName: \"kubernetes.io/projected/99e08199-2cc8-4f41-8310-f63c0a021a98-kube-api-access-lf88l\") pod \"barbican-operator-controller-manager-6bc7f4f4cf-pv6ph\" (UID: \"99e08199-2cc8-4f41-8310-f63c0a021a98\") " pod="openstack-operators/barbican-operator-controller-manager-6bc7f4f4cf-pv6ph" Jan 28 18:51:31 crc kubenswrapper[4721]: I0128 18:51:31.839917 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-controller-manager-765668569f-mjxvn"] Jan 28 18:51:31 crc kubenswrapper[4721]: I0128 18:51:31.841445 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-765668569f-mjxvn" Jan 28 18:51:31 crc kubenswrapper[4721]: I0128 18:51:31.844294 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-6bc7f4f4cf-pv6ph" Jan 28 18:51:31 crc kubenswrapper[4721]: I0128 18:51:31.858601 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-f6487bd57-c9pmg" Jan 28 18:51:31 crc kubenswrapper[4721]: I0128 18:51:31.865934 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-q8fhv" Jan 28 18:51:31 crc kubenswrapper[4721]: I0128 18:51:31.876372 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-765668569f-mjxvn"] Jan 28 18:51:31 crc kubenswrapper[4721]: I0128 18:51:31.877112 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rjv29\" (UniqueName: \"kubernetes.io/projected/6e4d4bd0-d6ac-4268-bc08-86d74adfc33b-kube-api-access-rjv29\") pod \"glance-operator-controller-manager-6db5dbd896-7brt7\" (UID: \"6e4d4bd0-d6ac-4268-bc08-86d74adfc33b\") " pod="openstack-operators/glance-operator-controller-manager-6db5dbd896-7brt7" Jan 28 18:51:31 crc kubenswrapper[4721]: I0128 18:51:31.877199 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6fg9l\" (UniqueName: \"kubernetes.io/projected/66d34dd5-6c67-40ec-8fc8-16320a5aef1d-kube-api-access-6fg9l\") pod \"infra-operator-controller-manager-79955696d6-fd75h\" (UID: \"66d34dd5-6c67-40ec-8fc8-16320a5aef1d\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-fd75h" Jan 28 18:51:31 crc kubenswrapper[4721]: I0128 18:51:31.877273 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lxnhg\" (UniqueName: \"kubernetes.io/projected/18c18118-f643-4590-9e07-87bffdb4195b-kube-api-access-lxnhg\") pod \"horizon-operator-controller-manager-5fb775575f-6m2fr\" (UID: \"18c18118-f643-4590-9e07-87bffdb4195b\") " pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-6m2fr" Jan 28 18:51:31 crc kubenswrapper[4721]: I0128 18:51:31.877355 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7b8lh\" (UniqueName: \"kubernetes.io/projected/7650ad3f-87f7-4c9a-b795-678ebc7edc7d-kube-api-access-7b8lh\") pod \"ironic-operator-controller-manager-958664b5-wrzbl\" (UID: \"7650ad3f-87f7-4c9a-b795-678ebc7edc7d\") " pod="openstack-operators/ironic-operator-controller-manager-958664b5-wrzbl" Jan 28 18:51:31 crc kubenswrapper[4721]: I0128 18:51:31.877383 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/66d34dd5-6c67-40ec-8fc8-16320a5aef1d-cert\") pod \"infra-operator-controller-manager-79955696d6-fd75h\" (UID: \"66d34dd5-6c67-40ec-8fc8-16320a5aef1d\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-fd75h" Jan 28 18:51:31 crc kubenswrapper[4721]: I0128 18:51:31.877412 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xhnkt\" (UniqueName: \"kubernetes.io/projected/6ec8e4f3-a711-43af-81da-91be5695e927-kube-api-access-xhnkt\") pod \"heat-operator-controller-manager-587c6bfdcf-r46mm\" (UID: \"6ec8e4f3-a711-43af-81da-91be5695e927\") " pod="openstack-operators/heat-operator-controller-manager-587c6bfdcf-r46mm" Jan 28 18:51:31 crc kubenswrapper[4721]: I0128 18:51:31.887216 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-67bf948998-pt757"] Jan 28 18:51:31 crc kubenswrapper[4721]: I0128 18:51:31.887306 4721 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-66dfbd6f5d-dbf9z" Jan 28 18:51:31 crc kubenswrapper[4721]: I0128 18:51:31.888997 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-pt757" Jan 28 18:51:31 crc kubenswrapper[4721]: I0128 18:51:31.899738 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-kzpxh" Jan 28 18:51:31 crc kubenswrapper[4721]: I0128 18:51:31.938697 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rjv29\" (UniqueName: \"kubernetes.io/projected/6e4d4bd0-d6ac-4268-bc08-86d74adfc33b-kube-api-access-rjv29\") pod \"glance-operator-controller-manager-6db5dbd896-7brt7\" (UID: \"6e4d4bd0-d6ac-4268-bc08-86d74adfc33b\") " pod="openstack-operators/glance-operator-controller-manager-6db5dbd896-7brt7" Jan 28 18:51:31 crc kubenswrapper[4721]: I0128 18:51:31.961433 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/neutron-operator-controller-manager-694c5bfc85-hv7r4"] Jan 28 18:51:31 crc kubenswrapper[4721]: I0128 18:51:31.962587 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-694c5bfc85-hv7r4" Jan 28 18:51:32 crc kubenswrapper[4721]: I0128 18:51:32.003072 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xhnkt\" (UniqueName: \"kubernetes.io/projected/6ec8e4f3-a711-43af-81da-91be5695e927-kube-api-access-xhnkt\") pod \"heat-operator-controller-manager-587c6bfdcf-r46mm\" (UID: \"6ec8e4f3-a711-43af-81da-91be5695e927\") " pod="openstack-operators/heat-operator-controller-manager-587c6bfdcf-r46mm" Jan 28 18:51:32 crc kubenswrapper[4721]: I0128 18:51:32.009146 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-dfrpg" Jan 28 18:51:32 crc kubenswrapper[4721]: I0128 18:51:32.032872 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lxnhg\" (UniqueName: \"kubernetes.io/projected/18c18118-f643-4590-9e07-87bffdb4195b-kube-api-access-lxnhg\") pod \"horizon-operator-controller-manager-5fb775575f-6m2fr\" (UID: \"18c18118-f643-4590-9e07-87bffdb4195b\") " pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-6m2fr" Jan 28 18:51:32 crc kubenswrapper[4721]: I0128 18:51:32.036403 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6mrvj\" (UniqueName: \"kubernetes.io/projected/f901f512-8af4-4e6c-abc8-0fd7d0f26ef3-kube-api-access-6mrvj\") pod \"mariadb-operator-controller-manager-67bf948998-pt757\" (UID: \"f901f512-8af4-4e6c-abc8-0fd7d0f26ef3\") " pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-pt757" Jan 28 18:51:32 crc kubenswrapper[4721]: I0128 18:51:32.036554 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-twkpk\" (UniqueName: \"kubernetes.io/projected/e8f6f9a2-7886-4896-baac-268e88869bb2-kube-api-access-twkpk\") pod \"keystone-operator-controller-manager-6978b79747-vc75z\" (UID: \"e8f6f9a2-7886-4896-baac-268e88869bb2\") " pod="openstack-operators/keystone-operator-controller-manager-6978b79747-vc75z" Jan 28 18:51:32 crc 
kubenswrapper[4721]: I0128 18:51:32.036801 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7b8lh\" (UniqueName: \"kubernetes.io/projected/7650ad3f-87f7-4c9a-b795-678ebc7edc7d-kube-api-access-7b8lh\") pod \"ironic-operator-controller-manager-958664b5-wrzbl\" (UID: \"7650ad3f-87f7-4c9a-b795-678ebc7edc7d\") " pod="openstack-operators/ironic-operator-controller-manager-958664b5-wrzbl" Jan 28 18:51:32 crc kubenswrapper[4721]: I0128 18:51:32.037479 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/66d34dd5-6c67-40ec-8fc8-16320a5aef1d-cert\") pod \"infra-operator-controller-manager-79955696d6-fd75h\" (UID: \"66d34dd5-6c67-40ec-8fc8-16320a5aef1d\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-fd75h" Jan 28 18:51:32 crc kubenswrapper[4721]: I0128 18:51:32.037686 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6fg9l\" (UniqueName: \"kubernetes.io/projected/66d34dd5-6c67-40ec-8fc8-16320a5aef1d-kube-api-access-6fg9l\") pod \"infra-operator-controller-manager-79955696d6-fd75h\" (UID: \"66d34dd5-6c67-40ec-8fc8-16320a5aef1d\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-fd75h" Jan 28 18:51:32 crc kubenswrapper[4721]: I0128 18:51:32.037825 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wklhv\" (UniqueName: \"kubernetes.io/projected/835d5df3-4ea1-40ce-9bad-325396bfd41f-kube-api-access-wklhv\") pod \"manila-operator-controller-manager-765668569f-mjxvn\" (UID: \"835d5df3-4ea1-40ce-9bad-325396bfd41f\") " pod="openstack-operators/manila-operator-controller-manager-765668569f-mjxvn" Jan 28 18:51:32 crc kubenswrapper[4721]: E0128 18:51:32.038211 4721 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 28 18:51:32 crc kubenswrapper[4721]: E0128 18:51:32.038366 4721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/66d34dd5-6c67-40ec-8fc8-16320a5aef1d-cert podName:66d34dd5-6c67-40ec-8fc8-16320a5aef1d nodeName:}" failed. No retries permitted until 2026-01-28 18:51:32.538344638 +0000 UTC m=+1058.263650198 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/66d34dd5-6c67-40ec-8fc8-16320a5aef1d-cert") pod "infra-operator-controller-manager-79955696d6-fd75h" (UID: "66d34dd5-6c67-40ec-8fc8-16320a5aef1d") : secret "infra-operator-webhook-server-cert" not found Jan 28 18:51:32 crc kubenswrapper[4721]: I0128 18:51:32.056790 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-67bf948998-pt757"] Jan 28 18:51:32 crc kubenswrapper[4721]: I0128 18:51:32.124375 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lxnhg\" (UniqueName: \"kubernetes.io/projected/18c18118-f643-4590-9e07-87bffdb4195b-kube-api-access-lxnhg\") pod \"horizon-operator-controller-manager-5fb775575f-6m2fr\" (UID: \"18c18118-f643-4590-9e07-87bffdb4195b\") " pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-6m2fr" Jan 28 18:51:32 crc kubenswrapper[4721]: I0128 18:51:32.134828 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6fg9l\" (UniqueName: \"kubernetes.io/projected/66d34dd5-6c67-40ec-8fc8-16320a5aef1d-kube-api-access-6fg9l\") pod \"infra-operator-controller-manager-79955696d6-fd75h\" (UID: \"66d34dd5-6c67-40ec-8fc8-16320a5aef1d\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-fd75h" Jan 28 18:51:32 crc kubenswrapper[4721]: I0128 18:51:32.140265 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6mrvj\" (UniqueName: \"kubernetes.io/projected/f901f512-8af4-4e6c-abc8-0fd7d0f26ef3-kube-api-access-6mrvj\") pod \"mariadb-operator-controller-manager-67bf948998-pt757\" (UID: \"f901f512-8af4-4e6c-abc8-0fd7d0f26ef3\") " pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-pt757" Jan 28 18:51:32 crc kubenswrapper[4721]: I0128 18:51:32.140323 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-twkpk\" (UniqueName: \"kubernetes.io/projected/e8f6f9a2-7886-4896-baac-268e88869bb2-kube-api-access-twkpk\") pod \"keystone-operator-controller-manager-6978b79747-vc75z\" (UID: \"e8f6f9a2-7886-4896-baac-268e88869bb2\") " pod="openstack-operators/keystone-operator-controller-manager-6978b79747-vc75z" Jan 28 18:51:32 crc kubenswrapper[4721]: I0128 18:51:32.140353 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qfhbk\" (UniqueName: \"kubernetes.io/projected/b102209d-5846-40f2-bb20-7022d18b9a28-kube-api-access-qfhbk\") pod \"neutron-operator-controller-manager-694c5bfc85-hv7r4\" (UID: \"b102209d-5846-40f2-bb20-7022d18b9a28\") " pod="openstack-operators/neutron-operator-controller-manager-694c5bfc85-hv7r4" Jan 28 18:51:32 crc kubenswrapper[4721]: I0128 18:51:32.140444 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wklhv\" (UniqueName: \"kubernetes.io/projected/835d5df3-4ea1-40ce-9bad-325396bfd41f-kube-api-access-wklhv\") pod \"manila-operator-controller-manager-765668569f-mjxvn\" (UID: \"835d5df3-4ea1-40ce-9bad-325396bfd41f\") " pod="openstack-operators/manila-operator-controller-manager-765668569f-mjxvn" Jan 28 18:51:32 crc kubenswrapper[4721]: I0128 18:51:32.152034 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7b8lh\" (UniqueName: \"kubernetes.io/projected/7650ad3f-87f7-4c9a-b795-678ebc7edc7d-kube-api-access-7b8lh\") pod 
\"ironic-operator-controller-manager-958664b5-wrzbl\" (UID: \"7650ad3f-87f7-4c9a-b795-678ebc7edc7d\") " pod="openstack-operators/ironic-operator-controller-manager-958664b5-wrzbl" Jan 28 18:51:32 crc kubenswrapper[4721]: I0128 18:51:32.165802 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-694c5bfc85-hv7r4"] Jan 28 18:51:32 crc kubenswrapper[4721]: I0128 18:51:32.188690 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-ddcbfd695-ghpgf"] Jan 28 18:51:32 crc kubenswrapper[4721]: I0128 18:51:32.194693 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-ddcbfd695-ghpgf" Jan 28 18:51:32 crc kubenswrapper[4721]: I0128 18:51:32.199952 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-twkpk\" (UniqueName: \"kubernetes.io/projected/e8f6f9a2-7886-4896-baac-268e88869bb2-kube-api-access-twkpk\") pod \"keystone-operator-controller-manager-6978b79747-vc75z\" (UID: \"e8f6f9a2-7886-4896-baac-268e88869bb2\") " pod="openstack-operators/keystone-operator-controller-manager-6978b79747-vc75z" Jan 28 18:51:32 crc kubenswrapper[4721]: I0128 18:51:32.200425 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-fw4f9" Jan 28 18:51:32 crc kubenswrapper[4721]: I0128 18:51:32.202588 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6mrvj\" (UniqueName: \"kubernetes.io/projected/f901f512-8af4-4e6c-abc8-0fd7d0f26ef3-kube-api-access-6mrvj\") pod \"mariadb-operator-controller-manager-67bf948998-pt757\" (UID: \"f901f512-8af4-4e6c-abc8-0fd7d0f26ef3\") " pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-pt757" Jan 28 18:51:32 crc kubenswrapper[4721]: I0128 18:51:32.223413 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/octavia-operator-controller-manager-5c765b4558-r996h"] Jan 28 18:51:32 crc kubenswrapper[4721]: I0128 18:51:32.224592 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wklhv\" (UniqueName: \"kubernetes.io/projected/835d5df3-4ea1-40ce-9bad-325396bfd41f-kube-api-access-wklhv\") pod \"manila-operator-controller-manager-765668569f-mjxvn\" (UID: \"835d5df3-4ea1-40ce-9bad-325396bfd41f\") " pod="openstack-operators/manila-operator-controller-manager-765668569f-mjxvn" Jan 28 18:51:32 crc kubenswrapper[4721]: I0128 18:51:32.224888 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-5c765b4558-r996h" Jan 28 18:51:32 crc kubenswrapper[4721]: I0128 18:51:32.226108 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-6db5dbd896-7brt7" Jan 28 18:51:32 crc kubenswrapper[4721]: I0128 18:51:32.230631 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-55z89" Jan 28 18:51:32 crc kubenswrapper[4721]: I0128 18:51:32.248196 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qfhbk\" (UniqueName: \"kubernetes.io/projected/b102209d-5846-40f2-bb20-7022d18b9a28-kube-api-access-qfhbk\") pod \"neutron-operator-controller-manager-694c5bfc85-hv7r4\" (UID: \"b102209d-5846-40f2-bb20-7022d18b9a28\") " pod="openstack-operators/neutron-operator-controller-manager-694c5bfc85-hv7r4" Jan 28 18:51:32 crc kubenswrapper[4721]: I0128 18:51:32.270017 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-587c6bfdcf-r46mm" Jan 28 18:51:32 crc kubenswrapper[4721]: I0128 18:51:32.272915 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-6978b79747-vc75z" Jan 28 18:51:32 crc kubenswrapper[4721]: I0128 18:51:32.323318 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-6m2fr" Jan 28 18:51:32 crc kubenswrapper[4721]: I0128 18:51:32.323833 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-765668569f-mjxvn" Jan 28 18:51:32 crc kubenswrapper[4721]: I0128 18:51:32.324643 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qfhbk\" (UniqueName: \"kubernetes.io/projected/b102209d-5846-40f2-bb20-7022d18b9a28-kube-api-access-qfhbk\") pod \"neutron-operator-controller-manager-694c5bfc85-hv7r4\" (UID: \"b102209d-5846-40f2-bb20-7022d18b9a28\") " pod="openstack-operators/neutron-operator-controller-manager-694c5bfc85-hv7r4" Jan 28 18:51:32 crc kubenswrapper[4721]: I0128 18:51:32.349659 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4j8fd\" (UniqueName: \"kubernetes.io/projected/8e4e395a-5b06-45ea-a2af-8a7a1180fc80-kube-api-access-4j8fd\") pod \"nova-operator-controller-manager-ddcbfd695-ghpgf\" (UID: \"8e4e395a-5b06-45ea-a2af-8a7a1180fc80\") " pod="openstack-operators/nova-operator-controller-manager-ddcbfd695-ghpgf" Jan 28 18:51:32 crc kubenswrapper[4721]: I0128 18:51:32.349748 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5hldh\" (UniqueName: \"kubernetes.io/projected/073e6433-4ca4-499a-8c82-0fda8211ecd3-kube-api-access-5hldh\") pod \"octavia-operator-controller-manager-5c765b4558-r996h\" (UID: \"073e6433-4ca4-499a-8c82-0fda8211ecd3\") " pod="openstack-operators/octavia-operator-controller-manager-5c765b4558-r996h" Jan 28 18:51:32 crc kubenswrapper[4721]: I0128 18:51:32.363614 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-ddcbfd695-ghpgf"] Jan 28 18:51:32 crc kubenswrapper[4721]: I0128 18:51:32.366988 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-pt757" Jan 28 18:51:32 crc kubenswrapper[4721]: I0128 18:51:32.394652 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-694c5bfc85-hv7r4" Jan 28 18:51:32 crc kubenswrapper[4721]: I0128 18:51:32.399282 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ovn-operator-controller-manager-788c46999f-js7f2"] Jan 28 18:51:32 crc kubenswrapper[4721]: I0128 18:51:32.400527 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-js7f2" Jan 28 18:51:32 crc kubenswrapper[4721]: I0128 18:51:32.403591 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-958664b5-wrzbl" Jan 28 18:51:32 crc kubenswrapper[4721]: I0128 18:51:32.406876 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-kmnp6" Jan 28 18:51:32 crc kubenswrapper[4721]: I0128 18:51:32.416115 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/placement-operator-controller-manager-5b964cf4cd-gdb9m"] Jan 28 18:51:32 crc kubenswrapper[4721]: I0128 18:51:32.418954 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-gdb9m" Jan 28 18:51:32 crc kubenswrapper[4721]: I0128 18:51:32.429036 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-788c46999f-js7f2"] Jan 28 18:51:32 crc kubenswrapper[4721]: I0128 18:51:32.434916 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-w25gx" Jan 28 18:51:32 crc kubenswrapper[4721]: I0128 18:51:32.438634 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-5c765b4558-r996h"] Jan 28 18:51:32 crc kubenswrapper[4721]: I0128 18:51:32.450758 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-5b964cf4cd-gdb9m"] Jan 28 18:51:32 crc kubenswrapper[4721]: I0128 18:51:32.452506 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4j8fd\" (UniqueName: \"kubernetes.io/projected/8e4e395a-5b06-45ea-a2af-8a7a1180fc80-kube-api-access-4j8fd\") pod \"nova-operator-controller-manager-ddcbfd695-ghpgf\" (UID: \"8e4e395a-5b06-45ea-a2af-8a7a1180fc80\") " pod="openstack-operators/nova-operator-controller-manager-ddcbfd695-ghpgf" Jan 28 18:51:32 crc kubenswrapper[4721]: I0128 18:51:32.452553 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5hldh\" (UniqueName: \"kubernetes.io/projected/073e6433-4ca4-499a-8c82-0fda8211ecd3-kube-api-access-5hldh\") pod \"octavia-operator-controller-manager-5c765b4558-r996h\" (UID: \"073e6433-4ca4-499a-8c82-0fda8211ecd3\") " pod="openstack-operators/octavia-operator-controller-manager-5c765b4558-r996h" Jan 28 18:51:32 crc kubenswrapper[4721]: I0128 18:51:32.487234 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-68fc8c869-9sqtl"] Jan 28 18:51:32 crc kubenswrapper[4721]: I0128 18:51:32.488927 4721 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-9sqtl" Jan 28 18:51:32 crc kubenswrapper[4721]: I0128 18:51:32.499782 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4j8fd\" (UniqueName: \"kubernetes.io/projected/8e4e395a-5b06-45ea-a2af-8a7a1180fc80-kube-api-access-4j8fd\") pod \"nova-operator-controller-manager-ddcbfd695-ghpgf\" (UID: \"8e4e395a-5b06-45ea-a2af-8a7a1180fc80\") " pod="openstack-operators/nova-operator-controller-manager-ddcbfd695-ghpgf" Jan 28 18:51:32 crc kubenswrapper[4721]: I0128 18:51:32.503776 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dxmjl8"] Jan 28 18:51:32 crc kubenswrapper[4721]: I0128 18:51:32.504837 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dxmjl8" Jan 28 18:51:32 crc kubenswrapper[4721]: I0128 18:51:32.505070 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5hldh\" (UniqueName: \"kubernetes.io/projected/073e6433-4ca4-499a-8c82-0fda8211ecd3-kube-api-access-5hldh\") pod \"octavia-operator-controller-manager-5c765b4558-r996h\" (UID: \"073e6433-4ca4-499a-8c82-0fda8211ecd3\") " pod="openstack-operators/octavia-operator-controller-manager-5c765b4558-r996h" Jan 28 18:51:32 crc kubenswrapper[4721]: I0128 18:51:32.507061 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-fkbl2" Jan 28 18:51:32 crc kubenswrapper[4721]: I0128 18:51:32.507348 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-bdcsn" Jan 28 18:51:32 crc kubenswrapper[4721]: I0128 18:51:32.507572 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert" Jan 28 18:51:32 crc kubenswrapper[4721]: I0128 18:51:32.522429 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-68fc8c869-9sqtl"] Jan 28 18:51:32 crc kubenswrapper[4721]: I0128 18:51:32.546054 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-ddcbfd695-ghpgf" Jan 28 18:51:32 crc kubenswrapper[4721]: I0128 18:51:32.549198 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dxmjl8"] Jan 28 18:51:32 crc kubenswrapper[4721]: I0128 18:51:32.558239 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jzgd8\" (UniqueName: \"kubernetes.io/projected/2cea4626-d7bc-4166-9c63-8aa4e6358bd3-kube-api-access-jzgd8\") pod \"ovn-operator-controller-manager-788c46999f-js7f2\" (UID: \"2cea4626-d7bc-4166-9c63-8aa4e6358bd3\") " pod="openstack-operators/ovn-operator-controller-manager-788c46999f-js7f2" Jan 28 18:51:32 crc kubenswrapper[4721]: I0128 18:51:32.558338 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5fh7x\" (UniqueName: \"kubernetes.io/projected/9c28be52-26d0-4dd5-a3ca-ba3d9888dae8-kube-api-access-5fh7x\") pod \"placement-operator-controller-manager-5b964cf4cd-gdb9m\" (UID: \"9c28be52-26d0-4dd5-a3ca-ba3d9888dae8\") " pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-gdb9m" Jan 28 18:51:32 crc kubenswrapper[4721]: I0128 18:51:32.558400 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/66d34dd5-6c67-40ec-8fc8-16320a5aef1d-cert\") pod \"infra-operator-controller-manager-79955696d6-fd75h\" (UID: \"66d34dd5-6c67-40ec-8fc8-16320a5aef1d\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-fd75h" Jan 28 18:51:32 crc kubenswrapper[4721]: E0128 18:51:32.558568 4721 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 28 18:51:32 crc kubenswrapper[4721]: E0128 18:51:32.558631 4721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/66d34dd5-6c67-40ec-8fc8-16320a5aef1d-cert podName:66d34dd5-6c67-40ec-8fc8-16320a5aef1d nodeName:}" failed. No retries permitted until 2026-01-28 18:51:33.558610691 +0000 UTC m=+1059.283916251 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/66d34dd5-6c67-40ec-8fc8-16320a5aef1d-cert") pod "infra-operator-controller-manager-79955696d6-fd75h" (UID: "66d34dd5-6c67-40ec-8fc8-16320a5aef1d") : secret "infra-operator-webhook-server-cert" not found Jan 28 18:51:32 crc kubenswrapper[4721]: I0128 18:51:32.558980 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-877d65859-2rn2n"] Jan 28 18:51:32 crc kubenswrapper[4721]: I0128 18:51:32.560187 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-877d65859-2rn2n" Jan 28 18:51:32 crc kubenswrapper[4721]: I0128 18:51:32.566803 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-767b8bc766-tkgcv"] Jan 28 18:51:32 crc kubenswrapper[4721]: I0128 18:51:32.567715 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-767b8bc766-tkgcv" Jan 28 18:51:32 crc kubenswrapper[4721]: I0128 18:51:32.568826 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-b9n7n" Jan 28 18:51:32 crc kubenswrapper[4721]: I0128 18:51:32.578383 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-4r76l" Jan 28 18:51:32 crc kubenswrapper[4721]: I0128 18:51:32.582579 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-5c765b4558-r996h" Jan 28 18:51:32 crc kubenswrapper[4721]: I0128 18:51:32.617745 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-877d65859-2rn2n"] Jan 28 18:51:32 crc kubenswrapper[4721]: I0128 18:51:32.659433 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qmhzw\" (UniqueName: \"kubernetes.io/projected/021232bf-9e53-4907-80a0-702807db3f23-kube-api-access-qmhzw\") pod \"swift-operator-controller-manager-68fc8c869-9sqtl\" (UID: \"021232bf-9e53-4907-80a0-702807db3f23\") " pod="openstack-operators/swift-operator-controller-manager-68fc8c869-9sqtl" Jan 28 18:51:32 crc kubenswrapper[4721]: I0128 18:51:32.659562 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/4bc4914a-125f-48f5-a7df-dbc170eaddd9-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dxmjl8\" (UID: \"4bc4914a-125f-48f5-a7df-dbc170eaddd9\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dxmjl8" Jan 28 18:51:32 crc kubenswrapper[4721]: I0128 18:51:32.659636 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dbxgl\" (UniqueName: \"kubernetes.io/projected/4bc4914a-125f-48f5-a7df-dbc170eaddd9-kube-api-access-dbxgl\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dxmjl8\" (UID: \"4bc4914a-125f-48f5-a7df-dbc170eaddd9\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dxmjl8" Jan 28 18:51:32 crc kubenswrapper[4721]: I0128 18:51:32.659674 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jzgd8\" (UniqueName: \"kubernetes.io/projected/2cea4626-d7bc-4166-9c63-8aa4e6358bd3-kube-api-access-jzgd8\") pod \"ovn-operator-controller-manager-788c46999f-js7f2\" (UID: \"2cea4626-d7bc-4166-9c63-8aa4e6358bd3\") " pod="openstack-operators/ovn-operator-controller-manager-788c46999f-js7f2" Jan 28 18:51:32 crc kubenswrapper[4721]: I0128 18:51:32.659721 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hklsj\" (UniqueName: \"kubernetes.io/projected/b9bc0b6e-0f12-46b4-86c3-c9f56dcfa5d6-kube-api-access-hklsj\") pod \"watcher-operator-controller-manager-767b8bc766-tkgcv\" (UID: \"b9bc0b6e-0f12-46b4-86c3-c9f56dcfa5d6\") " pod="openstack-operators/watcher-operator-controller-manager-767b8bc766-tkgcv" Jan 28 18:51:32 crc kubenswrapper[4721]: I0128 18:51:32.659775 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5l9bg\" (UniqueName: 
\"kubernetes.io/projected/83f4e7da-0144-44a8-886e-7f8c60f56014-kube-api-access-5l9bg\") pod \"telemetry-operator-controller-manager-877d65859-2rn2n\" (UID: \"83f4e7da-0144-44a8-886e-7f8c60f56014\") " pod="openstack-operators/telemetry-operator-controller-manager-877d65859-2rn2n" Jan 28 18:51:32 crc kubenswrapper[4721]: I0128 18:51:32.659813 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5fh7x\" (UniqueName: \"kubernetes.io/projected/9c28be52-26d0-4dd5-a3ca-ba3d9888dae8-kube-api-access-5fh7x\") pod \"placement-operator-controller-manager-5b964cf4cd-gdb9m\" (UID: \"9c28be52-26d0-4dd5-a3ca-ba3d9888dae8\") " pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-gdb9m" Jan 28 18:51:32 crc kubenswrapper[4721]: I0128 18:51:32.681534 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-56f8bfcd9f-f56rw"] Jan 28 18:51:32 crc kubenswrapper[4721]: I0128 18:51:32.682521 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-f56rw" Jan 28 18:51:32 crc kubenswrapper[4721]: I0128 18:51:32.686154 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"test-operator-controller-manager-dockercfg-p27lk" Jan 28 18:51:32 crc kubenswrapper[4721]: I0128 18:51:32.699922 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-767b8bc766-tkgcv"] Jan 28 18:51:32 crc kubenswrapper[4721]: I0128 18:51:32.711691 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-56f8bfcd9f-f56rw"] Jan 28 18:51:32 crc kubenswrapper[4721]: I0128 18:51:32.737545 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jzgd8\" (UniqueName: \"kubernetes.io/projected/2cea4626-d7bc-4166-9c63-8aa4e6358bd3-kube-api-access-jzgd8\") pod \"ovn-operator-controller-manager-788c46999f-js7f2\" (UID: \"2cea4626-d7bc-4166-9c63-8aa4e6358bd3\") " pod="openstack-operators/ovn-operator-controller-manager-788c46999f-js7f2" Jan 28 18:51:32 crc kubenswrapper[4721]: I0128 18:51:32.747742 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5fh7x\" (UniqueName: \"kubernetes.io/projected/9c28be52-26d0-4dd5-a3ca-ba3d9888dae8-kube-api-access-5fh7x\") pod \"placement-operator-controller-manager-5b964cf4cd-gdb9m\" (UID: \"9c28be52-26d0-4dd5-a3ca-ba3d9888dae8\") " pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-gdb9m" Jan 28 18:51:32 crc kubenswrapper[4721]: I0128 18:51:32.760808 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/4bc4914a-125f-48f5-a7df-dbc170eaddd9-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dxmjl8\" (UID: \"4bc4914a-125f-48f5-a7df-dbc170eaddd9\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dxmjl8" Jan 28 18:51:32 crc kubenswrapper[4721]: I0128 18:51:32.760884 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dbxgl\" (UniqueName: \"kubernetes.io/projected/4bc4914a-125f-48f5-a7df-dbc170eaddd9-kube-api-access-dbxgl\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dxmjl8\" (UID: \"4bc4914a-125f-48f5-a7df-dbc170eaddd9\") " 
pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dxmjl8" Jan 28 18:51:32 crc kubenswrapper[4721]: I0128 18:51:32.760975 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hklsj\" (UniqueName: \"kubernetes.io/projected/b9bc0b6e-0f12-46b4-86c3-c9f56dcfa5d6-kube-api-access-hklsj\") pod \"watcher-operator-controller-manager-767b8bc766-tkgcv\" (UID: \"b9bc0b6e-0f12-46b4-86c3-c9f56dcfa5d6\") " pod="openstack-operators/watcher-operator-controller-manager-767b8bc766-tkgcv" Jan 28 18:51:32 crc kubenswrapper[4721]: I0128 18:51:32.761012 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5l9bg\" (UniqueName: \"kubernetes.io/projected/83f4e7da-0144-44a8-886e-7f8c60f56014-kube-api-access-5l9bg\") pod \"telemetry-operator-controller-manager-877d65859-2rn2n\" (UID: \"83f4e7da-0144-44a8-886e-7f8c60f56014\") " pod="openstack-operators/telemetry-operator-controller-manager-877d65859-2rn2n" Jan 28 18:51:32 crc kubenswrapper[4721]: I0128 18:51:32.761062 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qmhzw\" (UniqueName: \"kubernetes.io/projected/021232bf-9e53-4907-80a0-702807db3f23-kube-api-access-qmhzw\") pod \"swift-operator-controller-manager-68fc8c869-9sqtl\" (UID: \"021232bf-9e53-4907-80a0-702807db3f23\") " pod="openstack-operators/swift-operator-controller-manager-68fc8c869-9sqtl" Jan 28 18:51:32 crc kubenswrapper[4721]: E0128 18:51:32.761592 4721 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 28 18:51:32 crc kubenswrapper[4721]: E0128 18:51:32.761654 4721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4bc4914a-125f-48f5-a7df-dbc170eaddd9-cert podName:4bc4914a-125f-48f5-a7df-dbc170eaddd9 nodeName:}" failed. No retries permitted until 2026-01-28 18:51:33.261634073 +0000 UTC m=+1058.986939633 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/4bc4914a-125f-48f5-a7df-dbc170eaddd9-cert") pod "openstack-baremetal-operator-controller-manager-59c4b45c4dxmjl8" (UID: "4bc4914a-125f-48f5-a7df-dbc170eaddd9") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 28 18:51:32 crc kubenswrapper[4721]: I0128 18:51:32.784809 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-js7f2" Jan 28 18:51:32 crc kubenswrapper[4721]: I0128 18:51:32.805473 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5l9bg\" (UniqueName: \"kubernetes.io/projected/83f4e7da-0144-44a8-886e-7f8c60f56014-kube-api-access-5l9bg\") pod \"telemetry-operator-controller-manager-877d65859-2rn2n\" (UID: \"83f4e7da-0144-44a8-886e-7f8c60f56014\") " pod="openstack-operators/telemetry-operator-controller-manager-877d65859-2rn2n" Jan 28 18:51:32 crc kubenswrapper[4721]: I0128 18:51:32.807829 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qmhzw\" (UniqueName: \"kubernetes.io/projected/021232bf-9e53-4907-80a0-702807db3f23-kube-api-access-qmhzw\") pod \"swift-operator-controller-manager-68fc8c869-9sqtl\" (UID: \"021232bf-9e53-4907-80a0-702807db3f23\") " pod="openstack-operators/swift-operator-controller-manager-68fc8c869-9sqtl" Jan 28 18:51:32 crc kubenswrapper[4721]: I0128 18:51:32.814205 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hklsj\" (UniqueName: \"kubernetes.io/projected/b9bc0b6e-0f12-46b4-86c3-c9f56dcfa5d6-kube-api-access-hklsj\") pod \"watcher-operator-controller-manager-767b8bc766-tkgcv\" (UID: \"b9bc0b6e-0f12-46b4-86c3-c9f56dcfa5d6\") " pod="openstack-operators/watcher-operator-controller-manager-767b8bc766-tkgcv" Jan 28 18:51:32 crc kubenswrapper[4721]: I0128 18:51:32.814595 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-gdb9m" Jan 28 18:51:32 crc kubenswrapper[4721]: I0128 18:51:32.822250 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dbxgl\" (UniqueName: \"kubernetes.io/projected/4bc4914a-125f-48f5-a7df-dbc170eaddd9-kube-api-access-dbxgl\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dxmjl8\" (UID: \"4bc4914a-125f-48f5-a7df-dbc170eaddd9\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dxmjl8" Jan 28 18:51:32 crc kubenswrapper[4721]: I0128 18:51:32.833889 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-manager-798d8549d8-ztjwv"] Jan 28 18:51:32 crc kubenswrapper[4721]: I0128 18:51:32.835295 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-798d8549d8-ztjwv" Jan 28 18:51:32 crc kubenswrapper[4721]: I0128 18:51:32.847552 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"metrics-server-cert" Jan 28 18:51:32 crc kubenswrapper[4721]: I0128 18:51:32.850697 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-9sqtl" Jan 28 18:51:32 crc kubenswrapper[4721]: I0128 18:51:32.854854 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-d6qfn" Jan 28 18:51:32 crc kubenswrapper[4721]: I0128 18:51:32.855255 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert" Jan 28 18:51:32 crc kubenswrapper[4721]: I0128 18:51:32.858183 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-798d8549d8-ztjwv"] Jan 28 18:51:32 crc kubenswrapper[4721]: I0128 18:51:32.898561 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kqmxf\" (UniqueName: \"kubernetes.io/projected/066c13ce-1239-494e-bbc6-d175c62c501c-kube-api-access-kqmxf\") pod \"test-operator-controller-manager-56f8bfcd9f-f56rw\" (UID: \"066c13ce-1239-494e-bbc6-d175c62c501c\") " pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-f56rw" Jan 28 18:51:32 crc kubenswrapper[4721]: I0128 18:51:32.932343 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-877d65859-2rn2n" Jan 28 18:51:32 crc kubenswrapper[4721]: I0128 18:51:32.944794 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-767b8bc766-tkgcv" Jan 28 18:51:33 crc kubenswrapper[4721]: I0128 18:51:33.003730 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-vprhw"] Jan 28 18:51:33 crc kubenswrapper[4721]: I0128 18:51:33.018862 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-vprhw" Jan 28 18:51:33 crc kubenswrapper[4721]: I0128 18:51:33.041839 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-62xl6" Jan 28 18:51:33 crc kubenswrapper[4721]: I0128 18:51:33.042161 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-vprhw"] Jan 28 18:51:33 crc kubenswrapper[4721]: I0128 18:51:33.043438 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/23d3546b-cba0-4c15-a8b0-de9cced9fdf8-webhook-certs\") pod \"openstack-operator-controller-manager-798d8549d8-ztjwv\" (UID: \"23d3546b-cba0-4c15-a8b0-de9cced9fdf8\") " pod="openstack-operators/openstack-operator-controller-manager-798d8549d8-ztjwv" Jan 28 18:51:33 crc kubenswrapper[4721]: I0128 18:51:33.043557 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sfwfc\" (UniqueName: \"kubernetes.io/projected/23d3546b-cba0-4c15-a8b0-de9cced9fdf8-kube-api-access-sfwfc\") pod \"openstack-operator-controller-manager-798d8549d8-ztjwv\" (UID: \"23d3546b-cba0-4c15-a8b0-de9cced9fdf8\") " pod="openstack-operators/openstack-operator-controller-manager-798d8549d8-ztjwv" Jan 28 18:51:33 crc kubenswrapper[4721]: I0128 18:51:33.043656 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kqmxf\" (UniqueName: \"kubernetes.io/projected/066c13ce-1239-494e-bbc6-d175c62c501c-kube-api-access-kqmxf\") pod \"test-operator-controller-manager-56f8bfcd9f-f56rw\" (UID: \"066c13ce-1239-494e-bbc6-d175c62c501c\") " pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-f56rw" Jan 28 18:51:33 crc kubenswrapper[4721]: I0128 18:51:33.043701 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/23d3546b-cba0-4c15-a8b0-de9cced9fdf8-metrics-certs\") pod \"openstack-operator-controller-manager-798d8549d8-ztjwv\" (UID: \"23d3546b-cba0-4c15-a8b0-de9cced9fdf8\") " pod="openstack-operators/openstack-operator-controller-manager-798d8549d8-ztjwv" Jan 28 18:51:33 crc kubenswrapper[4721]: I0128 18:51:33.068026 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kqmxf\" (UniqueName: \"kubernetes.io/projected/066c13ce-1239-494e-bbc6-d175c62c501c-kube-api-access-kqmxf\") pod \"test-operator-controller-manager-56f8bfcd9f-f56rw\" (UID: \"066c13ce-1239-494e-bbc6-d175c62c501c\") " pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-f56rw" Jan 28 18:51:33 crc kubenswrapper[4721]: I0128 18:51:33.094232 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-6bc7f4f4cf-pv6ph"] Jan 28 18:51:33 crc kubenswrapper[4721]: I0128 18:51:33.146105 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/23d3546b-cba0-4c15-a8b0-de9cced9fdf8-webhook-certs\") pod \"openstack-operator-controller-manager-798d8549d8-ztjwv\" (UID: \"23d3546b-cba0-4c15-a8b0-de9cced9fdf8\") " pod="openstack-operators/openstack-operator-controller-manager-798d8549d8-ztjwv" Jan 28 18:51:33 crc kubenswrapper[4721]: I0128 18:51:33.146184 4721 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sfwfc\" (UniqueName: \"kubernetes.io/projected/23d3546b-cba0-4c15-a8b0-de9cced9fdf8-kube-api-access-sfwfc\") pod \"openstack-operator-controller-manager-798d8549d8-ztjwv\" (UID: \"23d3546b-cba0-4c15-a8b0-de9cced9fdf8\") " pod="openstack-operators/openstack-operator-controller-manager-798d8549d8-ztjwv" Jan 28 18:51:33 crc kubenswrapper[4721]: I0128 18:51:33.146242 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-45lml\" (UniqueName: \"kubernetes.io/projected/a39fc394-2b18-4c7c-a780-0147ddb3a77a-kube-api-access-45lml\") pod \"rabbitmq-cluster-operator-manager-668c99d594-vprhw\" (UID: \"a39fc394-2b18-4c7c-a780-0147ddb3a77a\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-vprhw" Jan 28 18:51:33 crc kubenswrapper[4721]: I0128 18:51:33.146266 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/23d3546b-cba0-4c15-a8b0-de9cced9fdf8-metrics-certs\") pod \"openstack-operator-controller-manager-798d8549d8-ztjwv\" (UID: \"23d3546b-cba0-4c15-a8b0-de9cced9fdf8\") " pod="openstack-operators/openstack-operator-controller-manager-798d8549d8-ztjwv" Jan 28 18:51:33 crc kubenswrapper[4721]: E0128 18:51:33.146440 4721 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 28 18:51:33 crc kubenswrapper[4721]: E0128 18:51:33.146490 4721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/23d3546b-cba0-4c15-a8b0-de9cced9fdf8-metrics-certs podName:23d3546b-cba0-4c15-a8b0-de9cced9fdf8 nodeName:}" failed. No retries permitted until 2026-01-28 18:51:33.646472503 +0000 UTC m=+1059.371778063 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/23d3546b-cba0-4c15-a8b0-de9cced9fdf8-metrics-certs") pod "openstack-operator-controller-manager-798d8549d8-ztjwv" (UID: "23d3546b-cba0-4c15-a8b0-de9cced9fdf8") : secret "metrics-server-cert" not found Jan 28 18:51:33 crc kubenswrapper[4721]: E0128 18:51:33.147668 4721 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 28 18:51:33 crc kubenswrapper[4721]: E0128 18:51:33.147721 4721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/23d3546b-cba0-4c15-a8b0-de9cced9fdf8-webhook-certs podName:23d3546b-cba0-4c15-a8b0-de9cced9fdf8 nodeName:}" failed. No retries permitted until 2026-01-28 18:51:33.647709701 +0000 UTC m=+1059.373015261 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/23d3546b-cba0-4c15-a8b0-de9cced9fdf8-webhook-certs") pod "openstack-operator-controller-manager-798d8549d8-ztjwv" (UID: "23d3546b-cba0-4c15-a8b0-de9cced9fdf8") : secret "webhook-server-cert" not found Jan 28 18:51:33 crc kubenswrapper[4721]: I0128 18:51:33.181817 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sfwfc\" (UniqueName: \"kubernetes.io/projected/23d3546b-cba0-4c15-a8b0-de9cced9fdf8-kube-api-access-sfwfc\") pod \"openstack-operator-controller-manager-798d8549d8-ztjwv\" (UID: \"23d3546b-cba0-4c15-a8b0-de9cced9fdf8\") " pod="openstack-operators/openstack-operator-controller-manager-798d8549d8-ztjwv" Jan 28 18:51:33 crc kubenswrapper[4721]: I0128 18:51:33.197078 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-66dfbd6f5d-dbf9z"] Jan 28 18:51:33 crc kubenswrapper[4721]: I0128 18:51:33.247602 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-45lml\" (UniqueName: \"kubernetes.io/projected/a39fc394-2b18-4c7c-a780-0147ddb3a77a-kube-api-access-45lml\") pod \"rabbitmq-cluster-operator-manager-668c99d594-vprhw\" (UID: \"a39fc394-2b18-4c7c-a780-0147ddb3a77a\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-vprhw" Jan 28 18:51:33 crc kubenswrapper[4721]: I0128 18:51:33.288409 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-45lml\" (UniqueName: \"kubernetes.io/projected/a39fc394-2b18-4c7c-a780-0147ddb3a77a-kube-api-access-45lml\") pod \"rabbitmq-cluster-operator-manager-668c99d594-vprhw\" (UID: \"a39fc394-2b18-4c7c-a780-0147ddb3a77a\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-vprhw" Jan 28 18:51:33 crc kubenswrapper[4721]: I0128 18:51:33.344625 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-f6487bd57-c9pmg"] Jan 28 18:51:33 crc kubenswrapper[4721]: I0128 18:51:33.363377 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/4bc4914a-125f-48f5-a7df-dbc170eaddd9-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dxmjl8\" (UID: \"4bc4914a-125f-48f5-a7df-dbc170eaddd9\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dxmjl8" Jan 28 18:51:33 crc kubenswrapper[4721]: E0128 18:51:33.363967 4721 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 28 18:51:33 crc kubenswrapper[4721]: E0128 18:51:33.364011 4721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4bc4914a-125f-48f5-a7df-dbc170eaddd9-cert podName:4bc4914a-125f-48f5-a7df-dbc170eaddd9 nodeName:}" failed. No retries permitted until 2026-01-28 18:51:34.363996479 +0000 UTC m=+1060.089302039 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/4bc4914a-125f-48f5-a7df-dbc170eaddd9-cert") pod "openstack-baremetal-operator-controller-manager-59c4b45c4dxmjl8" (UID: "4bc4914a-125f-48f5-a7df-dbc170eaddd9") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 28 18:51:33 crc kubenswrapper[4721]: I0128 18:51:33.367199 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-f56rw" Jan 28 18:51:33 crc kubenswrapper[4721]: I0128 18:51:33.425938 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-587c6bfdcf-r46mm"] Jan 28 18:51:33 crc kubenswrapper[4721]: I0128 18:51:33.461842 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-6bc7f4f4cf-pv6ph" event={"ID":"99e08199-2cc8-4f41-8310-f63c0a021a98","Type":"ContainerStarted","Data":"7d692ed04ec6c3a2b8a4421e5b2c66644f561eaa8cf6b856b4a37a2d2ed1ee12"} Jan 28 18:51:33 crc kubenswrapper[4721]: I0128 18:51:33.465839 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-66dfbd6f5d-dbf9z" event={"ID":"5f5dbe82-6a18-47da-98e6-00d10a32d1eb","Type":"ContainerStarted","Data":"a1b847421fdb7e988c8c041a8658eae955ba379b0d09dde2b3ccf82b5f0f591e"} Jan 28 18:51:33 crc kubenswrapper[4721]: I0128 18:51:33.506291 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-vprhw" Jan 28 18:51:33 crc kubenswrapper[4721]: I0128 18:51:33.570444 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/66d34dd5-6c67-40ec-8fc8-16320a5aef1d-cert\") pod \"infra-operator-controller-manager-79955696d6-fd75h\" (UID: \"66d34dd5-6c67-40ec-8fc8-16320a5aef1d\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-fd75h" Jan 28 18:51:33 crc kubenswrapper[4721]: E0128 18:51:33.570786 4721 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 28 18:51:33 crc kubenswrapper[4721]: E0128 18:51:33.570845 4721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/66d34dd5-6c67-40ec-8fc8-16320a5aef1d-cert podName:66d34dd5-6c67-40ec-8fc8-16320a5aef1d nodeName:}" failed. No retries permitted until 2026-01-28 18:51:35.57082639 +0000 UTC m=+1061.296131950 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/66d34dd5-6c67-40ec-8fc8-16320a5aef1d-cert") pod "infra-operator-controller-manager-79955696d6-fd75h" (UID: "66d34dd5-6c67-40ec-8fc8-16320a5aef1d") : secret "infra-operator-webhook-server-cert" not found Jan 28 18:51:33 crc kubenswrapper[4721]: I0128 18:51:33.607665 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-6db5dbd896-7brt7"] Jan 28 18:51:33 crc kubenswrapper[4721]: I0128 18:51:33.679072 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/23d3546b-cba0-4c15-a8b0-de9cced9fdf8-webhook-certs\") pod \"openstack-operator-controller-manager-798d8549d8-ztjwv\" (UID: \"23d3546b-cba0-4c15-a8b0-de9cced9fdf8\") " pod="openstack-operators/openstack-operator-controller-manager-798d8549d8-ztjwv" Jan 28 18:51:33 crc kubenswrapper[4721]: I0128 18:51:33.679351 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/23d3546b-cba0-4c15-a8b0-de9cced9fdf8-metrics-certs\") pod \"openstack-operator-controller-manager-798d8549d8-ztjwv\" (UID: \"23d3546b-cba0-4c15-a8b0-de9cced9fdf8\") " pod="openstack-operators/openstack-operator-controller-manager-798d8549d8-ztjwv" Jan 28 18:51:33 crc kubenswrapper[4721]: E0128 18:51:33.679517 4721 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 28 18:51:33 crc kubenswrapper[4721]: E0128 18:51:33.679735 4721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/23d3546b-cba0-4c15-a8b0-de9cced9fdf8-metrics-certs podName:23d3546b-cba0-4c15-a8b0-de9cced9fdf8 nodeName:}" failed. No retries permitted until 2026-01-28 18:51:34.679713572 +0000 UTC m=+1060.405019132 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/23d3546b-cba0-4c15-a8b0-de9cced9fdf8-metrics-certs") pod "openstack-operator-controller-manager-798d8549d8-ztjwv" (UID: "23d3546b-cba0-4c15-a8b0-de9cced9fdf8") : secret "metrics-server-cert" not found Jan 28 18:51:33 crc kubenswrapper[4721]: E0128 18:51:33.686538 4721 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 28 18:51:33 crc kubenswrapper[4721]: E0128 18:51:33.686654 4721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/23d3546b-cba0-4c15-a8b0-de9cced9fdf8-webhook-certs podName:23d3546b-cba0-4c15-a8b0-de9cced9fdf8 nodeName:}" failed. No retries permitted until 2026-01-28 18:51:34.68663553 +0000 UTC m=+1060.411941090 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/23d3546b-cba0-4c15-a8b0-de9cced9fdf8-webhook-certs") pod "openstack-operator-controller-manager-798d8549d8-ztjwv" (UID: "23d3546b-cba0-4c15-a8b0-de9cced9fdf8") : secret "webhook-server-cert" not found Jan 28 18:51:33 crc kubenswrapper[4721]: I0128 18:51:33.909930 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-67bf948998-pt757"] Jan 28 18:51:33 crc kubenswrapper[4721]: I0128 18:51:33.929334 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-6978b79747-vc75z"] Jan 28 18:51:34 crc kubenswrapper[4721]: I0128 18:51:34.086389 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-958664b5-wrzbl"] Jan 28 18:51:34 crc kubenswrapper[4721]: I0128 18:51:34.101962 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-694c5bfc85-hv7r4"] Jan 28 18:51:34 crc kubenswrapper[4721]: W0128 18:51:34.111765 4721 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb102209d_5846_40f2_bb20_7022d18b9a28.slice/crio-49414db82e9b902de18996c2b269ad9eb35dec36e370707d3144f013916193e2 WatchSource:0}: Error finding container 49414db82e9b902de18996c2b269ad9eb35dec36e370707d3144f013916193e2: Status 404 returned error can't find the container with id 49414db82e9b902de18996c2b269ad9eb35dec36e370707d3144f013916193e2 Jan 28 18:51:34 crc kubenswrapper[4721]: I0128 18:51:34.412184 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/4bc4914a-125f-48f5-a7df-dbc170eaddd9-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dxmjl8\" (UID: \"4bc4914a-125f-48f5-a7df-dbc170eaddd9\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dxmjl8" Jan 28 18:51:34 crc kubenswrapper[4721]: E0128 18:51:34.412430 4721 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 28 18:51:34 crc kubenswrapper[4721]: E0128 18:51:34.412517 4721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4bc4914a-125f-48f5-a7df-dbc170eaddd9-cert podName:4bc4914a-125f-48f5-a7df-dbc170eaddd9 nodeName:}" failed. No retries permitted until 2026-01-28 18:51:36.412495284 +0000 UTC m=+1062.137800844 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/4bc4914a-125f-48f5-a7df-dbc170eaddd9-cert") pod "openstack-baremetal-operator-controller-manager-59c4b45c4dxmjl8" (UID: "4bc4914a-125f-48f5-a7df-dbc170eaddd9") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 28 18:51:34 crc kubenswrapper[4721]: I0128 18:51:34.465504 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-5c765b4558-r996h"] Jan 28 18:51:34 crc kubenswrapper[4721]: I0128 18:51:34.489249 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-767b8bc766-tkgcv"] Jan 28 18:51:34 crc kubenswrapper[4721]: I0128 18:51:34.494089 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-68fc8c869-9sqtl"] Jan 28 18:51:34 crc kubenswrapper[4721]: I0128 18:51:34.501126 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-5b964cf4cd-gdb9m"] Jan 28 18:51:34 crc kubenswrapper[4721]: I0128 18:51:34.507799 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-765668569f-mjxvn"] Jan 28 18:51:34 crc kubenswrapper[4721]: W0128 18:51:34.509235 4721 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2cea4626_d7bc_4166_9c63_8aa4e6358bd3.slice/crio-0404d0d6487bc89f682bb362472b3edc5641daba5bcc23c66032bc609e4157f8 WatchSource:0}: Error finding container 0404d0d6487bc89f682bb362472b3edc5641daba5bcc23c66032bc609e4157f8: Status 404 returned error can't find the container with id 0404d0d6487bc89f682bb362472b3edc5641daba5bcc23c66032bc609e4157f8 Jan 28 18:51:34 crc kubenswrapper[4721]: I0128 18:51:34.516185 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-ddcbfd695-ghpgf"] Jan 28 18:51:34 crc kubenswrapper[4721]: W0128 18:51:34.517021 4721 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod073e6433_4ca4_499a_8c82_0fda8211ecd3.slice/crio-938fee9fcf09087c70cd2a29f31adda279b8e097216f7e840350598310896af3 WatchSource:0}: Error finding container 938fee9fcf09087c70cd2a29f31adda279b8e097216f7e840350598310896af3: Status 404 returned error can't find the container with id 938fee9fcf09087c70cd2a29f31adda279b8e097216f7e840350598310896af3 Jan 28 18:51:34 crc kubenswrapper[4721]: I0128 18:51:34.525326 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-788c46999f-js7f2"] Jan 28 18:51:34 crc kubenswrapper[4721]: I0128 18:51:34.532065 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5fb775575f-6m2fr"] Jan 28 18:51:34 crc kubenswrapper[4721]: W0128 18:51:34.533267 4721 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9c28be52_26d0_4dd5_a3ca_ba3d9888dae8.slice/crio-de5fcb45f1767cb7d9f62bec4caa2a4236d157275308be9251d140e56a04fbf7 WatchSource:0}: Error finding container de5fcb45f1767cb7d9f62bec4caa2a4236d157275308be9251d140e56a04fbf7: Status 404 returned error can't find the container with id de5fcb45f1767cb7d9f62bec4caa2a4236d157275308be9251d140e56a04fbf7 Jan 28 18:51:34 crc kubenswrapper[4721]: I0128 
18:51:34.546779 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-958664b5-wrzbl" event={"ID":"7650ad3f-87f7-4c9a-b795-678ebc7edc7d","Type":"ContainerStarted","Data":"39516ad4c8eb3099fa1333c9ce9ebda8c275dfde12e716bd088497594485ba90"} Jan 28 18:51:34 crc kubenswrapper[4721]: I0128 18:51:34.551482 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-877d65859-2rn2n"] Jan 28 18:51:34 crc kubenswrapper[4721]: I0128 18:51:34.562430 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-587c6bfdcf-r46mm" event={"ID":"6ec8e4f3-a711-43af-81da-91be5695e927","Type":"ContainerStarted","Data":"cbb4f7c0272972531095090e9f7ff56b09a25d300febbd9f540a37fa47f806b2"} Jan 28 18:51:34 crc kubenswrapper[4721]: E0128 18:51:34.566634 4721 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:38.102.83.246:5001/openstack-k8s-operators/telemetry-operator:774b657c4a2d169eb939c51d71a146bf4a44e93b,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-5l9bg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod telemetry-operator-controller-manager-877d65859-2rn2n_openstack-operators(83f4e7da-0144-44a8-886e-7f8c60f56014): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 28 18:51:34 crc kubenswrapper[4721]: I0128 18:51:34.569311 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack-operators/test-operator-controller-manager-56f8bfcd9f-f56rw"] Jan 28 18:51:34 crc kubenswrapper[4721]: E0128 18:51:34.569423 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/telemetry-operator-controller-manager-877d65859-2rn2n" podUID="83f4e7da-0144-44a8-886e-7f8c60f56014" Jan 28 18:51:34 crc kubenswrapper[4721]: I0128 18:51:34.569574 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-6db5dbd896-7brt7" event={"ID":"6e4d4bd0-d6ac-4268-bc08-86d74adfc33b","Type":"ContainerStarted","Data":"70c61725bb5a2003d7ef08f78221b6f894fc5c437da8d269118adb660d0e0504"} Jan 28 18:51:34 crc kubenswrapper[4721]: I0128 18:51:34.573753 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-6978b79747-vc75z" event={"ID":"e8f6f9a2-7886-4896-baac-268e88869bb2","Type":"ContainerStarted","Data":"a8cab827a29286b4c7a5a29c0606fc60d28b65ee14d06e5927f9cff799b9ab5f"} Jan 28 18:51:34 crc kubenswrapper[4721]: I0128 18:51:34.574519 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-vprhw"] Jan 28 18:51:34 crc kubenswrapper[4721]: I0128 18:51:34.576925 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-f6487bd57-c9pmg" event={"ID":"d258bf47-a441-49ad-a3ad-d5c04c615c9c","Type":"ContainerStarted","Data":"0e048e1a29a56f6e7e095e3769573621a9668caedecf68c88bd90edf48000217"} Jan 28 18:51:34 crc kubenswrapper[4721]: I0128 18:51:34.580561 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-694c5bfc85-hv7r4" event={"ID":"b102209d-5846-40f2-bb20-7022d18b9a28","Type":"ContainerStarted","Data":"49414db82e9b902de18996c2b269ad9eb35dec36e370707d3144f013916193e2"} Jan 28 18:51:34 crc kubenswrapper[4721]: I0128 18:51:34.581923 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-pt757" event={"ID":"f901f512-8af4-4e6c-abc8-0fd7d0f26ef3","Type":"ContainerStarted","Data":"33b0175415e1a332a970a104265d5707d3523a1e7556698cbd88d567c38e5c0d"} Jan 28 18:51:34 crc kubenswrapper[4721]: E0128 18:51:34.593980 4721 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/test-operator@sha256:3e01e99d3ca1b6c20b1bb015b00cfcbffc584f22a93dc6fe4019d63b813c0241,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-kqmxf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-controller-manager-56f8bfcd9f-f56rw_openstack-operators(066c13ce-1239-494e-bbc6-d175c62c501c): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 28 18:51:34 crc kubenswrapper[4721]: E0128 18:51:34.599867 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-f56rw" podUID="066c13ce-1239-494e-bbc6-d175c62c501c" Jan 28 18:51:34 crc kubenswrapper[4721]: W0128 18:51:34.602004 4721 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda39fc394_2b18_4c7c_a780_0147ddb3a77a.slice/crio-d3f4df8c61aa4719bbf34167104e43952ab985ba1423cb33a9f35546e757e0b4 WatchSource:0}: Error finding container d3f4df8c61aa4719bbf34167104e43952ab985ba1423cb33a9f35546e757e0b4: Status 404 returned error can't find the container with id d3f4df8c61aa4719bbf34167104e43952ab985ba1423cb33a9f35546e757e0b4 Jan 28 18:51:34 crc kubenswrapper[4721]: E0128 18:51:34.605023 4721 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-45lml,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-vprhw_openstack-operators(a39fc394-2b18-4c7c-a780-0147ddb3a77a): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 28 18:51:34 crc kubenswrapper[4721]: E0128 18:51:34.606770 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-vprhw" podUID="a39fc394-2b18-4c7c-a780-0147ddb3a77a"
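Every "ErrImagePull: pull QPS exceeded" in this stretch (telemetry-operator, test-operator and rabbitmq-cluster-operator above) is the kubelet throttling itself, not a registry failure: with this many operator deployments landing at once, the node requests more image pulls in one second than its limiter allows. The limit comes from the registryPullQPS and registryBurst fields of the KubeletConfiguration, which default to 5 QPS with a burst of 10. A token-bucket sketch of that behaviour using golang.org/x/time/rate (the kubelet uses its own internal limiter; only the two default values are taken from the kubelet documentation):

```go
package main

import (
	"fmt"

	"golang.org/x/time/rate"
)

func main() {
	// KubeletConfiguration defaults: registryPullQPS=5, registryBurst=10.
	limiter := rate.NewLimiter(rate.Limit(5), 10)

	// Roughly twenty operator images are requested almost simultaneously
	// in this log; everything past the burst is rejected, and the kubelet
	// surfaces the rejection as ErrImagePull "pull QPS exceeded".
	for pull := 1; pull <= 20; pull++ {
		if limiter.Allow() {
			fmt.Printf("pull %2d: allowed\n", pull)
		} else {
			fmt.Printf("pull %2d: pull QPS exceeded\n", pull)
		}
	}
}
```

A rejected pull counts as a failed pull, so the affected pods drop into the image back-off path; that is where the ImagePullBackOff entries at 18:51:35-18:51:36 below come from.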
Jan 28 18:51:34 crc kubenswrapper[4721]: I0128 18:51:34.724043 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/23d3546b-cba0-4c15-a8b0-de9cced9fdf8-webhook-certs\") pod \"openstack-operator-controller-manager-798d8549d8-ztjwv\" (UID: \"23d3546b-cba0-4c15-a8b0-de9cced9fdf8\") " pod="openstack-operators/openstack-operator-controller-manager-798d8549d8-ztjwv" Jan 28 18:51:34 crc kubenswrapper[4721]: I0128 18:51:34.724279 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/23d3546b-cba0-4c15-a8b0-de9cced9fdf8-metrics-certs\") pod \"openstack-operator-controller-manager-798d8549d8-ztjwv\" (UID: \"23d3546b-cba0-4c15-a8b0-de9cced9fdf8\") " pod="openstack-operators/openstack-operator-controller-manager-798d8549d8-ztjwv" Jan 28 18:51:34 crc kubenswrapper[4721]: E0128 18:51:34.724497 4721 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 28 18:51:34 crc kubenswrapper[4721]: E0128 18:51:34.724632 4721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/23d3546b-cba0-4c15-a8b0-de9cced9fdf8-webhook-certs podName:23d3546b-cba0-4c15-a8b0-de9cced9fdf8 nodeName:}" failed. No retries permitted until 2026-01-28 18:51:36.724574354 +0000 UTC m=+1062.449879914 (durationBeforeRetry 2s).
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/23d3546b-cba0-4c15-a8b0-de9cced9fdf8-webhook-certs") pod "openstack-operator-controller-manager-798d8549d8-ztjwv" (UID: "23d3546b-cba0-4c15-a8b0-de9cced9fdf8") : secret "webhook-server-cert" not found Jan 28 18:51:34 crc kubenswrapper[4721]: E0128 18:51:34.724944 4721 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 28 18:51:34 crc kubenswrapper[4721]: E0128 18:51:34.728332 4721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/23d3546b-cba0-4c15-a8b0-de9cced9fdf8-metrics-certs podName:23d3546b-cba0-4c15-a8b0-de9cced9fdf8 nodeName:}" failed. No retries permitted until 2026-01-28 18:51:36.725123032 +0000 UTC m=+1062.450428592 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/23d3546b-cba0-4c15-a8b0-de9cced9fdf8-metrics-certs") pod "openstack-operator-controller-manager-798d8549d8-ztjwv" (UID: "23d3546b-cba0-4c15-a8b0-de9cced9fdf8") : secret "metrics-server-cert" not found Jan 28 18:51:35 crc kubenswrapper[4721]: I0128 18:51:35.615527 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-5c765b4558-r996h" event={"ID":"073e6433-4ca4-499a-8c82-0fda8211ecd3","Type":"ContainerStarted","Data":"938fee9fcf09087c70cd2a29f31adda279b8e097216f7e840350598310896af3"} Jan 28 18:51:35 crc kubenswrapper[4721]: I0128 18:51:35.621207 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-767b8bc766-tkgcv" event={"ID":"b9bc0b6e-0f12-46b4-86c3-c9f56dcfa5d6","Type":"ContainerStarted","Data":"a3d9b7851bab1dad6da6e494ff519ad829c6b2c98764582376bb42d935132cb2"} Jan 28 18:51:35 crc kubenswrapper[4721]: I0128 18:51:35.626930 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-9sqtl" event={"ID":"021232bf-9e53-4907-80a0-702807db3f23","Type":"ContainerStarted","Data":"81609e683dafc62c569baff935c83c709830f6ba5a4a8a3bee50711a4737aaec"} Jan 28 18:51:35 crc kubenswrapper[4721]: I0128 18:51:35.634464 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-f56rw" event={"ID":"066c13ce-1239-494e-bbc6-d175c62c501c","Type":"ContainerStarted","Data":"68314081f46e3a00fa786735a6cd188c3404e144bb5ae0d4af8fbd4dc251a456"} Jan 28 18:51:35 crc kubenswrapper[4721]: I0128 18:51:35.636518 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-gdb9m" event={"ID":"9c28be52-26d0-4dd5-a3ca-ba3d9888dae8","Type":"ContainerStarted","Data":"de5fcb45f1767cb7d9f62bec4caa2a4236d157275308be9251d140e56a04fbf7"} Jan 28 18:51:35 crc kubenswrapper[4721]: I0128 18:51:35.638243 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-765668569f-mjxvn" event={"ID":"835d5df3-4ea1-40ce-9bad-325396bfd41f","Type":"ContainerStarted","Data":"562d012a2cdda1465c4a5d81df60c631b10e71d08387a7a229e37deb4d08dd21"} Jan 28 18:51:35 crc kubenswrapper[4721]: E0128 18:51:35.639650 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openstack-k8s-operators/test-operator@sha256:3e01e99d3ca1b6c20b1bb015b00cfcbffc584f22a93dc6fe4019d63b813c0241\\\"\"" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-f56rw" podUID="066c13ce-1239-494e-bbc6-d175c62c501c" Jan 28 18:51:35 crc kubenswrapper[4721]: I0128 18:51:35.640345 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-vprhw" event={"ID":"a39fc394-2b18-4c7c-a780-0147ddb3a77a","Type":"ContainerStarted","Data":"d3f4df8c61aa4719bbf34167104e43952ab985ba1423cb33a9f35546e757e0b4"} Jan 28 18:51:35 crc kubenswrapper[4721]: E0128 18:51:35.641781 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-vprhw" podUID="a39fc394-2b18-4c7c-a780-0147ddb3a77a" Jan 28 18:51:35 crc kubenswrapper[4721]: I0128 18:51:35.641796 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/66d34dd5-6c67-40ec-8fc8-16320a5aef1d-cert\") pod \"infra-operator-controller-manager-79955696d6-fd75h\" (UID: \"66d34dd5-6c67-40ec-8fc8-16320a5aef1d\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-fd75h" Jan 28 18:51:35 crc kubenswrapper[4721]: E0128 18:51:35.641916 4721 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 28 18:51:35 crc kubenswrapper[4721]: E0128 18:51:35.641957 4721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/66d34dd5-6c67-40ec-8fc8-16320a5aef1d-cert podName:66d34dd5-6c67-40ec-8fc8-16320a5aef1d nodeName:}" failed. No retries permitted until 2026-01-28 18:51:39.641941951 +0000 UTC m=+1065.367247511 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/66d34dd5-6c67-40ec-8fc8-16320a5aef1d-cert") pod "infra-operator-controller-manager-79955696d6-fd75h" (UID: "66d34dd5-6c67-40ec-8fc8-16320a5aef1d") : secret "infra-operator-webhook-server-cert" not found Jan 28 18:51:35 crc kubenswrapper[4721]: I0128 18:51:35.645681 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-ddcbfd695-ghpgf" event={"ID":"8e4e395a-5b06-45ea-a2af-8a7a1180fc80","Type":"ContainerStarted","Data":"d76a267ec9c1e7d63ae8fe05ed76cf787f3de66a25a0914934e30ef48ac0e0ee"} Jan 28 18:51:35 crc kubenswrapper[4721]: I0128 18:51:35.648830 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-js7f2" event={"ID":"2cea4626-d7bc-4166-9c63-8aa4e6358bd3","Type":"ContainerStarted","Data":"0404d0d6487bc89f682bb362472b3edc5641daba5bcc23c66032bc609e4157f8"} Jan 28 18:51:35 crc kubenswrapper[4721]: I0128 18:51:35.668651 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-6m2fr" event={"ID":"18c18118-f643-4590-9e07-87bffdb4195b","Type":"ContainerStarted","Data":"11f9205f0d51f3b9d228d989065865b48b86cf555045518b50f58413fdaf9291"} Jan 28 18:51:35 crc kubenswrapper[4721]: I0128 18:51:35.672371 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-877d65859-2rn2n" event={"ID":"83f4e7da-0144-44a8-886e-7f8c60f56014","Type":"ContainerStarted","Data":"4a7e8c560384bc829b8b48b20973eb421b007fc83d6cf12fd3d3b0374428c2a9"} Jan 28 18:51:35 crc kubenswrapper[4721]: E0128 18:51:35.674137 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.246:5001/openstack-k8s-operators/telemetry-operator:774b657c4a2d169eb939c51d71a146bf4a44e93b\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-877d65859-2rn2n" podUID="83f4e7da-0144-44a8-886e-7f8c60f56014" Jan 28 18:51:36 crc kubenswrapper[4721]: I0128 18:51:36.464107 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/4bc4914a-125f-48f5-a7df-dbc170eaddd9-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dxmjl8\" (UID: \"4bc4914a-125f-48f5-a7df-dbc170eaddd9\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dxmjl8" Jan 28 18:51:36 crc kubenswrapper[4721]: E0128 18:51:36.464423 4721 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 28 18:51:36 crc kubenswrapper[4721]: E0128 18:51:36.464575 4721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4bc4914a-125f-48f5-a7df-dbc170eaddd9-cert podName:4bc4914a-125f-48f5-a7df-dbc170eaddd9 nodeName:}" failed. No retries permitted until 2026-01-28 18:51:40.464538017 +0000 UTC m=+1066.189843577 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/4bc4914a-125f-48f5-a7df-dbc170eaddd9-cert") pod "openstack-baremetal-operator-controller-manager-59c4b45c4dxmjl8" (UID: "4bc4914a-125f-48f5-a7df-dbc170eaddd9") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 28 18:51:36 crc kubenswrapper[4721]: E0128 18:51:36.698267 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:3e01e99d3ca1b6c20b1bb015b00cfcbffc584f22a93dc6fe4019d63b813c0241\\\"\"" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-f56rw" podUID="066c13ce-1239-494e-bbc6-d175c62c501c" Jan 28 18:51:36 crc kubenswrapper[4721]: E0128 18:51:36.698678 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-vprhw" podUID="a39fc394-2b18-4c7c-a780-0147ddb3a77a" Jan 28 18:51:36 crc kubenswrapper[4721]: E0128 18:51:36.698730 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.246:5001/openstack-k8s-operators/telemetry-operator:774b657c4a2d169eb939c51d71a146bf4a44e93b\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-877d65859-2rn2n" podUID="83f4e7da-0144-44a8-886e-7f8c60f56014" Jan 28 18:51:36 crc kubenswrapper[4721]: I0128 18:51:36.779762 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/23d3546b-cba0-4c15-a8b0-de9cced9fdf8-metrics-certs\") pod \"openstack-operator-controller-manager-798d8549d8-ztjwv\" (UID: \"23d3546b-cba0-4c15-a8b0-de9cced9fdf8\") " pod="openstack-operators/openstack-operator-controller-manager-798d8549d8-ztjwv" Jan 28 18:51:36 crc kubenswrapper[4721]: I0128 18:51:36.779906 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/23d3546b-cba0-4c15-a8b0-de9cced9fdf8-webhook-certs\") pod \"openstack-operator-controller-manager-798d8549d8-ztjwv\" (UID: \"23d3546b-cba0-4c15-a8b0-de9cced9fdf8\") " pod="openstack-operators/openstack-operator-controller-manager-798d8549d8-ztjwv" Jan 28 18:51:36 crc kubenswrapper[4721]: E0128 18:51:36.780790 4721 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 28 18:51:36 crc kubenswrapper[4721]: E0128 18:51:36.780890 4721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/23d3546b-cba0-4c15-a8b0-de9cced9fdf8-webhook-certs podName:23d3546b-cba0-4c15-a8b0-de9cced9fdf8 nodeName:}" failed. No retries permitted until 2026-01-28 18:51:40.78087061 +0000 UTC m=+1066.506176170 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/23d3546b-cba0-4c15-a8b0-de9cced9fdf8-webhook-certs") pod "openstack-operator-controller-manager-798d8549d8-ztjwv" (UID: "23d3546b-cba0-4c15-a8b0-de9cced9fdf8") : secret "webhook-server-cert" not found Jan 28 18:51:36 crc kubenswrapper[4721]: E0128 18:51:36.780790 4721 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 28 18:51:36 crc kubenswrapper[4721]: E0128 18:51:36.780933 4721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/23d3546b-cba0-4c15-a8b0-de9cced9fdf8-metrics-certs podName:23d3546b-cba0-4c15-a8b0-de9cced9fdf8 nodeName:}" failed. No retries permitted until 2026-01-28 18:51:40.780924782 +0000 UTC m=+1066.506230342 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/23d3546b-cba0-4c15-a8b0-de9cced9fdf8-metrics-certs") pod "openstack-operator-controller-manager-798d8549d8-ztjwv" (UID: "23d3546b-cba0-4c15-a8b0-de9cced9fdf8") : secret "metrics-server-cert" not found Jan 28 18:51:39 crc kubenswrapper[4721]: I0128 18:51:39.661382 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/66d34dd5-6c67-40ec-8fc8-16320a5aef1d-cert\") pod \"infra-operator-controller-manager-79955696d6-fd75h\" (UID: \"66d34dd5-6c67-40ec-8fc8-16320a5aef1d\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-fd75h" Jan 28 18:51:39 crc kubenswrapper[4721]: E0128 18:51:39.661694 4721 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 28 18:51:39 crc kubenswrapper[4721]: E0128 18:51:39.662320 4721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/66d34dd5-6c67-40ec-8fc8-16320a5aef1d-cert podName:66d34dd5-6c67-40ec-8fc8-16320a5aef1d nodeName:}" failed. No retries permitted until 2026-01-28 18:51:47.662295873 +0000 UTC m=+1073.387601433 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/66d34dd5-6c67-40ec-8fc8-16320a5aef1d-cert") pod "infra-operator-controller-manager-79955696d6-fd75h" (UID: "66d34dd5-6c67-40ec-8fc8-16320a5aef1d") : secret "infra-operator-webhook-server-cert" not found Jan 28 18:51:40 crc kubenswrapper[4721]: I0128 18:51:40.480376 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/4bc4914a-125f-48f5-a7df-dbc170eaddd9-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dxmjl8\" (UID: \"4bc4914a-125f-48f5-a7df-dbc170eaddd9\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dxmjl8" Jan 28 18:51:40 crc kubenswrapper[4721]: E0128 18:51:40.480668 4721 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 28 18:51:40 crc kubenswrapper[4721]: E0128 18:51:40.480730 4721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4bc4914a-125f-48f5-a7df-dbc170eaddd9-cert podName:4bc4914a-125f-48f5-a7df-dbc170eaddd9 nodeName:}" failed. No retries permitted until 2026-01-28 18:51:48.480710719 +0000 UTC m=+1074.206016279 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/4bc4914a-125f-48f5-a7df-dbc170eaddd9-cert") pod "openstack-baremetal-operator-controller-manager-59c4b45c4dxmjl8" (UID: "4bc4914a-125f-48f5-a7df-dbc170eaddd9") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 28 18:51:40 crc kubenswrapper[4721]: I0128 18:51:40.786660 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/23d3546b-cba0-4c15-a8b0-de9cced9fdf8-metrics-certs\") pod \"openstack-operator-controller-manager-798d8549d8-ztjwv\" (UID: \"23d3546b-cba0-4c15-a8b0-de9cced9fdf8\") " pod="openstack-operators/openstack-operator-controller-manager-798d8549d8-ztjwv" Jan 28 18:51:40 crc kubenswrapper[4721]: I0128 18:51:40.786802 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/23d3546b-cba0-4c15-a8b0-de9cced9fdf8-webhook-certs\") pod \"openstack-operator-controller-manager-798d8549d8-ztjwv\" (UID: \"23d3546b-cba0-4c15-a8b0-de9cced9fdf8\") " pod="openstack-operators/openstack-operator-controller-manager-798d8549d8-ztjwv" Jan 28 18:51:40 crc kubenswrapper[4721]: E0128 18:51:40.786920 4721 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 28 18:51:40 crc kubenswrapper[4721]: E0128 18:51:40.786987 4721 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 28 18:51:40 crc kubenswrapper[4721]: E0128 18:51:40.787030 4721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/23d3546b-cba0-4c15-a8b0-de9cced9fdf8-metrics-certs podName:23d3546b-cba0-4c15-a8b0-de9cced9fdf8 nodeName:}" failed. No retries permitted until 2026-01-28 18:51:48.787004296 +0000 UTC m=+1074.512310046 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/23d3546b-cba0-4c15-a8b0-de9cced9fdf8-metrics-certs") pod "openstack-operator-controller-manager-798d8549d8-ztjwv" (UID: "23d3546b-cba0-4c15-a8b0-de9cced9fdf8") : secret "metrics-server-cert" not found Jan 28 18:51:40 crc kubenswrapper[4721]: E0128 18:51:40.787067 4721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/23d3546b-cba0-4c15-a8b0-de9cced9fdf8-webhook-certs podName:23d3546b-cba0-4c15-a8b0-de9cced9fdf8 nodeName:}" failed. No retries permitted until 2026-01-28 18:51:48.787044147 +0000 UTC m=+1074.512349707 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/23d3546b-cba0-4c15-a8b0-de9cced9fdf8-webhook-certs") pod "openstack-operator-controller-manager-798d8549d8-ztjwv" (UID: "23d3546b-cba0-4c15-a8b0-de9cced9fdf8") : secret "webhook-server-cert" not found Jan 28 18:51:47 crc kubenswrapper[4721]: I0128 18:51:47.726460 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/66d34dd5-6c67-40ec-8fc8-16320a5aef1d-cert\") pod \"infra-operator-controller-manager-79955696d6-fd75h\" (UID: \"66d34dd5-6c67-40ec-8fc8-16320a5aef1d\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-fd75h" Jan 28 18:51:47 crc kubenswrapper[4721]: E0128 18:51:47.726649 4721 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 28 18:51:47 crc kubenswrapper[4721]: E0128 18:51:47.727218 4721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/66d34dd5-6c67-40ec-8fc8-16320a5aef1d-cert podName:66d34dd5-6c67-40ec-8fc8-16320a5aef1d nodeName:}" failed. No retries permitted until 2026-01-28 18:52:03.727192434 +0000 UTC m=+1089.452498004 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/66d34dd5-6c67-40ec-8fc8-16320a5aef1d-cert") pod "infra-operator-controller-manager-79955696d6-fd75h" (UID: "66d34dd5-6c67-40ec-8fc8-16320a5aef1d") : secret "infra-operator-webhook-server-cert" not found Jan 28 18:51:48 crc kubenswrapper[4721]: I0128 18:51:48.550240 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/4bc4914a-125f-48f5-a7df-dbc170eaddd9-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dxmjl8\" (UID: \"4bc4914a-125f-48f5-a7df-dbc170eaddd9\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dxmjl8" Jan 28 18:51:48 crc kubenswrapper[4721]: I0128 18:51:48.567336 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/4bc4914a-125f-48f5-a7df-dbc170eaddd9-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dxmjl8\" (UID: \"4bc4914a-125f-48f5-a7df-dbc170eaddd9\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dxmjl8" Jan 28 18:51:48 crc kubenswrapper[4721]: I0128 18:51:48.809248 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dxmjl8" Jan 28 18:51:48 crc kubenswrapper[4721]: I0128 18:51:48.855064 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/23d3546b-cba0-4c15-a8b0-de9cced9fdf8-webhook-certs\") pod \"openstack-operator-controller-manager-798d8549d8-ztjwv\" (UID: \"23d3546b-cba0-4c15-a8b0-de9cced9fdf8\") " pod="openstack-operators/openstack-operator-controller-manager-798d8549d8-ztjwv" Jan 28 18:51:48 crc kubenswrapper[4721]: I0128 18:51:48.855157 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/23d3546b-cba0-4c15-a8b0-de9cced9fdf8-metrics-certs\") pod \"openstack-operator-controller-manager-798d8549d8-ztjwv\" (UID: \"23d3546b-cba0-4c15-a8b0-de9cced9fdf8\") " pod="openstack-operators/openstack-operator-controller-manager-798d8549d8-ztjwv" Jan 28 18:51:48 crc kubenswrapper[4721]: I0128 18:51:48.860196 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/23d3546b-cba0-4c15-a8b0-de9cced9fdf8-webhook-certs\") pod \"openstack-operator-controller-manager-798d8549d8-ztjwv\" (UID: \"23d3546b-cba0-4c15-a8b0-de9cced9fdf8\") " pod="openstack-operators/openstack-operator-controller-manager-798d8549d8-ztjwv" Jan 28 18:51:48 crc kubenswrapper[4721]: I0128 18:51:48.872118 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/23d3546b-cba0-4c15-a8b0-de9cced9fdf8-metrics-certs\") pod \"openstack-operator-controller-manager-798d8549d8-ztjwv\" (UID: \"23d3546b-cba0-4c15-a8b0-de9cced9fdf8\") " pod="openstack-operators/openstack-operator-controller-manager-798d8549d8-ztjwv" Jan 28 18:51:49 crc kubenswrapper[4721]: E0128 18:51:49.053270 4721 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/lmiccini/heat-operator@sha256:429171b44a24e9e4dde46465d90a272d93b15317ea386184d6ad077cc119d3c9" Jan 28 18:51:49 crc kubenswrapper[4721]: E0128 18:51:49.053519 4721 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/lmiccini/heat-operator@sha256:429171b44a24e9e4dde46465d90a272d93b15317ea386184d6ad077cc119d3c9,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-xhnkt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-operator-controller-manager-587c6bfdcf-r46mm_openstack-operators(6ec8e4f3-a711-43af-81da-91be5695e927): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 28 18:51:49 crc kubenswrapper[4721]: E0128 18:51:49.055475 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/heat-operator-controller-manager-587c6bfdcf-r46mm" podUID="6ec8e4f3-a711-43af-81da-91be5695e927" Jan 28 18:51:49 crc kubenswrapper[4721]: I0128 18:51:49.060439 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-798d8549d8-ztjwv"
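The heat-operator pull above fails differently from the QPS rejections earlier: "rpc error: code = Canceled desc = copying config: context canceled" means the kubelet-side context for the CRI ImagePull call was cancelled while CRI-O was still copying the image, and the mariadb, ironic and designate operator pulls below fail the same way. Whatever triggers the cancellation, the kubelet records it as an ordinary pull failure and puts the image into per-image back-off, which is why the same pods log ImagePullBackOff next instead of retrying in a tight loop. A sketch of that bookkeeping using client-go's flowcontrol.Backoff (the 10s initial delay and 5m cap are assumed values for illustration, and the digest in image is truncated only to keep the example short; the full digest appears in the entries above):

```go
package main

import (
	"fmt"
	"time"

	"k8s.io/client-go/util/flowcontrol"
)

func main() {
	// Assumed values: 10s initial back-off growing toward a 5m cap.
	backoff := flowcontrol.NewBackOff(10*time.Second, 5*time.Minute)
	image := "quay.io/lmiccini/heat-operator@sha256:429171b44a24..." // truncated here

	now := time.Now()
	for sync := 1; sync <= 6; sync++ {
		if backoff.IsInBackOffSinceUpdate(image, now) {
			// The state the kubelet reports as ImagePullBackOff.
			fmt.Printf("sync %d: Back-off pulling image (%v)\n", sync, backoff.Get(image))
		} else {
			// A real pull attempt, which in this log fails (context canceled).
			fmt.Printf("sync %d: pulling, attempt fails\n", sync)
			backoff.Next(image, now) // record the failure; the delay grows
		}
		now = now.Add(10 * time.Second) // pod workers resync periodically
	}
}
```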
Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-798d8549d8-ztjwv" Jan 28 18:51:49 crc kubenswrapper[4721]: E0128 18:51:49.900103 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/lmiccini/heat-operator@sha256:429171b44a24e9e4dde46465d90a272d93b15317ea386184d6ad077cc119d3c9\\\"\"" pod="openstack-operators/heat-operator-controller-manager-587c6bfdcf-r46mm" podUID="6ec8e4f3-a711-43af-81da-91be5695e927" Jan 28 18:51:50 crc kubenswrapper[4721]: E0128 18:51:50.190498 4721 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/mariadb-operator@sha256:2d493137559b74e23edb4788b7fbdb38b3e239df0f2d7e6e540e50b2355fc3cf" Jan 28 18:51:50 crc kubenswrapper[4721]: E0128 18:51:50.190876 4721 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/mariadb-operator@sha256:2d493137559b74e23edb4788b7fbdb38b3e239df0f2d7e6e540e50b2355fc3cf,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-6mrvj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod mariadb-operator-controller-manager-67bf948998-pt757_openstack-operators(f901f512-8af4-4e6c-abc8-0fd7d0f26ef3): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 28 18:51:50 crc kubenswrapper[4721]: E0128 18:51:50.192182 4721 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-pt757" podUID="f901f512-8af4-4e6c-abc8-0fd7d0f26ef3" Jan 28 18:51:50 crc kubenswrapper[4721]: E0128 18:51:50.908651 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/mariadb-operator@sha256:2d493137559b74e23edb4788b7fbdb38b3e239df0f2d7e6e540e50b2355fc3cf\\\"\"" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-pt757" podUID="f901f512-8af4-4e6c-abc8-0fd7d0f26ef3" Jan 28 18:51:50 crc kubenswrapper[4721]: E0128 18:51:50.983643 4721 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/lmiccini/ironic-operator@sha256:5f48b6af05a584d3da5c973f83195d999cc151aa0f187cabc8002cb46d60afe5" Jan 28 18:51:50 crc kubenswrapper[4721]: E0128 18:51:50.983898 4721 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/lmiccini/ironic-operator@sha256:5f48b6af05a584d3da5c973f83195d999cc151aa0f187cabc8002cb46d60afe5,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-7b8lh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
ironic-operator-controller-manager-958664b5-wrzbl_openstack-operators(7650ad3f-87f7-4c9a-b795-678ebc7edc7d): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 28 18:51:50 crc kubenswrapper[4721]: E0128 18:51:50.985303 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/ironic-operator-controller-manager-958664b5-wrzbl" podUID="7650ad3f-87f7-4c9a-b795-678ebc7edc7d" Jan 28 18:51:51 crc kubenswrapper[4721]: E0128 18:51:51.914532 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/lmiccini/ironic-operator@sha256:5f48b6af05a584d3da5c973f83195d999cc151aa0f187cabc8002cb46d60afe5\\\"\"" pod="openstack-operators/ironic-operator-controller-manager-958664b5-wrzbl" podUID="7650ad3f-87f7-4c9a-b795-678ebc7edc7d" Jan 28 18:51:52 crc kubenswrapper[4721]: E0128 18:51:52.188042 4721 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/lmiccini/designate-operator@sha256:29a3092217e72f1ec8a163ed3d15a0a5ccc5b3117e64c72bf5e68597cc233b3d" Jan 28 18:51:52 crc kubenswrapper[4721]: E0128 18:51:52.188307 4721 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/lmiccini/designate-operator@sha256:29a3092217e72f1ec8a163ed3d15a0a5ccc5b3117e64c72bf5e68597cc233b3d,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-4tq2q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod designate-operator-controller-manager-66dfbd6f5d-dbf9z_openstack-operators(5f5dbe82-6a18-47da-98e6-00d10a32d1eb): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 28 18:51:52 crc kubenswrapper[4721]: E0128 18:51:52.189774 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/designate-operator-controller-manager-66dfbd6f5d-dbf9z" podUID="5f5dbe82-6a18-47da-98e6-00d10a32d1eb" Jan 28 18:51:52 crc kubenswrapper[4721]: E0128 18:51:52.921811 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/lmiccini/designate-operator@sha256:29a3092217e72f1ec8a163ed3d15a0a5ccc5b3117e64c72bf5e68597cc233b3d\\\"\"" pod="openstack-operators/designate-operator-controller-manager-66dfbd6f5d-dbf9z" podUID="5f5dbe82-6a18-47da-98e6-00d10a32d1eb" Jan 28 18:51:53 crc kubenswrapper[4721]: E0128 18:51:53.740019 4721 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/lmiccini/cinder-operator@sha256:6da7ec7bf701fe1dd489852a16429f163a69073fae67b872dca4b080cc3514ad" Jan 28 18:51:53 crc kubenswrapper[4721]: E0128 18:51:53.740268 4721 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/lmiccini/cinder-operator@sha256:6da7ec7bf701fe1dd489852a16429f163a69073fae67b872dca4b080cc3514ad,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-sjvc6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-operator-controller-manager-f6487bd57-c9pmg_openstack-operators(d258bf47-a441-49ad-a3ad-d5c04c615c9c): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 28 18:51:53 crc kubenswrapper[4721]: E0128 18:51:53.741468 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/cinder-operator-controller-manager-f6487bd57-c9pmg" podUID="d258bf47-a441-49ad-a3ad-d5c04c615c9c" Jan 28 18:51:53 crc kubenswrapper[4721]: E0128 18:51:53.931010 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/lmiccini/cinder-operator@sha256:6da7ec7bf701fe1dd489852a16429f163a69073fae67b872dca4b080cc3514ad\\\"\"" pod="openstack-operators/cinder-operator-controller-manager-f6487bd57-c9pmg" podUID="d258bf47-a441-49ad-a3ad-d5c04c615c9c" Jan 28 18:51:55 crc kubenswrapper[4721]: E0128 18:51:55.187221 4721 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/lmiccini/neutron-operator@sha256:22665b40ffeef62d1a612c1f9f0fa8e97ff95085fad123895d786b770f421fc0" Jan 28 18:51:55 crc kubenswrapper[4721]: E0128 18:51:55.187486 4721 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/lmiccini/neutron-operator@sha256:22665b40ffeef62d1a612c1f9f0fa8e97ff95085fad123895d786b770f421fc0,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-qfhbk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod neutron-operator-controller-manager-694c5bfc85-hv7r4_openstack-operators(b102209d-5846-40f2-bb20-7022d18b9a28): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 28 18:51:55 crc kubenswrapper[4721]: E0128 18:51:55.188702 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/neutron-operator-controller-manager-694c5bfc85-hv7r4" podUID="b102209d-5846-40f2-bb20-7022d18b9a28" Jan 28 18:51:55 crc kubenswrapper[4721]: E0128 18:51:55.945556 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/lmiccini/neutron-operator@sha256:22665b40ffeef62d1a612c1f9f0fa8e97ff95085fad123895d786b770f421fc0\\\"\"" pod="openstack-operators/neutron-operator-controller-manager-694c5bfc85-hv7r4" podUID="b102209d-5846-40f2-bb20-7022d18b9a28" Jan 28 18:51:56 crc kubenswrapper[4721]: E0128 18:51:56.020359 4721 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/lmiccini/glance-operator@sha256:8a7e2637765333c555b0b932c2bfc789235aea2c7276961657a03ef1352a7264" Jan 28 18:51:56 crc kubenswrapper[4721]: E0128 18:51:56.020563 4721 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/lmiccini/glance-operator@sha256:8a7e2637765333c555b0b932c2bfc789235aea2c7276961657a03ef1352a7264,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-rjv29,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod glance-operator-controller-manager-6db5dbd896-7brt7_openstack-operators(6e4d4bd0-d6ac-4268-bc08-86d74adfc33b): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 28 18:51:56 crc kubenswrapper[4721]: E0128 18:51:56.022000 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/glance-operator-controller-manager-6db5dbd896-7brt7" podUID="6e4d4bd0-d6ac-4268-bc08-86d74adfc33b" Jan 28 18:51:56 crc kubenswrapper[4721]: E0128 18:51:56.699354 4721 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/horizon-operator@sha256:027cd7ab61ef5071d9ad6b729c95a98e51cd254642f01dc019d44cc98a9232f8" Jan 28 18:51:56 crc kubenswrapper[4721]: E0128 18:51:56.699647 4721 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/horizon-operator@sha256:027cd7ab61ef5071d9ad6b729c95a98e51cd254642f01dc019d44cc98a9232f8,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-lxnhg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-operator-controller-manager-5fb775575f-6m2fr_openstack-operators(18c18118-f643-4590-9e07-87bffdb4195b): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 28 18:51:56 crc kubenswrapper[4721]: E0128 18:51:56.702050 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-6m2fr" podUID="18c18118-f643-4590-9e07-87bffdb4195b" Jan 28 18:51:56 crc kubenswrapper[4721]: E0128 18:51:56.956247 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/lmiccini/glance-operator@sha256:8a7e2637765333c555b0b932c2bfc789235aea2c7276961657a03ef1352a7264\\\"\"" pod="openstack-operators/glance-operator-controller-manager-6db5dbd896-7brt7" podUID="6e4d4bd0-d6ac-4268-bc08-86d74adfc33b" Jan 28 18:51:56 crc kubenswrapper[4721]: E0128 18:51:56.956242 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/horizon-operator@sha256:027cd7ab61ef5071d9ad6b729c95a98e51cd254642f01dc019d44cc98a9232f8\\\"\"" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-6m2fr" 
podUID="18c18118-f643-4590-9e07-87bffdb4195b" Jan 28 18:51:57 crc kubenswrapper[4721]: E0128 18:51:57.304652 4721 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/lmiccini/octavia-operator@sha256:c7804813a3bba8910a47a5f32bd528335e18397f93cf5f7e7181d3d2c209b59b" Jan 28 18:51:57 crc kubenswrapper[4721]: E0128 18:51:57.304898 4721 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/lmiccini/octavia-operator@sha256:c7804813a3bba8910a47a5f32bd528335e18397f93cf5f7e7181d3d2c209b59b,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-5hldh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod octavia-operator-controller-manager-5c765b4558-r996h_openstack-operators(073e6433-4ca4-499a-8c82-0fda8211ecd3): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 28 18:51:57 crc kubenswrapper[4721]: E0128 18:51:57.306292 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/octavia-operator-controller-manager-5c765b4558-r996h" podUID="073e6433-4ca4-499a-8c82-0fda8211ecd3" Jan 28 18:51:57 crc kubenswrapper[4721]: E0128 18:51:57.963101 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/lmiccini/octavia-operator@sha256:c7804813a3bba8910a47a5f32bd528335e18397f93cf5f7e7181d3d2c209b59b\\\"\"" pod="openstack-operators/octavia-operator-controller-manager-5c765b4558-r996h" podUID="073e6433-4ca4-499a-8c82-0fda8211ecd3" Jan 28 18:52:01 crc kubenswrapper[4721]: I0128 18:52:01.225507 4721 patch_prober.go:28] interesting pod/machine-config-daemon-76rx2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 18:52:01 crc kubenswrapper[4721]: I0128 18:52:01.225890 4721 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-76rx2" podUID="6e3427a4-9a03-4a08-bf7f-7a5e96290ad6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 18:52:03 crc kubenswrapper[4721]: I0128 18:52:03.729459 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/66d34dd5-6c67-40ec-8fc8-16320a5aef1d-cert\") pod \"infra-operator-controller-manager-79955696d6-fd75h\" (UID: \"66d34dd5-6c67-40ec-8fc8-16320a5aef1d\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-fd75h" Jan 28 18:52:03 crc kubenswrapper[4721]: I0128 18:52:03.735278 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/66d34dd5-6c67-40ec-8fc8-16320a5aef1d-cert\") pod \"infra-operator-controller-manager-79955696d6-fd75h\" (UID: \"66d34dd5-6c67-40ec-8fc8-16320a5aef1d\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-fd75h" Jan 28 18:52:03 crc kubenswrapper[4721]: I0128 18:52:03.846026 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-9gf5q" Jan 28 18:52:03 crc kubenswrapper[4721]: I0128 18:52:03.854416 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-79955696d6-fd75h" Jan 28 18:52:06 crc kubenswrapper[4721]: E0128 18:52:06.200467 4721 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/lmiccini/watcher-operator@sha256:35f1eb96f42069bb8f7c33942fb86b41843ba02803464245c16192ccda3d50e4" Jan 28 18:52:06 crc kubenswrapper[4721]: E0128 18:52:06.201348 4721 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/lmiccini/watcher-operator@sha256:35f1eb96f42069bb8f7c33942fb86b41843ba02803464245c16192ccda3d50e4,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-hklsj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-767b8bc766-tkgcv_openstack-operators(b9bc0b6e-0f12-46b4-86c3-c9f56dcfa5d6): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 28 18:52:06 crc kubenswrapper[4721]: E0128 18:52:06.202687 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/watcher-operator-controller-manager-767b8bc766-tkgcv" podUID="b9bc0b6e-0f12-46b4-86c3-c9f56dcfa5d6" Jan 28 18:52:06 crc kubenswrapper[4721]: E0128 18:52:06.751141 4721 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context 
canceled" image="quay.io/openstack-k8s-operators/test-operator@sha256:3e01e99d3ca1b6c20b1bb015b00cfcbffc584f22a93dc6fe4019d63b813c0241" Jan 28 18:52:06 crc kubenswrapper[4721]: E0128 18:52:06.751531 4721 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/test-operator@sha256:3e01e99d3ca1b6c20b1bb015b00cfcbffc584f22a93dc6fe4019d63b813c0241,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-kqmxf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-controller-manager-56f8bfcd9f-f56rw_openstack-operators(066c13ce-1239-494e-bbc6-d175c62c501c): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 28 18:52:06 crc kubenswrapper[4721]: E0128 18:52:06.752782 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-f56rw" podUID="066c13ce-1239-494e-bbc6-d175c62c501c" Jan 28 18:52:07 crc kubenswrapper[4721]: E0128 18:52:07.031135 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/lmiccini/watcher-operator@sha256:35f1eb96f42069bb8f7c33942fb86b41843ba02803464245c16192ccda3d50e4\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-767b8bc766-tkgcv" 
podUID="b9bc0b6e-0f12-46b4-86c3-c9f56dcfa5d6" Jan 28 18:52:07 crc kubenswrapper[4721]: E0128 18:52:07.329034 4721 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/lmiccini/nova-operator@sha256:a992613466db3478a00c20c28639c4a12f6326aa52c40a418d1ec40038c83b61" Jan 28 18:52:07 crc kubenswrapper[4721]: E0128 18:52:07.329753 4721 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/lmiccini/nova-operator@sha256:a992613466db3478a00c20c28639c4a12f6326aa52c40a418d1ec40038c83b61,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-4j8fd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nova-operator-controller-manager-ddcbfd695-ghpgf_openstack-operators(8e4e395a-5b06-45ea-a2af-8a7a1180fc80): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 28 18:52:07 crc kubenswrapper[4721]: E0128 18:52:07.331513 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/nova-operator-controller-manager-ddcbfd695-ghpgf" podUID="8e4e395a-5b06-45ea-a2af-8a7a1180fc80" Jan 28 18:52:08 crc kubenswrapper[4721]: E0128 18:52:08.037286 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/lmiccini/nova-operator@sha256:a992613466db3478a00c20c28639c4a12f6326aa52c40a418d1ec40038c83b61\\\"\"" pod="openstack-operators/nova-operator-controller-manager-ddcbfd695-ghpgf" podUID="8e4e395a-5b06-45ea-a2af-8a7a1180fc80" Jan 28 18:52:09 crc kubenswrapper[4721]: E0128 18:52:09.077463 4721 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.246:5001/openstack-k8s-operators/telemetry-operator:774b657c4a2d169eb939c51d71a146bf4a44e93b" Jan 28 18:52:09 crc kubenswrapper[4721]: E0128 18:52:09.077541 4721 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.246:5001/openstack-k8s-operators/telemetry-operator:774b657c4a2d169eb939c51d71a146bf4a44e93b" Jan 28 18:52:09 crc kubenswrapper[4721]: E0128 18:52:09.077769 4721 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:38.102.83.246:5001/openstack-k8s-operators/telemetry-operator:774b657c4a2d169eb939c51d71a146bf4a44e93b,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-5l9bg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod telemetry-operator-controller-manager-877d65859-2rn2n_openstack-operators(83f4e7da-0144-44a8-886e-7f8c60f56014): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 28 18:52:09 crc kubenswrapper[4721]: E0128 18:52:09.079043 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/telemetry-operator-controller-manager-877d65859-2rn2n" podUID="83f4e7da-0144-44a8-886e-7f8c60f56014" Jan 28 18:52:09 crc kubenswrapper[4721]: E0128 18:52:09.740134 4721 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2" Jan 28 18:52:09 crc kubenswrapper[4721]: E0128 18:52:09.741031 4721 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-45lml,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-vprhw_openstack-operators(a39fc394-2b18-4c7c-a780-0147ddb3a77a): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 28 18:52:09 crc kubenswrapper[4721]: E0128 18:52:09.742662 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-vprhw" podUID="a39fc394-2b18-4c7c-a780-0147ddb3a77a" Jan 28 18:52:10 crc kubenswrapper[4721]: I0128 18:52:10.060095 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-6bc7f4f4cf-pv6ph" event={"ID":"99e08199-2cc8-4f41-8310-f63c0a021a98","Type":"ContainerStarted","Data":"7bdd55fdcb6e928d8b5f254ea904085a9221b76b466dce0d24d56ab9019c732b"} Jan 28 18:52:10 
crc kubenswrapper[4721]: I0128 18:52:10.061257 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-6bc7f4f4cf-pv6ph" Jan 28 18:52:10 crc kubenswrapper[4721]: I0128 18:52:10.086910 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-controller-manager-6bc7f4f4cf-pv6ph" podStartSLOduration=5.365221736 podStartE2EDuration="39.086886236s" podCreationTimestamp="2026-01-28 18:51:31 +0000 UTC" firstStartedPulling="2026-01-28 18:51:33.022908141 +0000 UTC m=+1058.748213701" lastFinishedPulling="2026-01-28 18:52:06.744572641 +0000 UTC m=+1092.469878201" observedRunningTime="2026-01-28 18:52:10.083093258 +0000 UTC m=+1095.808398828" watchObservedRunningTime="2026-01-28 18:52:10.086886236 +0000 UTC m=+1095.812191796" Jan 28 18:52:10 crc kubenswrapper[4721]: I0128 18:52:10.251365 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dxmjl8"] Jan 28 18:52:10 crc kubenswrapper[4721]: W0128 18:52:10.266683 4721 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4bc4914a_125f_48f5_a7df_dbc170eaddd9.slice/crio-f259b66172a799da42e97056fc83ad57cf4c898914e5bd71c07f67bedfe9c133 WatchSource:0}: Error finding container f259b66172a799da42e97056fc83ad57cf4c898914e5bd71c07f67bedfe9c133: Status 404 returned error can't find the container with id f259b66172a799da42e97056fc83ad57cf4c898914e5bd71c07f67bedfe9c133 Jan 28 18:52:10 crc kubenswrapper[4721]: I0128 18:52:10.326975 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-798d8549d8-ztjwv"] Jan 28 18:52:10 crc kubenswrapper[4721]: I0128 18:52:10.368409 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-79955696d6-fd75h"] Jan 28 18:52:11 crc kubenswrapper[4721]: I0128 18:52:11.078146 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-gdb9m" event={"ID":"9c28be52-26d0-4dd5-a3ca-ba3d9888dae8","Type":"ContainerStarted","Data":"b87101db5629ccdeeabfc0b46c0ef172cb8b59128e366fe44275f350dfe5bbd2"} Jan 28 18:52:11 crc kubenswrapper[4721]: I0128 18:52:11.078269 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-gdb9m" Jan 28 18:52:11 crc kubenswrapper[4721]: I0128 18:52:11.084499 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-765668569f-mjxvn" event={"ID":"835d5df3-4ea1-40ce-9bad-325396bfd41f","Type":"ContainerStarted","Data":"9c4c6537a46b8b3178a9bbf30a946d8ea2d798a6e3536a026bac3392e6ca6cf6"} Jan 28 18:52:11 crc kubenswrapper[4721]: I0128 18:52:11.084652 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-765668569f-mjxvn" Jan 28 18:52:11 crc kubenswrapper[4721]: I0128 18:52:11.090856 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-958664b5-wrzbl" event={"ID":"7650ad3f-87f7-4c9a-b795-678ebc7edc7d","Type":"ContainerStarted","Data":"34f9b7e9099c5c4083d12cbf436f05641f3348e84e2d91d1ac108bd726b5915f"} Jan 28 18:52:11 crc kubenswrapper[4721]: I0128 18:52:11.091836 4721 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-958664b5-wrzbl" Jan 28 18:52:11 crc kubenswrapper[4721]: I0128 18:52:11.125420 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-6978b79747-vc75z" event={"ID":"e8f6f9a2-7886-4896-baac-268e88869bb2","Type":"ContainerStarted","Data":"48f7156389388bf9ae51d6083ada357ce0f4bb7d9760148ea2e32b2f9463ab69"} Jan 28 18:52:11 crc kubenswrapper[4721]: I0128 18:52:11.125910 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-gdb9m" podStartSLOduration=7.362050518 podStartE2EDuration="40.125891595s" podCreationTimestamp="2026-01-28 18:51:31 +0000 UTC" firstStartedPulling="2026-01-28 18:51:34.541981462 +0000 UTC m=+1060.267287022" lastFinishedPulling="2026-01-28 18:52:07.305822549 +0000 UTC m=+1093.031128099" observedRunningTime="2026-01-28 18:52:11.124708758 +0000 UTC m=+1096.850014338" watchObservedRunningTime="2026-01-28 18:52:11.125891595 +0000 UTC m=+1096.851197155" Jan 28 18:52:11 crc kubenswrapper[4721]: I0128 18:52:11.126370 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-6978b79747-vc75z" Jan 28 18:52:11 crc kubenswrapper[4721]: I0128 18:52:11.149766 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-66dfbd6f5d-dbf9z" event={"ID":"5f5dbe82-6a18-47da-98e6-00d10a32d1eb","Type":"ContainerStarted","Data":"a885559799b048c2030868f54d3ce7278654f293a8e226506b698797851db578"} Jan 28 18:52:11 crc kubenswrapper[4721]: I0128 18:52:11.150608 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-66dfbd6f5d-dbf9z" Jan 28 18:52:11 crc kubenswrapper[4721]: I0128 18:52:11.155376 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dxmjl8" event={"ID":"4bc4914a-125f-48f5-a7df-dbc170eaddd9","Type":"ContainerStarted","Data":"f259b66172a799da42e97056fc83ad57cf4c898914e5bd71c07f67bedfe9c133"} Jan 28 18:52:11 crc kubenswrapper[4721]: I0128 18:52:11.178437 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-587c6bfdcf-r46mm" event={"ID":"6ec8e4f3-a711-43af-81da-91be5695e927","Type":"ContainerStarted","Data":"42a04a73a5ab874b81ab10b498cbbd262887b453ac4a9341001f9b52bd1071d0"} Jan 28 18:52:11 crc kubenswrapper[4721]: I0128 18:52:11.179294 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-587c6bfdcf-r46mm" Jan 28 18:52:11 crc kubenswrapper[4721]: I0128 18:52:11.180332 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ironic-operator-controller-manager-958664b5-wrzbl" podStartSLOduration=4.578283296 podStartE2EDuration="40.18029972s" podCreationTimestamp="2026-01-28 18:51:31 +0000 UTC" firstStartedPulling="2026-01-28 18:51:34.095535233 +0000 UTC m=+1059.820840793" lastFinishedPulling="2026-01-28 18:52:09.697551657 +0000 UTC m=+1095.422857217" observedRunningTime="2026-01-28 18:52:11.169308786 +0000 UTC m=+1096.894614366" watchObservedRunningTime="2026-01-28 18:52:11.18029972 +0000 UTC m=+1096.905605290" Jan 28 18:52:11 crc kubenswrapper[4721]: I0128 18:52:11.186194 4721 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-js7f2" event={"ID":"2cea4626-d7bc-4166-9c63-8aa4e6358bd3","Type":"ContainerStarted","Data":"d43937746d05bf6e9431c218fda496da412f5caae939bdfcdf6eeddf826973ba"} Jan 28 18:52:11 crc kubenswrapper[4721]: I0128 18:52:11.186973 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-js7f2" Jan 28 18:52:11 crc kubenswrapper[4721]: I0128 18:52:11.190942 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-f6487bd57-c9pmg" event={"ID":"d258bf47-a441-49ad-a3ad-d5c04c615c9c","Type":"ContainerStarted","Data":"ff0d4778d8db25747d81b5ceb619d18179ad3f2a71b753beb7070d5caa8ac660"} Jan 28 18:52:11 crc kubenswrapper[4721]: I0128 18:52:11.191624 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-f6487bd57-c9pmg" Jan 28 18:52:11 crc kubenswrapper[4721]: I0128 18:52:11.210044 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/manila-operator-controller-manager-765668569f-mjxvn" podStartSLOduration=8.031534627 podStartE2EDuration="40.210026101s" podCreationTimestamp="2026-01-28 18:51:31 +0000 UTC" firstStartedPulling="2026-01-28 18:51:34.566139299 +0000 UTC m=+1060.291444859" lastFinishedPulling="2026-01-28 18:52:06.744630773 +0000 UTC m=+1092.469936333" observedRunningTime="2026-01-28 18:52:11.204491408 +0000 UTC m=+1096.929796978" watchObservedRunningTime="2026-01-28 18:52:11.210026101 +0000 UTC m=+1096.935331661" Jan 28 18:52:11 crc kubenswrapper[4721]: I0128 18:52:11.232524 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-694c5bfc85-hv7r4" event={"ID":"b102209d-5846-40f2-bb20-7022d18b9a28","Type":"ContainerStarted","Data":"79d7e0208609304f2650581ab5a98ad1fbcb2194f0efe2f61a555fe84e9296c6"} Jan 28 18:52:11 crc kubenswrapper[4721]: I0128 18:52:11.232879 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-694c5bfc85-hv7r4" Jan 28 18:52:11 crc kubenswrapper[4721]: I0128 18:52:11.236819 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-79955696d6-fd75h" event={"ID":"66d34dd5-6c67-40ec-8fc8-16320a5aef1d","Type":"ContainerStarted","Data":"602cb6ba7ca6d17b2f498813d9dbffe53af475a7c42e2713af10d5f881f794b3"} Jan 28 18:52:11 crc kubenswrapper[4721]: I0128 18:52:11.255835 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-6978b79747-vc75z" podStartSLOduration=6.905607194 podStartE2EDuration="40.255816596s" podCreationTimestamp="2026-01-28 18:51:31 +0000 UTC" firstStartedPulling="2026-01-28 18:51:33.95570227 +0000 UTC m=+1059.681007830" lastFinishedPulling="2026-01-28 18:52:07.305911662 +0000 UTC m=+1093.031217232" observedRunningTime="2026-01-28 18:52:11.252779171 +0000 UTC m=+1096.978084731" watchObservedRunningTime="2026-01-28 18:52:11.255816596 +0000 UTC m=+1096.981122156" Jan 28 18:52:11 crc kubenswrapper[4721]: I0128 18:52:11.256975 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-pt757" 
event={"ID":"f901f512-8af4-4e6c-abc8-0fd7d0f26ef3","Type":"ContainerStarted","Data":"a39495aab34a89f9b79b4f57dc792c7e1b81c39555d9a07a511e5df936125efd"} Jan 28 18:52:11 crc kubenswrapper[4721]: I0128 18:52:11.257377 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-pt757" Jan 28 18:52:11 crc kubenswrapper[4721]: I0128 18:52:11.278588 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-798d8549d8-ztjwv" event={"ID":"23d3546b-cba0-4c15-a8b0-de9cced9fdf8","Type":"ContainerStarted","Data":"08925405ad307598f4c0617ae9b4304b17f465c0fce7a75bbffe6a3c40b3c12e"} Jan 28 18:52:11 crc kubenswrapper[4721]: I0128 18:52:11.278654 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-798d8549d8-ztjwv" event={"ID":"23d3546b-cba0-4c15-a8b0-de9cced9fdf8","Type":"ContainerStarted","Data":"85e918614c0565e97100ca950a17a845211c75aed45c2f5f37124287a3d3cdb8"} Jan 28 18:52:11 crc kubenswrapper[4721]: I0128 18:52:11.279467 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-798d8549d8-ztjwv" Jan 28 18:52:11 crc kubenswrapper[4721]: I0128 18:52:11.291921 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-9sqtl" event={"ID":"021232bf-9e53-4907-80a0-702807db3f23","Type":"ContainerStarted","Data":"8bf442b2e5b06e69ec61eaaae01272448bfac28307cd58a6bc8b70526092eed7"} Jan 28 18:52:11 crc kubenswrapper[4721]: I0128 18:52:11.292244 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-9sqtl" Jan 28 18:52:11 crc kubenswrapper[4721]: I0128 18:52:11.308803 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/designate-operator-controller-manager-66dfbd6f5d-dbf9z" podStartSLOduration=3.738551052 podStartE2EDuration="40.308773936s" podCreationTimestamp="2026-01-28 18:51:31 +0000 UTC" firstStartedPulling="2026-01-28 18:51:33.252524746 +0000 UTC m=+1058.977830306" lastFinishedPulling="2026-01-28 18:52:09.82274763 +0000 UTC m=+1095.548053190" observedRunningTime="2026-01-28 18:52:11.279083116 +0000 UTC m=+1097.004388686" watchObservedRunningTime="2026-01-28 18:52:11.308773936 +0000 UTC m=+1097.034079496" Jan 28 18:52:11 crc kubenswrapper[4721]: I0128 18:52:11.326362 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/heat-operator-controller-manager-587c6bfdcf-r46mm" podStartSLOduration=4.12907872 podStartE2EDuration="40.326335206s" podCreationTimestamp="2026-01-28 18:51:31 +0000 UTC" firstStartedPulling="2026-01-28 18:51:33.507074903 +0000 UTC m=+1059.232380463" lastFinishedPulling="2026-01-28 18:52:09.704331389 +0000 UTC m=+1095.429636949" observedRunningTime="2026-01-28 18:52:11.322574408 +0000 UTC m=+1097.047879978" watchObservedRunningTime="2026-01-28 18:52:11.326335206 +0000 UTC m=+1097.051640766" Jan 28 18:52:11 crc kubenswrapper[4721]: I0128 18:52:11.355715 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-js7f2" podStartSLOduration=7.563901504 podStartE2EDuration="40.355694556s" podCreationTimestamp="2026-01-28 18:51:31 +0000 UTC" firstStartedPulling="2026-01-28 18:51:34.515048149 +0000 UTC 
m=+1060.240353719" lastFinishedPulling="2026-01-28 18:52:07.306841211 +0000 UTC m=+1093.032146771" observedRunningTime="2026-01-28 18:52:11.351699571 +0000 UTC m=+1097.077005131" watchObservedRunningTime="2026-01-28 18:52:11.355694556 +0000 UTC m=+1097.081000116" Jan 28 18:52:11 crc kubenswrapper[4721]: I0128 18:52:11.398259 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/cinder-operator-controller-manager-f6487bd57-c9pmg" podStartSLOduration=4.07098412 podStartE2EDuration="40.398122646s" podCreationTimestamp="2026-01-28 18:51:31 +0000 UTC" firstStartedPulling="2026-01-28 18:51:33.445804873 +0000 UTC m=+1059.171110433" lastFinishedPulling="2026-01-28 18:52:09.772943399 +0000 UTC m=+1095.498248959" observedRunningTime="2026-01-28 18:52:11.386664786 +0000 UTC m=+1097.111970366" watchObservedRunningTime="2026-01-28 18:52:11.398122646 +0000 UTC m=+1097.123428206" Jan 28 18:52:11 crc kubenswrapper[4721]: I0128 18:52:11.425517 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-9sqtl" podStartSLOduration=7.635738663 podStartE2EDuration="40.425492863s" podCreationTimestamp="2026-01-28 18:51:31 +0000 UTC" firstStartedPulling="2026-01-28 18:51:34.517063891 +0000 UTC m=+1060.242369451" lastFinishedPulling="2026-01-28 18:52:07.306818101 +0000 UTC m=+1093.032123651" observedRunningTime="2026-01-28 18:52:11.414819479 +0000 UTC m=+1097.140125059" watchObservedRunningTime="2026-01-28 18:52:11.425492863 +0000 UTC m=+1097.150798423" Jan 28 18:52:11 crc kubenswrapper[4721]: I0128 18:52:11.549758 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-pt757" podStartSLOduration=4.784089675 podStartE2EDuration="40.549730597s" podCreationTimestamp="2026-01-28 18:51:31 +0000 UTC" firstStartedPulling="2026-01-28 18:51:33.931913885 +0000 UTC m=+1059.657219455" lastFinishedPulling="2026-01-28 18:52:09.697554817 +0000 UTC m=+1095.422860377" observedRunningTime="2026-01-28 18:52:11.479680741 +0000 UTC m=+1097.204986311" watchObservedRunningTime="2026-01-28 18:52:11.549730597 +0000 UTC m=+1097.275036167" Jan 28 18:52:11 crc kubenswrapper[4721]: I0128 18:52:11.622126 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-manager-798d8549d8-ztjwv" podStartSLOduration=39.622101484 podStartE2EDuration="39.622101484s" podCreationTimestamp="2026-01-28 18:51:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:52:11.551872654 +0000 UTC m=+1097.277178224" watchObservedRunningTime="2026-01-28 18:52:11.622101484 +0000 UTC m=+1097.347407044" Jan 28 18:52:11 crc kubenswrapper[4721]: I0128 18:52:11.636529 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/neutron-operator-controller-manager-694c5bfc85-hv7r4" podStartSLOduration=4.56528877 podStartE2EDuration="40.636509126s" podCreationTimestamp="2026-01-28 18:51:31 +0000 UTC" firstStartedPulling="2026-01-28 18:51:34.116634184 +0000 UTC m=+1059.841939744" lastFinishedPulling="2026-01-28 18:52:10.18785454 +0000 UTC m=+1095.913160100" observedRunningTime="2026-01-28 18:52:11.607983392 +0000 UTC m=+1097.333288952" watchObservedRunningTime="2026-01-28 18:52:11.636509126 +0000 UTC m=+1097.361814686" Jan 28 18:52:12 crc kubenswrapper[4721]: I0128 
18:52:12.311357 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-6m2fr" event={"ID":"18c18118-f643-4590-9e07-87bffdb4195b","Type":"ContainerStarted","Data":"3674597f3b40188c0166f07cab2b8420a5ab50c81d4deade3fcb022f4ecf451e"} Jan 28 18:52:12 crc kubenswrapper[4721]: I0128 18:52:12.324070 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-6m2fr" Jan 28 18:52:12 crc kubenswrapper[4721]: I0128 18:52:12.353004 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-6m2fr" podStartSLOduration=4.44187103 podStartE2EDuration="41.352982197s" podCreationTimestamp="2026-01-28 18:51:31 +0000 UTC" firstStartedPulling="2026-01-28 18:51:34.566311764 +0000 UTC m=+1060.291617324" lastFinishedPulling="2026-01-28 18:52:11.477422931 +0000 UTC m=+1097.202728491" observedRunningTime="2026-01-28 18:52:12.348095604 +0000 UTC m=+1098.073401174" watchObservedRunningTime="2026-01-28 18:52:12.352982197 +0000 UTC m=+1098.078287757" Jan 28 18:52:13 crc kubenswrapper[4721]: I0128 18:52:13.324681 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-5c765b4558-r996h" event={"ID":"073e6433-4ca4-499a-8c82-0fda8211ecd3","Type":"ContainerStarted","Data":"38b5c751ef923804b221bf7f6ce66e0270f57945448663c54e81cbdced066119"} Jan 28 18:52:13 crc kubenswrapper[4721]: I0128 18:52:13.326270 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-5c765b4558-r996h" Jan 28 18:52:13 crc kubenswrapper[4721]: I0128 18:52:13.331375 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-6db5dbd896-7brt7" event={"ID":"6e4d4bd0-d6ac-4268-bc08-86d74adfc33b","Type":"ContainerStarted","Data":"b0f9dc8f0ee649c398d4ed5b3a0ee6e49501a7ec688164bb6e7e6037f953c3a6"} Jan 28 18:52:13 crc kubenswrapper[4721]: I0128 18:52:13.352478 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/octavia-operator-controller-manager-5c765b4558-r996h" podStartSLOduration=4.046526863 podStartE2EDuration="42.352454477s" podCreationTimestamp="2026-01-28 18:51:31 +0000 UTC" firstStartedPulling="2026-01-28 18:51:34.541643571 +0000 UTC m=+1060.266949131" lastFinishedPulling="2026-01-28 18:52:12.847571185 +0000 UTC m=+1098.572876745" observedRunningTime="2026-01-28 18:52:13.350534537 +0000 UTC m=+1099.075840097" watchObservedRunningTime="2026-01-28 18:52:13.352454477 +0000 UTC m=+1099.077760037" Jan 28 18:52:13 crc kubenswrapper[4721]: I0128 18:52:13.390263 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-controller-manager-6db5dbd896-7brt7" podStartSLOduration=3.698473517 podStartE2EDuration="42.390234601s" podCreationTimestamp="2026-01-28 18:51:31 +0000 UTC" firstStartedPulling="2026-01-28 18:51:33.764881491 +0000 UTC m=+1059.490187061" lastFinishedPulling="2026-01-28 18:52:12.456642585 +0000 UTC m=+1098.181948145" observedRunningTime="2026-01-28 18:52:13.382530759 +0000 UTC m=+1099.107836319" watchObservedRunningTime="2026-01-28 18:52:13.390234601 +0000 UTC m=+1099.115540161" Jan 28 18:52:18 crc kubenswrapper[4721]: I0128 18:52:18.370601 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
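
The "Observed pod startup duration" records can be cross-checked from their own fields: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration appears to additionally subtract the image-pull window, measured on the monotonic m=+ offsets. The horizon-operator record just above works out exactly: 41.352982197 − (1097.202728491 − 1060.291617324) = 4.44187103 s, the logged podStartSLOduration. Verified with the values copied from the log:

```go
package main

import "fmt"

func main() {
	// horizon-operator-controller-manager record from the entry above.
	e2e := 41.352982197        // podStartE2EDuration (s)
	firstPull := 1060.291617324 // firstStartedPulling, monotonic m=+ offset (s)
	lastPull := 1097.202728491  // lastFinishedPulling, monotonic m=+ offset (s)

	// SLO duration excludes the time spent pulling images.
	slo := e2e - (lastPull - firstPull)
	fmt.Printf("podStartSLOduration ≈ %.8fs\n", slo) // ≈ 4.44187103s
}
```
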
pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dxmjl8" event={"ID":"4bc4914a-125f-48f5-a7df-dbc170eaddd9","Type":"ContainerStarted","Data":"7f274b4c098b65d94b7cbbf7ee72285ffbf1cc3205dc8b3438f8c8e240324c79"} Jan 28 18:52:18 crc kubenswrapper[4721]: I0128 18:52:18.372479 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dxmjl8" Jan 28 18:52:18 crc kubenswrapper[4721]: I0128 18:52:18.377010 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-79955696d6-fd75h" event={"ID":"66d34dd5-6c67-40ec-8fc8-16320a5aef1d","Type":"ContainerStarted","Data":"557fc7886bbffccbc0e06217cf19a32f3ef1d043358c8a9686972e86216d01a0"} Jan 28 18:52:18 crc kubenswrapper[4721]: I0128 18:52:18.377887 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-79955696d6-fd75h" Jan 28 18:52:18 crc kubenswrapper[4721]: I0128 18:52:18.403850 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dxmjl8" podStartSLOduration=40.181940727 podStartE2EDuration="47.403823376s" podCreationTimestamp="2026-01-28 18:51:31 +0000 UTC" firstStartedPulling="2026-01-28 18:52:10.270113328 +0000 UTC m=+1095.995418888" lastFinishedPulling="2026-01-28 18:52:17.491995977 +0000 UTC m=+1103.217301537" observedRunningTime="2026-01-28 18:52:18.396788425 +0000 UTC m=+1104.122093995" watchObservedRunningTime="2026-01-28 18:52:18.403823376 +0000 UTC m=+1104.129128936" Jan 28 18:52:18 crc kubenswrapper[4721]: I0128 18:52:18.417355 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-79955696d6-fd75h" podStartSLOduration=40.313754617 podStartE2EDuration="47.417332849s" podCreationTimestamp="2026-01-28 18:51:31 +0000 UTC" firstStartedPulling="2026-01-28 18:52:10.406313326 +0000 UTC m=+1096.131618886" lastFinishedPulling="2026-01-28 18:52:17.509891568 +0000 UTC m=+1103.235197118" observedRunningTime="2026-01-28 18:52:18.415278735 +0000 UTC m=+1104.140584285" watchObservedRunningTime="2026-01-28 18:52:18.417332849 +0000 UTC m=+1104.142638409" Jan 28 18:52:19 crc kubenswrapper[4721]: I0128 18:52:19.066734 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-798d8549d8-ztjwv" Jan 28 18:52:19 crc kubenswrapper[4721]: I0128 18:52:19.386921 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-767b8bc766-tkgcv" event={"ID":"b9bc0b6e-0f12-46b4-86c3-c9f56dcfa5d6","Type":"ContainerStarted","Data":"a74b7127cb560226df882890bd5765d4133086a4f01ba459ad3f6b9468dc5f76"} Jan 28 18:52:19 crc kubenswrapper[4721]: I0128 18:52:19.387244 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-767b8bc766-tkgcv" Jan 28 18:52:19 crc kubenswrapper[4721]: I0128 18:52:19.406437 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-767b8bc766-tkgcv" podStartSLOduration=2.911643203 podStartE2EDuration="47.406414985s" podCreationTimestamp="2026-01-28 18:51:32 +0000 UTC" firstStartedPulling="2026-01-28 18:51:34.553653088 +0000 UTC m=+1060.278958648" 
lastFinishedPulling="2026-01-28 18:52:19.04842487 +0000 UTC m=+1104.773730430" observedRunningTime="2026-01-28 18:52:19.404032839 +0000 UTC m=+1105.129338419" watchObservedRunningTime="2026-01-28 18:52:19.406414985 +0000 UTC m=+1105.131720545" Jan 28 18:52:20 crc kubenswrapper[4721]: I0128 18:52:20.396040 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-ddcbfd695-ghpgf" event={"ID":"8e4e395a-5b06-45ea-a2af-8a7a1180fc80","Type":"ContainerStarted","Data":"2e696b3c4eb979b8b92776777b60f80ef3d4d47680bea796621b8ad98a0459c7"} Jan 28 18:52:20 crc kubenswrapper[4721]: I0128 18:52:20.396602 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-ddcbfd695-ghpgf" Jan 28 18:52:20 crc kubenswrapper[4721]: I0128 18:52:20.415539 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-controller-manager-ddcbfd695-ghpgf" podStartSLOduration=3.992292665 podStartE2EDuration="49.415520857s" podCreationTimestamp="2026-01-28 18:51:31 +0000 UTC" firstStartedPulling="2026-01-28 18:51:34.542489118 +0000 UTC m=+1060.267794678" lastFinishedPulling="2026-01-28 18:52:19.96571731 +0000 UTC m=+1105.691022870" observedRunningTime="2026-01-28 18:52:20.409133437 +0000 UTC m=+1106.134439017" watchObservedRunningTime="2026-01-28 18:52:20.415520857 +0000 UTC m=+1106.140826417" Jan 28 18:52:21 crc kubenswrapper[4721]: E0128 18:52:21.531992 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:3e01e99d3ca1b6c20b1bb015b00cfcbffc584f22a93dc6fe4019d63b813c0241\\\"\"" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-f56rw" podUID="066c13ce-1239-494e-bbc6-d175c62c501c" Jan 28 18:52:21 crc kubenswrapper[4721]: I0128 18:52:21.848233 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-6bc7f4f4cf-pv6ph" Jan 28 18:52:21 crc kubenswrapper[4721]: I0128 18:52:21.863058 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-f6487bd57-c9pmg" Jan 28 18:52:21 crc kubenswrapper[4721]: I0128 18:52:21.912070 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-66dfbd6f5d-dbf9z" Jan 28 18:52:22 crc kubenswrapper[4721]: I0128 18:52:22.227446 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-6db5dbd896-7brt7" Jan 28 18:52:22 crc kubenswrapper[4721]: I0128 18:52:22.229320 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-6db5dbd896-7brt7" Jan 28 18:52:22 crc kubenswrapper[4721]: I0128 18:52:22.274110 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-587c6bfdcf-r46mm" Jan 28 18:52:22 crc kubenswrapper[4721]: I0128 18:52:22.275503 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-6978b79747-vc75z" Jan 28 18:52:22 crc kubenswrapper[4721]: I0128 18:52:22.326590 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="ready" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-6m2fr" Jan 28 18:52:22 crc kubenswrapper[4721]: I0128 18:52:22.328715 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-765668569f-mjxvn" Jan 28 18:52:22 crc kubenswrapper[4721]: I0128 18:52:22.372201 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-pt757" Jan 28 18:52:22 crc kubenswrapper[4721]: I0128 18:52:22.403198 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-694c5bfc85-hv7r4" Jan 28 18:52:22 crc kubenswrapper[4721]: I0128 18:52:22.411310 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-958664b5-wrzbl" Jan 28 18:52:22 crc kubenswrapper[4721]: I0128 18:52:22.585451 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-5c765b4558-r996h" Jan 28 18:52:22 crc kubenswrapper[4721]: I0128 18:52:22.788687 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-js7f2" Jan 28 18:52:22 crc kubenswrapper[4721]: I0128 18:52:22.822954 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-gdb9m" Jan 28 18:52:22 crc kubenswrapper[4721]: I0128 18:52:22.875601 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-9sqtl" Jan 28 18:52:23 crc kubenswrapper[4721]: E0128 18:52:23.530663 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.246:5001/openstack-k8s-operators/telemetry-operator:774b657c4a2d169eb939c51d71a146bf4a44e93b\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-877d65859-2rn2n" podUID="83f4e7da-0144-44a8-886e-7f8c60f56014" Jan 28 18:52:23 crc kubenswrapper[4721]: I0128 18:52:23.862099 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-79955696d6-fd75h" Jan 28 18:52:25 crc kubenswrapper[4721]: E0128 18:52:25.534765 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-vprhw" podUID="a39fc394-2b18-4c7c-a780-0147ddb3a77a" Jan 28 18:52:28 crc kubenswrapper[4721]: I0128 18:52:28.815901 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dxmjl8" Jan 28 18:52:31 crc kubenswrapper[4721]: I0128 18:52:31.225234 4721 patch_prober.go:28] interesting pod/machine-config-daemon-76rx2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 
18:52:31 crc kubenswrapper[4721]: I0128 18:52:31.225621 4721 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-76rx2" podUID="6e3427a4-9a03-4a08-bf7f-7a5e96290ad6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 18:52:32 crc kubenswrapper[4721]: I0128 18:52:32.549333 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-ddcbfd695-ghpgf" Jan 28 18:52:32 crc kubenswrapper[4721]: I0128 18:52:32.948914 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-767b8bc766-tkgcv" Jan 28 18:52:37 crc kubenswrapper[4721]: I0128 18:52:37.553391 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-877d65859-2rn2n" event={"ID":"83f4e7da-0144-44a8-886e-7f8c60f56014","Type":"ContainerStarted","Data":"e4b3c26818d060b1756e7a59255d99f6a804766f7e24458508b105f39cb0a52d"} Jan 28 18:52:37 crc kubenswrapper[4721]: I0128 18:52:37.554221 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-877d65859-2rn2n" Jan 28 18:52:37 crc kubenswrapper[4721]: I0128 18:52:37.574811 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/telemetry-operator-controller-manager-877d65859-2rn2n" podStartSLOduration=3.368085289 podStartE2EDuration="1m5.574790843s" podCreationTimestamp="2026-01-28 18:51:32 +0000 UTC" firstStartedPulling="2026-01-28 18:51:34.566391317 +0000 UTC m=+1060.291696877" lastFinishedPulling="2026-01-28 18:52:36.773096871 +0000 UTC m=+1122.498402431" observedRunningTime="2026-01-28 18:52:37.568530066 +0000 UTC m=+1123.293835656" watchObservedRunningTime="2026-01-28 18:52:37.574790843 +0000 UTC m=+1123.300096403" Jan 28 18:52:38 crc kubenswrapper[4721]: I0128 18:52:38.563510 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-vprhw" event={"ID":"a39fc394-2b18-4c7c-a780-0147ddb3a77a","Type":"ContainerStarted","Data":"c46f9c50c29349dc8370b1706204261dce88fd228ce080875eac3b45627a4ec0"} Jan 28 18:52:38 crc kubenswrapper[4721]: I0128 18:52:38.567334 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-f56rw" event={"ID":"066c13ce-1239-494e-bbc6-d175c62c501c","Type":"ContainerStarted","Data":"afb52b2dcb3338387dfe077637794ef95a2ab8eccc4f5a98c836a306d64257a9"} Jan 28 18:52:38 crc kubenswrapper[4721]: I0128 18:52:38.567749 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-f56rw" Jan 28 18:52:38 crc kubenswrapper[4721]: I0128 18:52:38.588623 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-vprhw" podStartSLOduration=3.749706104 podStartE2EDuration="1m6.588601603s" podCreationTimestamp="2026-01-28 18:51:32 +0000 UTC" firstStartedPulling="2026-01-28 18:51:34.604825932 +0000 UTC m=+1060.330131492" lastFinishedPulling="2026-01-28 18:52:37.443721441 +0000 UTC m=+1123.169026991" observedRunningTime="2026-01-28 18:52:38.586047423 +0000 UTC m=+1124.311352983" watchObservedRunningTime="2026-01-28 
18:52:38.588601603 +0000 UTC m=+1124.313907163" Jan 28 18:52:38 crc kubenswrapper[4721]: I0128 18:52:38.610934 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-f56rw" podStartSLOduration=3.760227654 podStartE2EDuration="1m6.610908951s" podCreationTimestamp="2026-01-28 18:51:32 +0000 UTC" firstStartedPulling="2026-01-28 18:51:34.593850798 +0000 UTC m=+1060.319156358" lastFinishedPulling="2026-01-28 18:52:37.444532095 +0000 UTC m=+1123.169837655" observedRunningTime="2026-01-28 18:52:38.609350462 +0000 UTC m=+1124.334656042" watchObservedRunningTime="2026-01-28 18:52:38.610908951 +0000 UTC m=+1124.336214511" Jan 28 18:52:42 crc kubenswrapper[4721]: I0128 18:52:42.947276 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-877d65859-2rn2n" Jan 28 18:52:43 crc kubenswrapper[4721]: I0128 18:52:43.369899 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-f56rw" Jan 28 18:52:58 crc kubenswrapper[4721]: I0128 18:52:58.389414 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-nzfk5"] Jan 28 18:52:58 crc kubenswrapper[4721]: I0128 18:52:58.391260 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-nzfk5" Jan 28 18:52:58 crc kubenswrapper[4721]: I0128 18:52:58.393687 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns" Jan 28 18:52:58 crc kubenswrapper[4721]: I0128 18:52:58.393699 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-n54hq" Jan 28 18:52:58 crc kubenswrapper[4721]: I0128 18:52:58.394333 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt" Jan 28 18:52:58 crc kubenswrapper[4721]: I0128 18:52:58.395805 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt" Jan 28 18:52:58 crc kubenswrapper[4721]: I0128 18:52:58.417939 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-nzfk5"] Jan 28 18:52:58 crc kubenswrapper[4721]: I0128 18:52:58.459650 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b1c7fb27-5095-4102-89b3-5b2e10ff6347-config\") pod \"dnsmasq-dns-675f4bcbfc-nzfk5\" (UID: \"b1c7fb27-5095-4102-89b3-5b2e10ff6347\") " pod="openstack/dnsmasq-dns-675f4bcbfc-nzfk5" Jan 28 18:52:58 crc kubenswrapper[4721]: I0128 18:52:58.459798 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fvdjd\" (UniqueName: \"kubernetes.io/projected/b1c7fb27-5095-4102-89b3-5b2e10ff6347-kube-api-access-fvdjd\") pod \"dnsmasq-dns-675f4bcbfc-nzfk5\" (UID: \"b1c7fb27-5095-4102-89b3-5b2e10ff6347\") " pod="openstack/dnsmasq-dns-675f4bcbfc-nzfk5" Jan 28 18:52:58 crc kubenswrapper[4721]: I0128 18:52:58.477649 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-qpxgp"] Jan 28 18:52:58 crc kubenswrapper[4721]: I0128 18:52:58.479757 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-qpxgp" Jan 28 18:52:58 crc kubenswrapper[4721]: I0128 18:52:58.492669 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc" Jan 28 18:52:58 crc kubenswrapper[4721]: I0128 18:52:58.494641 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-qpxgp"] Jan 28 18:52:58 crc kubenswrapper[4721]: I0128 18:52:58.561692 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/40509d95-6418-4f4c-96a3-374874891872-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-qpxgp\" (UID: \"40509d95-6418-4f4c-96a3-374874891872\") " pod="openstack/dnsmasq-dns-78dd6ddcc-qpxgp" Jan 28 18:52:58 crc kubenswrapper[4721]: I0128 18:52:58.561744 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/40509d95-6418-4f4c-96a3-374874891872-config\") pod \"dnsmasq-dns-78dd6ddcc-qpxgp\" (UID: \"40509d95-6418-4f4c-96a3-374874891872\") " pod="openstack/dnsmasq-dns-78dd6ddcc-qpxgp" Jan 28 18:52:58 crc kubenswrapper[4721]: I0128 18:52:58.561926 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fvdjd\" (UniqueName: \"kubernetes.io/projected/b1c7fb27-5095-4102-89b3-5b2e10ff6347-kube-api-access-fvdjd\") pod \"dnsmasq-dns-675f4bcbfc-nzfk5\" (UID: \"b1c7fb27-5095-4102-89b3-5b2e10ff6347\") " pod="openstack/dnsmasq-dns-675f4bcbfc-nzfk5" Jan 28 18:52:58 crc kubenswrapper[4721]: I0128 18:52:58.562073 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b1c7fb27-5095-4102-89b3-5b2e10ff6347-config\") pod \"dnsmasq-dns-675f4bcbfc-nzfk5\" (UID: \"b1c7fb27-5095-4102-89b3-5b2e10ff6347\") " pod="openstack/dnsmasq-dns-675f4bcbfc-nzfk5" Jan 28 18:52:58 crc kubenswrapper[4721]: I0128 18:52:58.562158 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-md2vn\" (UniqueName: \"kubernetes.io/projected/40509d95-6418-4f4c-96a3-374874891872-kube-api-access-md2vn\") pod \"dnsmasq-dns-78dd6ddcc-qpxgp\" (UID: \"40509d95-6418-4f4c-96a3-374874891872\") " pod="openstack/dnsmasq-dns-78dd6ddcc-qpxgp" Jan 28 18:52:58 crc kubenswrapper[4721]: I0128 18:52:58.563577 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b1c7fb27-5095-4102-89b3-5b2e10ff6347-config\") pod \"dnsmasq-dns-675f4bcbfc-nzfk5\" (UID: \"b1c7fb27-5095-4102-89b3-5b2e10ff6347\") " pod="openstack/dnsmasq-dns-675f4bcbfc-nzfk5" Jan 28 18:52:58 crc kubenswrapper[4721]: I0128 18:52:58.587485 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fvdjd\" (UniqueName: \"kubernetes.io/projected/b1c7fb27-5095-4102-89b3-5b2e10ff6347-kube-api-access-fvdjd\") pod \"dnsmasq-dns-675f4bcbfc-nzfk5\" (UID: \"b1c7fb27-5095-4102-89b3-5b2e10ff6347\") " pod="openstack/dnsmasq-dns-675f4bcbfc-nzfk5" Jan 28 18:52:58 crc kubenswrapper[4721]: I0128 18:52:58.663652 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-md2vn\" (UniqueName: \"kubernetes.io/projected/40509d95-6418-4f4c-96a3-374874891872-kube-api-access-md2vn\") pod \"dnsmasq-dns-78dd6ddcc-qpxgp\" (UID: \"40509d95-6418-4f4c-96a3-374874891872\") " pod="openstack/dnsmasq-dns-78dd6ddcc-qpxgp" Jan 28 18:52:58 crc 
kubenswrapper[4721]: I0128 18:52:58.663730 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/40509d95-6418-4f4c-96a3-374874891872-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-qpxgp\" (UID: \"40509d95-6418-4f4c-96a3-374874891872\") " pod="openstack/dnsmasq-dns-78dd6ddcc-qpxgp" Jan 28 18:52:58 crc kubenswrapper[4721]: I0128 18:52:58.663753 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/40509d95-6418-4f4c-96a3-374874891872-config\") pod \"dnsmasq-dns-78dd6ddcc-qpxgp\" (UID: \"40509d95-6418-4f4c-96a3-374874891872\") " pod="openstack/dnsmasq-dns-78dd6ddcc-qpxgp" Jan 28 18:52:58 crc kubenswrapper[4721]: I0128 18:52:58.664717 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/40509d95-6418-4f4c-96a3-374874891872-config\") pod \"dnsmasq-dns-78dd6ddcc-qpxgp\" (UID: \"40509d95-6418-4f4c-96a3-374874891872\") " pod="openstack/dnsmasq-dns-78dd6ddcc-qpxgp" Jan 28 18:52:58 crc kubenswrapper[4721]: I0128 18:52:58.665579 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/40509d95-6418-4f4c-96a3-374874891872-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-qpxgp\" (UID: \"40509d95-6418-4f4c-96a3-374874891872\") " pod="openstack/dnsmasq-dns-78dd6ddcc-qpxgp" Jan 28 18:52:58 crc kubenswrapper[4721]: I0128 18:52:58.685538 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-md2vn\" (UniqueName: \"kubernetes.io/projected/40509d95-6418-4f4c-96a3-374874891872-kube-api-access-md2vn\") pod \"dnsmasq-dns-78dd6ddcc-qpxgp\" (UID: \"40509d95-6418-4f4c-96a3-374874891872\") " pod="openstack/dnsmasq-dns-78dd6ddcc-qpxgp" Jan 28 18:52:58 crc kubenswrapper[4721]: I0128 18:52:58.728223 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-nzfk5" Jan 28 18:52:58 crc kubenswrapper[4721]: I0128 18:52:58.808888 4721 util.go:30] "No sandbox for pod can be found. 
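
For each volume of the new dnsmasq-dns pods, the kubelet volume manager logs the same three-step sequence: VerifyControllerAttachedVolume (reconciler_common.go:245), then MountVolume started (reconciler_common.go:218), then MountVolume.SetUp succeeded (operation_generator.go:637), applied here to the config/dns-svc ConfigMaps and the projected kube-api-access-* service-account token, each keyed by its UniqueName. A loose sketch of that ordering — function and type names are mine for illustration; the real reconciler runs these as asynchronous operations against desired/actual state caches:

```go
package main

import "fmt"

// Illustrative only: push one pod's desired volumes through the
// verify -> mount -> setup pipeline in order, as the log above shows.
type volume struct{ uniqueName, kind string }

func reconcile(pod string, vols []volume) {
	for _, v := range vols {
		fmt.Printf("VerifyControllerAttachedVolume started for %q (%s) pod=%s\n", v.uniqueName, v.kind, pod)
		fmt.Printf("MountVolume started for %q pod=%s\n", v.uniqueName, pod)
		fmt.Printf("MountVolume.SetUp succeeded for %q pod=%s\n", v.uniqueName, pod)
	}
}

func main() {
	reconcile("openstack/dnsmasq-dns-78dd6ddcc-qpxgp", []volume{
		{"kubernetes.io/configmap/40509d95-...-dns-svc", "configmap"},
		{"kubernetes.io/configmap/40509d95-...-config", "configmap"},
		{"kubernetes.io/projected/40509d95-...-kube-api-access-md2vn", "projected"},
	})
}
```
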
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-qpxgp" Jan 28 18:52:59 crc kubenswrapper[4721]: I0128 18:52:59.251721 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-nzfk5"] Jan 28 18:52:59 crc kubenswrapper[4721]: I0128 18:52:59.394506 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-qpxgp"] Jan 28 18:52:59 crc kubenswrapper[4721]: W0128 18:52:59.396150 4721 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod40509d95_6418_4f4c_96a3_374874891872.slice/crio-167c4f293fb360c21132188a3e3712790bc7c03d0372ff782ae83032846b64b7 WatchSource:0}: Error finding container 167c4f293fb360c21132188a3e3712790bc7c03d0372ff782ae83032846b64b7: Status 404 returned error can't find the container with id 167c4f293fb360c21132188a3e3712790bc7c03d0372ff782ae83032846b64b7 Jan 28 18:52:59 crc kubenswrapper[4721]: I0128 18:52:59.740964 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-qpxgp" event={"ID":"40509d95-6418-4f4c-96a3-374874891872","Type":"ContainerStarted","Data":"167c4f293fb360c21132188a3e3712790bc7c03d0372ff782ae83032846b64b7"} Jan 28 18:52:59 crc kubenswrapper[4721]: I0128 18:52:59.742196 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-nzfk5" event={"ID":"b1c7fb27-5095-4102-89b3-5b2e10ff6347","Type":"ContainerStarted","Data":"37447b1962108057963a9a298b62fbcb5aa50662fc6410b7a4882cb8c516bc32"} Jan 28 18:53:01 crc kubenswrapper[4721]: I0128 18:53:01.231006 4721 patch_prober.go:28] interesting pod/machine-config-daemon-76rx2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 18:53:01 crc kubenswrapper[4721]: I0128 18:53:01.231407 4721 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-76rx2" podUID="6e3427a4-9a03-4a08-bf7f-7a5e96290ad6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 18:53:01 crc kubenswrapper[4721]: I0128 18:53:01.231466 4721 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-76rx2" Jan 28 18:53:01 crc kubenswrapper[4721]: I0128 18:53:01.232131 4721 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"cf577cfdc0b7c29bec411ba83a64318b81b8ea16d7ec474c8974a1dbea166b1d"} pod="openshift-machine-config-operator/machine-config-daemon-76rx2" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 28 18:53:01 crc kubenswrapper[4721]: I0128 18:53:01.232205 4721 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-76rx2" podUID="6e3427a4-9a03-4a08-bf7f-7a5e96290ad6" containerName="machine-config-daemon" containerID="cri-o://cf577cfdc0b7c29bec411ba83a64318b81b8ea16d7ec474c8974a1dbea166b1d" gracePeriod=600 Jan 28 18:53:01 crc kubenswrapper[4721]: I0128 18:53:01.337854 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-nzfk5"] Jan 28 18:53:01 crc kubenswrapper[4721]: 
I0128 18:53:01.391378 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-59s7w"] Jan 28 18:53:01 crc kubenswrapper[4721]: I0128 18:53:01.393466 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-59s7w" Jan 28 18:53:01 crc kubenswrapper[4721]: I0128 18:53:01.413530 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-59s7w"] Jan 28 18:53:01 crc kubenswrapper[4721]: I0128 18:53:01.527088 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/dafbdcb9-9fbe-40c2-920d-6111bf0e2d88-dns-svc\") pod \"dnsmasq-dns-666b6646f7-59s7w\" (UID: \"dafbdcb9-9fbe-40c2-920d-6111bf0e2d88\") " pod="openstack/dnsmasq-dns-666b6646f7-59s7w" Jan 28 18:53:01 crc kubenswrapper[4721]: I0128 18:53:01.527236 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xl78z\" (UniqueName: \"kubernetes.io/projected/dafbdcb9-9fbe-40c2-920d-6111bf0e2d88-kube-api-access-xl78z\") pod \"dnsmasq-dns-666b6646f7-59s7w\" (UID: \"dafbdcb9-9fbe-40c2-920d-6111bf0e2d88\") " pod="openstack/dnsmasq-dns-666b6646f7-59s7w" Jan 28 18:53:01 crc kubenswrapper[4721]: I0128 18:53:01.527308 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dafbdcb9-9fbe-40c2-920d-6111bf0e2d88-config\") pod \"dnsmasq-dns-666b6646f7-59s7w\" (UID: \"dafbdcb9-9fbe-40c2-920d-6111bf0e2d88\") " pod="openstack/dnsmasq-dns-666b6646f7-59s7w" Jan 28 18:53:01 crc kubenswrapper[4721]: I0128 18:53:01.628188 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/dafbdcb9-9fbe-40c2-920d-6111bf0e2d88-dns-svc\") pod \"dnsmasq-dns-666b6646f7-59s7w\" (UID: \"dafbdcb9-9fbe-40c2-920d-6111bf0e2d88\") " pod="openstack/dnsmasq-dns-666b6646f7-59s7w" Jan 28 18:53:01 crc kubenswrapper[4721]: I0128 18:53:01.628282 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xl78z\" (UniqueName: \"kubernetes.io/projected/dafbdcb9-9fbe-40c2-920d-6111bf0e2d88-kube-api-access-xl78z\") pod \"dnsmasq-dns-666b6646f7-59s7w\" (UID: \"dafbdcb9-9fbe-40c2-920d-6111bf0e2d88\") " pod="openstack/dnsmasq-dns-666b6646f7-59s7w" Jan 28 18:53:01 crc kubenswrapper[4721]: I0128 18:53:01.628365 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dafbdcb9-9fbe-40c2-920d-6111bf0e2d88-config\") pod \"dnsmasq-dns-666b6646f7-59s7w\" (UID: \"dafbdcb9-9fbe-40c2-920d-6111bf0e2d88\") " pod="openstack/dnsmasq-dns-666b6646f7-59s7w" Jan 28 18:53:01 crc kubenswrapper[4721]: I0128 18:53:01.629261 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dafbdcb9-9fbe-40c2-920d-6111bf0e2d88-config\") pod \"dnsmasq-dns-666b6646f7-59s7w\" (UID: \"dafbdcb9-9fbe-40c2-920d-6111bf0e2d88\") " pod="openstack/dnsmasq-dns-666b6646f7-59s7w" Jan 28 18:53:01 crc kubenswrapper[4721]: I0128 18:53:01.629838 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/dafbdcb9-9fbe-40c2-920d-6111bf0e2d88-dns-svc\") pod \"dnsmasq-dns-666b6646f7-59s7w\" (UID: \"dafbdcb9-9fbe-40c2-920d-6111bf0e2d88\") " 
pod="openstack/dnsmasq-dns-666b6646f7-59s7w" Jan 28 18:53:01 crc kubenswrapper[4721]: I0128 18:53:01.677293 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xl78z\" (UniqueName: \"kubernetes.io/projected/dafbdcb9-9fbe-40c2-920d-6111bf0e2d88-kube-api-access-xl78z\") pod \"dnsmasq-dns-666b6646f7-59s7w\" (UID: \"dafbdcb9-9fbe-40c2-920d-6111bf0e2d88\") " pod="openstack/dnsmasq-dns-666b6646f7-59s7w" Jan 28 18:53:01 crc kubenswrapper[4721]: I0128 18:53:01.729600 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-59s7w" Jan 28 18:53:01 crc kubenswrapper[4721]: I0128 18:53:01.790362 4721 generic.go:334] "Generic (PLEG): container finished" podID="6e3427a4-9a03-4a08-bf7f-7a5e96290ad6" containerID="cf577cfdc0b7c29bec411ba83a64318b81b8ea16d7ec474c8974a1dbea166b1d" exitCode=0 Jan 28 18:53:01 crc kubenswrapper[4721]: I0128 18:53:01.790423 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-76rx2" event={"ID":"6e3427a4-9a03-4a08-bf7f-7a5e96290ad6","Type":"ContainerDied","Data":"cf577cfdc0b7c29bec411ba83a64318b81b8ea16d7ec474c8974a1dbea166b1d"} Jan 28 18:53:01 crc kubenswrapper[4721]: I0128 18:53:01.790695 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-76rx2" event={"ID":"6e3427a4-9a03-4a08-bf7f-7a5e96290ad6","Type":"ContainerStarted","Data":"550b2d16893b3820a2b08c43cf1c1d92f4cff5c63dda2753410f76f8e772711f"} Jan 28 18:53:01 crc kubenswrapper[4721]: I0128 18:53:01.790720 4721 scope.go:117] "RemoveContainer" containerID="05b5a08257768ab03feca7d9732c3a599d23c36babbadf35cb5007f36020b414" Jan 28 18:53:01 crc kubenswrapper[4721]: I0128 18:53:01.843437 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-qpxgp"] Jan 28 18:53:01 crc kubenswrapper[4721]: I0128 18:53:01.884452 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-vc8rk"] Jan 28 18:53:01 crc kubenswrapper[4721]: I0128 18:53:01.888586 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-vc8rk" Jan 28 18:53:01 crc kubenswrapper[4721]: I0128 18:53:01.967291 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-vc8rk"] Jan 28 18:53:02 crc kubenswrapper[4721]: I0128 18:53:02.042445 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aecb4886-3e12-46f5-b2dd-20260e64e4c7-config\") pod \"dnsmasq-dns-57d769cc4f-vc8rk\" (UID: \"aecb4886-3e12-46f5-b2dd-20260e64e4c7\") " pod="openstack/dnsmasq-dns-57d769cc4f-vc8rk" Jan 28 18:53:02 crc kubenswrapper[4721]: I0128 18:53:02.042681 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/aecb4886-3e12-46f5-b2dd-20260e64e4c7-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-vc8rk\" (UID: \"aecb4886-3e12-46f5-b2dd-20260e64e4c7\") " pod="openstack/dnsmasq-dns-57d769cc4f-vc8rk" Jan 28 18:53:02 crc kubenswrapper[4721]: I0128 18:53:02.042811 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tfjld\" (UniqueName: \"kubernetes.io/projected/aecb4886-3e12-46f5-b2dd-20260e64e4c7-kube-api-access-tfjld\") pod \"dnsmasq-dns-57d769cc4f-vc8rk\" (UID: \"aecb4886-3e12-46f5-b2dd-20260e64e4c7\") " pod="openstack/dnsmasq-dns-57d769cc4f-vc8rk" Jan 28 18:53:02 crc kubenswrapper[4721]: I0128 18:53:02.144365 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aecb4886-3e12-46f5-b2dd-20260e64e4c7-config\") pod \"dnsmasq-dns-57d769cc4f-vc8rk\" (UID: \"aecb4886-3e12-46f5-b2dd-20260e64e4c7\") " pod="openstack/dnsmasq-dns-57d769cc4f-vc8rk" Jan 28 18:53:02 crc kubenswrapper[4721]: I0128 18:53:02.145020 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/aecb4886-3e12-46f5-b2dd-20260e64e4c7-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-vc8rk\" (UID: \"aecb4886-3e12-46f5-b2dd-20260e64e4c7\") " pod="openstack/dnsmasq-dns-57d769cc4f-vc8rk" Jan 28 18:53:02 crc kubenswrapper[4721]: I0128 18:53:02.145126 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tfjld\" (UniqueName: \"kubernetes.io/projected/aecb4886-3e12-46f5-b2dd-20260e64e4c7-kube-api-access-tfjld\") pod \"dnsmasq-dns-57d769cc4f-vc8rk\" (UID: \"aecb4886-3e12-46f5-b2dd-20260e64e4c7\") " pod="openstack/dnsmasq-dns-57d769cc4f-vc8rk" Jan 28 18:53:02 crc kubenswrapper[4721]: I0128 18:53:02.145488 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aecb4886-3e12-46f5-b2dd-20260e64e4c7-config\") pod \"dnsmasq-dns-57d769cc4f-vc8rk\" (UID: \"aecb4886-3e12-46f5-b2dd-20260e64e4c7\") " pod="openstack/dnsmasq-dns-57d769cc4f-vc8rk" Jan 28 18:53:02 crc kubenswrapper[4721]: I0128 18:53:02.146219 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/aecb4886-3e12-46f5-b2dd-20260e64e4c7-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-vc8rk\" (UID: \"aecb4886-3e12-46f5-b2dd-20260e64e4c7\") " pod="openstack/dnsmasq-dns-57d769cc4f-vc8rk" Jan 28 18:53:02 crc kubenswrapper[4721]: I0128 18:53:02.167886 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tfjld\" (UniqueName: 
\"kubernetes.io/projected/aecb4886-3e12-46f5-b2dd-20260e64e4c7-kube-api-access-tfjld\") pod \"dnsmasq-dns-57d769cc4f-vc8rk\" (UID: \"aecb4886-3e12-46f5-b2dd-20260e64e4c7\") " pod="openstack/dnsmasq-dns-57d769cc4f-vc8rk" Jan 28 18:53:02 crc kubenswrapper[4721]: I0128 18:53:02.229669 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-vc8rk" Jan 28 18:53:02 crc kubenswrapper[4721]: I0128 18:53:02.491978 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-59s7w"] Jan 28 18:53:02 crc kubenswrapper[4721]: I0128 18:53:02.591362 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Jan 28 18:53:02 crc kubenswrapper[4721]: I0128 18:53:02.592925 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 28 18:53:02 crc kubenswrapper[4721]: I0128 18:53:02.596099 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Jan 28 18:53:02 crc kubenswrapper[4721]: I0128 18:53:02.597573 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Jan 28 18:53:02 crc kubenswrapper[4721]: I0128 18:53:02.598051 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-ppn4t" Jan 28 18:53:02 crc kubenswrapper[4721]: I0128 18:53:02.599222 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Jan 28 18:53:02 crc kubenswrapper[4721]: I0128 18:53:02.603699 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Jan 28 18:53:02 crc kubenswrapper[4721]: I0128 18:53:02.605920 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Jan 28 18:53:02 crc kubenswrapper[4721]: I0128 18:53:02.618604 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Jan 28 18:53:02 crc kubenswrapper[4721]: I0128 18:53:02.631550 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 28 18:53:02 crc kubenswrapper[4721]: I0128 18:53:02.665700 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/ec1e1de9-b144-4c34-bb14-4c0382670f45-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"ec1e1de9-b144-4c34-bb14-4c0382670f45\") " pod="openstack/rabbitmq-server-0" Jan 28 18:53:02 crc kubenswrapper[4721]: I0128 18:53:02.665808 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ec1e1de9-b144-4c34-bb14-4c0382670f45-config-data\") pod \"rabbitmq-server-0\" (UID: \"ec1e1de9-b144-4c34-bb14-4c0382670f45\") " pod="openstack/rabbitmq-server-0" Jan 28 18:53:02 crc kubenswrapper[4721]: I0128 18:53:02.665852 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-f3e4de0b-10d4-4505-954f-7ef612e624a1\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f3e4de0b-10d4-4505-954f-7ef612e624a1\") pod \"rabbitmq-server-0\" (UID: \"ec1e1de9-b144-4c34-bb14-4c0382670f45\") " pod="openstack/rabbitmq-server-0" Jan 28 18:53:02 crc kubenswrapper[4721]: I0128 18:53:02.665915 4721 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/ec1e1de9-b144-4c34-bb14-4c0382670f45-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"ec1e1de9-b144-4c34-bb14-4c0382670f45\") " pod="openstack/rabbitmq-server-0" Jan 28 18:53:02 crc kubenswrapper[4721]: I0128 18:53:02.665958 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/ec1e1de9-b144-4c34-bb14-4c0382670f45-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"ec1e1de9-b144-4c34-bb14-4c0382670f45\") " pod="openstack/rabbitmq-server-0" Jan 28 18:53:02 crc kubenswrapper[4721]: I0128 18:53:02.665980 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dk8vx\" (UniqueName: \"kubernetes.io/projected/ec1e1de9-b144-4c34-bb14-4c0382670f45-kube-api-access-dk8vx\") pod \"rabbitmq-server-0\" (UID: \"ec1e1de9-b144-4c34-bb14-4c0382670f45\") " pod="openstack/rabbitmq-server-0" Jan 28 18:53:02 crc kubenswrapper[4721]: I0128 18:53:02.666005 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/ec1e1de9-b144-4c34-bb14-4c0382670f45-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"ec1e1de9-b144-4c34-bb14-4c0382670f45\") " pod="openstack/rabbitmq-server-0" Jan 28 18:53:02 crc kubenswrapper[4721]: I0128 18:53:02.666033 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/ec1e1de9-b144-4c34-bb14-4c0382670f45-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"ec1e1de9-b144-4c34-bb14-4c0382670f45\") " pod="openstack/rabbitmq-server-0" Jan 28 18:53:02 crc kubenswrapper[4721]: I0128 18:53:02.666097 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/ec1e1de9-b144-4c34-bb14-4c0382670f45-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"ec1e1de9-b144-4c34-bb14-4c0382670f45\") " pod="openstack/rabbitmq-server-0" Jan 28 18:53:02 crc kubenswrapper[4721]: I0128 18:53:02.666129 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/ec1e1de9-b144-4c34-bb14-4c0382670f45-pod-info\") pod \"rabbitmq-server-0\" (UID: \"ec1e1de9-b144-4c34-bb14-4c0382670f45\") " pod="openstack/rabbitmq-server-0" Jan 28 18:53:02 crc kubenswrapper[4721]: I0128 18:53:02.666155 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/ec1e1de9-b144-4c34-bb14-4c0382670f45-server-conf\") pod \"rabbitmq-server-0\" (UID: \"ec1e1de9-b144-4c34-bb14-4c0382670f45\") " pod="openstack/rabbitmq-server-0" Jan 28 18:53:02 crc kubenswrapper[4721]: I0128 18:53:02.767912 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/ec1e1de9-b144-4c34-bb14-4c0382670f45-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"ec1e1de9-b144-4c34-bb14-4c0382670f45\") " pod="openstack/rabbitmq-server-0" Jan 28 18:53:02 crc kubenswrapper[4721]: I0128 18:53:02.767994 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: 
\"kubernetes.io/downward-api/ec1e1de9-b144-4c34-bb14-4c0382670f45-pod-info\") pod \"rabbitmq-server-0\" (UID: \"ec1e1de9-b144-4c34-bb14-4c0382670f45\") " pod="openstack/rabbitmq-server-0" Jan 28 18:53:02 crc kubenswrapper[4721]: I0128 18:53:02.768021 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/ec1e1de9-b144-4c34-bb14-4c0382670f45-server-conf\") pod \"rabbitmq-server-0\" (UID: \"ec1e1de9-b144-4c34-bb14-4c0382670f45\") " pod="openstack/rabbitmq-server-0" Jan 28 18:53:02 crc kubenswrapper[4721]: I0128 18:53:02.768070 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/ec1e1de9-b144-4c34-bb14-4c0382670f45-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"ec1e1de9-b144-4c34-bb14-4c0382670f45\") " pod="openstack/rabbitmq-server-0" Jan 28 18:53:02 crc kubenswrapper[4721]: I0128 18:53:02.768100 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ec1e1de9-b144-4c34-bb14-4c0382670f45-config-data\") pod \"rabbitmq-server-0\" (UID: \"ec1e1de9-b144-4c34-bb14-4c0382670f45\") " pod="openstack/rabbitmq-server-0" Jan 28 18:53:02 crc kubenswrapper[4721]: I0128 18:53:02.768129 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-f3e4de0b-10d4-4505-954f-7ef612e624a1\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f3e4de0b-10d4-4505-954f-7ef612e624a1\") pod \"rabbitmq-server-0\" (UID: \"ec1e1de9-b144-4c34-bb14-4c0382670f45\") " pod="openstack/rabbitmq-server-0" Jan 28 18:53:02 crc kubenswrapper[4721]: I0128 18:53:02.768367 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/ec1e1de9-b144-4c34-bb14-4c0382670f45-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"ec1e1de9-b144-4c34-bb14-4c0382670f45\") " pod="openstack/rabbitmq-server-0" Jan 28 18:53:02 crc kubenswrapper[4721]: I0128 18:53:02.768405 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/ec1e1de9-b144-4c34-bb14-4c0382670f45-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"ec1e1de9-b144-4c34-bb14-4c0382670f45\") " pod="openstack/rabbitmq-server-0" Jan 28 18:53:02 crc kubenswrapper[4721]: I0128 18:53:02.768425 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dk8vx\" (UniqueName: \"kubernetes.io/projected/ec1e1de9-b144-4c34-bb14-4c0382670f45-kube-api-access-dk8vx\") pod \"rabbitmq-server-0\" (UID: \"ec1e1de9-b144-4c34-bb14-4c0382670f45\") " pod="openstack/rabbitmq-server-0" Jan 28 18:53:02 crc kubenswrapper[4721]: I0128 18:53:02.768446 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/ec1e1de9-b144-4c34-bb14-4c0382670f45-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"ec1e1de9-b144-4c34-bb14-4c0382670f45\") " pod="openstack/rabbitmq-server-0" Jan 28 18:53:02 crc kubenswrapper[4721]: I0128 18:53:02.768471 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/ec1e1de9-b144-4c34-bb14-4c0382670f45-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"ec1e1de9-b144-4c34-bb14-4c0382670f45\") " 
pod="openstack/rabbitmq-server-0" Jan 28 18:53:02 crc kubenswrapper[4721]: I0128 18:53:02.769467 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/ec1e1de9-b144-4c34-bb14-4c0382670f45-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"ec1e1de9-b144-4c34-bb14-4c0382670f45\") " pod="openstack/rabbitmq-server-0" Jan 28 18:53:02 crc kubenswrapper[4721]: I0128 18:53:02.770249 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/ec1e1de9-b144-4c34-bb14-4c0382670f45-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"ec1e1de9-b144-4c34-bb14-4c0382670f45\") " pod="openstack/rabbitmq-server-0" Jan 28 18:53:02 crc kubenswrapper[4721]: I0128 18:53:02.770875 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/ec1e1de9-b144-4c34-bb14-4c0382670f45-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"ec1e1de9-b144-4c34-bb14-4c0382670f45\") " pod="openstack/rabbitmq-server-0" Jan 28 18:53:02 crc kubenswrapper[4721]: I0128 18:53:02.770895 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ec1e1de9-b144-4c34-bb14-4c0382670f45-config-data\") pod \"rabbitmq-server-0\" (UID: \"ec1e1de9-b144-4c34-bb14-4c0382670f45\") " pod="openstack/rabbitmq-server-0" Jan 28 18:53:02 crc kubenswrapper[4721]: I0128 18:53:02.778682 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/ec1e1de9-b144-4c34-bb14-4c0382670f45-server-conf\") pod \"rabbitmq-server-0\" (UID: \"ec1e1de9-b144-4c34-bb14-4c0382670f45\") " pod="openstack/rabbitmq-server-0" Jan 28 18:53:02 crc kubenswrapper[4721]: I0128 18:53:02.778966 4721 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 28 18:53:02 crc kubenswrapper[4721]: I0128 18:53:02.779012 4721 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-f3e4de0b-10d4-4505-954f-7ef612e624a1\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f3e4de0b-10d4-4505-954f-7ef612e624a1\") pod \"rabbitmq-server-0\" (UID: \"ec1e1de9-b144-4c34-bb14-4c0382670f45\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/47cccac3c6f853ec6e999145e5e217a1590d8accec8418bbeb0b34e74219920b/globalmount\"" pod="openstack/rabbitmq-server-0" Jan 28 18:53:02 crc kubenswrapper[4721]: I0128 18:53:02.781113 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/ec1e1de9-b144-4c34-bb14-4c0382670f45-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"ec1e1de9-b144-4c34-bb14-4c0382670f45\") " pod="openstack/rabbitmq-server-0" Jan 28 18:53:02 crc kubenswrapper[4721]: I0128 18:53:02.783452 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/ec1e1de9-b144-4c34-bb14-4c0382670f45-pod-info\") pod \"rabbitmq-server-0\" (UID: \"ec1e1de9-b144-4c34-bb14-4c0382670f45\") " pod="openstack/rabbitmq-server-0" Jan 28 18:53:02 crc kubenswrapper[4721]: I0128 18:53:02.784008 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/ec1e1de9-b144-4c34-bb14-4c0382670f45-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"ec1e1de9-b144-4c34-bb14-4c0382670f45\") " pod="openstack/rabbitmq-server-0" Jan 28 18:53:02 crc kubenswrapper[4721]: I0128 18:53:02.792077 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/ec1e1de9-b144-4c34-bb14-4c0382670f45-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"ec1e1de9-b144-4c34-bb14-4c0382670f45\") " pod="openstack/rabbitmq-server-0" Jan 28 18:53:02 crc kubenswrapper[4721]: I0128 18:53:02.799061 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dk8vx\" (UniqueName: \"kubernetes.io/projected/ec1e1de9-b144-4c34-bb14-4c0382670f45-kube-api-access-dk8vx\") pod \"rabbitmq-server-0\" (UID: \"ec1e1de9-b144-4c34-bb14-4c0382670f45\") " pod="openstack/rabbitmq-server-0" Jan 28 18:53:02 crc kubenswrapper[4721]: I0128 18:53:02.822148 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-59s7w" event={"ID":"dafbdcb9-9fbe-40c2-920d-6111bf0e2d88","Type":"ContainerStarted","Data":"1b3942323fd7702c2b2a09db8e64a799961984f8dc00591f6e787a8604665da6"} Jan 28 18:53:02 crc kubenswrapper[4721]: I0128 18:53:02.876158 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-f3e4de0b-10d4-4505-954f-7ef612e624a1\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f3e4de0b-10d4-4505-954f-7ef612e624a1\") pod \"rabbitmq-server-0\" (UID: \"ec1e1de9-b144-4c34-bb14-4c0382670f45\") " pod="openstack/rabbitmq-server-0" Jan 28 18:53:02 crc kubenswrapper[4721]: I0128 18:53:02.885002 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-vc8rk"] Jan 28 18:53:02 crc kubenswrapper[4721]: I0128 18:53:02.951390 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 28 18:53:03 crc kubenswrapper[4721]: I0128 18:53:03.021241 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 28 18:53:03 crc kubenswrapper[4721]: I0128 18:53:03.022625 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 28 18:53:03 crc kubenswrapper[4721]: I0128 18:53:03.029823 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Jan 28 18:53:03 crc kubenswrapper[4721]: I0128 18:53:03.030117 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Jan 28 18:53:03 crc kubenswrapper[4721]: I0128 18:53:03.030353 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Jan 28 18:53:03 crc kubenswrapper[4721]: I0128 18:53:03.030428 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-qmjxb" Jan 28 18:53:03 crc kubenswrapper[4721]: I0128 18:53:03.034343 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Jan 28 18:53:03 crc kubenswrapper[4721]: I0128 18:53:03.034526 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Jan 28 18:53:03 crc kubenswrapper[4721]: I0128 18:53:03.034617 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Jan 28 18:53:03 crc kubenswrapper[4721]: I0128 18:53:03.057224 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 28 18:53:03 crc kubenswrapper[4721]: I0128 18:53:03.078501 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/dc56a986-671d-4f17-8386-939d0fd9394a-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"dc56a986-671d-4f17-8386-939d0fd9394a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 18:53:03 crc kubenswrapper[4721]: I0128 18:53:03.078576 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/dc56a986-671d-4f17-8386-939d0fd9394a-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"dc56a986-671d-4f17-8386-939d0fd9394a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 18:53:03 crc kubenswrapper[4721]: I0128 18:53:03.078601 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/dc56a986-671d-4f17-8386-939d0fd9394a-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"dc56a986-671d-4f17-8386-939d0fd9394a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 18:53:03 crc kubenswrapper[4721]: I0128 18:53:03.078619 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vsv5k\" (UniqueName: \"kubernetes.io/projected/dc56a986-671d-4f17-8386-939d0fd9394a-kube-api-access-vsv5k\") pod \"rabbitmq-cell1-server-0\" (UID: \"dc56a986-671d-4f17-8386-939d0fd9394a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 18:53:03 crc kubenswrapper[4721]: I0128 18:53:03.078664 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"pvc-29174ae9-29d2-42fb-b1ad-7ccc07b143ff\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-29174ae9-29d2-42fb-b1ad-7ccc07b143ff\") pod \"rabbitmq-cell1-server-0\" (UID: \"dc56a986-671d-4f17-8386-939d0fd9394a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 18:53:03 crc kubenswrapper[4721]: I0128 18:53:03.078695 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/dc56a986-671d-4f17-8386-939d0fd9394a-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"dc56a986-671d-4f17-8386-939d0fd9394a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 18:53:03 crc kubenswrapper[4721]: I0128 18:53:03.078721 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/dc56a986-671d-4f17-8386-939d0fd9394a-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"dc56a986-671d-4f17-8386-939d0fd9394a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 18:53:03 crc kubenswrapper[4721]: I0128 18:53:03.078746 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/dc56a986-671d-4f17-8386-939d0fd9394a-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"dc56a986-671d-4f17-8386-939d0fd9394a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 18:53:03 crc kubenswrapper[4721]: I0128 18:53:03.078782 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/dc56a986-671d-4f17-8386-939d0fd9394a-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"dc56a986-671d-4f17-8386-939d0fd9394a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 18:53:03 crc kubenswrapper[4721]: I0128 18:53:03.078807 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/dc56a986-671d-4f17-8386-939d0fd9394a-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"dc56a986-671d-4f17-8386-939d0fd9394a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 18:53:03 crc kubenswrapper[4721]: I0128 18:53:03.078826 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/dc56a986-671d-4f17-8386-939d0fd9394a-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"dc56a986-671d-4f17-8386-939d0fd9394a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 18:53:03 crc kubenswrapper[4721]: I0128 18:53:03.183260 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-29174ae9-29d2-42fb-b1ad-7ccc07b143ff\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-29174ae9-29d2-42fb-b1ad-7ccc07b143ff\") pod \"rabbitmq-cell1-server-0\" (UID: \"dc56a986-671d-4f17-8386-939d0fd9394a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 18:53:03 crc kubenswrapper[4721]: I0128 18:53:03.184110 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/dc56a986-671d-4f17-8386-939d0fd9394a-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"dc56a986-671d-4f17-8386-939d0fd9394a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 18:53:03 crc kubenswrapper[4721]: I0128 18:53:03.184139 4721 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/dc56a986-671d-4f17-8386-939d0fd9394a-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"dc56a986-671d-4f17-8386-939d0fd9394a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 18:53:03 crc kubenswrapper[4721]: I0128 18:53:03.184184 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/dc56a986-671d-4f17-8386-939d0fd9394a-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"dc56a986-671d-4f17-8386-939d0fd9394a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 18:53:03 crc kubenswrapper[4721]: I0128 18:53:03.184294 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/dc56a986-671d-4f17-8386-939d0fd9394a-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"dc56a986-671d-4f17-8386-939d0fd9394a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 18:53:03 crc kubenswrapper[4721]: I0128 18:53:03.184338 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/dc56a986-671d-4f17-8386-939d0fd9394a-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"dc56a986-671d-4f17-8386-939d0fd9394a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 18:53:03 crc kubenswrapper[4721]: I0128 18:53:03.184356 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/dc56a986-671d-4f17-8386-939d0fd9394a-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"dc56a986-671d-4f17-8386-939d0fd9394a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 18:53:03 crc kubenswrapper[4721]: I0128 18:53:03.184402 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/dc56a986-671d-4f17-8386-939d0fd9394a-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"dc56a986-671d-4f17-8386-939d0fd9394a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 18:53:03 crc kubenswrapper[4721]: I0128 18:53:03.184450 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/dc56a986-671d-4f17-8386-939d0fd9394a-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"dc56a986-671d-4f17-8386-939d0fd9394a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 18:53:03 crc kubenswrapper[4721]: I0128 18:53:03.184497 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/dc56a986-671d-4f17-8386-939d0fd9394a-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"dc56a986-671d-4f17-8386-939d0fd9394a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 18:53:03 crc kubenswrapper[4721]: I0128 18:53:03.184527 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vsv5k\" (UniqueName: \"kubernetes.io/projected/dc56a986-671d-4f17-8386-939d0fd9394a-kube-api-access-vsv5k\") pod \"rabbitmq-cell1-server-0\" (UID: \"dc56a986-671d-4f17-8386-939d0fd9394a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 18:53:03 crc kubenswrapper[4721]: I0128 18:53:03.185326 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: 
\"kubernetes.io/empty-dir/dc56a986-671d-4f17-8386-939d0fd9394a-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"dc56a986-671d-4f17-8386-939d0fd9394a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 18:53:03 crc kubenswrapper[4721]: I0128 18:53:03.185604 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/dc56a986-671d-4f17-8386-939d0fd9394a-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"dc56a986-671d-4f17-8386-939d0fd9394a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 18:53:03 crc kubenswrapper[4721]: I0128 18:53:03.186935 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/dc56a986-671d-4f17-8386-939d0fd9394a-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"dc56a986-671d-4f17-8386-939d0fd9394a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 18:53:03 crc kubenswrapper[4721]: I0128 18:53:03.187426 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/dc56a986-671d-4f17-8386-939d0fd9394a-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"dc56a986-671d-4f17-8386-939d0fd9394a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 18:53:03 crc kubenswrapper[4721]: I0128 18:53:03.187732 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/dc56a986-671d-4f17-8386-939d0fd9394a-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"dc56a986-671d-4f17-8386-939d0fd9394a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 18:53:03 crc kubenswrapper[4721]: I0128 18:53:03.196098 4721 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 28 18:53:03 crc kubenswrapper[4721]: I0128 18:53:03.196157 4721 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-29174ae9-29d2-42fb-b1ad-7ccc07b143ff\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-29174ae9-29d2-42fb-b1ad-7ccc07b143ff\") pod \"rabbitmq-cell1-server-0\" (UID: \"dc56a986-671d-4f17-8386-939d0fd9394a\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/5bc89d46b155de3097f77aee48e0273231559873bb6737e5f04966de38376c61/globalmount\"" pod="openstack/rabbitmq-cell1-server-0" Jan 28 18:53:03 crc kubenswrapper[4721]: I0128 18:53:03.207597 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/dc56a986-671d-4f17-8386-939d0fd9394a-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"dc56a986-671d-4f17-8386-939d0fd9394a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 18:53:03 crc kubenswrapper[4721]: I0128 18:53:03.208202 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/dc56a986-671d-4f17-8386-939d0fd9394a-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"dc56a986-671d-4f17-8386-939d0fd9394a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 18:53:03 crc kubenswrapper[4721]: I0128 18:53:03.208590 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/dc56a986-671d-4f17-8386-939d0fd9394a-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"dc56a986-671d-4f17-8386-939d0fd9394a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 18:53:03 crc kubenswrapper[4721]: I0128 18:53:03.208788 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/dc56a986-671d-4f17-8386-939d0fd9394a-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"dc56a986-671d-4f17-8386-939d0fd9394a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 18:53:03 crc kubenswrapper[4721]: I0128 18:53:03.214118 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vsv5k\" (UniqueName: \"kubernetes.io/projected/dc56a986-671d-4f17-8386-939d0fd9394a-kube-api-access-vsv5k\") pod \"rabbitmq-cell1-server-0\" (UID: \"dc56a986-671d-4f17-8386-939d0fd9394a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 18:53:03 crc kubenswrapper[4721]: I0128 18:53:03.285127 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-29174ae9-29d2-42fb-b1ad-7ccc07b143ff\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-29174ae9-29d2-42fb-b1ad-7ccc07b143ff\") pod \"rabbitmq-cell1-server-0\" (UID: \"dc56a986-671d-4f17-8386-939d0fd9394a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 18:53:03 crc kubenswrapper[4721]: I0128 18:53:03.375263 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 28 18:53:03 crc kubenswrapper[4721]: I0128 18:53:03.666231 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 28 18:53:03 crc kubenswrapper[4721]: I0128 18:53:03.915947 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-vc8rk" event={"ID":"aecb4886-3e12-46f5-b2dd-20260e64e4c7","Type":"ContainerStarted","Data":"a40c760cb73f67b96a0da729d622380413a6e5bec210167a9091a79462284907"} Jan 28 18:53:03 crc kubenswrapper[4721]: I0128 18:53:03.925571 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"ec1e1de9-b144-4c34-bb14-4c0382670f45","Type":"ContainerStarted","Data":"e545f4e4f58ca348dd389b0f6a5e72f9d095e41aa4325456d5ee98c37276de6a"} Jan 28 18:53:03 crc kubenswrapper[4721]: I0128 18:53:03.990202 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 28 18:53:04 crc kubenswrapper[4721]: W0128 18:53:04.023105 4721 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddc56a986_671d_4f17_8386_939d0fd9394a.slice/crio-0c2415fd5efcbdc5cb723cd10869129c473c035c0b3f611f610b61446aaa3855 WatchSource:0}: Error finding container 0c2415fd5efcbdc5cb723cd10869129c473c035c0b3f611f610b61446aaa3855: Status 404 returned error can't find the container with id 0c2415fd5efcbdc5cb723cd10869129c473c035c0b3f611f610b61446aaa3855 Jan 28 18:53:04 crc kubenswrapper[4721]: I0128 18:53:04.157700 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"] Jan 28 18:53:04 crc kubenswrapper[4721]: I0128 18:53:04.159530 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-galera-0" Jan 28 18:53:04 crc kubenswrapper[4721]: I0128 18:53:04.165447 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data" Jan 28 18:53:04 crc kubenswrapper[4721]: I0128 18:53:04.166411 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts" Jan 28 18:53:04 crc kubenswrapper[4721]: I0128 18:53:04.166450 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc" Jan 28 18:53:04 crc kubenswrapper[4721]: I0128 18:53:04.166567 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-fnqmn" Jan 28 18:53:04 crc kubenswrapper[4721]: I0128 18:53:04.177201 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle" Jan 28 18:53:04 crc kubenswrapper[4721]: I0128 18:53:04.191690 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Jan 28 18:53:04 crc kubenswrapper[4721]: I0128 18:53:04.315647 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/0e740af0-cd0c-4f3e-8be1-facce1656583-config-data-generated\") pod \"openstack-galera-0\" (UID: \"0e740af0-cd0c-4f3e-8be1-facce1656583\") " pod="openstack/openstack-galera-0" Jan 28 18:53:04 crc kubenswrapper[4721]: I0128 18:53:04.315716 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0e740af0-cd0c-4f3e-8be1-facce1656583-operator-scripts\") pod \"openstack-galera-0\" (UID: \"0e740af0-cd0c-4f3e-8be1-facce1656583\") " pod="openstack/openstack-galera-0" Jan 28 18:53:04 crc kubenswrapper[4721]: I0128 18:53:04.315889 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/0e740af0-cd0c-4f3e-8be1-facce1656583-config-data-default\") pod \"openstack-galera-0\" (UID: \"0e740af0-cd0c-4f3e-8be1-facce1656583\") " pod="openstack/openstack-galera-0" Jan 28 18:53:04 crc kubenswrapper[4721]: I0128 18:53:04.315963 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/0e740af0-cd0c-4f3e-8be1-facce1656583-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"0e740af0-cd0c-4f3e-8be1-facce1656583\") " pod="openstack/openstack-galera-0" Jan 28 18:53:04 crc kubenswrapper[4721]: I0128 18:53:04.316331 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-bdaaf48d-9d20-46eb-8e55-648e979b5d04\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-bdaaf48d-9d20-46eb-8e55-648e979b5d04\") pod \"openstack-galera-0\" (UID: \"0e740af0-cd0c-4f3e-8be1-facce1656583\") " pod="openstack/openstack-galera-0" Jan 28 18:53:04 crc kubenswrapper[4721]: I0128 18:53:04.316677 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2msxg\" (UniqueName: \"kubernetes.io/projected/0e740af0-cd0c-4f3e-8be1-facce1656583-kube-api-access-2msxg\") pod \"openstack-galera-0\" (UID: \"0e740af0-cd0c-4f3e-8be1-facce1656583\") " pod="openstack/openstack-galera-0" Jan 28 18:53:04 crc kubenswrapper[4721]: I0128 18:53:04.316741 4721 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0e740af0-cd0c-4f3e-8be1-facce1656583-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"0e740af0-cd0c-4f3e-8be1-facce1656583\") " pod="openstack/openstack-galera-0" Jan 28 18:53:04 crc kubenswrapper[4721]: I0128 18:53:04.316936 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/0e740af0-cd0c-4f3e-8be1-facce1656583-kolla-config\") pod \"openstack-galera-0\" (UID: \"0e740af0-cd0c-4f3e-8be1-facce1656583\") " pod="openstack/openstack-galera-0" Jan 28 18:53:04 crc kubenswrapper[4721]: I0128 18:53:04.419126 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/0e740af0-cd0c-4f3e-8be1-facce1656583-config-data-generated\") pod \"openstack-galera-0\" (UID: \"0e740af0-cd0c-4f3e-8be1-facce1656583\") " pod="openstack/openstack-galera-0" Jan 28 18:53:04 crc kubenswrapper[4721]: I0128 18:53:04.419810 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0e740af0-cd0c-4f3e-8be1-facce1656583-operator-scripts\") pod \"openstack-galera-0\" (UID: \"0e740af0-cd0c-4f3e-8be1-facce1656583\") " pod="openstack/openstack-galera-0" Jan 28 18:53:04 crc kubenswrapper[4721]: I0128 18:53:04.419906 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/0e740af0-cd0c-4f3e-8be1-facce1656583-config-data-default\") pod \"openstack-galera-0\" (UID: \"0e740af0-cd0c-4f3e-8be1-facce1656583\") " pod="openstack/openstack-galera-0" Jan 28 18:53:04 crc kubenswrapper[4721]: I0128 18:53:04.419957 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/0e740af0-cd0c-4f3e-8be1-facce1656583-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"0e740af0-cd0c-4f3e-8be1-facce1656583\") " pod="openstack/openstack-galera-0" Jan 28 18:53:04 crc kubenswrapper[4721]: I0128 18:53:04.419979 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/0e740af0-cd0c-4f3e-8be1-facce1656583-config-data-generated\") pod \"openstack-galera-0\" (UID: \"0e740af0-cd0c-4f3e-8be1-facce1656583\") " pod="openstack/openstack-galera-0" Jan 28 18:53:04 crc kubenswrapper[4721]: I0128 18:53:04.419993 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-bdaaf48d-9d20-46eb-8e55-648e979b5d04\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-bdaaf48d-9d20-46eb-8e55-648e979b5d04\") pod \"openstack-galera-0\" (UID: \"0e740af0-cd0c-4f3e-8be1-facce1656583\") " pod="openstack/openstack-galera-0" Jan 28 18:53:04 crc kubenswrapper[4721]: I0128 18:53:04.420140 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2msxg\" (UniqueName: \"kubernetes.io/projected/0e740af0-cd0c-4f3e-8be1-facce1656583-kube-api-access-2msxg\") pod \"openstack-galera-0\" (UID: \"0e740af0-cd0c-4f3e-8be1-facce1656583\") " pod="openstack/openstack-galera-0" Jan 28 18:53:04 crc kubenswrapper[4721]: I0128 18:53:04.420198 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/0e740af0-cd0c-4f3e-8be1-facce1656583-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"0e740af0-cd0c-4f3e-8be1-facce1656583\") " pod="openstack/openstack-galera-0" Jan 28 18:53:04 crc kubenswrapper[4721]: I0128 18:53:04.420365 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/0e740af0-cd0c-4f3e-8be1-facce1656583-kolla-config\") pod \"openstack-galera-0\" (UID: \"0e740af0-cd0c-4f3e-8be1-facce1656583\") " pod="openstack/openstack-galera-0" Jan 28 18:53:04 crc kubenswrapper[4721]: I0128 18:53:04.421323 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/0e740af0-cd0c-4f3e-8be1-facce1656583-kolla-config\") pod \"openstack-galera-0\" (UID: \"0e740af0-cd0c-4f3e-8be1-facce1656583\") " pod="openstack/openstack-galera-0" Jan 28 18:53:04 crc kubenswrapper[4721]: I0128 18:53:04.421837 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/0e740af0-cd0c-4f3e-8be1-facce1656583-config-data-default\") pod \"openstack-galera-0\" (UID: \"0e740af0-cd0c-4f3e-8be1-facce1656583\") " pod="openstack/openstack-galera-0" Jan 28 18:53:04 crc kubenswrapper[4721]: I0128 18:53:04.421871 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0e740af0-cd0c-4f3e-8be1-facce1656583-operator-scripts\") pod \"openstack-galera-0\" (UID: \"0e740af0-cd0c-4f3e-8be1-facce1656583\") " pod="openstack/openstack-galera-0" Jan 28 18:53:04 crc kubenswrapper[4721]: I0128 18:53:04.428413 4721 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 28 18:53:04 crc kubenswrapper[4721]: I0128 18:53:04.428481 4721 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-bdaaf48d-9d20-46eb-8e55-648e979b5d04\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-bdaaf48d-9d20-46eb-8e55-648e979b5d04\") pod \"openstack-galera-0\" (UID: \"0e740af0-cd0c-4f3e-8be1-facce1656583\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/2efa7e1c53c6a0afe17917b1487449964ade8710319fe5451fcf3329c1e130e6/globalmount\"" pod="openstack/openstack-galera-0" Jan 28 18:53:04 crc kubenswrapper[4721]: I0128 18:53:04.430945 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/0e740af0-cd0c-4f3e-8be1-facce1656583-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"0e740af0-cd0c-4f3e-8be1-facce1656583\") " pod="openstack/openstack-galera-0" Jan 28 18:53:04 crc kubenswrapper[4721]: I0128 18:53:04.431440 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0e740af0-cd0c-4f3e-8be1-facce1656583-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"0e740af0-cd0c-4f3e-8be1-facce1656583\") " pod="openstack/openstack-galera-0" Jan 28 18:53:04 crc kubenswrapper[4721]: I0128 18:53:04.449021 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2msxg\" (UniqueName: \"kubernetes.io/projected/0e740af0-cd0c-4f3e-8be1-facce1656583-kube-api-access-2msxg\") pod \"openstack-galera-0\" (UID: \"0e740af0-cd0c-4f3e-8be1-facce1656583\") " pod="openstack/openstack-galera-0" Jan 28 18:53:04 crc kubenswrapper[4721]: I0128 18:53:04.472196 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-bdaaf48d-9d20-46eb-8e55-648e979b5d04\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-bdaaf48d-9d20-46eb-8e55-648e979b5d04\") pod \"openstack-galera-0\" (UID: \"0e740af0-cd0c-4f3e-8be1-facce1656583\") " pod="openstack/openstack-galera-0" Jan 28 18:53:04 crc kubenswrapper[4721]: I0128 18:53:04.504022 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Jan 28 18:53:04 crc kubenswrapper[4721]: I0128 18:53:04.931738 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Jan 28 18:53:04 crc kubenswrapper[4721]: I0128 18:53:04.945164 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"dc56a986-671d-4f17-8386-939d0fd9394a","Type":"ContainerStarted","Data":"0c2415fd5efcbdc5cb723cd10869129c473c035c0b3f611f610b61446aaa3855"} Jan 28 18:53:04 crc kubenswrapper[4721]: W0128 18:53:04.959231 4721 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0e740af0_cd0c_4f3e_8be1_facce1656583.slice/crio-1fbb454a93552f5423a2d3b9e1d2e660922e77ca643fb4847a54f17ef2e60b53 WatchSource:0}: Error finding container 1fbb454a93552f5423a2d3b9e1d2e660922e77ca643fb4847a54f17ef2e60b53: Status 404 returned error can't find the container with id 1fbb454a93552f5423a2d3b9e1d2e660922e77ca643fb4847a54f17ef2e60b53 Jan 28 18:53:05 crc kubenswrapper[4721]: I0128 18:53:05.323925 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 28 18:53:05 crc kubenswrapper[4721]: I0128 18:53:05.329132 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-cell1-galera-0" Jan 28 18:53:05 crc kubenswrapper[4721]: I0128 18:53:05.333763 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc" Jan 28 18:53:05 crc kubenswrapper[4721]: I0128 18:53:05.333840 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data" Jan 28 18:53:05 crc kubenswrapper[4721]: I0128 18:53:05.333762 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts" Jan 28 18:53:05 crc kubenswrapper[4721]: I0128 18:53:05.334065 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-tsnms" Jan 28 18:53:05 crc kubenswrapper[4721]: I0128 18:53:05.341744 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 28 18:53:05 crc kubenswrapper[4721]: I0128 18:53:05.447386 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/00b26873-8c7a-4ea7-b334-873b01cc5d84-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"00b26873-8c7a-4ea7-b334-873b01cc5d84\") " pod="openstack/openstack-cell1-galera-0" Jan 28 18:53:05 crc kubenswrapper[4721]: I0128 18:53:05.447540 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/00b26873-8c7a-4ea7-b334-873b01cc5d84-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"00b26873-8c7a-4ea7-b334-873b01cc5d84\") " pod="openstack/openstack-cell1-galera-0" Jan 28 18:53:05 crc kubenswrapper[4721]: I0128 18:53:05.447599 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/00b26873-8c7a-4ea7-b334-873b01cc5d84-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"00b26873-8c7a-4ea7-b334-873b01cc5d84\") " pod="openstack/openstack-cell1-galera-0" Jan 28 18:53:05 crc kubenswrapper[4721]: I0128 18:53:05.447637 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vbzcm\" (UniqueName: \"kubernetes.io/projected/00b26873-8c7a-4ea7-b334-873b01cc5d84-kube-api-access-vbzcm\") pod \"openstack-cell1-galera-0\" (UID: \"00b26873-8c7a-4ea7-b334-873b01cc5d84\") " pod="openstack/openstack-cell1-galera-0" Jan 28 18:53:05 crc kubenswrapper[4721]: I0128 18:53:05.447671 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/00b26873-8c7a-4ea7-b334-873b01cc5d84-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"00b26873-8c7a-4ea7-b334-873b01cc5d84\") " pod="openstack/openstack-cell1-galera-0" Jan 28 18:53:05 crc kubenswrapper[4721]: I0128 18:53:05.447724 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-0744d531-5088-42fc-a72d-c81bf5490f52\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0744d531-5088-42fc-a72d-c81bf5490f52\") pod \"openstack-cell1-galera-0\" (UID: \"00b26873-8c7a-4ea7-b334-873b01cc5d84\") " pod="openstack/openstack-cell1-galera-0" Jan 28 18:53:05 crc kubenswrapper[4721]: I0128 18:53:05.447756 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/00b26873-8c7a-4ea7-b334-873b01cc5d84-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"00b26873-8c7a-4ea7-b334-873b01cc5d84\") " pod="openstack/openstack-cell1-galera-0" Jan 28 18:53:05 crc kubenswrapper[4721]: I0128 18:53:05.447784 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/00b26873-8c7a-4ea7-b334-873b01cc5d84-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"00b26873-8c7a-4ea7-b334-873b01cc5d84\") " pod="openstack/openstack-cell1-galera-0" Jan 28 18:53:05 crc kubenswrapper[4721]: I0128 18:53:05.552275 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/00b26873-8c7a-4ea7-b334-873b01cc5d84-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"00b26873-8c7a-4ea7-b334-873b01cc5d84\") " pod="openstack/openstack-cell1-galera-0" Jan 28 18:53:05 crc kubenswrapper[4721]: I0128 18:53:05.552654 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/00b26873-8c7a-4ea7-b334-873b01cc5d84-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"00b26873-8c7a-4ea7-b334-873b01cc5d84\") " pod="openstack/openstack-cell1-galera-0" Jan 28 18:53:05 crc kubenswrapper[4721]: I0128 18:53:05.554755 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/00b26873-8c7a-4ea7-b334-873b01cc5d84-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"00b26873-8c7a-4ea7-b334-873b01cc5d84\") " pod="openstack/openstack-cell1-galera-0" Jan 28 18:53:05 crc kubenswrapper[4721]: I0128 18:53:05.554996 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vbzcm\" (UniqueName: \"kubernetes.io/projected/00b26873-8c7a-4ea7-b334-873b01cc5d84-kube-api-access-vbzcm\") pod \"openstack-cell1-galera-0\" (UID: \"00b26873-8c7a-4ea7-b334-873b01cc5d84\") " pod="openstack/openstack-cell1-galera-0" Jan 28 18:53:05 crc kubenswrapper[4721]: I0128 18:53:05.555070 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/00b26873-8c7a-4ea7-b334-873b01cc5d84-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"00b26873-8c7a-4ea7-b334-873b01cc5d84\") " pod="openstack/openstack-cell1-galera-0" Jan 28 18:53:05 crc kubenswrapper[4721]: I0128 18:53:05.555231 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-0744d531-5088-42fc-a72d-c81bf5490f52\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0744d531-5088-42fc-a72d-c81bf5490f52\") pod \"openstack-cell1-galera-0\" (UID: \"00b26873-8c7a-4ea7-b334-873b01cc5d84\") " pod="openstack/openstack-cell1-galera-0" Jan 28 18:53:05 crc kubenswrapper[4721]: I0128 18:53:05.555298 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/00b26873-8c7a-4ea7-b334-873b01cc5d84-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"00b26873-8c7a-4ea7-b334-873b01cc5d84\") " pod="openstack/openstack-cell1-galera-0" Jan 28 18:53:05 crc kubenswrapper[4721]: I0128 18:53:05.555373 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/00b26873-8c7a-4ea7-b334-873b01cc5d84-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"00b26873-8c7a-4ea7-b334-873b01cc5d84\") " pod="openstack/openstack-cell1-galera-0" Jan 28 18:53:05 crc kubenswrapper[4721]: I0128 18:53:05.557519 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/00b26873-8c7a-4ea7-b334-873b01cc5d84-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"00b26873-8c7a-4ea7-b334-873b01cc5d84\") " pod="openstack/openstack-cell1-galera-0" Jan 28 18:53:05 crc kubenswrapper[4721]: I0128 18:53:05.558303 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/00b26873-8c7a-4ea7-b334-873b01cc5d84-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"00b26873-8c7a-4ea7-b334-873b01cc5d84\") " pod="openstack/openstack-cell1-galera-0" Jan 28 18:53:05 crc kubenswrapper[4721]: I0128 18:53:05.558565 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/00b26873-8c7a-4ea7-b334-873b01cc5d84-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"00b26873-8c7a-4ea7-b334-873b01cc5d84\") " pod="openstack/openstack-cell1-galera-0" Jan 28 18:53:05 crc kubenswrapper[4721]: I0128 18:53:05.562402 4721 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 28 18:53:05 crc kubenswrapper[4721]: I0128 18:53:05.562845 4721 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-0744d531-5088-42fc-a72d-c81bf5490f52\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0744d531-5088-42fc-a72d-c81bf5490f52\") pod \"openstack-cell1-galera-0\" (UID: \"00b26873-8c7a-4ea7-b334-873b01cc5d84\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/c8fe4c33b03da55231c9279cbb0488467cacf6dad83a3b708aacb136eef78aca/globalmount\"" pod="openstack/openstack-cell1-galera-0" Jan 28 18:53:05 crc kubenswrapper[4721]: I0128 18:53:05.562432 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/00b26873-8c7a-4ea7-b334-873b01cc5d84-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"00b26873-8c7a-4ea7-b334-873b01cc5d84\") " pod="openstack/openstack-cell1-galera-0" Jan 28 18:53:05 crc kubenswrapper[4721]: I0128 18:53:05.564219 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/00b26873-8c7a-4ea7-b334-873b01cc5d84-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"00b26873-8c7a-4ea7-b334-873b01cc5d84\") " pod="openstack/openstack-cell1-galera-0" Jan 28 18:53:05 crc kubenswrapper[4721]: I0128 18:53:05.564860 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/00b26873-8c7a-4ea7-b334-873b01cc5d84-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"00b26873-8c7a-4ea7-b334-873b01cc5d84\") " pod="openstack/openstack-cell1-galera-0" Jan 28 18:53:05 crc kubenswrapper[4721]: I0128 18:53:05.583683 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vbzcm\" (UniqueName: 
\"kubernetes.io/projected/00b26873-8c7a-4ea7-b334-873b01cc5d84-kube-api-access-vbzcm\") pod \"openstack-cell1-galera-0\" (UID: \"00b26873-8c7a-4ea7-b334-873b01cc5d84\") " pod="openstack/openstack-cell1-galera-0" Jan 28 18:53:05 crc kubenswrapper[4721]: I0128 18:53:05.631955 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-0744d531-5088-42fc-a72d-c81bf5490f52\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0744d531-5088-42fc-a72d-c81bf5490f52\") pod \"openstack-cell1-galera-0\" (UID: \"00b26873-8c7a-4ea7-b334-873b01cc5d84\") " pod="openstack/openstack-cell1-galera-0" Jan 28 18:53:05 crc kubenswrapper[4721]: I0128 18:53:05.661667 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Jan 28 18:53:05 crc kubenswrapper[4721]: I0128 18:53:05.698242 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"] Jan 28 18:53:05 crc kubenswrapper[4721]: I0128 18:53:05.699691 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Jan 28 18:53:05 crc kubenswrapper[4721]: I0128 18:53:05.703857 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data" Jan 28 18:53:05 crc kubenswrapper[4721]: I0128 18:53:05.704042 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-5zlmg" Jan 28 18:53:05 crc kubenswrapper[4721]: I0128 18:53:05.714738 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Jan 28 18:53:05 crc kubenswrapper[4721]: I0128 18:53:05.717588 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc" Jan 28 18:53:05 crc kubenswrapper[4721]: I0128 18:53:05.870555 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7be00819-ddfd-47d6-a7fc-430607636883-combined-ca-bundle\") pod \"memcached-0\" (UID: \"7be00819-ddfd-47d6-a7fc-430607636883\") " pod="openstack/memcached-0" Jan 28 18:53:05 crc kubenswrapper[4721]: I0128 18:53:05.870619 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/7be00819-ddfd-47d6-a7fc-430607636883-memcached-tls-certs\") pod \"memcached-0\" (UID: \"7be00819-ddfd-47d6-a7fc-430607636883\") " pod="openstack/memcached-0" Jan 28 18:53:05 crc kubenswrapper[4721]: I0128 18:53:05.870683 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gqn6s\" (UniqueName: \"kubernetes.io/projected/7be00819-ddfd-47d6-a7fc-430607636883-kube-api-access-gqn6s\") pod \"memcached-0\" (UID: \"7be00819-ddfd-47d6-a7fc-430607636883\") " pod="openstack/memcached-0" Jan 28 18:53:05 crc kubenswrapper[4721]: I0128 18:53:05.870708 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/7be00819-ddfd-47d6-a7fc-430607636883-kolla-config\") pod \"memcached-0\" (UID: \"7be00819-ddfd-47d6-a7fc-430607636883\") " pod="openstack/memcached-0" Jan 28 18:53:05 crc kubenswrapper[4721]: I0128 18:53:05.870728 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/configmap/7be00819-ddfd-47d6-a7fc-430607636883-config-data\") pod \"memcached-0\" (UID: \"7be00819-ddfd-47d6-a7fc-430607636883\") " pod="openstack/memcached-0" Jan 28 18:53:05 crc kubenswrapper[4721]: I0128 18:53:05.969566 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"0e740af0-cd0c-4f3e-8be1-facce1656583","Type":"ContainerStarted","Data":"1fbb454a93552f5423a2d3b9e1d2e660922e77ca643fb4847a54f17ef2e60b53"} Jan 28 18:53:05 crc kubenswrapper[4721]: I0128 18:53:05.974065 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7be00819-ddfd-47d6-a7fc-430607636883-combined-ca-bundle\") pod \"memcached-0\" (UID: \"7be00819-ddfd-47d6-a7fc-430607636883\") " pod="openstack/memcached-0" Jan 28 18:53:05 crc kubenswrapper[4721]: I0128 18:53:05.974201 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/7be00819-ddfd-47d6-a7fc-430607636883-memcached-tls-certs\") pod \"memcached-0\" (UID: \"7be00819-ddfd-47d6-a7fc-430607636883\") " pod="openstack/memcached-0" Jan 28 18:53:05 crc kubenswrapper[4721]: I0128 18:53:05.974284 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gqn6s\" (UniqueName: \"kubernetes.io/projected/7be00819-ddfd-47d6-a7fc-430607636883-kube-api-access-gqn6s\") pod \"memcached-0\" (UID: \"7be00819-ddfd-47d6-a7fc-430607636883\") " pod="openstack/memcached-0" Jan 28 18:53:05 crc kubenswrapper[4721]: I0128 18:53:05.974319 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/7be00819-ddfd-47d6-a7fc-430607636883-kolla-config\") pod \"memcached-0\" (UID: \"7be00819-ddfd-47d6-a7fc-430607636883\") " pod="openstack/memcached-0" Jan 28 18:53:05 crc kubenswrapper[4721]: I0128 18:53:05.974338 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7be00819-ddfd-47d6-a7fc-430607636883-config-data\") pod \"memcached-0\" (UID: \"7be00819-ddfd-47d6-a7fc-430607636883\") " pod="openstack/memcached-0" Jan 28 18:53:05 crc kubenswrapper[4721]: I0128 18:53:05.977005 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7be00819-ddfd-47d6-a7fc-430607636883-config-data\") pod \"memcached-0\" (UID: \"7be00819-ddfd-47d6-a7fc-430607636883\") " pod="openstack/memcached-0" Jan 28 18:53:05 crc kubenswrapper[4721]: I0128 18:53:05.984511 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/7be00819-ddfd-47d6-a7fc-430607636883-memcached-tls-certs\") pod \"memcached-0\" (UID: \"7be00819-ddfd-47d6-a7fc-430607636883\") " pod="openstack/memcached-0" Jan 28 18:53:05 crc kubenswrapper[4721]: I0128 18:53:05.985014 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7be00819-ddfd-47d6-a7fc-430607636883-combined-ca-bundle\") pod \"memcached-0\" (UID: \"7be00819-ddfd-47d6-a7fc-430607636883\") " pod="openstack/memcached-0" Jan 28 18:53:06 crc kubenswrapper[4721]: I0128 18:53:06.005566 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gqn6s\" (UniqueName: 
\"kubernetes.io/projected/7be00819-ddfd-47d6-a7fc-430607636883-kube-api-access-gqn6s\") pod \"memcached-0\" (UID: \"7be00819-ddfd-47d6-a7fc-430607636883\") " pod="openstack/memcached-0" Jan 28 18:53:06 crc kubenswrapper[4721]: I0128 18:53:06.087221 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/7be00819-ddfd-47d6-a7fc-430607636883-kolla-config\") pod \"memcached-0\" (UID: \"7be00819-ddfd-47d6-a7fc-430607636883\") " pod="openstack/memcached-0" Jan 28 18:53:06 crc kubenswrapper[4721]: I0128 18:53:06.131679 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Jan 28 18:53:06 crc kubenswrapper[4721]: I0128 18:53:06.380979 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 28 18:53:07 crc kubenswrapper[4721]: I0128 18:53:07.501840 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Jan 28 18:53:07 crc kubenswrapper[4721]: I0128 18:53:07.503860 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 28 18:53:07 crc kubenswrapper[4721]: I0128 18:53:07.507790 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-skdwh" Jan 28 18:53:07 crc kubenswrapper[4721]: I0128 18:53:07.516811 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 28 18:53:07 crc kubenswrapper[4721]: I0128 18:53:07.619819 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mtrqv\" (UniqueName: \"kubernetes.io/projected/5e16ae9a-515f-4c11-a048-84aedad18b0a-kube-api-access-mtrqv\") pod \"kube-state-metrics-0\" (UID: \"5e16ae9a-515f-4c11-a048-84aedad18b0a\") " pod="openstack/kube-state-metrics-0" Jan 28 18:53:07 crc kubenswrapper[4721]: I0128 18:53:07.722343 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mtrqv\" (UniqueName: \"kubernetes.io/projected/5e16ae9a-515f-4c11-a048-84aedad18b0a-kube-api-access-mtrqv\") pod \"kube-state-metrics-0\" (UID: \"5e16ae9a-515f-4c11-a048-84aedad18b0a\") " pod="openstack/kube-state-metrics-0" Jan 28 18:53:07 crc kubenswrapper[4721]: I0128 18:53:07.748126 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mtrqv\" (UniqueName: \"kubernetes.io/projected/5e16ae9a-515f-4c11-a048-84aedad18b0a-kube-api-access-mtrqv\") pod \"kube-state-metrics-0\" (UID: \"5e16ae9a-515f-4c11-a048-84aedad18b0a\") " pod="openstack/kube-state-metrics-0" Jan 28 18:53:07 crc kubenswrapper[4721]: I0128 18:53:07.859988 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 28 18:53:08 crc kubenswrapper[4721]: I0128 18:53:08.453919 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/alertmanager-metric-storage-0"] Jan 28 18:53:08 crc kubenswrapper[4721]: I0128 18:53:08.457404 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/alertmanager-metric-storage-0" Jan 28 18:53:08 crc kubenswrapper[4721]: I0128 18:53:08.461377 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"alertmanager-metric-storage-cluster-tls-config" Jan 28 18:53:08 crc kubenswrapper[4721]: I0128 18:53:08.461554 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"alertmanager-metric-storage-tls-assets-0" Jan 28 18:53:08 crc kubenswrapper[4721]: I0128 18:53:08.461723 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"alertmanager-metric-storage-web-config" Jan 28 18:53:08 crc kubenswrapper[4721]: I0128 18:53:08.461828 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"alertmanager-metric-storage-generated" Jan 28 18:53:08 crc kubenswrapper[4721]: I0128 18:53:08.462131 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-alertmanager-dockercfg-j5gnb" Jan 28 18:53:08 crc kubenswrapper[4721]: I0128 18:53:08.480563 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/alertmanager-metric-storage-0"] Jan 28 18:53:08 crc kubenswrapper[4721]: I0128 18:53:08.536250 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/95a1b67a-adb0-42f1-9fb8-32b01c443ede-tls-assets\") pod \"alertmanager-metric-storage-0\" (UID: \"95a1b67a-adb0-42f1-9fb8-32b01c443ede\") " pod="openstack/alertmanager-metric-storage-0" Jan 28 18:53:08 crc kubenswrapper[4721]: I0128 18:53:08.536388 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-52l4n\" (UniqueName: \"kubernetes.io/projected/95a1b67a-adb0-42f1-9fb8-32b01c443ede-kube-api-access-52l4n\") pod \"alertmanager-metric-storage-0\" (UID: \"95a1b67a-adb0-42f1-9fb8-32b01c443ede\") " pod="openstack/alertmanager-metric-storage-0" Jan 28 18:53:08 crc kubenswrapper[4721]: I0128 18:53:08.536441 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/95a1b67a-adb0-42f1-9fb8-32b01c443ede-config-volume\") pod \"alertmanager-metric-storage-0\" (UID: \"95a1b67a-adb0-42f1-9fb8-32b01c443ede\") " pod="openstack/alertmanager-metric-storage-0" Jan 28 18:53:08 crc kubenswrapper[4721]: I0128 18:53:08.536463 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/95a1b67a-adb0-42f1-9fb8-32b01c443ede-web-config\") pod \"alertmanager-metric-storage-0\" (UID: \"95a1b67a-adb0-42f1-9fb8-32b01c443ede\") " pod="openstack/alertmanager-metric-storage-0" Jan 28 18:53:08 crc kubenswrapper[4721]: I0128 18:53:08.536502 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/95a1b67a-adb0-42f1-9fb8-32b01c443ede-config-out\") pod \"alertmanager-metric-storage-0\" (UID: \"95a1b67a-adb0-42f1-9fb8-32b01c443ede\") " pod="openstack/alertmanager-metric-storage-0" Jan 28 18:53:08 crc kubenswrapper[4721]: I0128 18:53:08.536537 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cluster-tls-config\" (UniqueName: \"kubernetes.io/secret/95a1b67a-adb0-42f1-9fb8-32b01c443ede-cluster-tls-config\") pod \"alertmanager-metric-storage-0\" (UID: 
\"95a1b67a-adb0-42f1-9fb8-32b01c443ede\") " pod="openstack/alertmanager-metric-storage-0" Jan 28 18:53:08 crc kubenswrapper[4721]: I0128 18:53:08.536568 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"alertmanager-metric-storage-db\" (UniqueName: \"kubernetes.io/empty-dir/95a1b67a-adb0-42f1-9fb8-32b01c443ede-alertmanager-metric-storage-db\") pod \"alertmanager-metric-storage-0\" (UID: \"95a1b67a-adb0-42f1-9fb8-32b01c443ede\") " pod="openstack/alertmanager-metric-storage-0" Jan 28 18:53:08 crc kubenswrapper[4721]: I0128 18:53:08.640080 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-52l4n\" (UniqueName: \"kubernetes.io/projected/95a1b67a-adb0-42f1-9fb8-32b01c443ede-kube-api-access-52l4n\") pod \"alertmanager-metric-storage-0\" (UID: \"95a1b67a-adb0-42f1-9fb8-32b01c443ede\") " pod="openstack/alertmanager-metric-storage-0" Jan 28 18:53:08 crc kubenswrapper[4721]: I0128 18:53:08.640156 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/95a1b67a-adb0-42f1-9fb8-32b01c443ede-config-volume\") pod \"alertmanager-metric-storage-0\" (UID: \"95a1b67a-adb0-42f1-9fb8-32b01c443ede\") " pod="openstack/alertmanager-metric-storage-0" Jan 28 18:53:08 crc kubenswrapper[4721]: I0128 18:53:08.640183 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/95a1b67a-adb0-42f1-9fb8-32b01c443ede-web-config\") pod \"alertmanager-metric-storage-0\" (UID: \"95a1b67a-adb0-42f1-9fb8-32b01c443ede\") " pod="openstack/alertmanager-metric-storage-0" Jan 28 18:53:08 crc kubenswrapper[4721]: I0128 18:53:08.640227 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/95a1b67a-adb0-42f1-9fb8-32b01c443ede-config-out\") pod \"alertmanager-metric-storage-0\" (UID: \"95a1b67a-adb0-42f1-9fb8-32b01c443ede\") " pod="openstack/alertmanager-metric-storage-0" Jan 28 18:53:08 crc kubenswrapper[4721]: I0128 18:53:08.640252 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-tls-config\" (UniqueName: \"kubernetes.io/secret/95a1b67a-adb0-42f1-9fb8-32b01c443ede-cluster-tls-config\") pod \"alertmanager-metric-storage-0\" (UID: \"95a1b67a-adb0-42f1-9fb8-32b01c443ede\") " pod="openstack/alertmanager-metric-storage-0" Jan 28 18:53:08 crc kubenswrapper[4721]: I0128 18:53:08.640279 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"alertmanager-metric-storage-db\" (UniqueName: \"kubernetes.io/empty-dir/95a1b67a-adb0-42f1-9fb8-32b01c443ede-alertmanager-metric-storage-db\") pod \"alertmanager-metric-storage-0\" (UID: \"95a1b67a-adb0-42f1-9fb8-32b01c443ede\") " pod="openstack/alertmanager-metric-storage-0" Jan 28 18:53:08 crc kubenswrapper[4721]: I0128 18:53:08.640329 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/95a1b67a-adb0-42f1-9fb8-32b01c443ede-tls-assets\") pod \"alertmanager-metric-storage-0\" (UID: \"95a1b67a-adb0-42f1-9fb8-32b01c443ede\") " pod="openstack/alertmanager-metric-storage-0" Jan 28 18:53:08 crc kubenswrapper[4721]: I0128 18:53:08.648655 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/95a1b67a-adb0-42f1-9fb8-32b01c443ede-config-volume\") pod 
\"alertmanager-metric-storage-0\" (UID: \"95a1b67a-adb0-42f1-9fb8-32b01c443ede\") " pod="openstack/alertmanager-metric-storage-0" Jan 28 18:53:08 crc kubenswrapper[4721]: I0128 18:53:08.654532 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/95a1b67a-adb0-42f1-9fb8-32b01c443ede-web-config\") pod \"alertmanager-metric-storage-0\" (UID: \"95a1b67a-adb0-42f1-9fb8-32b01c443ede\") " pod="openstack/alertmanager-metric-storage-0" Jan 28 18:53:08 crc kubenswrapper[4721]: I0128 18:53:08.659833 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/95a1b67a-adb0-42f1-9fb8-32b01c443ede-tls-assets\") pod \"alertmanager-metric-storage-0\" (UID: \"95a1b67a-adb0-42f1-9fb8-32b01c443ede\") " pod="openstack/alertmanager-metric-storage-0" Jan 28 18:53:08 crc kubenswrapper[4721]: I0128 18:53:08.660583 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-tls-config\" (UniqueName: \"kubernetes.io/secret/95a1b67a-adb0-42f1-9fb8-32b01c443ede-cluster-tls-config\") pod \"alertmanager-metric-storage-0\" (UID: \"95a1b67a-adb0-42f1-9fb8-32b01c443ede\") " pod="openstack/alertmanager-metric-storage-0" Jan 28 18:53:08 crc kubenswrapper[4721]: I0128 18:53:08.669144 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-52l4n\" (UniqueName: \"kubernetes.io/projected/95a1b67a-adb0-42f1-9fb8-32b01c443ede-kube-api-access-52l4n\") pod \"alertmanager-metric-storage-0\" (UID: \"95a1b67a-adb0-42f1-9fb8-32b01c443ede\") " pod="openstack/alertmanager-metric-storage-0" Jan 28 18:53:08 crc kubenswrapper[4721]: I0128 18:53:08.669164 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"alertmanager-metric-storage-db\" (UniqueName: \"kubernetes.io/empty-dir/95a1b67a-adb0-42f1-9fb8-32b01c443ede-alertmanager-metric-storage-db\") pod \"alertmanager-metric-storage-0\" (UID: \"95a1b67a-adb0-42f1-9fb8-32b01c443ede\") " pod="openstack/alertmanager-metric-storage-0" Jan 28 18:53:08 crc kubenswrapper[4721]: I0128 18:53:08.677792 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/95a1b67a-adb0-42f1-9fb8-32b01c443ede-config-out\") pod \"alertmanager-metric-storage-0\" (UID: \"95a1b67a-adb0-42f1-9fb8-32b01c443ede\") " pod="openstack/alertmanager-metric-storage-0" Jan 28 18:53:08 crc kubenswrapper[4721]: I0128 18:53:08.789622 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/alertmanager-metric-storage-0" Jan 28 18:53:08 crc kubenswrapper[4721]: I0128 18:53:08.906722 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 28 18:53:08 crc kubenswrapper[4721]: I0128 18:53:08.908859 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Jan 28 18:53:08 crc kubenswrapper[4721]: I0128 18:53:08.914670 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-thanos-prometheus-http-client-file" Jan 28 18:53:08 crc kubenswrapper[4721]: I0128 18:53:08.914682 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage" Jan 28 18:53:08 crc kubenswrapper[4721]: I0128 18:53:08.914895 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-1" Jan 28 18:53:08 crc kubenswrapper[4721]: I0128 18:53:08.914904 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-web-config" Jan 28 18:53:08 crc kubenswrapper[4721]: I0128 18:53:08.914938 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-prometheus-dockercfg-zmptf" Jan 28 18:53:08 crc kubenswrapper[4721]: I0128 18:53:08.914898 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-0" Jan 28 18:53:08 crc kubenswrapper[4721]: I0128 18:53:08.915087 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-tls-assets-0" Jan 28 18:53:08 crc kubenswrapper[4721]: I0128 18:53:08.915318 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-2" Jan 28 18:53:08 crc kubenswrapper[4721]: I0128 18:53:08.935182 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 28 18:53:09 crc kubenswrapper[4721]: I0128 18:53:09.046642 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/dc3781f4-04ef-40f3-b772-88deb9a9e3b6-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"dc3781f4-04ef-40f3-b772-88deb9a9e3b6\") " pod="openstack/prometheus-metric-storage-0" Jan 28 18:53:09 crc kubenswrapper[4721]: I0128 18:53:09.046711 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/dc3781f4-04ef-40f3-b772-88deb9a9e3b6-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"dc3781f4-04ef-40f3-b772-88deb9a9e3b6\") " pod="openstack/prometheus-metric-storage-0" Jan 28 18:53:09 crc kubenswrapper[4721]: I0128 18:53:09.046752 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-65f8c88a-05fd-41aa-a62f-62b8de64d97f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-65f8c88a-05fd-41aa-a62f-62b8de64d97f\") pod \"prometheus-metric-storage-0\" (UID: \"dc3781f4-04ef-40f3-b772-88deb9a9e3b6\") " pod="openstack/prometheus-metric-storage-0" Jan 28 18:53:09 crc kubenswrapper[4721]: I0128 18:53:09.046789 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/dc3781f4-04ef-40f3-b772-88deb9a9e3b6-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"dc3781f4-04ef-40f3-b772-88deb9a9e3b6\") " pod="openstack/prometheus-metric-storage-0" Jan 28 18:53:09 crc kubenswrapper[4721]: I0128 18:53:09.046837 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/dc3781f4-04ef-40f3-b772-88deb9a9e3b6-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"dc3781f4-04ef-40f3-b772-88deb9a9e3b6\") " pod="openstack/prometheus-metric-storage-0" Jan 28 18:53:09 crc kubenswrapper[4721]: I0128 18:53:09.046897 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/dc3781f4-04ef-40f3-b772-88deb9a9e3b6-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"dc3781f4-04ef-40f3-b772-88deb9a9e3b6\") " pod="openstack/prometheus-metric-storage-0" Jan 28 18:53:09 crc kubenswrapper[4721]: I0128 18:53:09.046916 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z979p\" (UniqueName: \"kubernetes.io/projected/dc3781f4-04ef-40f3-b772-88deb9a9e3b6-kube-api-access-z979p\") pod \"prometheus-metric-storage-0\" (UID: \"dc3781f4-04ef-40f3-b772-88deb9a9e3b6\") " pod="openstack/prometheus-metric-storage-0" Jan 28 18:53:09 crc kubenswrapper[4721]: I0128 18:53:09.046938 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/dc3781f4-04ef-40f3-b772-88deb9a9e3b6-config\") pod \"prometheus-metric-storage-0\" (UID: \"dc3781f4-04ef-40f3-b772-88deb9a9e3b6\") " pod="openstack/prometheus-metric-storage-0" Jan 28 18:53:09 crc kubenswrapper[4721]: I0128 18:53:09.046962 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/dc3781f4-04ef-40f3-b772-88deb9a9e3b6-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"dc3781f4-04ef-40f3-b772-88deb9a9e3b6\") " pod="openstack/prometheus-metric-storage-0" Jan 28 18:53:09 crc kubenswrapper[4721]: I0128 18:53:09.047006 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/dc3781f4-04ef-40f3-b772-88deb9a9e3b6-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"dc3781f4-04ef-40f3-b772-88deb9a9e3b6\") " pod="openstack/prometheus-metric-storage-0" Jan 28 18:53:09 crc kubenswrapper[4721]: I0128 18:53:09.148689 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/dc3781f4-04ef-40f3-b772-88deb9a9e3b6-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"dc3781f4-04ef-40f3-b772-88deb9a9e3b6\") " pod="openstack/prometheus-metric-storage-0" Jan 28 18:53:09 crc kubenswrapper[4721]: I0128 18:53:09.148763 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/dc3781f4-04ef-40f3-b772-88deb9a9e3b6-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"dc3781f4-04ef-40f3-b772-88deb9a9e3b6\") " pod="openstack/prometheus-metric-storage-0" Jan 28 18:53:09 crc kubenswrapper[4721]: I0128 18:53:09.148813 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/dc3781f4-04ef-40f3-b772-88deb9a9e3b6-web-config\") pod \"prometheus-metric-storage-0\" (UID: 
\"dc3781f4-04ef-40f3-b772-88deb9a9e3b6\") " pod="openstack/prometheus-metric-storage-0" Jan 28 18:53:09 crc kubenswrapper[4721]: I0128 18:53:09.148869 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-65f8c88a-05fd-41aa-a62f-62b8de64d97f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-65f8c88a-05fd-41aa-a62f-62b8de64d97f\") pod \"prometheus-metric-storage-0\" (UID: \"dc3781f4-04ef-40f3-b772-88deb9a9e3b6\") " pod="openstack/prometheus-metric-storage-0" Jan 28 18:53:09 crc kubenswrapper[4721]: I0128 18:53:09.149145 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/dc3781f4-04ef-40f3-b772-88deb9a9e3b6-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"dc3781f4-04ef-40f3-b772-88deb9a9e3b6\") " pod="openstack/prometheus-metric-storage-0" Jan 28 18:53:09 crc kubenswrapper[4721]: I0128 18:53:09.149217 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/dc3781f4-04ef-40f3-b772-88deb9a9e3b6-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"dc3781f4-04ef-40f3-b772-88deb9a9e3b6\") " pod="openstack/prometheus-metric-storage-0" Jan 28 18:53:09 crc kubenswrapper[4721]: I0128 18:53:09.149283 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/dc3781f4-04ef-40f3-b772-88deb9a9e3b6-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"dc3781f4-04ef-40f3-b772-88deb9a9e3b6\") " pod="openstack/prometheus-metric-storage-0" Jan 28 18:53:09 crc kubenswrapper[4721]: I0128 18:53:09.149340 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z979p\" (UniqueName: \"kubernetes.io/projected/dc3781f4-04ef-40f3-b772-88deb9a9e3b6-kube-api-access-z979p\") pod \"prometheus-metric-storage-0\" (UID: \"dc3781f4-04ef-40f3-b772-88deb9a9e3b6\") " pod="openstack/prometheus-metric-storage-0" Jan 28 18:53:09 crc kubenswrapper[4721]: I0128 18:53:09.149429 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/dc3781f4-04ef-40f3-b772-88deb9a9e3b6-config\") pod \"prometheus-metric-storage-0\" (UID: \"dc3781f4-04ef-40f3-b772-88deb9a9e3b6\") " pod="openstack/prometheus-metric-storage-0" Jan 28 18:53:09 crc kubenswrapper[4721]: I0128 18:53:09.149471 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/dc3781f4-04ef-40f3-b772-88deb9a9e3b6-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"dc3781f4-04ef-40f3-b772-88deb9a9e3b6\") " pod="openstack/prometheus-metric-storage-0" Jan 28 18:53:09 crc kubenswrapper[4721]: I0128 18:53:09.151444 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/dc3781f4-04ef-40f3-b772-88deb9a9e3b6-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"dc3781f4-04ef-40f3-b772-88deb9a9e3b6\") " pod="openstack/prometheus-metric-storage-0" Jan 28 18:53:09 crc kubenswrapper[4721]: I0128 18:53:09.152066 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: 
\"kubernetes.io/configmap/dc3781f4-04ef-40f3-b772-88deb9a9e3b6-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"dc3781f4-04ef-40f3-b772-88deb9a9e3b6\") " pod="openstack/prometheus-metric-storage-0" Jan 28 18:53:09 crc kubenswrapper[4721]: I0128 18:53:09.153019 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/dc3781f4-04ef-40f3-b772-88deb9a9e3b6-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"dc3781f4-04ef-40f3-b772-88deb9a9e3b6\") " pod="openstack/prometheus-metric-storage-0" Jan 28 18:53:09 crc kubenswrapper[4721]: I0128 18:53:09.153475 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/dc3781f4-04ef-40f3-b772-88deb9a9e3b6-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"dc3781f4-04ef-40f3-b772-88deb9a9e3b6\") " pod="openstack/prometheus-metric-storage-0" Jan 28 18:53:09 crc kubenswrapper[4721]: I0128 18:53:09.156723 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/dc3781f4-04ef-40f3-b772-88deb9a9e3b6-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"dc3781f4-04ef-40f3-b772-88deb9a9e3b6\") " pod="openstack/prometheus-metric-storage-0" Jan 28 18:53:09 crc kubenswrapper[4721]: I0128 18:53:09.157971 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/dc3781f4-04ef-40f3-b772-88deb9a9e3b6-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"dc3781f4-04ef-40f3-b772-88deb9a9e3b6\") " pod="openstack/prometheus-metric-storage-0" Jan 28 18:53:09 crc kubenswrapper[4721]: I0128 18:53:09.158646 4721 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 28 18:53:09 crc kubenswrapper[4721]: I0128 18:53:09.158812 4721 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-65f8c88a-05fd-41aa-a62f-62b8de64d97f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-65f8c88a-05fd-41aa-a62f-62b8de64d97f\") pod \"prometheus-metric-storage-0\" (UID: \"dc3781f4-04ef-40f3-b772-88deb9a9e3b6\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/d38c0b121d5295a147080ad18debad98481eaf07feef18cd6048e41a66022495/globalmount\"" pod="openstack/prometheus-metric-storage-0" Jan 28 18:53:09 crc kubenswrapper[4721]: I0128 18:53:09.162350 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/dc3781f4-04ef-40f3-b772-88deb9a9e3b6-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"dc3781f4-04ef-40f3-b772-88deb9a9e3b6\") " pod="openstack/prometheus-metric-storage-0" Jan 28 18:53:09 crc kubenswrapper[4721]: I0128 18:53:09.181041 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z979p\" (UniqueName: \"kubernetes.io/projected/dc3781f4-04ef-40f3-b772-88deb9a9e3b6-kube-api-access-z979p\") pod \"prometheus-metric-storage-0\" (UID: \"dc3781f4-04ef-40f3-b772-88deb9a9e3b6\") " pod="openstack/prometheus-metric-storage-0" Jan 28 18:53:09 crc kubenswrapper[4721]: I0128 18:53:09.185633 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/dc3781f4-04ef-40f3-b772-88deb9a9e3b6-config\") pod \"prometheus-metric-storage-0\" (UID: \"dc3781f4-04ef-40f3-b772-88deb9a9e3b6\") " pod="openstack/prometheus-metric-storage-0" Jan 28 18:53:09 crc kubenswrapper[4721]: I0128 18:53:09.227095 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-65f8c88a-05fd-41aa-a62f-62b8de64d97f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-65f8c88a-05fd-41aa-a62f-62b8de64d97f\") pod \"prometheus-metric-storage-0\" (UID: \"dc3781f4-04ef-40f3-b772-88deb9a9e3b6\") " pod="openstack/prometheus-metric-storage-0" Jan 28 18:53:09 crc kubenswrapper[4721]: I0128 18:53:09.245505 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Jan 28 18:53:11 crc kubenswrapper[4721]: I0128 18:53:11.355981 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-sbclw"] Jan 28 18:53:11 crc kubenswrapper[4721]: I0128 18:53:11.358180 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-sbclw" Jan 28 18:53:11 crc kubenswrapper[4721]: I0128 18:53:11.365917 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-sbclw"] Jan 28 18:53:11 crc kubenswrapper[4721]: I0128 18:53:11.366330 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncontroller-ovncontroller-dockercfg-l9z5n" Jan 28 18:53:11 crc kubenswrapper[4721]: I0128 18:53:11.366520 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts" Jan 28 18:53:11 crc kubenswrapper[4721]: I0128 18:53:11.366644 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovncontroller-ovndbs" Jan 28 18:53:11 crc kubenswrapper[4721]: I0128 18:53:11.428653 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-djsj9"] Jan 28 18:53:11 crc kubenswrapper[4721]: I0128 18:53:11.430598 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-djsj9" Jan 28 18:53:11 crc kubenswrapper[4721]: I0128 18:53:11.436931 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-djsj9"] Jan 28 18:53:11 crc kubenswrapper[4721]: I0128 18:53:11.517933 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/c391bae1-d3a9-4ccd-a868-d7263d9b0a28-ovn-controller-tls-certs\") pod \"ovn-controller-sbclw\" (UID: \"c391bae1-d3a9-4ccd-a868-d7263d9b0a28\") " pod="openstack/ovn-controller-sbclw" Jan 28 18:53:11 crc kubenswrapper[4721]: I0128 18:53:11.518000 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4hsz4\" (UniqueName: \"kubernetes.io/projected/88eb1b46-3d78-4f1f-b822-aa8562237980-kube-api-access-4hsz4\") pod \"ovn-controller-ovs-djsj9\" (UID: \"88eb1b46-3d78-4f1f-b822-aa8562237980\") " pod="openstack/ovn-controller-ovs-djsj9" Jan 28 18:53:11 crc kubenswrapper[4721]: I0128 18:53:11.518033 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/c391bae1-d3a9-4ccd-a868-d7263d9b0a28-var-log-ovn\") pod \"ovn-controller-sbclw\" (UID: \"c391bae1-d3a9-4ccd-a868-d7263d9b0a28\") " pod="openstack/ovn-controller-sbclw" Jan 28 18:53:11 crc kubenswrapper[4721]: I0128 18:53:11.518057 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c391bae1-d3a9-4ccd-a868-d7263d9b0a28-scripts\") pod \"ovn-controller-sbclw\" (UID: \"c391bae1-d3a9-4ccd-a868-d7263d9b0a28\") " pod="openstack/ovn-controller-sbclw" Jan 28 18:53:11 crc kubenswrapper[4721]: I0128 18:53:11.518105 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/88eb1b46-3d78-4f1f-b822-aa8562237980-var-lib\") pod \"ovn-controller-ovs-djsj9\" (UID: \"88eb1b46-3d78-4f1f-b822-aa8562237980\") " pod="openstack/ovn-controller-ovs-djsj9" Jan 28 18:53:11 crc kubenswrapper[4721]: I0128 18:53:11.518153 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/88eb1b46-3d78-4f1f-b822-aa8562237980-var-log\") pod \"ovn-controller-ovs-djsj9\" (UID: 
\"88eb1b46-3d78-4f1f-b822-aa8562237980\") " pod="openstack/ovn-controller-ovs-djsj9" Jan 28 18:53:11 crc kubenswrapper[4721]: I0128 18:53:11.518177 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/88eb1b46-3d78-4f1f-b822-aa8562237980-etc-ovs\") pod \"ovn-controller-ovs-djsj9\" (UID: \"88eb1b46-3d78-4f1f-b822-aa8562237980\") " pod="openstack/ovn-controller-ovs-djsj9" Jan 28 18:53:11 crc kubenswrapper[4721]: I0128 18:53:11.518230 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/88eb1b46-3d78-4f1f-b822-aa8562237980-scripts\") pod \"ovn-controller-ovs-djsj9\" (UID: \"88eb1b46-3d78-4f1f-b822-aa8562237980\") " pod="openstack/ovn-controller-ovs-djsj9" Jan 28 18:53:11 crc kubenswrapper[4721]: I0128 18:53:11.518366 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/c391bae1-d3a9-4ccd-a868-d7263d9b0a28-var-run-ovn\") pod \"ovn-controller-sbclw\" (UID: \"c391bae1-d3a9-4ccd-a868-d7263d9b0a28\") " pod="openstack/ovn-controller-sbclw" Jan 28 18:53:11 crc kubenswrapper[4721]: I0128 18:53:11.518404 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c391bae1-d3a9-4ccd-a868-d7263d9b0a28-combined-ca-bundle\") pod \"ovn-controller-sbclw\" (UID: \"c391bae1-d3a9-4ccd-a868-d7263d9b0a28\") " pod="openstack/ovn-controller-sbclw" Jan 28 18:53:11 crc kubenswrapper[4721]: I0128 18:53:11.518501 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/c391bae1-d3a9-4ccd-a868-d7263d9b0a28-var-run\") pod \"ovn-controller-sbclw\" (UID: \"c391bae1-d3a9-4ccd-a868-d7263d9b0a28\") " pod="openstack/ovn-controller-sbclw" Jan 28 18:53:11 crc kubenswrapper[4721]: I0128 18:53:11.518593 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/88eb1b46-3d78-4f1f-b822-aa8562237980-var-run\") pod \"ovn-controller-ovs-djsj9\" (UID: \"88eb1b46-3d78-4f1f-b822-aa8562237980\") " pod="openstack/ovn-controller-ovs-djsj9" Jan 28 18:53:11 crc kubenswrapper[4721]: I0128 18:53:11.518632 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f66wt\" (UniqueName: \"kubernetes.io/projected/c391bae1-d3a9-4ccd-a868-d7263d9b0a28-kube-api-access-f66wt\") pod \"ovn-controller-sbclw\" (UID: \"c391bae1-d3a9-4ccd-a868-d7263d9b0a28\") " pod="openstack/ovn-controller-sbclw" Jan 28 18:53:11 crc kubenswrapper[4721]: I0128 18:53:11.620746 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/88eb1b46-3d78-4f1f-b822-aa8562237980-var-run\") pod \"ovn-controller-ovs-djsj9\" (UID: \"88eb1b46-3d78-4f1f-b822-aa8562237980\") " pod="openstack/ovn-controller-ovs-djsj9" Jan 28 18:53:11 crc kubenswrapper[4721]: I0128 18:53:11.620812 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f66wt\" (UniqueName: \"kubernetes.io/projected/c391bae1-d3a9-4ccd-a868-d7263d9b0a28-kube-api-access-f66wt\") pod \"ovn-controller-sbclw\" (UID: \"c391bae1-d3a9-4ccd-a868-d7263d9b0a28\") " 
pod="openstack/ovn-controller-sbclw" Jan 28 18:53:11 crc kubenswrapper[4721]: I0128 18:53:11.620858 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/c391bae1-d3a9-4ccd-a868-d7263d9b0a28-ovn-controller-tls-certs\") pod \"ovn-controller-sbclw\" (UID: \"c391bae1-d3a9-4ccd-a868-d7263d9b0a28\") " pod="openstack/ovn-controller-sbclw" Jan 28 18:53:11 crc kubenswrapper[4721]: I0128 18:53:11.620887 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4hsz4\" (UniqueName: \"kubernetes.io/projected/88eb1b46-3d78-4f1f-b822-aa8562237980-kube-api-access-4hsz4\") pod \"ovn-controller-ovs-djsj9\" (UID: \"88eb1b46-3d78-4f1f-b822-aa8562237980\") " pod="openstack/ovn-controller-ovs-djsj9" Jan 28 18:53:11 crc kubenswrapper[4721]: I0128 18:53:11.620916 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/c391bae1-d3a9-4ccd-a868-d7263d9b0a28-var-log-ovn\") pod \"ovn-controller-sbclw\" (UID: \"c391bae1-d3a9-4ccd-a868-d7263d9b0a28\") " pod="openstack/ovn-controller-sbclw" Jan 28 18:53:11 crc kubenswrapper[4721]: I0128 18:53:11.620939 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c391bae1-d3a9-4ccd-a868-d7263d9b0a28-scripts\") pod \"ovn-controller-sbclw\" (UID: \"c391bae1-d3a9-4ccd-a868-d7263d9b0a28\") " pod="openstack/ovn-controller-sbclw" Jan 28 18:53:11 crc kubenswrapper[4721]: I0128 18:53:11.620977 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/88eb1b46-3d78-4f1f-b822-aa8562237980-var-lib\") pod \"ovn-controller-ovs-djsj9\" (UID: \"88eb1b46-3d78-4f1f-b822-aa8562237980\") " pod="openstack/ovn-controller-ovs-djsj9" Jan 28 18:53:11 crc kubenswrapper[4721]: I0128 18:53:11.621018 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/88eb1b46-3d78-4f1f-b822-aa8562237980-var-log\") pod \"ovn-controller-ovs-djsj9\" (UID: \"88eb1b46-3d78-4f1f-b822-aa8562237980\") " pod="openstack/ovn-controller-ovs-djsj9" Jan 28 18:53:11 crc kubenswrapper[4721]: I0128 18:53:11.621046 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/88eb1b46-3d78-4f1f-b822-aa8562237980-etc-ovs\") pod \"ovn-controller-ovs-djsj9\" (UID: \"88eb1b46-3d78-4f1f-b822-aa8562237980\") " pod="openstack/ovn-controller-ovs-djsj9" Jan 28 18:53:11 crc kubenswrapper[4721]: I0128 18:53:11.621068 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/88eb1b46-3d78-4f1f-b822-aa8562237980-scripts\") pod \"ovn-controller-ovs-djsj9\" (UID: \"88eb1b46-3d78-4f1f-b822-aa8562237980\") " pod="openstack/ovn-controller-ovs-djsj9" Jan 28 18:53:11 crc kubenswrapper[4721]: I0128 18:53:11.621107 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/c391bae1-d3a9-4ccd-a868-d7263d9b0a28-var-run-ovn\") pod \"ovn-controller-sbclw\" (UID: \"c391bae1-d3a9-4ccd-a868-d7263d9b0a28\") " pod="openstack/ovn-controller-sbclw" Jan 28 18:53:11 crc kubenswrapper[4721]: I0128 18:53:11.621134 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/c391bae1-d3a9-4ccd-a868-d7263d9b0a28-combined-ca-bundle\") pod \"ovn-controller-sbclw\" (UID: \"c391bae1-d3a9-4ccd-a868-d7263d9b0a28\") " pod="openstack/ovn-controller-sbclw" Jan 28 18:53:11 crc kubenswrapper[4721]: I0128 18:53:11.621165 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/c391bae1-d3a9-4ccd-a868-d7263d9b0a28-var-run\") pod \"ovn-controller-sbclw\" (UID: \"c391bae1-d3a9-4ccd-a868-d7263d9b0a28\") " pod="openstack/ovn-controller-sbclw" Jan 28 18:53:11 crc kubenswrapper[4721]: I0128 18:53:11.621652 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/88eb1b46-3d78-4f1f-b822-aa8562237980-var-run\") pod \"ovn-controller-ovs-djsj9\" (UID: \"88eb1b46-3d78-4f1f-b822-aa8562237980\") " pod="openstack/ovn-controller-ovs-djsj9" Jan 28 18:53:11 crc kubenswrapper[4721]: I0128 18:53:11.621706 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/88eb1b46-3d78-4f1f-b822-aa8562237980-var-log\") pod \"ovn-controller-ovs-djsj9\" (UID: \"88eb1b46-3d78-4f1f-b822-aa8562237980\") " pod="openstack/ovn-controller-ovs-djsj9" Jan 28 18:53:11 crc kubenswrapper[4721]: I0128 18:53:11.621708 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/c391bae1-d3a9-4ccd-a868-d7263d9b0a28-var-run-ovn\") pod \"ovn-controller-sbclw\" (UID: \"c391bae1-d3a9-4ccd-a868-d7263d9b0a28\") " pod="openstack/ovn-controller-sbclw" Jan 28 18:53:11 crc kubenswrapper[4721]: I0128 18:53:11.621755 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/88eb1b46-3d78-4f1f-b822-aa8562237980-etc-ovs\") pod \"ovn-controller-ovs-djsj9\" (UID: \"88eb1b46-3d78-4f1f-b822-aa8562237980\") " pod="openstack/ovn-controller-ovs-djsj9" Jan 28 18:53:11 crc kubenswrapper[4721]: I0128 18:53:11.621772 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/c391bae1-d3a9-4ccd-a868-d7263d9b0a28-var-log-ovn\") pod \"ovn-controller-sbclw\" (UID: \"c391bae1-d3a9-4ccd-a868-d7263d9b0a28\") " pod="openstack/ovn-controller-sbclw" Jan 28 18:53:11 crc kubenswrapper[4721]: I0128 18:53:11.623645 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c391bae1-d3a9-4ccd-a868-d7263d9b0a28-scripts\") pod \"ovn-controller-sbclw\" (UID: \"c391bae1-d3a9-4ccd-a868-d7263d9b0a28\") " pod="openstack/ovn-controller-sbclw" Jan 28 18:53:11 crc kubenswrapper[4721]: I0128 18:53:11.626538 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/88eb1b46-3d78-4f1f-b822-aa8562237980-scripts\") pod \"ovn-controller-ovs-djsj9\" (UID: \"88eb1b46-3d78-4f1f-b822-aa8562237980\") " pod="openstack/ovn-controller-ovs-djsj9" Jan 28 18:53:11 crc kubenswrapper[4721]: I0128 18:53:11.628502 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/c391bae1-d3a9-4ccd-a868-d7263d9b0a28-var-run\") pod \"ovn-controller-sbclw\" (UID: \"c391bae1-d3a9-4ccd-a868-d7263d9b0a28\") " pod="openstack/ovn-controller-sbclw" Jan 28 18:53:11 crc kubenswrapper[4721]: I0128 18:53:11.628515 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/88eb1b46-3d78-4f1f-b822-aa8562237980-var-lib\") pod \"ovn-controller-ovs-djsj9\" (UID: \"88eb1b46-3d78-4f1f-b822-aa8562237980\") " pod="openstack/ovn-controller-ovs-djsj9" Jan 28 18:53:11 crc kubenswrapper[4721]: I0128 18:53:11.629492 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/c391bae1-d3a9-4ccd-a868-d7263d9b0a28-ovn-controller-tls-certs\") pod \"ovn-controller-sbclw\" (UID: \"c391bae1-d3a9-4ccd-a868-d7263d9b0a28\") " pod="openstack/ovn-controller-sbclw" Jan 28 18:53:11 crc kubenswrapper[4721]: I0128 18:53:11.641022 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f66wt\" (UniqueName: \"kubernetes.io/projected/c391bae1-d3a9-4ccd-a868-d7263d9b0a28-kube-api-access-f66wt\") pod \"ovn-controller-sbclw\" (UID: \"c391bae1-d3a9-4ccd-a868-d7263d9b0a28\") " pod="openstack/ovn-controller-sbclw" Jan 28 18:53:11 crc kubenswrapper[4721]: I0128 18:53:11.641761 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4hsz4\" (UniqueName: \"kubernetes.io/projected/88eb1b46-3d78-4f1f-b822-aa8562237980-kube-api-access-4hsz4\") pod \"ovn-controller-ovs-djsj9\" (UID: \"88eb1b46-3d78-4f1f-b822-aa8562237980\") " pod="openstack/ovn-controller-ovs-djsj9" Jan 28 18:53:11 crc kubenswrapper[4721]: I0128 18:53:11.655039 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c391bae1-d3a9-4ccd-a868-d7263d9b0a28-combined-ca-bundle\") pod \"ovn-controller-sbclw\" (UID: \"c391bae1-d3a9-4ccd-a868-d7263d9b0a28\") " pod="openstack/ovn-controller-sbclw" Jan 28 18:53:11 crc kubenswrapper[4721]: I0128 18:53:11.681205 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-sbclw" Jan 28 18:53:11 crc kubenswrapper[4721]: I0128 18:53:11.762539 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-djsj9" Jan 28 18:53:11 crc kubenswrapper[4721]: I0128 18:53:11.819006 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 28 18:53:11 crc kubenswrapper[4721]: I0128 18:53:11.820647 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-0" Jan 28 18:53:11 crc kubenswrapper[4721]: I0128 18:53:11.825619 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics" Jan 28 18:53:11 crc kubenswrapper[4721]: I0128 18:53:11.825776 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts" Jan 28 18:53:11 crc kubenswrapper[4721]: I0128 18:53:11.826691 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs" Jan 28 18:53:11 crc kubenswrapper[4721]: I0128 18:53:11.828342 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-vznhx" Jan 28 18:53:11 crc kubenswrapper[4721]: I0128 18:53:11.828376 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config" Jan 28 18:53:11 crc kubenswrapper[4721]: I0128 18:53:11.836887 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 28 18:53:11 crc kubenswrapper[4721]: I0128 18:53:11.925936 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f4e58913-334f-484a-8e7d-e1ac86753dbe-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"f4e58913-334f-484a-8e7d-e1ac86753dbe\") " pod="openstack/ovsdbserver-nb-0" Jan 28 18:53:11 crc kubenswrapper[4721]: I0128 18:53:11.926040 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/f4e58913-334f-484a-8e7d-e1ac86753dbe-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"f4e58913-334f-484a-8e7d-e1ac86753dbe\") " pod="openstack/ovsdbserver-nb-0" Jan 28 18:53:11 crc kubenswrapper[4721]: I0128 18:53:11.926066 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f4e58913-334f-484a-8e7d-e1ac86753dbe-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"f4e58913-334f-484a-8e7d-e1ac86753dbe\") " pod="openstack/ovsdbserver-nb-0" Jan 28 18:53:11 crc kubenswrapper[4721]: I0128 18:53:11.926089 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f4e58913-334f-484a-8e7d-e1ac86753dbe-config\") pod \"ovsdbserver-nb-0\" (UID: \"f4e58913-334f-484a-8e7d-e1ac86753dbe\") " pod="openstack/ovsdbserver-nb-0" Jan 28 18:53:11 crc kubenswrapper[4721]: I0128 18:53:11.926111 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/f4e58913-334f-484a-8e7d-e1ac86753dbe-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"f4e58913-334f-484a-8e7d-e1ac86753dbe\") " pod="openstack/ovsdbserver-nb-0" Jan 28 18:53:11 crc kubenswrapper[4721]: I0128 18:53:11.926169 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/f4e58913-334f-484a-8e7d-e1ac86753dbe-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"f4e58913-334f-484a-8e7d-e1ac86753dbe\") " pod="openstack/ovsdbserver-nb-0" Jan 28 18:53:11 crc kubenswrapper[4721]: I0128 18:53:11.926309 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"pvc-01c692af-a768-4410-a1c2-8fd5cbd81bd9\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-01c692af-a768-4410-a1c2-8fd5cbd81bd9\") pod \"ovsdbserver-nb-0\" (UID: \"f4e58913-334f-484a-8e7d-e1ac86753dbe\") " pod="openstack/ovsdbserver-nb-0" Jan 28 18:53:11 crc kubenswrapper[4721]: I0128 18:53:11.926331 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-svktz\" (UniqueName: \"kubernetes.io/projected/f4e58913-334f-484a-8e7d-e1ac86753dbe-kube-api-access-svktz\") pod \"ovsdbserver-nb-0\" (UID: \"f4e58913-334f-484a-8e7d-e1ac86753dbe\") " pod="openstack/ovsdbserver-nb-0" Jan 28 18:53:12 crc kubenswrapper[4721]: I0128 18:53:12.028043 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f4e58913-334f-484a-8e7d-e1ac86753dbe-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"f4e58913-334f-484a-8e7d-e1ac86753dbe\") " pod="openstack/ovsdbserver-nb-0" Jan 28 18:53:12 crc kubenswrapper[4721]: I0128 18:53:12.028430 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/f4e58913-334f-484a-8e7d-e1ac86753dbe-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"f4e58913-334f-484a-8e7d-e1ac86753dbe\") " pod="openstack/ovsdbserver-nb-0" Jan 28 18:53:12 crc kubenswrapper[4721]: I0128 18:53:12.028561 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f4e58913-334f-484a-8e7d-e1ac86753dbe-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"f4e58913-334f-484a-8e7d-e1ac86753dbe\") " pod="openstack/ovsdbserver-nb-0" Jan 28 18:53:12 crc kubenswrapper[4721]: I0128 18:53:12.028679 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f4e58913-334f-484a-8e7d-e1ac86753dbe-config\") pod \"ovsdbserver-nb-0\" (UID: \"f4e58913-334f-484a-8e7d-e1ac86753dbe\") " pod="openstack/ovsdbserver-nb-0" Jan 28 18:53:12 crc kubenswrapper[4721]: I0128 18:53:12.028766 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/f4e58913-334f-484a-8e7d-e1ac86753dbe-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"f4e58913-334f-484a-8e7d-e1ac86753dbe\") " pod="openstack/ovsdbserver-nb-0" Jan 28 18:53:12 crc kubenswrapper[4721]: I0128 18:53:12.028833 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/f4e58913-334f-484a-8e7d-e1ac86753dbe-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"f4e58913-334f-484a-8e7d-e1ac86753dbe\") " pod="openstack/ovsdbserver-nb-0" Jan 28 18:53:12 crc kubenswrapper[4721]: I0128 18:53:12.028925 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/f4e58913-334f-484a-8e7d-e1ac86753dbe-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"f4e58913-334f-484a-8e7d-e1ac86753dbe\") " pod="openstack/ovsdbserver-nb-0" Jan 28 18:53:12 crc kubenswrapper[4721]: I0128 18:53:12.029125 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-01c692af-a768-4410-a1c2-8fd5cbd81bd9\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-01c692af-a768-4410-a1c2-8fd5cbd81bd9\") pod \"ovsdbserver-nb-0\" (UID: 
\"f4e58913-334f-484a-8e7d-e1ac86753dbe\") " pod="openstack/ovsdbserver-nb-0" Jan 28 18:53:12 crc kubenswrapper[4721]: I0128 18:53:12.029268 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-svktz\" (UniqueName: \"kubernetes.io/projected/f4e58913-334f-484a-8e7d-e1ac86753dbe-kube-api-access-svktz\") pod \"ovsdbserver-nb-0\" (UID: \"f4e58913-334f-484a-8e7d-e1ac86753dbe\") " pod="openstack/ovsdbserver-nb-0" Jan 28 18:53:12 crc kubenswrapper[4721]: I0128 18:53:12.029901 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f4e58913-334f-484a-8e7d-e1ac86753dbe-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"f4e58913-334f-484a-8e7d-e1ac86753dbe\") " pod="openstack/ovsdbserver-nb-0" Jan 28 18:53:12 crc kubenswrapper[4721]: I0128 18:53:12.030235 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f4e58913-334f-484a-8e7d-e1ac86753dbe-config\") pod \"ovsdbserver-nb-0\" (UID: \"f4e58913-334f-484a-8e7d-e1ac86753dbe\") " pod="openstack/ovsdbserver-nb-0" Jan 28 18:53:12 crc kubenswrapper[4721]: I0128 18:53:12.096721 4721 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 28 18:53:12 crc kubenswrapper[4721]: I0128 18:53:12.096785 4721 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-01c692af-a768-4410-a1c2-8fd5cbd81bd9\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-01c692af-a768-4410-a1c2-8fd5cbd81bd9\") pod \"ovsdbserver-nb-0\" (UID: \"f4e58913-334f-484a-8e7d-e1ac86753dbe\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/af47ac84400329f57a6d8b610b3e705ad69dcfbc4cbcee69881a6fc5559ebe28/globalmount\"" pod="openstack/ovsdbserver-nb-0" Jan 28 18:53:12 crc kubenswrapper[4721]: I0128 18:53:12.096999 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/f4e58913-334f-484a-8e7d-e1ac86753dbe-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"f4e58913-334f-484a-8e7d-e1ac86753dbe\") " pod="openstack/ovsdbserver-nb-0" Jan 28 18:53:12 crc kubenswrapper[4721]: I0128 18:53:12.097780 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/f4e58913-334f-484a-8e7d-e1ac86753dbe-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"f4e58913-334f-484a-8e7d-e1ac86753dbe\") " pod="openstack/ovsdbserver-nb-0" Jan 28 18:53:12 crc kubenswrapper[4721]: I0128 18:53:12.098960 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f4e58913-334f-484a-8e7d-e1ac86753dbe-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"f4e58913-334f-484a-8e7d-e1ac86753dbe\") " pod="openstack/ovsdbserver-nb-0" Jan 28 18:53:12 crc kubenswrapper[4721]: I0128 18:53:12.103382 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-svktz\" (UniqueName: \"kubernetes.io/projected/f4e58913-334f-484a-8e7d-e1ac86753dbe-kube-api-access-svktz\") pod \"ovsdbserver-nb-0\" (UID: \"f4e58913-334f-484a-8e7d-e1ac86753dbe\") " pod="openstack/ovsdbserver-nb-0" Jan 28 18:53:12 crc kubenswrapper[4721]: I0128 18:53:12.156327 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"pvc-01c692af-a768-4410-a1c2-8fd5cbd81bd9\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-01c692af-a768-4410-a1c2-8fd5cbd81bd9\") pod \"ovsdbserver-nb-0\" (UID: \"f4e58913-334f-484a-8e7d-e1ac86753dbe\") " pod="openstack/ovsdbserver-nb-0" Jan 28 18:53:12 crc kubenswrapper[4721]: I0128 18:53:12.450307 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Jan 28 18:53:16 crc kubenswrapper[4721]: I0128 18:53:16.720492 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 28 18:53:16 crc kubenswrapper[4721]: I0128 18:53:16.722765 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Jan 28 18:53:16 crc kubenswrapper[4721]: I0128 18:53:16.728074 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config" Jan 28 18:53:16 crc kubenswrapper[4721]: I0128 18:53:16.728466 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts" Jan 28 18:53:16 crc kubenswrapper[4721]: I0128 18:53:16.728507 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs" Jan 28 18:53:16 crc kubenswrapper[4721]: I0128 18:53:16.735982 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-d4vs2" Jan 28 18:53:16 crc kubenswrapper[4721]: I0128 18:53:16.737340 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 28 18:53:16 crc kubenswrapper[4721]: I0128 18:53:16.825818 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8xc2q\" (UniqueName: \"kubernetes.io/projected/284cf569-7d31-465c-9189-05f80f168989-kube-api-access-8xc2q\") pod \"ovsdbserver-sb-0\" (UID: \"284cf569-7d31-465c-9189-05f80f168989\") " pod="openstack/ovsdbserver-sb-0" Jan 28 18:53:16 crc kubenswrapper[4721]: I0128 18:53:16.825875 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/284cf569-7d31-465c-9189-05f80f168989-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"284cf569-7d31-465c-9189-05f80f168989\") " pod="openstack/ovsdbserver-sb-0" Jan 28 18:53:16 crc kubenswrapper[4721]: I0128 18:53:16.825933 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/284cf569-7d31-465c-9189-05f80f168989-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"284cf569-7d31-465c-9189-05f80f168989\") " pod="openstack/ovsdbserver-sb-0" Jan 28 18:53:16 crc kubenswrapper[4721]: I0128 18:53:16.825957 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/284cf569-7d31-465c-9189-05f80f168989-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"284cf569-7d31-465c-9189-05f80f168989\") " pod="openstack/ovsdbserver-sb-0" Jan 28 18:53:16 crc kubenswrapper[4721]: I0128 18:53:16.825985 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/284cf569-7d31-465c-9189-05f80f168989-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"284cf569-7d31-465c-9189-05f80f168989\") " pod="openstack/ovsdbserver-sb-0" Jan 28 
18:53:16 crc kubenswrapper[4721]: I0128 18:53:16.826005 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/284cf569-7d31-465c-9189-05f80f168989-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"284cf569-7d31-465c-9189-05f80f168989\") " pod="openstack/ovsdbserver-sb-0"
Jan 28 18:53:16 crc kubenswrapper[4721]: I0128 18:53:16.826020 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/284cf569-7d31-465c-9189-05f80f168989-config\") pod \"ovsdbserver-sb-0\" (UID: \"284cf569-7d31-465c-9189-05f80f168989\") " pod="openstack/ovsdbserver-sb-0"
Jan 28 18:53:16 crc kubenswrapper[4721]: I0128 18:53:16.826224 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-5d89cf2a-dfec-4f72-a5e8-5533f6da91f3\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-5d89cf2a-dfec-4f72-a5e8-5533f6da91f3\") pod \"ovsdbserver-sb-0\" (UID: \"284cf569-7d31-465c-9189-05f80f168989\") " pod="openstack/ovsdbserver-sb-0"
Jan 28 18:53:16 crc kubenswrapper[4721]: I0128 18:53:16.927781 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8xc2q\" (UniqueName: \"kubernetes.io/projected/284cf569-7d31-465c-9189-05f80f168989-kube-api-access-8xc2q\") pod \"ovsdbserver-sb-0\" (UID: \"284cf569-7d31-465c-9189-05f80f168989\") " pod="openstack/ovsdbserver-sb-0"
Jan 28 18:53:16 crc kubenswrapper[4721]: I0128 18:53:16.927838 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/284cf569-7d31-465c-9189-05f80f168989-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"284cf569-7d31-465c-9189-05f80f168989\") " pod="openstack/ovsdbserver-sb-0"
Jan 28 18:53:16 crc kubenswrapper[4721]: I0128 18:53:16.927893 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/284cf569-7d31-465c-9189-05f80f168989-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"284cf569-7d31-465c-9189-05f80f168989\") " pod="openstack/ovsdbserver-sb-0"
Jan 28 18:53:16 crc kubenswrapper[4721]: I0128 18:53:16.927912 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/284cf569-7d31-465c-9189-05f80f168989-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"284cf569-7d31-465c-9189-05f80f168989\") " pod="openstack/ovsdbserver-sb-0"
Jan 28 18:53:16 crc kubenswrapper[4721]: I0128 18:53:16.927943 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/284cf569-7d31-465c-9189-05f80f168989-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"284cf569-7d31-465c-9189-05f80f168989\") " pod="openstack/ovsdbserver-sb-0"
Jan 28 18:53:16 crc kubenswrapper[4721]: I0128 18:53:16.927964 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/284cf569-7d31-465c-9189-05f80f168989-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"284cf569-7d31-465c-9189-05f80f168989\") " pod="openstack/ovsdbserver-sb-0"
Jan 28 18:53:16 crc kubenswrapper[4721]: I0128 18:53:16.927978 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/284cf569-7d31-465c-9189-05f80f168989-config\") pod \"ovsdbserver-sb-0\" (UID: \"284cf569-7d31-465c-9189-05f80f168989\") " pod="openstack/ovsdbserver-sb-0"
Jan 28 18:53:16 crc kubenswrapper[4721]: I0128 18:53:16.927998 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-5d89cf2a-dfec-4f72-a5e8-5533f6da91f3\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-5d89cf2a-dfec-4f72-a5e8-5533f6da91f3\") pod \"ovsdbserver-sb-0\" (UID: \"284cf569-7d31-465c-9189-05f80f168989\") " pod="openstack/ovsdbserver-sb-0"
Jan 28 18:53:16 crc kubenswrapper[4721]: I0128 18:53:16.928450 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/284cf569-7d31-465c-9189-05f80f168989-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"284cf569-7d31-465c-9189-05f80f168989\") " pod="openstack/ovsdbserver-sb-0"
Jan 28 18:53:16 crc kubenswrapper[4721]: I0128 18:53:16.929048 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/284cf569-7d31-465c-9189-05f80f168989-config\") pod \"ovsdbserver-sb-0\" (UID: \"284cf569-7d31-465c-9189-05f80f168989\") " pod="openstack/ovsdbserver-sb-0"
Jan 28 18:53:16 crc kubenswrapper[4721]: I0128 18:53:16.929094 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/284cf569-7d31-465c-9189-05f80f168989-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"284cf569-7d31-465c-9189-05f80f168989\") " pod="openstack/ovsdbserver-sb-0"
Jan 28 18:53:16 crc kubenswrapper[4721]: I0128 18:53:16.934453 4721 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Jan 28 18:53:16 crc kubenswrapper[4721]: I0128 18:53:16.934503 4721 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-5d89cf2a-dfec-4f72-a5e8-5533f6da91f3\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-5d89cf2a-dfec-4f72-a5e8-5533f6da91f3\") pod \"ovsdbserver-sb-0\" (UID: \"284cf569-7d31-465c-9189-05f80f168989\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/b29346ba4700f8b5fb72a035bfaf924064171624b4b5ecf2845c33a6e4072e40/globalmount\"" pod="openstack/ovsdbserver-sb-0"
Jan 28 18:53:16 crc kubenswrapper[4721]: I0128 18:53:16.951274 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/284cf569-7d31-465c-9189-05f80f168989-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"284cf569-7d31-465c-9189-05f80f168989\") " pod="openstack/ovsdbserver-sb-0"
Jan 28 18:53:16 crc kubenswrapper[4721]: I0128 18:53:16.951352 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8xc2q\" (UniqueName: \"kubernetes.io/projected/284cf569-7d31-465c-9189-05f80f168989-kube-api-access-8xc2q\") pod \"ovsdbserver-sb-0\" (UID: \"284cf569-7d31-465c-9189-05f80f168989\") " pod="openstack/ovsdbserver-sb-0"
Jan 28 18:53:16 crc kubenswrapper[4721]: I0128 18:53:16.952216 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/284cf569-7d31-465c-9189-05f80f168989-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"284cf569-7d31-465c-9189-05f80f168989\") " pod="openstack/ovsdbserver-sb-0"
Jan 28 18:53:16 crc kubenswrapper[4721]: I0128 18:53:16.952665 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/284cf569-7d31-465c-9189-05f80f168989-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"284cf569-7d31-465c-9189-05f80f168989\") " pod="openstack/ovsdbserver-sb-0"
Jan 28 18:53:16 crc kubenswrapper[4721]: I0128 18:53:16.995865 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cloudkitty-lokistack-distributor-66dfd9bb-gzhlc"]
Jan 28 18:53:16 crc kubenswrapper[4721]: I0128 18:53:16.997333 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-lokistack-distributor-66dfd9bb-gzhlc"
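The csi_attacher.go entry above is worth pausing on: the kubevirt.io.hostpath-provisioner driver does not advertise the CSI STAGE_UNSTAGE_VOLUME node capability, so kubelet treats the device-staging step as a no-op and immediately reports MountDevice success before moving on to the per-pod SetUp calls. A minimal standalone sketch of that capability check follows (hypothetical types; this is not the actual kubelet or CSI source):

    package main

    import "fmt"

    // NodeCapability stands in for the CSI NodeServiceCapability concept (illustrative only).
    type NodeCapability string

    const StageUnstageVolume NodeCapability = "STAGE_UNSTAGE_VOLUME"

    // driver holds what a NodeGetCapabilities call might report for a plugin.
    type driver struct {
            name string
            caps map[NodeCapability]bool
    }

    // mountDevice performs the staging step only when the driver advertises
    // STAGE_UNSTAGE_VOLUME; otherwise it logs and returns success immediately,
    // which is why the journal shows "Skipping MountDevice..." directly
    // followed by "MountVolume.MountDevice succeeded".
    func mountDevice(d driver, globalMountPath string) error {
            if !d.caps[StageUnstageVolume] {
                    fmt.Println("attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...")
                    return nil // treated as success; kubelet proceeds to per-pod SetUp
            }
            fmt.Println("staging volume at", globalMountPath) // a NodeStageVolume RPC would happen here
            return nil
    }

    func main() {
            hostpath := driver{name: "kubevirt.io.hostpath-provisioner", caps: map[NodeCapability]bool{}}
            _ = mountDevice(hostpath, "/var/lib/kubelet/plugins/kubernetes.io/csi/.../globalmount")
    }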
Jan 28 18:53:17 crc kubenswrapper[4721]: I0128 18:53:17.001284 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-distributor-grpc"
Jan 28 18:53:17 crc kubenswrapper[4721]: I0128 18:53:17.001803 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-dockercfg-nsprm"
Jan 28 18:53:17 crc kubenswrapper[4721]: I0128 18:53:17.004689 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"cloudkitty-lokistack-ca-bundle"
Jan 28 18:53:17 crc kubenswrapper[4721]: I0128 18:53:17.005636 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"cloudkitty-lokistack-config"
Jan 28 18:53:17 crc kubenswrapper[4721]: I0128 18:53:17.006223 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-distributor-http"
Jan 28 18:53:17 crc kubenswrapper[4721]: I0128 18:53:17.016530 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-lokistack-distributor-66dfd9bb-gzhlc"]
Jan 28 18:53:17 crc kubenswrapper[4721]: I0128 18:53:17.037409 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-5d89cf2a-dfec-4f72-a5e8-5533f6da91f3\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-5d89cf2a-dfec-4f72-a5e8-5533f6da91f3\") pod \"ovsdbserver-sb-0\" (UID: \"284cf569-7d31-465c-9189-05f80f168989\") " pod="openstack/ovsdbserver-sb-0"
Jan 28 18:53:17 crc kubenswrapper[4721]: I0128 18:53:17.044045 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0"
Jan 28 18:53:17 crc kubenswrapper[4721]: I0128 18:53:17.131766 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/600f989b-3ac6-4fe8-9848-6b80319e8c66-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-distributor-66dfd9bb-gzhlc\" (UID: \"600f989b-3ac6-4fe8-9848-6b80319e8c66\") " pod="openstack/cloudkitty-lokistack-distributor-66dfd9bb-gzhlc"
Jan 28 18:53:17 crc kubenswrapper[4721]: I0128 18:53:17.131832 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/600f989b-3ac6-4fe8-9848-6b80319e8c66-config\") pod \"cloudkitty-lokistack-distributor-66dfd9bb-gzhlc\" (UID: \"600f989b-3ac6-4fe8-9848-6b80319e8c66\") " pod="openstack/cloudkitty-lokistack-distributor-66dfd9bb-gzhlc"
Jan 28 18:53:17 crc kubenswrapper[4721]: I0128 18:53:17.132128 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-distributor-http\" (UniqueName: \"kubernetes.io/secret/600f989b-3ac6-4fe8-9848-6b80319e8c66-cloudkitty-lokistack-distributor-http\") pod \"cloudkitty-lokistack-distributor-66dfd9bb-gzhlc\" (UID: \"600f989b-3ac6-4fe8-9848-6b80319e8c66\") " pod="openstack/cloudkitty-lokistack-distributor-66dfd9bb-gzhlc"
Jan 28 18:53:17 crc kubenswrapper[4721]: I0128 18:53:17.132300 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-distributor-grpc\" (UniqueName: \"kubernetes.io/secret/600f989b-3ac6-4fe8-9848-6b80319e8c66-cloudkitty-lokistack-distributor-grpc\") pod \"cloudkitty-lokistack-distributor-66dfd9bb-gzhlc\" (UID: \"600f989b-3ac6-4fe8-9848-6b80319e8c66\") " pod="openstack/cloudkitty-lokistack-distributor-66dfd9bb-gzhlc"
Jan 28 18:53:17 crc kubenswrapper[4721]: I0128 18:53:17.132407 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n5hsq\" (UniqueName: \"kubernetes.io/projected/600f989b-3ac6-4fe8-9848-6b80319e8c66-kube-api-access-n5hsq\") pod \"cloudkitty-lokistack-distributor-66dfd9bb-gzhlc\" (UID: \"600f989b-3ac6-4fe8-9848-6b80319e8c66\") " pod="openstack/cloudkitty-lokistack-distributor-66dfd9bb-gzhlc"
Jan 28 18:53:17 crc kubenswrapper[4721]: I0128 18:53:17.204529 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cloudkitty-lokistack-querier-795fd8f8cc-4gfwq"]
Jan 28 18:53:17 crc kubenswrapper[4721]: I0128 18:53:17.205962 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-lokistack-querier-795fd8f8cc-4gfwq"
Jan 28 18:53:17 crc kubenswrapper[4721]: I0128 18:53:17.209624 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-loki-s3"
Jan 28 18:53:17 crc kubenswrapper[4721]: I0128 18:53:17.209728 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-querier-grpc"
Jan 28 18:53:17 crc kubenswrapper[4721]: I0128 18:53:17.210016 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-querier-http"
Jan 28 18:53:17 crc kubenswrapper[4721]: I0128 18:53:17.231047 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-lokistack-querier-795fd8f8cc-4gfwq"]
Jan 28 18:53:17 crc kubenswrapper[4721]: I0128 18:53:17.234161 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-distributor-http\" (UniqueName: \"kubernetes.io/secret/600f989b-3ac6-4fe8-9848-6b80319e8c66-cloudkitty-lokistack-distributor-http\") pod \"cloudkitty-lokistack-distributor-66dfd9bb-gzhlc\" (UID: \"600f989b-3ac6-4fe8-9848-6b80319e8c66\") " pod="openstack/cloudkitty-lokistack-distributor-66dfd9bb-gzhlc"
Jan 28 18:53:17 crc kubenswrapper[4721]: I0128 18:53:17.234312 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-distributor-grpc\" (UniqueName: \"kubernetes.io/secret/600f989b-3ac6-4fe8-9848-6b80319e8c66-cloudkitty-lokistack-distributor-grpc\") pod \"cloudkitty-lokistack-distributor-66dfd9bb-gzhlc\" (UID: \"600f989b-3ac6-4fe8-9848-6b80319e8c66\") " pod="openstack/cloudkitty-lokistack-distributor-66dfd9bb-gzhlc"
Jan 28 18:53:17 crc kubenswrapper[4721]: I0128 18:53:17.234449 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n5hsq\" (UniqueName: \"kubernetes.io/projected/600f989b-3ac6-4fe8-9848-6b80319e8c66-kube-api-access-n5hsq\") pod \"cloudkitty-lokistack-distributor-66dfd9bb-gzhlc\" (UID: \"600f989b-3ac6-4fe8-9848-6b80319e8c66\") " pod="openstack/cloudkitty-lokistack-distributor-66dfd9bb-gzhlc"
Jan 28 18:53:17 crc kubenswrapper[4721]: I0128 18:53:17.234535 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/600f989b-3ac6-4fe8-9848-6b80319e8c66-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-distributor-66dfd9bb-gzhlc\" (UID: \"600f989b-3ac6-4fe8-9848-6b80319e8c66\") " pod="openstack/cloudkitty-lokistack-distributor-66dfd9bb-gzhlc"
Jan 28 18:53:17 crc kubenswrapper[4721]: I0128 18:53:17.234564 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/600f989b-3ac6-4fe8-9848-6b80319e8c66-config\") pod \"cloudkitty-lokistack-distributor-66dfd9bb-gzhlc\" (UID: \"600f989b-3ac6-4fe8-9848-6b80319e8c66\") " pod="openstack/cloudkitty-lokistack-distributor-66dfd9bb-gzhlc"
Jan 28 18:53:17 crc kubenswrapper[4721]: I0128 18:53:17.235551 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/600f989b-3ac6-4fe8-9848-6b80319e8c66-config\") pod \"cloudkitty-lokistack-distributor-66dfd9bb-gzhlc\" (UID: \"600f989b-3ac6-4fe8-9848-6b80319e8c66\") " pod="openstack/cloudkitty-lokistack-distributor-66dfd9bb-gzhlc"
Jan 28 18:53:17 crc kubenswrapper[4721]: I0128 18:53:17.236999 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/600f989b-3ac6-4fe8-9848-6b80319e8c66-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-distributor-66dfd9bb-gzhlc\" (UID: \"600f989b-3ac6-4fe8-9848-6b80319e8c66\") " pod="openstack/cloudkitty-lokistack-distributor-66dfd9bb-gzhlc"
Jan 28 18:53:17 crc kubenswrapper[4721]: I0128 18:53:17.241910 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-distributor-http\" (UniqueName: \"kubernetes.io/secret/600f989b-3ac6-4fe8-9848-6b80319e8c66-cloudkitty-lokistack-distributor-http\") pod \"cloudkitty-lokistack-distributor-66dfd9bb-gzhlc\" (UID: \"600f989b-3ac6-4fe8-9848-6b80319e8c66\") " pod="openstack/cloudkitty-lokistack-distributor-66dfd9bb-gzhlc"
Jan 28 18:53:17 crc kubenswrapper[4721]: I0128 18:53:17.241945 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-distributor-grpc\" (UniqueName: \"kubernetes.io/secret/600f989b-3ac6-4fe8-9848-6b80319e8c66-cloudkitty-lokistack-distributor-grpc\") pod \"cloudkitty-lokistack-distributor-66dfd9bb-gzhlc\" (UID: \"600f989b-3ac6-4fe8-9848-6b80319e8c66\") " pod="openstack/cloudkitty-lokistack-distributor-66dfd9bb-gzhlc"
Jan 28 18:53:17 crc kubenswrapper[4721]: I0128 18:53:17.275214 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n5hsq\" (UniqueName: \"kubernetes.io/projected/600f989b-3ac6-4fe8-9848-6b80319e8c66-kube-api-access-n5hsq\") pod \"cloudkitty-lokistack-distributor-66dfd9bb-gzhlc\" (UID: \"600f989b-3ac6-4fe8-9848-6b80319e8c66\") " pod="openstack/cloudkitty-lokistack-distributor-66dfd9bb-gzhlc"
Jan 28 18:53:17 crc kubenswrapper[4721]: I0128 18:53:17.335917 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-querier-grpc\" (UniqueName: \"kubernetes.io/secret/cd76eab6-6d1b-4d6b-9c42-3e667e081ce6-cloudkitty-lokistack-querier-grpc\") pod \"cloudkitty-lokistack-querier-795fd8f8cc-4gfwq\" (UID: \"cd76eab6-6d1b-4d6b-9c42-3e667e081ce6\") " pod="openstack/cloudkitty-lokistack-querier-795fd8f8cc-4gfwq"
Jan 28 18:53:17 crc kubenswrapper[4721]: I0128 18:53:17.335972 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-querier-http\" (UniqueName: \"kubernetes.io/secret/cd76eab6-6d1b-4d6b-9c42-3e667e081ce6-cloudkitty-lokistack-querier-http\") pod \"cloudkitty-lokistack-querier-795fd8f8cc-4gfwq\" (UID: \"cd76eab6-6d1b-4d6b-9c42-3e667e081ce6\") " pod="openstack/cloudkitty-lokistack-querier-795fd8f8cc-4gfwq"
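The reflector.go:368 entries above record kubelet warming a local cache of exactly the Secrets and ConfigMaps the new pod's volumes reference, so the mount operations that follow can read object data without a round trip to the API server per volume. A toy sketch of that list-then-cache warm-up, assuming a plain map as the store (client-go's real reflector also runs a WATCH afterward to keep the cache current):

    package main

    import "fmt"

    // toyReflector imitates the list-then-watch warm-up behind the
    // "Caches populated for *v1.Secret from object-..." lines (not client-go).
    type toyReflector struct {
            kind  string
            cache map[string]string // object name -> serialized payload
    }

    // populate plays the role of the initial LIST: it fills the local cache so
    // later volume setup can read secret data locally. The real reflector then
    // keeps the cache current via a WATCH stream.
    func (r *toyReflector) populate(listed map[string]string) {
            for name, payload := range listed {
                    r.cache[name] = payload
                    fmt.Printf("Caches populated for %s from object-%q/%q\n", r.kind, "openstack", name)
            }
    }

    func main() {
            r := &toyReflector{kind: "*v1.Secret", cache: map[string]string{}}
            r.populate(map[string]string{
                    "cloudkitty-lokistack-distributor-grpc": "...tls material...", // hypothetical payloads
                    "cloudkitty-lokistack-distributor-http": "...tls material...",
            })
    }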
Jan 28 18:53:17 crc kubenswrapper[4721]: I0128 18:53:17.335993 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cd76eab6-6d1b-4d6b-9c42-3e667e081ce6-config\") pod \"cloudkitty-lokistack-querier-795fd8f8cc-4gfwq\" (UID: \"cd76eab6-6d1b-4d6b-9c42-3e667e081ce6\") " pod="openstack/cloudkitty-lokistack-querier-795fd8f8cc-4gfwq"
Jan 28 18:53:17 crc kubenswrapper[4721]: I0128 18:53:17.336069 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cd76eab6-6d1b-4d6b-9c42-3e667e081ce6-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-querier-795fd8f8cc-4gfwq\" (UID: \"cd76eab6-6d1b-4d6b-9c42-3e667e081ce6\") " pod="openstack/cloudkitty-lokistack-querier-795fd8f8cc-4gfwq"
Jan 28 18:53:17 crc kubenswrapper[4721]: I0128 18:53:17.336097 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-loki-s3\" (UniqueName: \"kubernetes.io/secret/cd76eab6-6d1b-4d6b-9c42-3e667e081ce6-cloudkitty-loki-s3\") pod \"cloudkitty-lokistack-querier-795fd8f8cc-4gfwq\" (UID: \"cd76eab6-6d1b-4d6b-9c42-3e667e081ce6\") " pod="openstack/cloudkitty-lokistack-querier-795fd8f8cc-4gfwq"
Jan 28 18:53:17 crc kubenswrapper[4721]: I0128 18:53:17.336156 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8mqz4\" (UniqueName: \"kubernetes.io/projected/cd76eab6-6d1b-4d6b-9c42-3e667e081ce6-kube-api-access-8mqz4\") pod \"cloudkitty-lokistack-querier-795fd8f8cc-4gfwq\" (UID: \"cd76eab6-6d1b-4d6b-9c42-3e667e081ce6\") " pod="openstack/cloudkitty-lokistack-querier-795fd8f8cc-4gfwq"
Jan 28 18:53:17 crc kubenswrapper[4721]: I0128 18:53:17.356805 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cloudkitty-lokistack-query-frontend-5cd44666df-cd79j"]
Jan 28 18:53:17 crc kubenswrapper[4721]: I0128 18:53:17.362027 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-lokistack-query-frontend-5cd44666df-cd79j"
Jan 28 18:53:17 crc kubenswrapper[4721]: I0128 18:53:17.369775 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-query-frontend-http"
Jan 28 18:53:17 crc kubenswrapper[4721]: I0128 18:53:17.377926 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-query-frontend-grpc"
Jan 28 18:53:17 crc kubenswrapper[4721]: I0128 18:53:17.380271 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-lokistack-distributor-66dfd9bb-gzhlc"
Jan 28 18:53:17 crc kubenswrapper[4721]: I0128 18:53:17.383527 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-lokistack-query-frontend-5cd44666df-cd79j"]
Jan 28 18:53:17 crc kubenswrapper[4721]: I0128 18:53:17.437577 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cd76eab6-6d1b-4d6b-9c42-3e667e081ce6-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-querier-795fd8f8cc-4gfwq\" (UID: \"cd76eab6-6d1b-4d6b-9c42-3e667e081ce6\") " pod="openstack/cloudkitty-lokistack-querier-795fd8f8cc-4gfwq"
Jan 28 18:53:17 crc kubenswrapper[4721]: I0128 18:53:17.437633 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-loki-s3\" (UniqueName: \"kubernetes.io/secret/cd76eab6-6d1b-4d6b-9c42-3e667e081ce6-cloudkitty-loki-s3\") pod \"cloudkitty-lokistack-querier-795fd8f8cc-4gfwq\" (UID: \"cd76eab6-6d1b-4d6b-9c42-3e667e081ce6\") " pod="openstack/cloudkitty-lokistack-querier-795fd8f8cc-4gfwq"
Jan 28 18:53:17 crc kubenswrapper[4721]: I0128 18:53:17.437661 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6be2127c-76cf-41fb-99d2-28a4e10a2b03-config\") pod \"cloudkitty-lokistack-query-frontend-5cd44666df-cd79j\" (UID: \"6be2127c-76cf-41fb-99d2-28a4e10a2b03\") " pod="openstack/cloudkitty-lokistack-query-frontend-5cd44666df-cd79j"
Jan 28 18:53:17 crc kubenswrapper[4721]: I0128 18:53:17.437703 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6be2127c-76cf-41fb-99d2-28a4e10a2b03-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-query-frontend-5cd44666df-cd79j\" (UID: \"6be2127c-76cf-41fb-99d2-28a4e10a2b03\") " pod="openstack/cloudkitty-lokistack-query-frontend-5cd44666df-cd79j"
Jan 28 18:53:17 crc kubenswrapper[4721]: I0128 18:53:17.437758 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8mqz4\" (UniqueName: \"kubernetes.io/projected/cd76eab6-6d1b-4d6b-9c42-3e667e081ce6-kube-api-access-8mqz4\") pod \"cloudkitty-lokistack-querier-795fd8f8cc-4gfwq\" (UID: \"cd76eab6-6d1b-4d6b-9c42-3e667e081ce6\") " pod="openstack/cloudkitty-lokistack-querier-795fd8f8cc-4gfwq"
Jan 28 18:53:17 crc kubenswrapper[4721]: I0128 18:53:17.437830 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-njlhg\" (UniqueName: \"kubernetes.io/projected/6be2127c-76cf-41fb-99d2-28a4e10a2b03-kube-api-access-njlhg\") pod \"cloudkitty-lokistack-query-frontend-5cd44666df-cd79j\" (UID: \"6be2127c-76cf-41fb-99d2-28a4e10a2b03\") " pod="openstack/cloudkitty-lokistack-query-frontend-5cd44666df-cd79j"
Jan 28 18:53:17 crc kubenswrapper[4721]: I0128 18:53:17.437853 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-querier-grpc\" (UniqueName: \"kubernetes.io/secret/cd76eab6-6d1b-4d6b-9c42-3e667e081ce6-cloudkitty-lokistack-querier-grpc\") pod \"cloudkitty-lokistack-querier-795fd8f8cc-4gfwq\" (UID: \"cd76eab6-6d1b-4d6b-9c42-3e667e081ce6\") " pod="openstack/cloudkitty-lokistack-querier-795fd8f8cc-4gfwq"
Jan 28 18:53:17 crc kubenswrapper[4721]: I0128 18:53:17.437877 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-querier-http\" (UniqueName: \"kubernetes.io/secret/cd76eab6-6d1b-4d6b-9c42-3e667e081ce6-cloudkitty-lokistack-querier-http\") pod \"cloudkitty-lokistack-querier-795fd8f8cc-4gfwq\" (UID: \"cd76eab6-6d1b-4d6b-9c42-3e667e081ce6\") " pod="openstack/cloudkitty-lokistack-querier-795fd8f8cc-4gfwq"
Jan 28 18:53:17 crc kubenswrapper[4721]: I0128 18:53:17.437897 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cd76eab6-6d1b-4d6b-9c42-3e667e081ce6-config\") pod \"cloudkitty-lokistack-querier-795fd8f8cc-4gfwq\" (UID: \"cd76eab6-6d1b-4d6b-9c42-3e667e081ce6\") " pod="openstack/cloudkitty-lokistack-querier-795fd8f8cc-4gfwq"
Jan 28 18:53:17 crc kubenswrapper[4721]: I0128 18:53:17.437939 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-query-frontend-http\" (UniqueName: \"kubernetes.io/secret/6be2127c-76cf-41fb-99d2-28a4e10a2b03-cloudkitty-lokistack-query-frontend-http\") pod \"cloudkitty-lokistack-query-frontend-5cd44666df-cd79j\" (UID: \"6be2127c-76cf-41fb-99d2-28a4e10a2b03\") " pod="openstack/cloudkitty-lokistack-query-frontend-5cd44666df-cd79j"
Jan 28 18:53:17 crc kubenswrapper[4721]: I0128 18:53:17.437973 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-query-frontend-grpc\" (UniqueName: \"kubernetes.io/secret/6be2127c-76cf-41fb-99d2-28a4e10a2b03-cloudkitty-lokistack-query-frontend-grpc\") pod \"cloudkitty-lokistack-query-frontend-5cd44666df-cd79j\" (UID: \"6be2127c-76cf-41fb-99d2-28a4e10a2b03\") " pod="openstack/cloudkitty-lokistack-query-frontend-5cd44666df-cd79j"
Jan 28 18:53:17 crc kubenswrapper[4721]: I0128 18:53:17.438953 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cd76eab6-6d1b-4d6b-9c42-3e667e081ce6-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-querier-795fd8f8cc-4gfwq\" (UID: \"cd76eab6-6d1b-4d6b-9c42-3e667e081ce6\") " pod="openstack/cloudkitty-lokistack-querier-795fd8f8cc-4gfwq"
Jan 28 18:53:17 crc kubenswrapper[4721]: I0128 18:53:17.439361 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cd76eab6-6d1b-4d6b-9c42-3e667e081ce6-config\") pod \"cloudkitty-lokistack-querier-795fd8f8cc-4gfwq\" (UID: \"cd76eab6-6d1b-4d6b-9c42-3e667e081ce6\") " pod="openstack/cloudkitty-lokistack-querier-795fd8f8cc-4gfwq"
Jan 28 18:53:17 crc kubenswrapper[4721]: I0128 18:53:17.441674 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-querier-grpc\" (UniqueName: \"kubernetes.io/secret/cd76eab6-6d1b-4d6b-9c42-3e667e081ce6-cloudkitty-lokistack-querier-grpc\") pod \"cloudkitty-lokistack-querier-795fd8f8cc-4gfwq\" (UID: \"cd76eab6-6d1b-4d6b-9c42-3e667e081ce6\") " pod="openstack/cloudkitty-lokistack-querier-795fd8f8cc-4gfwq"
Jan 28 18:53:17 crc kubenswrapper[4721]: I0128 18:53:17.441826 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-loki-s3\" (UniqueName: \"kubernetes.io/secret/cd76eab6-6d1b-4d6b-9c42-3e667e081ce6-cloudkitty-loki-s3\") pod \"cloudkitty-lokistack-querier-795fd8f8cc-4gfwq\" (UID: \"cd76eab6-6d1b-4d6b-9c42-3e667e081ce6\") " pod="openstack/cloudkitty-lokistack-querier-795fd8f8cc-4gfwq"
Jan 28 18:53:17 crc kubenswrapper[4721]: I0128 18:53:17.442398 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-querier-http\" (UniqueName: \"kubernetes.io/secret/cd76eab6-6d1b-4d6b-9c42-3e667e081ce6-cloudkitty-lokistack-querier-http\") pod \"cloudkitty-lokistack-querier-795fd8f8cc-4gfwq\" (UID: \"cd76eab6-6d1b-4d6b-9c42-3e667e081ce6\") " pod="openstack/cloudkitty-lokistack-querier-795fd8f8cc-4gfwq"
Jan 28 18:53:17 crc kubenswrapper[4721]: I0128 18:53:17.464890 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8mqz4\" (UniqueName: \"kubernetes.io/projected/cd76eab6-6d1b-4d6b-9c42-3e667e081ce6-kube-api-access-8mqz4\") pod \"cloudkitty-lokistack-querier-795fd8f8cc-4gfwq\" (UID: \"cd76eab6-6d1b-4d6b-9c42-3e667e081ce6\") " pod="openstack/cloudkitty-lokistack-querier-795fd8f8cc-4gfwq"
Jan 28 18:53:17 crc kubenswrapper[4721]: I0128 18:53:17.523492 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-lokistack-querier-795fd8f8cc-4gfwq"
Jan 28 18:53:17 crc kubenswrapper[4721]: I0128 18:53:17.539133 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-njlhg\" (UniqueName: \"kubernetes.io/projected/6be2127c-76cf-41fb-99d2-28a4e10a2b03-kube-api-access-njlhg\") pod \"cloudkitty-lokistack-query-frontend-5cd44666df-cd79j\" (UID: \"6be2127c-76cf-41fb-99d2-28a4e10a2b03\") " pod="openstack/cloudkitty-lokistack-query-frontend-5cd44666df-cd79j"
Jan 28 18:53:17 crc kubenswrapper[4721]: I0128 18:53:17.539225 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-query-frontend-http\" (UniqueName: \"kubernetes.io/secret/6be2127c-76cf-41fb-99d2-28a4e10a2b03-cloudkitty-lokistack-query-frontend-http\") pod \"cloudkitty-lokistack-query-frontend-5cd44666df-cd79j\" (UID: \"6be2127c-76cf-41fb-99d2-28a4e10a2b03\") " pod="openstack/cloudkitty-lokistack-query-frontend-5cd44666df-cd79j"
Jan 28 18:53:17 crc kubenswrapper[4721]: I0128 18:53:17.539265 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-query-frontend-grpc\" (UniqueName: \"kubernetes.io/secret/6be2127c-76cf-41fb-99d2-28a4e10a2b03-cloudkitty-lokistack-query-frontend-grpc\") pod \"cloudkitty-lokistack-query-frontend-5cd44666df-cd79j\" (UID: \"6be2127c-76cf-41fb-99d2-28a4e10a2b03\") " pod="openstack/cloudkitty-lokistack-query-frontend-5cd44666df-cd79j"
Jan 28 18:53:17 crc kubenswrapper[4721]: I0128 18:53:17.539310 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6be2127c-76cf-41fb-99d2-28a4e10a2b03-config\") pod \"cloudkitty-lokistack-query-frontend-5cd44666df-cd79j\" (UID: \"6be2127c-76cf-41fb-99d2-28a4e10a2b03\") " pod="openstack/cloudkitty-lokistack-query-frontend-5cd44666df-cd79j"
Jan 28 18:53:17 crc kubenswrapper[4721]: I0128 18:53:17.539348 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6be2127c-76cf-41fb-99d2-28a4e10a2b03-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-query-frontend-5cd44666df-cd79j\" (UID: \"6be2127c-76cf-41fb-99d2-28a4e10a2b03\") " pod="openstack/cloudkitty-lokistack-query-frontend-5cd44666df-cd79j"
Jan 28 18:53:17 crc kubenswrapper[4721]: I0128 18:53:17.540287 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6be2127c-76cf-41fb-99d2-28a4e10a2b03-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-query-frontend-5cd44666df-cd79j\" (UID: \"6be2127c-76cf-41fb-99d2-28a4e10a2b03\") " pod="openstack/cloudkitty-lokistack-query-frontend-5cd44666df-cd79j"
Jan 28 18:53:17 crc kubenswrapper[4721]: I0128 18:53:17.541385 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6be2127c-76cf-41fb-99d2-28a4e10a2b03-config\") pod \"cloudkitty-lokistack-query-frontend-5cd44666df-cd79j\" (UID: \"6be2127c-76cf-41fb-99d2-28a4e10a2b03\") " pod="openstack/cloudkitty-lokistack-query-frontend-5cd44666df-cd79j"
Jan 28 18:53:17 crc kubenswrapper[4721]: I0128 18:53:17.547919 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-query-frontend-http\" (UniqueName: \"kubernetes.io/secret/6be2127c-76cf-41fb-99d2-28a4e10a2b03-cloudkitty-lokistack-query-frontend-http\") pod \"cloudkitty-lokistack-query-frontend-5cd44666df-cd79j\" (UID: \"6be2127c-76cf-41fb-99d2-28a4e10a2b03\") " pod="openstack/cloudkitty-lokistack-query-frontend-5cd44666df-cd79j"
Jan 28 18:53:17 crc kubenswrapper[4721]: I0128 18:53:17.552315 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cloudkitty-lokistack-gateway-7db4f4db8c-t9249"]
Jan 28 18:53:17 crc kubenswrapper[4721]: I0128 18:53:17.553771 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-lokistack-gateway-7db4f4db8c-t9249"
Jan 28 18:53:17 crc kubenswrapper[4721]: I0128 18:53:17.557285 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-query-frontend-grpc\" (UniqueName: \"kubernetes.io/secret/6be2127c-76cf-41fb-99d2-28a4e10a2b03-cloudkitty-lokistack-query-frontend-grpc\") pod \"cloudkitty-lokistack-query-frontend-5cd44666df-cd79j\" (UID: \"6be2127c-76cf-41fb-99d2-28a4e10a2b03\") " pod="openstack/cloudkitty-lokistack-query-frontend-5cd44666df-cd79j"
Jan 28 18:53:17 crc kubenswrapper[4721]: I0128 18:53:17.558052 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-gateway-dockercfg-gvpr6"
Jan 28 18:53:17 crc kubenswrapper[4721]: I0128 18:53:17.558687 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"cloudkitty-lokistack-gateway"
Jan 28 18:53:17 crc kubenswrapper[4721]: I0128 18:53:17.558969 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-gateway-client-http"
Jan 28 18:53:17 crc kubenswrapper[4721]: I0128 18:53:17.559127 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"cloudkitty-lokistack-ca"
Jan 28 18:53:17 crc kubenswrapper[4721]: I0128 18:53:17.559259 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-gateway-http"
Jan 28 18:53:17 crc kubenswrapper[4721]: I0128 18:53:17.559365 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"cloudkitty-lokistack-gateway-ca-bundle"
Jan 28 18:53:17 crc kubenswrapper[4721]: I0128 18:53:17.559472 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-gateway"
Jan 28 18:53:17 crc kubenswrapper[4721]: I0128 18:53:17.565751 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-njlhg\" (UniqueName: \"kubernetes.io/projected/6be2127c-76cf-41fb-99d2-28a4e10a2b03-kube-api-access-njlhg\") pod \"cloudkitty-lokistack-query-frontend-5cd44666df-cd79j\" (UID: \"6be2127c-76cf-41fb-99d2-28a4e10a2b03\") " pod="openstack/cloudkitty-lokistack-query-frontend-5cd44666df-cd79j"
\"cloudkitty-lokistack-query-frontend-5cd44666df-cd79j\" (UID: \"6be2127c-76cf-41fb-99d2-28a4e10a2b03\") " pod="openstack/cloudkitty-lokistack-query-frontend-5cd44666df-cd79j" Jan 28 18:53:17 crc kubenswrapper[4721]: I0128 18:53:17.571199 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cloudkitty-lokistack-gateway-7db4f4db8c-b6984"] Jan 28 18:53:17 crc kubenswrapper[4721]: I0128 18:53:17.573465 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-lokistack-gateway-7db4f4db8c-b6984" Jan 28 18:53:17 crc kubenswrapper[4721]: I0128 18:53:17.594350 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-lokistack-gateway-7db4f4db8c-t9249"] Jan 28 18:53:17 crc kubenswrapper[4721]: I0128 18:53:17.629732 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-lokistack-gateway-7db4f4db8c-b6984"] Jan 28 18:53:17 crc kubenswrapper[4721]: I0128 18:53:17.640906 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-92bhp\" (UniqueName: \"kubernetes.io/projected/dffa61ba-c98d-446a-a4d0-34e1e15a093b-kube-api-access-92bhp\") pod \"cloudkitty-lokistack-gateway-7db4f4db8c-b6984\" (UID: \"dffa61ba-c98d-446a-a4d0-34e1e15a093b\") " pod="openstack/cloudkitty-lokistack-gateway-7db4f4db8c-b6984" Jan 28 18:53:17 crc kubenswrapper[4721]: I0128 18:53:17.641035 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dffa61ba-c98d-446a-a4d0-34e1e15a093b-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-gateway-7db4f4db8c-b6984\" (UID: \"dffa61ba-c98d-446a-a4d0-34e1e15a093b\") " pod="openstack/cloudkitty-lokistack-gateway-7db4f4db8c-b6984" Jan 28 18:53:17 crc kubenswrapper[4721]: I0128 18:53:17.641181 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/ded95a77-cbf2-4db7-b6b4-56fdf518717c-rbac\") pod \"cloudkitty-lokistack-gateway-7db4f4db8c-t9249\" (UID: \"ded95a77-cbf2-4db7-b6b4-56fdf518717c\") " pod="openstack/cloudkitty-lokistack-gateway-7db4f4db8c-t9249" Jan 28 18:53:17 crc kubenswrapper[4721]: I0128 18:53:17.641270 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/dffa61ba-c98d-446a-a4d0-34e1e15a093b-tls-secret\") pod \"cloudkitty-lokistack-gateway-7db4f4db8c-b6984\" (UID: \"dffa61ba-c98d-446a-a4d0-34e1e15a093b\") " pod="openstack/cloudkitty-lokistack-gateway-7db4f4db8c-b6984" Jan 28 18:53:17 crc kubenswrapper[4721]: I0128 18:53:17.641312 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ded95a77-cbf2-4db7-b6b4-56fdf518717c-cloudkitty-lokistack-gateway-ca-bundle\") pod \"cloudkitty-lokistack-gateway-7db4f4db8c-t9249\" (UID: \"ded95a77-cbf2-4db7-b6b4-56fdf518717c\") " pod="openstack/cloudkitty-lokistack-gateway-7db4f4db8c-t9249" Jan 28 18:53:17 crc kubenswrapper[4721]: I0128 18:53:17.641346 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ded95a77-cbf2-4db7-b6b4-56fdf518717c-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-gateway-7db4f4db8c-t9249\" 
(UID: \"ded95a77-cbf2-4db7-b6b4-56fdf518717c\") " pod="openstack/cloudkitty-lokistack-gateway-7db4f4db8c-t9249" Jan 28 18:53:17 crc kubenswrapper[4721]: I0128 18:53:17.641458 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ded95a77-cbf2-4db7-b6b4-56fdf518717c-cloudkitty-ca-bundle\") pod \"cloudkitty-lokistack-gateway-7db4f4db8c-t9249\" (UID: \"ded95a77-cbf2-4db7-b6b4-56fdf518717c\") " pod="openstack/cloudkitty-lokistack-gateway-7db4f4db8c-t9249" Jan 28 18:53:17 crc kubenswrapper[4721]: I0128 18:53:17.642756 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/dffa61ba-c98d-446a-a4d0-34e1e15a093b-tenants\") pod \"cloudkitty-lokistack-gateway-7db4f4db8c-b6984\" (UID: \"dffa61ba-c98d-446a-a4d0-34e1e15a093b\") " pod="openstack/cloudkitty-lokistack-gateway-7db4f4db8c-b6984" Jan 28 18:53:17 crc kubenswrapper[4721]: I0128 18:53:17.643305 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/ded95a77-cbf2-4db7-b6b4-56fdf518717c-tenants\") pod \"cloudkitty-lokistack-gateway-7db4f4db8c-t9249\" (UID: \"ded95a77-cbf2-4db7-b6b4-56fdf518717c\") " pod="openstack/cloudkitty-lokistack-gateway-7db4f4db8c-t9249" Jan 28 18:53:17 crc kubenswrapper[4721]: I0128 18:53:17.643387 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dffa61ba-c98d-446a-a4d0-34e1e15a093b-cloudkitty-ca-bundle\") pod \"cloudkitty-lokistack-gateway-7db4f4db8c-b6984\" (UID: \"dffa61ba-c98d-446a-a4d0-34e1e15a093b\") " pod="openstack/cloudkitty-lokistack-gateway-7db4f4db8c-b6984" Jan 28 18:53:17 crc kubenswrapper[4721]: I0128 18:53:17.643426 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dffa61ba-c98d-446a-a4d0-34e1e15a093b-cloudkitty-lokistack-gateway-ca-bundle\") pod \"cloudkitty-lokistack-gateway-7db4f4db8c-b6984\" (UID: \"dffa61ba-c98d-446a-a4d0-34e1e15a093b\") " pod="openstack/cloudkitty-lokistack-gateway-7db4f4db8c-b6984" Jan 28 18:53:17 crc kubenswrapper[4721]: I0128 18:53:17.643757 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/ded95a77-cbf2-4db7-b6b4-56fdf518717c-lokistack-gateway\") pod \"cloudkitty-lokistack-gateway-7db4f4db8c-t9249\" (UID: \"ded95a77-cbf2-4db7-b6b4-56fdf518717c\") " pod="openstack/cloudkitty-lokistack-gateway-7db4f4db8c-t9249" Jan 28 18:53:17 crc kubenswrapper[4721]: I0128 18:53:17.643910 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/ded95a77-cbf2-4db7-b6b4-56fdf518717c-cloudkitty-lokistack-gateway-client-http\") pod \"cloudkitty-lokistack-gateway-7db4f4db8c-t9249\" (UID: \"ded95a77-cbf2-4db7-b6b4-56fdf518717c\") " pod="openstack/cloudkitty-lokistack-gateway-7db4f4db8c-t9249" Jan 28 18:53:17 crc kubenswrapper[4721]: I0128 18:53:17.644262 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lokistack-gateway\" (UniqueName: 
\"kubernetes.io/configmap/dffa61ba-c98d-446a-a4d0-34e1e15a093b-lokistack-gateway\") pod \"cloudkitty-lokistack-gateway-7db4f4db8c-b6984\" (UID: \"dffa61ba-c98d-446a-a4d0-34e1e15a093b\") " pod="openstack/cloudkitty-lokistack-gateway-7db4f4db8c-b6984" Jan 28 18:53:17 crc kubenswrapper[4721]: I0128 18:53:17.644466 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/dffa61ba-c98d-446a-a4d0-34e1e15a093b-cloudkitty-lokistack-gateway-client-http\") pod \"cloudkitty-lokistack-gateway-7db4f4db8c-b6984\" (UID: \"dffa61ba-c98d-446a-a4d0-34e1e15a093b\") " pod="openstack/cloudkitty-lokistack-gateway-7db4f4db8c-b6984" Jan 28 18:53:17 crc kubenswrapper[4721]: I0128 18:53:17.644523 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4ptnb\" (UniqueName: \"kubernetes.io/projected/ded95a77-cbf2-4db7-b6b4-56fdf518717c-kube-api-access-4ptnb\") pod \"cloudkitty-lokistack-gateway-7db4f4db8c-t9249\" (UID: \"ded95a77-cbf2-4db7-b6b4-56fdf518717c\") " pod="openstack/cloudkitty-lokistack-gateway-7db4f4db8c-t9249" Jan 28 18:53:17 crc kubenswrapper[4721]: I0128 18:53:17.644576 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/ded95a77-cbf2-4db7-b6b4-56fdf518717c-tls-secret\") pod \"cloudkitty-lokistack-gateway-7db4f4db8c-t9249\" (UID: \"ded95a77-cbf2-4db7-b6b4-56fdf518717c\") " pod="openstack/cloudkitty-lokistack-gateway-7db4f4db8c-t9249" Jan 28 18:53:17 crc kubenswrapper[4721]: I0128 18:53:17.644603 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/dffa61ba-c98d-446a-a4d0-34e1e15a093b-rbac\") pod \"cloudkitty-lokistack-gateway-7db4f4db8c-b6984\" (UID: \"dffa61ba-c98d-446a-a4d0-34e1e15a093b\") " pod="openstack/cloudkitty-lokistack-gateway-7db4f4db8c-b6984" Jan 28 18:53:17 crc kubenswrapper[4721]: I0128 18:53:17.689933 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cloudkitty-lokistack-query-frontend-5cd44666df-cd79j" Jan 28 18:53:17 crc kubenswrapper[4721]: I0128 18:53:17.750840 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/ded95a77-cbf2-4db7-b6b4-56fdf518717c-cloudkitty-lokistack-gateway-client-http\") pod \"cloudkitty-lokistack-gateway-7db4f4db8c-t9249\" (UID: \"ded95a77-cbf2-4db7-b6b4-56fdf518717c\") " pod="openstack/cloudkitty-lokistack-gateway-7db4f4db8c-t9249" Jan 28 18:53:17 crc kubenswrapper[4721]: I0128 18:53:17.754292 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/dffa61ba-c98d-446a-a4d0-34e1e15a093b-lokistack-gateway\") pod \"cloudkitty-lokistack-gateway-7db4f4db8c-b6984\" (UID: \"dffa61ba-c98d-446a-a4d0-34e1e15a093b\") " pod="openstack/cloudkitty-lokistack-gateway-7db4f4db8c-b6984" Jan 28 18:53:17 crc kubenswrapper[4721]: I0128 18:53:17.754376 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/dffa61ba-c98d-446a-a4d0-34e1e15a093b-cloudkitty-lokistack-gateway-client-http\") pod \"cloudkitty-lokistack-gateway-7db4f4db8c-b6984\" (UID: \"dffa61ba-c98d-446a-a4d0-34e1e15a093b\") " pod="openstack/cloudkitty-lokistack-gateway-7db4f4db8c-b6984" Jan 28 18:53:17 crc kubenswrapper[4721]: I0128 18:53:17.754403 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4ptnb\" (UniqueName: \"kubernetes.io/projected/ded95a77-cbf2-4db7-b6b4-56fdf518717c-kube-api-access-4ptnb\") pod \"cloudkitty-lokistack-gateway-7db4f4db8c-t9249\" (UID: \"ded95a77-cbf2-4db7-b6b4-56fdf518717c\") " pod="openstack/cloudkitty-lokistack-gateway-7db4f4db8c-t9249" Jan 28 18:53:17 crc kubenswrapper[4721]: I0128 18:53:17.754443 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/ded95a77-cbf2-4db7-b6b4-56fdf518717c-tls-secret\") pod \"cloudkitty-lokistack-gateway-7db4f4db8c-t9249\" (UID: \"ded95a77-cbf2-4db7-b6b4-56fdf518717c\") " pod="openstack/cloudkitty-lokistack-gateway-7db4f4db8c-t9249" Jan 28 18:53:17 crc kubenswrapper[4721]: I0128 18:53:17.754464 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/dffa61ba-c98d-446a-a4d0-34e1e15a093b-rbac\") pod \"cloudkitty-lokistack-gateway-7db4f4db8c-b6984\" (UID: \"dffa61ba-c98d-446a-a4d0-34e1e15a093b\") " pod="openstack/cloudkitty-lokistack-gateway-7db4f4db8c-b6984" Jan 28 18:53:17 crc kubenswrapper[4721]: I0128 18:53:17.754497 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-92bhp\" (UniqueName: \"kubernetes.io/projected/dffa61ba-c98d-446a-a4d0-34e1e15a093b-kube-api-access-92bhp\") pod \"cloudkitty-lokistack-gateway-7db4f4db8c-b6984\" (UID: \"dffa61ba-c98d-446a-a4d0-34e1e15a093b\") " pod="openstack/cloudkitty-lokistack-gateway-7db4f4db8c-b6984" Jan 28 18:53:17 crc kubenswrapper[4721]: I0128 18:53:17.754518 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dffa61ba-c98d-446a-a4d0-34e1e15a093b-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-gateway-7db4f4db8c-b6984\" (UID: \"dffa61ba-c98d-446a-a4d0-34e1e15a093b\") " 
pod="openstack/cloudkitty-lokistack-gateway-7db4f4db8c-b6984" Jan 28 18:53:17 crc kubenswrapper[4721]: I0128 18:53:17.754542 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/ded95a77-cbf2-4db7-b6b4-56fdf518717c-rbac\") pod \"cloudkitty-lokistack-gateway-7db4f4db8c-t9249\" (UID: \"ded95a77-cbf2-4db7-b6b4-56fdf518717c\") " pod="openstack/cloudkitty-lokistack-gateway-7db4f4db8c-t9249" Jan 28 18:53:17 crc kubenswrapper[4721]: I0128 18:53:17.754571 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/dffa61ba-c98d-446a-a4d0-34e1e15a093b-tls-secret\") pod \"cloudkitty-lokistack-gateway-7db4f4db8c-b6984\" (UID: \"dffa61ba-c98d-446a-a4d0-34e1e15a093b\") " pod="openstack/cloudkitty-lokistack-gateway-7db4f4db8c-b6984" Jan 28 18:53:17 crc kubenswrapper[4721]: I0128 18:53:17.754595 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ded95a77-cbf2-4db7-b6b4-56fdf518717c-cloudkitty-lokistack-gateway-ca-bundle\") pod \"cloudkitty-lokistack-gateway-7db4f4db8c-t9249\" (UID: \"ded95a77-cbf2-4db7-b6b4-56fdf518717c\") " pod="openstack/cloudkitty-lokistack-gateway-7db4f4db8c-t9249" Jan 28 18:53:17 crc kubenswrapper[4721]: I0128 18:53:17.754620 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ded95a77-cbf2-4db7-b6b4-56fdf518717c-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-gateway-7db4f4db8c-t9249\" (UID: \"ded95a77-cbf2-4db7-b6b4-56fdf518717c\") " pod="openstack/cloudkitty-lokistack-gateway-7db4f4db8c-t9249" Jan 28 18:53:17 crc kubenswrapper[4721]: I0128 18:53:17.754657 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ded95a77-cbf2-4db7-b6b4-56fdf518717c-cloudkitty-ca-bundle\") pod \"cloudkitty-lokistack-gateway-7db4f4db8c-t9249\" (UID: \"ded95a77-cbf2-4db7-b6b4-56fdf518717c\") " pod="openstack/cloudkitty-lokistack-gateway-7db4f4db8c-t9249" Jan 28 18:53:17 crc kubenswrapper[4721]: I0128 18:53:17.754714 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/dffa61ba-c98d-446a-a4d0-34e1e15a093b-tenants\") pod \"cloudkitty-lokistack-gateway-7db4f4db8c-b6984\" (UID: \"dffa61ba-c98d-446a-a4d0-34e1e15a093b\") " pod="openstack/cloudkitty-lokistack-gateway-7db4f4db8c-b6984" Jan 28 18:53:17 crc kubenswrapper[4721]: I0128 18:53:17.754744 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/ded95a77-cbf2-4db7-b6b4-56fdf518717c-tenants\") pod \"cloudkitty-lokistack-gateway-7db4f4db8c-t9249\" (UID: \"ded95a77-cbf2-4db7-b6b4-56fdf518717c\") " pod="openstack/cloudkitty-lokistack-gateway-7db4f4db8c-t9249" Jan 28 18:53:17 crc kubenswrapper[4721]: I0128 18:53:17.754781 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dffa61ba-c98d-446a-a4d0-34e1e15a093b-cloudkitty-ca-bundle\") pod \"cloudkitty-lokistack-gateway-7db4f4db8c-b6984\" (UID: \"dffa61ba-c98d-446a-a4d0-34e1e15a093b\") " pod="openstack/cloudkitty-lokistack-gateway-7db4f4db8c-b6984" Jan 28 18:53:17 crc kubenswrapper[4721]: I0128 18:53:17.754816 4721 
Jan 28 18:53:17 crc kubenswrapper[4721]: I0128 18:53:17.754816 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dffa61ba-c98d-446a-a4d0-34e1e15a093b-cloudkitty-lokistack-gateway-ca-bundle\") pod \"cloudkitty-lokistack-gateway-7db4f4db8c-b6984\" (UID: \"dffa61ba-c98d-446a-a4d0-34e1e15a093b\") " pod="openstack/cloudkitty-lokistack-gateway-7db4f4db8c-b6984"
Jan 28 18:53:17 crc kubenswrapper[4721]: I0128 18:53:17.754886 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/ded95a77-cbf2-4db7-b6b4-56fdf518717c-lokistack-gateway\") pod \"cloudkitty-lokistack-gateway-7db4f4db8c-t9249\" (UID: \"ded95a77-cbf2-4db7-b6b4-56fdf518717c\") " pod="openstack/cloudkitty-lokistack-gateway-7db4f4db8c-t9249"
Jan 28 18:53:17 crc kubenswrapper[4721]: E0128 18:53:17.754948 4721 secret.go:188] Couldn't get secret openstack/cloudkitty-lokistack-gateway-http: secret "cloudkitty-lokistack-gateway-http" not found
Jan 28 18:53:17 crc kubenswrapper[4721]: E0128 18:53:17.755028 4721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ded95a77-cbf2-4db7-b6b4-56fdf518717c-tls-secret podName:ded95a77-cbf2-4db7-b6b4-56fdf518717c nodeName:}" failed. No retries permitted until 2026-01-28 18:53:18.255004007 +0000 UTC m=+1163.980309637 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "tls-secret" (UniqueName: "kubernetes.io/secret/ded95a77-cbf2-4db7-b6b4-56fdf518717c-tls-secret") pod "cloudkitty-lokistack-gateway-7db4f4db8c-t9249" (UID: "ded95a77-cbf2-4db7-b6b4-56fdf518717c") : secret "cloudkitty-lokistack-gateway-http" not found
Jan 28 18:53:17 crc kubenswrapper[4721]: I0128 18:53:17.755868 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dffa61ba-c98d-446a-a4d0-34e1e15a093b-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-gateway-7db4f4db8c-b6984\" (UID: \"dffa61ba-c98d-446a-a4d0-34e1e15a093b\") " pod="openstack/cloudkitty-lokistack-gateway-7db4f4db8c-b6984"
Jan 28 18:53:17 crc kubenswrapper[4721]: I0128 18:53:17.756081 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/ded95a77-cbf2-4db7-b6b4-56fdf518717c-lokistack-gateway\") pod \"cloudkitty-lokistack-gateway-7db4f4db8c-t9249\" (UID: \"ded95a77-cbf2-4db7-b6b4-56fdf518717c\") " pod="openstack/cloudkitty-lokistack-gateway-7db4f4db8c-t9249"
Jan 28 18:53:17 crc kubenswrapper[4721]: I0128 18:53:17.756140 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/dffa61ba-c98d-446a-a4d0-34e1e15a093b-rbac\") pod \"cloudkitty-lokistack-gateway-7db4f4db8c-b6984\" (UID: \"dffa61ba-c98d-446a-a4d0-34e1e15a093b\") " pod="openstack/cloudkitty-lokistack-gateway-7db4f4db8c-b6984"
Jan 28 18:53:17 crc kubenswrapper[4721]: I0128 18:53:17.756642 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ded95a77-cbf2-4db7-b6b4-56fdf518717c-cloudkitty-ca-bundle\") pod \"cloudkitty-lokistack-gateway-7db4f4db8c-t9249\" (UID: \"ded95a77-cbf2-4db7-b6b4-56fdf518717c\") " pod="openstack/cloudkitty-lokistack-gateway-7db4f4db8c-t9249"
Jan 28 18:53:17 crc kubenswrapper[4721]: E0128 18:53:17.756699 4721 secret.go:188] Couldn't get secret openstack/cloudkitty-lokistack-gateway-http: secret "cloudkitty-lokistack-gateway-http" not found
Jan 28 18:53:17 crc kubenswrapper[4721]: E0128 18:53:17.756744 4721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dffa61ba-c98d-446a-a4d0-34e1e15a093b-tls-secret podName:dffa61ba-c98d-446a-a4d0-34e1e15a093b nodeName:}" failed. No retries permitted until 2026-01-28 18:53:18.256730371 +0000 UTC m=+1163.982036011 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "tls-secret" (UniqueName: "kubernetes.io/secret/dffa61ba-c98d-446a-a4d0-34e1e15a093b-tls-secret") pod "cloudkitty-lokistack-gateway-7db4f4db8c-b6984" (UID: "dffa61ba-c98d-446a-a4d0-34e1e15a093b") : secret "cloudkitty-lokistack-gateway-http" not found
Jan 28 18:53:17 crc kubenswrapper[4721]: I0128 18:53:17.757355 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/dffa61ba-c98d-446a-a4d0-34e1e15a093b-lokistack-gateway\") pod \"cloudkitty-lokistack-gateway-7db4f4db8c-b6984\" (UID: \"dffa61ba-c98d-446a-a4d0-34e1e15a093b\") " pod="openstack/cloudkitty-lokistack-gateway-7db4f4db8c-b6984"
Jan 28 18:53:17 crc kubenswrapper[4721]: I0128 18:53:17.757434 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ded95a77-cbf2-4db7-b6b4-56fdf518717c-cloudkitty-lokistack-gateway-ca-bundle\") pod \"cloudkitty-lokistack-gateway-7db4f4db8c-t9249\" (UID: \"ded95a77-cbf2-4db7-b6b4-56fdf518717c\") " pod="openstack/cloudkitty-lokistack-gateway-7db4f4db8c-t9249"
Jan 28 18:53:17 crc kubenswrapper[4721]: I0128 18:53:17.757614 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/ded95a77-cbf2-4db7-b6b4-56fdf518717c-rbac\") pod \"cloudkitty-lokistack-gateway-7db4f4db8c-t9249\" (UID: \"ded95a77-cbf2-4db7-b6b4-56fdf518717c\") " pod="openstack/cloudkitty-lokistack-gateway-7db4f4db8c-t9249"
Jan 28 18:53:17 crc kubenswrapper[4721]: I0128 18:53:17.757970 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ded95a77-cbf2-4db7-b6b4-56fdf518717c-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-gateway-7db4f4db8c-t9249\" (UID: \"ded95a77-cbf2-4db7-b6b4-56fdf518717c\") " pod="openstack/cloudkitty-lokistack-gateway-7db4f4db8c-t9249"
Jan 28 18:53:17 crc kubenswrapper[4721]: I0128 18:53:17.758556 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dffa61ba-c98d-446a-a4d0-34e1e15a093b-cloudkitty-lokistack-gateway-ca-bundle\") pod \"cloudkitty-lokistack-gateway-7db4f4db8c-b6984\" (UID: \"dffa61ba-c98d-446a-a4d0-34e1e15a093b\") " pod="openstack/cloudkitty-lokistack-gateway-7db4f4db8c-b6984"
Jan 28 18:53:17 crc kubenswrapper[4721]: I0128 18:53:17.759039 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/ded95a77-cbf2-4db7-b6b4-56fdf518717c-cloudkitty-lokistack-gateway-client-http\") pod \"cloudkitty-lokistack-gateway-7db4f4db8c-t9249\" (UID: \"ded95a77-cbf2-4db7-b6b4-56fdf518717c\") " pod="openstack/cloudkitty-lokistack-gateway-7db4f4db8c-t9249"
Jan 28 18:53:17 crc kubenswrapper[4721]: I0128 18:53:17.761190 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/dffa61ba-c98d-446a-a4d0-34e1e15a093b-cloudkitty-lokistack-gateway-client-http\") pod \"cloudkitty-lokistack-gateway-7db4f4db8c-b6984\" (UID: \"dffa61ba-c98d-446a-a4d0-34e1e15a093b\") " pod="openstack/cloudkitty-lokistack-gateway-7db4f4db8c-b6984"
Jan 28 18:53:17 crc kubenswrapper[4721]: I0128 18:53:17.769684 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/ded95a77-cbf2-4db7-b6b4-56fdf518717c-tenants\") pod \"cloudkitty-lokistack-gateway-7db4f4db8c-t9249\" (UID: \"ded95a77-cbf2-4db7-b6b4-56fdf518717c\") " pod="openstack/cloudkitty-lokistack-gateway-7db4f4db8c-t9249"
Jan 28 18:53:17 crc kubenswrapper[4721]: I0128 18:53:17.771213 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/dffa61ba-c98d-446a-a4d0-34e1e15a093b-tenants\") pod \"cloudkitty-lokistack-gateway-7db4f4db8c-b6984\" (UID: \"dffa61ba-c98d-446a-a4d0-34e1e15a093b\") " pod="openstack/cloudkitty-lokistack-gateway-7db4f4db8c-b6984"
Jan 28 18:53:17 crc kubenswrapper[4721]: I0128 18:53:17.772841 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-92bhp\" (UniqueName: \"kubernetes.io/projected/dffa61ba-c98d-446a-a4d0-34e1e15a093b-kube-api-access-92bhp\") pod \"cloudkitty-lokistack-gateway-7db4f4db8c-b6984\" (UID: \"dffa61ba-c98d-446a-a4d0-34e1e15a093b\") " pod="openstack/cloudkitty-lokistack-gateway-7db4f4db8c-b6984"
Jan 28 18:53:17 crc kubenswrapper[4721]: I0128 18:53:17.786005 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4ptnb\" (UniqueName: \"kubernetes.io/projected/ded95a77-cbf2-4db7-b6b4-56fdf518717c-kube-api-access-4ptnb\") pod \"cloudkitty-lokistack-gateway-7db4f4db8c-t9249\" (UID: \"ded95a77-cbf2-4db7-b6b4-56fdf518717c\") " pod="openstack/cloudkitty-lokistack-gateway-7db4f4db8c-t9249"
Jan 28 18:53:17 crc kubenswrapper[4721]: I0128 18:53:17.806046 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dffa61ba-c98d-446a-a4d0-34e1e15a093b-cloudkitty-ca-bundle\") pod \"cloudkitty-lokistack-gateway-7db4f4db8c-b6984\" (UID: \"dffa61ba-c98d-446a-a4d0-34e1e15a093b\") " pod="openstack/cloudkitty-lokistack-gateway-7db4f4db8c-b6984"
Jan 28 18:53:18 crc kubenswrapper[4721]: I0128 18:53:18.264270 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/ded95a77-cbf2-4db7-b6b4-56fdf518717c-tls-secret\") pod \"cloudkitty-lokistack-gateway-7db4f4db8c-t9249\" (UID: \"ded95a77-cbf2-4db7-b6b4-56fdf518717c\") " pod="openstack/cloudkitty-lokistack-gateway-7db4f4db8c-t9249"
Jan 28 18:53:18 crc kubenswrapper[4721]: I0128 18:53:18.264358 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/dffa61ba-c98d-446a-a4d0-34e1e15a093b-tls-secret\") pod \"cloudkitty-lokistack-gateway-7db4f4db8c-b6984\" (UID: \"dffa61ba-c98d-446a-a4d0-34e1e15a093b\") " pod="openstack/cloudkitty-lokistack-gateway-7db4f4db8c-b6984"
Jan 28 18:53:18 crc kubenswrapper[4721]: I0128 18:53:18.272707 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/ded95a77-cbf2-4db7-b6b4-56fdf518717c-tls-secret\") pod \"cloudkitty-lokistack-gateway-7db4f4db8c-t9249\" (UID: \"ded95a77-cbf2-4db7-b6b4-56fdf518717c\") " pod="openstack/cloudkitty-lokistack-gateway-7db4f4db8c-t9249"
pod="openstack/cloudkitty-lokistack-gateway-7db4f4db8c-t9249" Jan 28 18:53:18 crc kubenswrapper[4721]: I0128 18:53:18.273235 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/dffa61ba-c98d-446a-a4d0-34e1e15a093b-tls-secret\") pod \"cloudkitty-lokistack-gateway-7db4f4db8c-b6984\" (UID: \"dffa61ba-c98d-446a-a4d0-34e1e15a093b\") " pod="openstack/cloudkitty-lokistack-gateway-7db4f4db8c-b6984" Jan 28 18:53:18 crc kubenswrapper[4721]: I0128 18:53:18.482647 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-lokistack-gateway-7db4f4db8c-t9249" Jan 28 18:53:18 crc kubenswrapper[4721]: I0128 18:53:18.525095 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-lokistack-gateway-7db4f4db8c-b6984" Jan 28 18:53:18 crc kubenswrapper[4721]: I0128 18:53:18.541477 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"00b26873-8c7a-4ea7-b334-873b01cc5d84","Type":"ContainerStarted","Data":"075ac8a084f2c128ecc3df2b8ed1b5d34d1eade503a31a12fa312b592c9376fa"} Jan 28 18:53:18 crc kubenswrapper[4721]: I0128 18:53:18.548403 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cloudkitty-lokistack-index-gateway-0"] Jan 28 18:53:18 crc kubenswrapper[4721]: I0128 18:53:18.549831 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-lokistack-index-gateway-0" Jan 28 18:53:18 crc kubenswrapper[4721]: I0128 18:53:18.554560 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-index-gateway-http" Jan 28 18:53:18 crc kubenswrapper[4721]: I0128 18:53:18.554604 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-index-gateway-grpc" Jan 28 18:53:18 crc kubenswrapper[4721]: I0128 18:53:18.558041 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cloudkitty-lokistack-compactor-0"] Jan 28 18:53:18 crc kubenswrapper[4721]: I0128 18:53:18.559266 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-lokistack-compactor-0" Jan 28 18:53:18 crc kubenswrapper[4721]: I0128 18:53:18.560721 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-compactor-grpc" Jan 28 18:53:18 crc kubenswrapper[4721]: I0128 18:53:18.561930 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-compactor-http" Jan 28 18:53:18 crc kubenswrapper[4721]: I0128 18:53:18.566755 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cloudkitty-lokistack-ingester-0"] Jan 28 18:53:18 crc kubenswrapper[4721]: I0128 18:53:18.572975 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cloudkitty-lokistack-ingester-0" Jan 28 18:53:18 crc kubenswrapper[4721]: I0128 18:53:18.616417 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-ingester-grpc" Jan 28 18:53:18 crc kubenswrapper[4721]: I0128 18:53:18.616708 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-ingester-http" Jan 28 18:53:18 crc kubenswrapper[4721]: I0128 18:53:18.642704 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-lokistack-index-gateway-0"] Jan 28 18:53:18 crc kubenswrapper[4721]: I0128 18:53:18.655696 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-lokistack-ingester-0"] Jan 28 18:53:18 crc kubenswrapper[4721]: I0128 18:53:18.664500 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-lokistack-compactor-0"] Jan 28 18:53:18 crc kubenswrapper[4721]: I0128 18:53:18.683940 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"742e65f6-66eb-4334-9328-b77d47d420d0\") " pod="openstack/cloudkitty-lokistack-ingester-0" Jan 28 18:53:18 crc kubenswrapper[4721]: I0128 18:53:18.684056 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-loki-s3\" (UniqueName: \"kubernetes.io/secret/22863ebc-7f06-4697-a494-1e854030c803-cloudkitty-loki-s3\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"22863ebc-7f06-4697-a494-1e854030c803\") " pod="openstack/cloudkitty-lokistack-compactor-0" Jan 28 18:53:18 crc kubenswrapper[4721]: I0128 18:53:18.684136 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-compactor-http\" (UniqueName: \"kubernetes.io/secret/22863ebc-7f06-4697-a494-1e854030c803-cloudkitty-lokistack-compactor-http\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"22863ebc-7f06-4697-a494-1e854030c803\") " pod="openstack/cloudkitty-lokistack-compactor-0" Jan 28 18:53:18 crc kubenswrapper[4721]: I0128 18:53:18.684158 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-ingester-http\" (UniqueName: \"kubernetes.io/secret/742e65f6-66eb-4334-9328-b77d47d420d0-cloudkitty-lokistack-ingester-http\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"742e65f6-66eb-4334-9328-b77d47d420d0\") " pod="openstack/cloudkitty-lokistack-ingester-0" Jan 28 18:53:18 crc kubenswrapper[4721]: I0128 18:53:18.684218 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"e06ee4ac-7688-41ae-b0f0-13e7cfc042e7\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Jan 28 18:53:18 crc kubenswrapper[4721]: I0128 18:53:18.684245 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/742e65f6-66eb-4334-9328-b77d47d420d0-config\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"742e65f6-66eb-4334-9328-b77d47d420d0\") " pod="openstack/cloudkitty-lokistack-ingester-0" Jan 28 18:53:18 crc kubenswrapper[4721]: I0128 18:53:18.684301 4721 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e06ee4ac-7688-41ae-b0f0-13e7cfc042e7-config\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"e06ee4ac-7688-41ae-b0f0-13e7cfc042e7\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Jan 28 18:53:18 crc kubenswrapper[4721]: I0128 18:53:18.684349 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lksbz\" (UniqueName: \"kubernetes.io/projected/22863ebc-7f06-4697-a494-1e854030c803-kube-api-access-lksbz\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"22863ebc-7f06-4697-a494-1e854030c803\") " pod="openstack/cloudkitty-lokistack-compactor-0" Jan 28 18:53:18 crc kubenswrapper[4721]: I0128 18:53:18.684483 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/22863ebc-7f06-4697-a494-1e854030c803-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"22863ebc-7f06-4697-a494-1e854030c803\") " pod="openstack/cloudkitty-lokistack-compactor-0" Jan 28 18:53:18 crc kubenswrapper[4721]: I0128 18:53:18.684702 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-compactor-grpc\" (UniqueName: \"kubernetes.io/secret/22863ebc-7f06-4697-a494-1e854030c803-cloudkitty-lokistack-compactor-grpc\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"22863ebc-7f06-4697-a494-1e854030c803\") " pod="openstack/cloudkitty-lokistack-compactor-0" Jan 28 18:53:18 crc kubenswrapper[4721]: I0128 18:53:18.684794 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9gclp\" (UniqueName: \"kubernetes.io/projected/742e65f6-66eb-4334-9328-b77d47d420d0-kube-api-access-9gclp\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"742e65f6-66eb-4334-9328-b77d47d420d0\") " pod="openstack/cloudkitty-lokistack-ingester-0" Jan 28 18:53:18 crc kubenswrapper[4721]: I0128 18:53:18.684847 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-loki-s3\" (UniqueName: \"kubernetes.io/secret/e06ee4ac-7688-41ae-b0f0-13e7cfc042e7-cloudkitty-loki-s3\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"e06ee4ac-7688-41ae-b0f0-13e7cfc042e7\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Jan 28 18:53:18 crc kubenswrapper[4721]: I0128 18:53:18.684881 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/742e65f6-66eb-4334-9328-b77d47d420d0-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"742e65f6-66eb-4334-9328-b77d47d420d0\") " pod="openstack/cloudkitty-lokistack-ingester-0" Jan 28 18:53:18 crc kubenswrapper[4721]: I0128 18:53:18.684945 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ncqcq\" (UniqueName: \"kubernetes.io/projected/e06ee4ac-7688-41ae-b0f0-13e7cfc042e7-kube-api-access-ncqcq\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"e06ee4ac-7688-41ae-b0f0-13e7cfc042e7\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Jan 28 18:53:18 crc kubenswrapper[4721]: I0128 18:53:18.686088 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e06ee4ac-7688-41ae-b0f0-13e7cfc042e7-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"e06ee4ac-7688-41ae-b0f0-13e7cfc042e7\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Jan 28 18:53:18 crc kubenswrapper[4721]: I0128 18:53:18.686193 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-loki-s3\" (UniqueName: \"kubernetes.io/secret/742e65f6-66eb-4334-9328-b77d47d420d0-cloudkitty-loki-s3\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"742e65f6-66eb-4334-9328-b77d47d420d0\") " pod="openstack/cloudkitty-lokistack-ingester-0" Jan 28 18:53:18 crc kubenswrapper[4721]: I0128 18:53:18.686312 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-ingester-grpc\" (UniqueName: \"kubernetes.io/secret/742e65f6-66eb-4334-9328-b77d47d420d0-cloudkitty-lokistack-ingester-grpc\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"742e65f6-66eb-4334-9328-b77d47d420d0\") " pod="openstack/cloudkitty-lokistack-ingester-0" Jan 28 18:53:18 crc kubenswrapper[4721]: I0128 18:53:18.686440 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-index-gateway-http\" (UniqueName: \"kubernetes.io/secret/e06ee4ac-7688-41ae-b0f0-13e7cfc042e7-cloudkitty-lokistack-index-gateway-http\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"e06ee4ac-7688-41ae-b0f0-13e7cfc042e7\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Jan 28 18:53:18 crc kubenswrapper[4721]: I0128 18:53:18.686507 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"22863ebc-7f06-4697-a494-1e854030c803\") " pod="openstack/cloudkitty-lokistack-compactor-0" Jan 28 18:53:18 crc kubenswrapper[4721]: I0128 18:53:18.687714 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"742e65f6-66eb-4334-9328-b77d47d420d0\") " pod="openstack/cloudkitty-lokistack-ingester-0" Jan 28 18:53:18 crc kubenswrapper[4721]: I0128 18:53:18.687854 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22863ebc-7f06-4697-a494-1e854030c803-config\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"22863ebc-7f06-4697-a494-1e854030c803\") " pod="openstack/cloudkitty-lokistack-compactor-0" Jan 28 18:53:18 crc kubenswrapper[4721]: I0128 18:53:18.687897 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-index-gateway-grpc\" (UniqueName: \"kubernetes.io/secret/e06ee4ac-7688-41ae-b0f0-13e7cfc042e7-cloudkitty-lokistack-index-gateway-grpc\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"e06ee4ac-7688-41ae-b0f0-13e7cfc042e7\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Jan 28 18:53:18 crc kubenswrapper[4721]: I0128 18:53:18.790656 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-loki-s3\" (UniqueName: \"kubernetes.io/secret/22863ebc-7f06-4697-a494-1e854030c803-cloudkitty-loki-s3\") pod 
\"cloudkitty-lokistack-compactor-0\" (UID: \"22863ebc-7f06-4697-a494-1e854030c803\") " pod="openstack/cloudkitty-lokistack-compactor-0" Jan 28 18:53:18 crc kubenswrapper[4721]: I0128 18:53:18.791074 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-compactor-http\" (UniqueName: \"kubernetes.io/secret/22863ebc-7f06-4697-a494-1e854030c803-cloudkitty-lokistack-compactor-http\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"22863ebc-7f06-4697-a494-1e854030c803\") " pod="openstack/cloudkitty-lokistack-compactor-0" Jan 28 18:53:18 crc kubenswrapper[4721]: I0128 18:53:18.791107 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-ingester-http\" (UniqueName: \"kubernetes.io/secret/742e65f6-66eb-4334-9328-b77d47d420d0-cloudkitty-lokistack-ingester-http\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"742e65f6-66eb-4334-9328-b77d47d420d0\") " pod="openstack/cloudkitty-lokistack-ingester-0" Jan 28 18:53:18 crc kubenswrapper[4721]: I0128 18:53:18.791150 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"e06ee4ac-7688-41ae-b0f0-13e7cfc042e7\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Jan 28 18:53:18 crc kubenswrapper[4721]: I0128 18:53:18.791204 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/742e65f6-66eb-4334-9328-b77d47d420d0-config\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"742e65f6-66eb-4334-9328-b77d47d420d0\") " pod="openstack/cloudkitty-lokistack-ingester-0" Jan 28 18:53:18 crc kubenswrapper[4721]: I0128 18:53:18.791235 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e06ee4ac-7688-41ae-b0f0-13e7cfc042e7-config\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"e06ee4ac-7688-41ae-b0f0-13e7cfc042e7\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Jan 28 18:53:18 crc kubenswrapper[4721]: I0128 18:53:18.791258 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lksbz\" (UniqueName: \"kubernetes.io/projected/22863ebc-7f06-4697-a494-1e854030c803-kube-api-access-lksbz\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"22863ebc-7f06-4697-a494-1e854030c803\") " pod="openstack/cloudkitty-lokistack-compactor-0" Jan 28 18:53:18 crc kubenswrapper[4721]: I0128 18:53:18.791291 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/22863ebc-7f06-4697-a494-1e854030c803-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"22863ebc-7f06-4697-a494-1e854030c803\") " pod="openstack/cloudkitty-lokistack-compactor-0" Jan 28 18:53:18 crc kubenswrapper[4721]: I0128 18:53:18.791359 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-compactor-grpc\" (UniqueName: \"kubernetes.io/secret/22863ebc-7f06-4697-a494-1e854030c803-cloudkitty-lokistack-compactor-grpc\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"22863ebc-7f06-4697-a494-1e854030c803\") " pod="openstack/cloudkitty-lokistack-compactor-0" Jan 28 18:53:18 crc kubenswrapper[4721]: I0128 18:53:18.791401 4721 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-9gclp\" (UniqueName: \"kubernetes.io/projected/742e65f6-66eb-4334-9328-b77d47d420d0-kube-api-access-9gclp\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"742e65f6-66eb-4334-9328-b77d47d420d0\") " pod="openstack/cloudkitty-lokistack-ingester-0" Jan 28 18:53:18 crc kubenswrapper[4721]: I0128 18:53:18.791431 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-loki-s3\" (UniqueName: \"kubernetes.io/secret/e06ee4ac-7688-41ae-b0f0-13e7cfc042e7-cloudkitty-loki-s3\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"e06ee4ac-7688-41ae-b0f0-13e7cfc042e7\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Jan 28 18:53:18 crc kubenswrapper[4721]: I0128 18:53:18.791470 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/742e65f6-66eb-4334-9328-b77d47d420d0-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"742e65f6-66eb-4334-9328-b77d47d420d0\") " pod="openstack/cloudkitty-lokistack-ingester-0" Jan 28 18:53:18 crc kubenswrapper[4721]: I0128 18:53:18.791550 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ncqcq\" (UniqueName: \"kubernetes.io/projected/e06ee4ac-7688-41ae-b0f0-13e7cfc042e7-kube-api-access-ncqcq\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"e06ee4ac-7688-41ae-b0f0-13e7cfc042e7\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Jan 28 18:53:18 crc kubenswrapper[4721]: I0128 18:53:18.791580 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e06ee4ac-7688-41ae-b0f0-13e7cfc042e7-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"e06ee4ac-7688-41ae-b0f0-13e7cfc042e7\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Jan 28 18:53:18 crc kubenswrapper[4721]: I0128 18:53:18.791606 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-loki-s3\" (UniqueName: \"kubernetes.io/secret/742e65f6-66eb-4334-9328-b77d47d420d0-cloudkitty-loki-s3\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"742e65f6-66eb-4334-9328-b77d47d420d0\") " pod="openstack/cloudkitty-lokistack-ingester-0" Jan 28 18:53:18 crc kubenswrapper[4721]: I0128 18:53:18.791661 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-ingester-grpc\" (UniqueName: \"kubernetes.io/secret/742e65f6-66eb-4334-9328-b77d47d420d0-cloudkitty-lokistack-ingester-grpc\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"742e65f6-66eb-4334-9328-b77d47d420d0\") " pod="openstack/cloudkitty-lokistack-ingester-0" Jan 28 18:53:18 crc kubenswrapper[4721]: I0128 18:53:18.791708 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-index-gateway-http\" (UniqueName: \"kubernetes.io/secret/e06ee4ac-7688-41ae-b0f0-13e7cfc042e7-cloudkitty-lokistack-index-gateway-http\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"e06ee4ac-7688-41ae-b0f0-13e7cfc042e7\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Jan 28 18:53:18 crc kubenswrapper[4721]: I0128 18:53:18.791764 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"cloudkitty-lokistack-compactor-0\" (UID: 
\"22863ebc-7f06-4697-a494-1e854030c803\") " pod="openstack/cloudkitty-lokistack-compactor-0" Jan 28 18:53:18 crc kubenswrapper[4721]: I0128 18:53:18.791797 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22863ebc-7f06-4697-a494-1e854030c803-config\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"22863ebc-7f06-4697-a494-1e854030c803\") " pod="openstack/cloudkitty-lokistack-compactor-0" Jan 28 18:53:18 crc kubenswrapper[4721]: I0128 18:53:18.791837 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"742e65f6-66eb-4334-9328-b77d47d420d0\") " pod="openstack/cloudkitty-lokistack-ingester-0" Jan 28 18:53:18 crc kubenswrapper[4721]: I0128 18:53:18.791867 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-index-gateway-grpc\" (UniqueName: \"kubernetes.io/secret/e06ee4ac-7688-41ae-b0f0-13e7cfc042e7-cloudkitty-lokistack-index-gateway-grpc\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"e06ee4ac-7688-41ae-b0f0-13e7cfc042e7\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Jan 28 18:53:18 crc kubenswrapper[4721]: I0128 18:53:18.791897 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"742e65f6-66eb-4334-9328-b77d47d420d0\") " pod="openstack/cloudkitty-lokistack-ingester-0" Jan 28 18:53:18 crc kubenswrapper[4721]: I0128 18:53:18.792278 4721 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"742e65f6-66eb-4334-9328-b77d47d420d0\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/cloudkitty-lokistack-ingester-0" Jan 28 18:53:18 crc kubenswrapper[4721]: I0128 18:53:18.792484 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/742e65f6-66eb-4334-9328-b77d47d420d0-config\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"742e65f6-66eb-4334-9328-b77d47d420d0\") " pod="openstack/cloudkitty-lokistack-ingester-0" Jan 28 18:53:18 crc kubenswrapper[4721]: I0128 18:53:18.794126 4721 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"e06ee4ac-7688-41ae-b0f0-13e7cfc042e7\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/cloudkitty-lokistack-index-gateway-0" Jan 28 18:53:18 crc kubenswrapper[4721]: I0128 18:53:18.795561 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-compactor-http\" (UniqueName: \"kubernetes.io/secret/22863ebc-7f06-4697-a494-1e854030c803-cloudkitty-lokistack-compactor-http\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"22863ebc-7f06-4697-a494-1e854030c803\") " pod="openstack/cloudkitty-lokistack-compactor-0" Jan 28 18:53:18 crc kubenswrapper[4721]: I0128 18:53:18.795793 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/742e65f6-66eb-4334-9328-b77d47d420d0-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"742e65f6-66eb-4334-9328-b77d47d420d0\") " pod="openstack/cloudkitty-lokistack-ingester-0" Jan 28 18:53:18 crc kubenswrapper[4721]: I0128 18:53:18.797489 4721 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"22863ebc-7f06-4697-a494-1e854030c803\") device mount path \"/mnt/openstack/pv06\"" pod="openstack/cloudkitty-lokistack-compactor-0" Jan 28 18:53:18 crc kubenswrapper[4721]: I0128 18:53:18.798320 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e06ee4ac-7688-41ae-b0f0-13e7cfc042e7-config\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"e06ee4ac-7688-41ae-b0f0-13e7cfc042e7\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Jan 28 18:53:18 crc kubenswrapper[4721]: I0128 18:53:18.798333 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22863ebc-7f06-4697-a494-1e854030c803-config\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"22863ebc-7f06-4697-a494-1e854030c803\") " pod="openstack/cloudkitty-lokistack-compactor-0" Jan 28 18:53:18 crc kubenswrapper[4721]: I0128 18:53:18.798591 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-loki-s3\" (UniqueName: \"kubernetes.io/secret/e06ee4ac-7688-41ae-b0f0-13e7cfc042e7-cloudkitty-loki-s3\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"e06ee4ac-7688-41ae-b0f0-13e7cfc042e7\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Jan 28 18:53:18 crc kubenswrapper[4721]: I0128 18:53:18.798697 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-index-gateway-http\" (UniqueName: \"kubernetes.io/secret/e06ee4ac-7688-41ae-b0f0-13e7cfc042e7-cloudkitty-lokistack-index-gateway-http\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"e06ee4ac-7688-41ae-b0f0-13e7cfc042e7\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Jan 28 18:53:18 crc kubenswrapper[4721]: I0128 18:53:18.798879 4721 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"742e65f6-66eb-4334-9328-b77d47d420d0\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/cloudkitty-lokistack-ingester-0" Jan 28 18:53:18 crc kubenswrapper[4721]: I0128 18:53:18.799701 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/22863ebc-7f06-4697-a494-1e854030c803-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"22863ebc-7f06-4697-a494-1e854030c803\") " pod="openstack/cloudkitty-lokistack-compactor-0" Jan 28 18:53:18 crc kubenswrapper[4721]: I0128 18:53:18.801794 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e06ee4ac-7688-41ae-b0f0-13e7cfc042e7-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"e06ee4ac-7688-41ae-b0f0-13e7cfc042e7\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Jan 28 18:53:18 crc kubenswrapper[4721]: I0128 
18:53:18.802047 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-compactor-grpc\" (UniqueName: \"kubernetes.io/secret/22863ebc-7f06-4697-a494-1e854030c803-cloudkitty-lokistack-compactor-grpc\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"22863ebc-7f06-4697-a494-1e854030c803\") " pod="openstack/cloudkitty-lokistack-compactor-0" Jan 28 18:53:18 crc kubenswrapper[4721]: I0128 18:53:18.807571 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-loki-s3\" (UniqueName: \"kubernetes.io/secret/22863ebc-7f06-4697-a494-1e854030c803-cloudkitty-loki-s3\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"22863ebc-7f06-4697-a494-1e854030c803\") " pod="openstack/cloudkitty-lokistack-compactor-0" Jan 28 18:53:18 crc kubenswrapper[4721]: I0128 18:53:18.808898 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-index-gateway-grpc\" (UniqueName: \"kubernetes.io/secret/e06ee4ac-7688-41ae-b0f0-13e7cfc042e7-cloudkitty-lokistack-index-gateway-grpc\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"e06ee4ac-7688-41ae-b0f0-13e7cfc042e7\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Jan 28 18:53:18 crc kubenswrapper[4721]: I0128 18:53:18.809584 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-ingester-grpc\" (UniqueName: \"kubernetes.io/secret/742e65f6-66eb-4334-9328-b77d47d420d0-cloudkitty-lokistack-ingester-grpc\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"742e65f6-66eb-4334-9328-b77d47d420d0\") " pod="openstack/cloudkitty-lokistack-ingester-0" Jan 28 18:53:18 crc kubenswrapper[4721]: I0128 18:53:18.811213 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-loki-s3\" (UniqueName: \"kubernetes.io/secret/742e65f6-66eb-4334-9328-b77d47d420d0-cloudkitty-loki-s3\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"742e65f6-66eb-4334-9328-b77d47d420d0\") " pod="openstack/cloudkitty-lokistack-ingester-0" Jan 28 18:53:18 crc kubenswrapper[4721]: I0128 18:53:18.812744 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-ingester-http\" (UniqueName: \"kubernetes.io/secret/742e65f6-66eb-4334-9328-b77d47d420d0-cloudkitty-lokistack-ingester-http\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"742e65f6-66eb-4334-9328-b77d47d420d0\") " pod="openstack/cloudkitty-lokistack-ingester-0" Jan 28 18:53:18 crc kubenswrapper[4721]: I0128 18:53:18.815465 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ncqcq\" (UniqueName: \"kubernetes.io/projected/e06ee4ac-7688-41ae-b0f0-13e7cfc042e7-kube-api-access-ncqcq\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"e06ee4ac-7688-41ae-b0f0-13e7cfc042e7\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Jan 28 18:53:18 crc kubenswrapper[4721]: I0128 18:53:18.818051 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9gclp\" (UniqueName: \"kubernetes.io/projected/742e65f6-66eb-4334-9328-b77d47d420d0-kube-api-access-9gclp\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"742e65f6-66eb-4334-9328-b77d47d420d0\") " pod="openstack/cloudkitty-lokistack-ingester-0" Jan 28 18:53:18 crc kubenswrapper[4721]: I0128 18:53:18.820476 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lksbz\" (UniqueName: \"kubernetes.io/projected/22863ebc-7f06-4697-a494-1e854030c803-kube-api-access-lksbz\") pod 
\"cloudkitty-lokistack-compactor-0\" (UID: \"22863ebc-7f06-4697-a494-1e854030c803\") " pod="openstack/cloudkitty-lokistack-compactor-0" Jan 28 18:53:18 crc kubenswrapper[4721]: I0128 18:53:18.834598 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"e06ee4ac-7688-41ae-b0f0-13e7cfc042e7\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Jan 28 18:53:18 crc kubenswrapper[4721]: I0128 18:53:18.835979 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"742e65f6-66eb-4334-9328-b77d47d420d0\") " pod="openstack/cloudkitty-lokistack-ingester-0" Jan 28 18:53:18 crc kubenswrapper[4721]: I0128 18:53:18.840345 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"742e65f6-66eb-4334-9328-b77d47d420d0\") " pod="openstack/cloudkitty-lokistack-ingester-0" Jan 28 18:53:18 crc kubenswrapper[4721]: I0128 18:53:18.852884 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"22863ebc-7f06-4697-a494-1e854030c803\") " pod="openstack/cloudkitty-lokistack-compactor-0" Jan 28 18:53:18 crc kubenswrapper[4721]: I0128 18:53:18.878035 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-lokistack-index-gateway-0" Jan 28 18:53:18 crc kubenswrapper[4721]: I0128 18:53:18.922739 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-lokistack-compactor-0" Jan 28 18:53:18 crc kubenswrapper[4721]: I0128 18:53:18.937526 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cloudkitty-lokistack-ingester-0" Jan 28 18:53:35 crc kubenswrapper[4721]: E0128 18:53:35.834939 4721 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified" Jan 28 18:53:35 crc kubenswrapper[4721]: E0128 18:53:35.836437 4721 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:setup-container,Image:quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified,Command:[sh -c cp /tmp/erlang-cookie-secret/.erlang.cookie /var/lib/rabbitmq/.erlang.cookie && chmod 600 /var/lib/rabbitmq/.erlang.cookie ; cp /tmp/rabbitmq-plugins/enabled_plugins /operator/enabled_plugins ; echo '[default]' > /var/lib/rabbitmq/.rabbitmqadmin.conf && sed -e 's/default_user/username/' -e 's/default_pass/password/' /tmp/default_user.conf >> /var/lib/rabbitmq/.rabbitmqadmin.conf && chmod 600 /var/lib/rabbitmq/.rabbitmqadmin.conf ; sleep 30],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:plugins-conf,ReadOnly:false,MountPath:/tmp/rabbitmq-plugins/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-erlang-cookie,ReadOnly:false,MountPath:/var/lib/rabbitmq/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:erlang-cookie-secret,ReadOnly:false,MountPath:/tmp/erlang-cookie-secret/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-plugins,ReadOnly:false,MountPath:/operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:persistence,ReadOnly:false,MountPath:/var/lib/rabbitmq/mnesia/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-confd,ReadOnly:false,MountPath:/tmp/default_user.conf,SubPath:default_user.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vsv5k,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cell1-server-0_openstack(dc56a986-671d-4f17-8386-939d0fd9394a): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 28 18:53:35 crc kubenswrapper[4721]: E0128 18:53:35.838146 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" 
pod="openstack/rabbitmq-cell1-server-0" podUID="dc56a986-671d-4f17-8386-939d0fd9394a" Jan 28 18:53:35 crc kubenswrapper[4721]: E0128 18:53:35.879462 4721 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified" Jan 28 18:53:35 crc kubenswrapper[4721]: E0128 18:53:35.879646 4721 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:setup-container,Image:quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified,Command:[sh -c cp /tmp/erlang-cookie-secret/.erlang.cookie /var/lib/rabbitmq/.erlang.cookie && chmod 600 /var/lib/rabbitmq/.erlang.cookie ; cp /tmp/rabbitmq-plugins/enabled_plugins /operator/enabled_plugins ; echo '[default]' > /var/lib/rabbitmq/.rabbitmqadmin.conf && sed -e 's/default_user/username/' -e 's/default_pass/password/' /tmp/default_user.conf >> /var/lib/rabbitmq/.rabbitmqadmin.conf && chmod 600 /var/lib/rabbitmq/.rabbitmqadmin.conf ; sleep 30],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:plugins-conf,ReadOnly:false,MountPath:/tmp/rabbitmq-plugins/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-erlang-cookie,ReadOnly:false,MountPath:/var/lib/rabbitmq/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:erlang-cookie-secret,ReadOnly:false,MountPath:/tmp/erlang-cookie-secret/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-plugins,ReadOnly:false,MountPath:/operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:persistence,ReadOnly:false,MountPath:/var/lib/rabbitmq/mnesia/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-confd,ReadOnly:false,MountPath:/tmp/default_user.conf,SubPath:default_user.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dk8vx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-server-0_openstack(ec1e1de9-b144-4c34-bb14-4c0382670f45): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 28 18:53:35 crc kubenswrapper[4721]: E0128 18:53:35.880806 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" 
pod="openstack/rabbitmq-server-0" podUID="ec1e1de9-b144-4c34-bb14-4c0382670f45" Jan 28 18:53:36 crc kubenswrapper[4721]: E0128 18:53:36.661618 4721 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Jan 28 18:53:36 crc kubenswrapper[4721]: E0128 18:53:36.661804 4721 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:ndfhb5h667h568h584h5f9h58dh565h664h587h597h577h64bh5c4h66fh647hbdh68ch5c5h68dh686h5f7h64hd7hc6h55fh57bh98h57fh87h5fh57fq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-md2vn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-78dd6ddcc-qpxgp_openstack(40509d95-6418-4f4c-96a3-374874891872): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 28 18:53:36 crc kubenswrapper[4721]: E0128 18:53:36.663408 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-78dd6ddcc-qpxgp" podUID="40509d95-6418-4f4c-96a3-374874891872" Jan 28 18:53:36 crc kubenswrapper[4721]: E0128 18:53:36.676482 4721 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Jan 28 18:53:36 crc kubenswrapper[4721]: E0128 18:53:36.676702 4721 kuberuntime_manager.go:1274] "Unhandled Error" err="init container 
&Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n659h4h664hbh658h587h67ch89h587h8fh679hc6hf9h55fh644h5d5h698h68dh5cdh5ffh669h54ch9h689hb8hd4h5bfhd8h5d7h5fh665h574q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tfjld,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-57d769cc4f-vc8rk_openstack(aecb4886-3e12-46f5-b2dd-20260e64e4c7): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 28 18:53:36 crc kubenswrapper[4721]: E0128 18:53:36.682538 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-57d769cc4f-vc8rk" podUID="aecb4886-3e12-46f5-b2dd-20260e64e4c7" Jan 28 18:53:36 crc kubenswrapper[4721]: E0128 18:53:36.695547 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified\\\"\"" pod="openstack/rabbitmq-server-0" podUID="ec1e1de9-b144-4c34-bb14-4c0382670f45" Jan 28 18:53:36 crc kubenswrapper[4721]: E0128 18:53:36.695801 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified\\\"\"" pod="openstack/rabbitmq-cell1-server-0" podUID="dc56a986-671d-4f17-8386-939d0fd9394a" Jan 28 18:53:36 crc kubenswrapper[4721]: E0128 18:53:36.731831 4721 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled 
desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Jan 28 18:53:36 crc kubenswrapper[4721]: E0128 18:53:36.733099 4721 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n68chd6h679hbfh55fhc6h5ffh5d8h94h56ch589hb4hc5h57bh677hcdh655h8dh667h675h654h66ch567h8fh659h5b4h675h566h55bh54h67dh6dq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xl78z,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-666b6646f7-59s7w_openstack(dafbdcb9-9fbe-40c2-920d-6111bf0e2d88): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 28 18:53:36 crc kubenswrapper[4721]: E0128 18:53:36.734816 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-666b6646f7-59s7w" podUID="dafbdcb9-9fbe-40c2-920d-6111bf0e2d88" Jan 28 18:53:36 crc kubenswrapper[4721]: E0128 18:53:36.744337 4721 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Jan 28 18:53:36 crc kubenswrapper[4721]: E0128 18:53:36.744666 4721 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug 
--bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nffh5bdhf4h5f8h79h55h77h58fh56dh7bh6fh578hbch55dh68h56bhd9h65dh57ch658hc9h566h666h688h58h65dh684h5d7h6ch575h5d6h88q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fvdjd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-675f4bcbfc-nzfk5_openstack(b1c7fb27-5095-4102-89b3-5b2e10ff6347): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 28 18:53:36 crc kubenswrapper[4721]: E0128 18:53:36.756379 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-675f4bcbfc-nzfk5" podUID="b1c7fb27-5095-4102-89b3-5b2e10ff6347" Jan 28 18:53:37 crc kubenswrapper[4721]: E0128 18:53:37.703750 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified\\\"\"" pod="openstack/dnsmasq-dns-57d769cc4f-vc8rk" podUID="aecb4886-3e12-46f5-b2dd-20260e64e4c7" Jan 28 18:53:37 crc kubenswrapper[4721]: E0128 18:53:37.703847 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified\\\"\"" pod="openstack/dnsmasq-dns-666b6646f7-59s7w" podUID="dafbdcb9-9fbe-40c2-920d-6111bf0e2d88" Jan 28 18:53:39 crc kubenswrapper[4721]: I0128 18:53:39.203197 4721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-qpxgp" Jan 28 18:53:39 crc kubenswrapper[4721]: I0128 18:53:39.208276 4721 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-nzfk5" Jan 28 18:53:39 crc kubenswrapper[4721]: I0128 18:53:39.245953 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-md2vn\" (UniqueName: \"kubernetes.io/projected/40509d95-6418-4f4c-96a3-374874891872-kube-api-access-md2vn\") pod \"40509d95-6418-4f4c-96a3-374874891872\" (UID: \"40509d95-6418-4f4c-96a3-374874891872\") " Jan 28 18:53:39 crc kubenswrapper[4721]: I0128 18:53:39.246046 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/40509d95-6418-4f4c-96a3-374874891872-config\") pod \"40509d95-6418-4f4c-96a3-374874891872\" (UID: \"40509d95-6418-4f4c-96a3-374874891872\") " Jan 28 18:53:39 crc kubenswrapper[4721]: I0128 18:53:39.246283 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b1c7fb27-5095-4102-89b3-5b2e10ff6347-config\") pod \"b1c7fb27-5095-4102-89b3-5b2e10ff6347\" (UID: \"b1c7fb27-5095-4102-89b3-5b2e10ff6347\") " Jan 28 18:53:39 crc kubenswrapper[4721]: I0128 18:53:39.246317 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fvdjd\" (UniqueName: \"kubernetes.io/projected/b1c7fb27-5095-4102-89b3-5b2e10ff6347-kube-api-access-fvdjd\") pod \"b1c7fb27-5095-4102-89b3-5b2e10ff6347\" (UID: \"b1c7fb27-5095-4102-89b3-5b2e10ff6347\") " Jan 28 18:53:39 crc kubenswrapper[4721]: I0128 18:53:39.246410 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/40509d95-6418-4f4c-96a3-374874891872-dns-svc\") pod \"40509d95-6418-4f4c-96a3-374874891872\" (UID: \"40509d95-6418-4f4c-96a3-374874891872\") " Jan 28 18:53:39 crc kubenswrapper[4721]: I0128 18:53:39.247723 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/40509d95-6418-4f4c-96a3-374874891872-config" (OuterVolumeSpecName: "config") pod "40509d95-6418-4f4c-96a3-374874891872" (UID: "40509d95-6418-4f4c-96a3-374874891872"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:53:39 crc kubenswrapper[4721]: I0128 18:53:39.248239 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b1c7fb27-5095-4102-89b3-5b2e10ff6347-config" (OuterVolumeSpecName: "config") pod "b1c7fb27-5095-4102-89b3-5b2e10ff6347" (UID: "b1c7fb27-5095-4102-89b3-5b2e10ff6347"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:53:39 crc kubenswrapper[4721]: I0128 18:53:39.249259 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/40509d95-6418-4f4c-96a3-374874891872-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "40509d95-6418-4f4c-96a3-374874891872" (UID: "40509d95-6418-4f4c-96a3-374874891872"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:53:39 crc kubenswrapper[4721]: I0128 18:53:39.253916 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b1c7fb27-5095-4102-89b3-5b2e10ff6347-kube-api-access-fvdjd" (OuterVolumeSpecName: "kube-api-access-fvdjd") pod "b1c7fb27-5095-4102-89b3-5b2e10ff6347" (UID: "b1c7fb27-5095-4102-89b3-5b2e10ff6347"). InnerVolumeSpecName "kube-api-access-fvdjd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:53:39 crc kubenswrapper[4721]: I0128 18:53:39.254940 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/40509d95-6418-4f4c-96a3-374874891872-kube-api-access-md2vn" (OuterVolumeSpecName: "kube-api-access-md2vn") pod "40509d95-6418-4f4c-96a3-374874891872" (UID: "40509d95-6418-4f4c-96a3-374874891872"). InnerVolumeSpecName "kube-api-access-md2vn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:53:39 crc kubenswrapper[4721]: I0128 18:53:39.350780 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-md2vn\" (UniqueName: \"kubernetes.io/projected/40509d95-6418-4f4c-96a3-374874891872-kube-api-access-md2vn\") on node \"crc\" DevicePath \"\"" Jan 28 18:53:39 crc kubenswrapper[4721]: I0128 18:53:39.351256 4721 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/40509d95-6418-4f4c-96a3-374874891872-config\") on node \"crc\" DevicePath \"\"" Jan 28 18:53:39 crc kubenswrapper[4721]: I0128 18:53:39.351268 4721 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b1c7fb27-5095-4102-89b3-5b2e10ff6347-config\") on node \"crc\" DevicePath \"\"" Jan 28 18:53:39 crc kubenswrapper[4721]: I0128 18:53:39.351278 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fvdjd\" (UniqueName: \"kubernetes.io/projected/b1c7fb27-5095-4102-89b3-5b2e10ff6347-kube-api-access-fvdjd\") on node \"crc\" DevicePath \"\"" Jan 28 18:53:39 crc kubenswrapper[4721]: I0128 18:53:39.351292 4721 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/40509d95-6418-4f4c-96a3-374874891872-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 28 18:53:39 crc kubenswrapper[4721]: I0128 18:53:39.743854 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-qpxgp" event={"ID":"40509d95-6418-4f4c-96a3-374874891872","Type":"ContainerDied","Data":"167c4f293fb360c21132188a3e3712790bc7c03d0372ff782ae83032846b64b7"} Jan 28 18:53:39 crc kubenswrapper[4721]: I0128 18:53:39.743938 4721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-qpxgp" Jan 28 18:53:39 crc kubenswrapper[4721]: I0128 18:53:39.763029 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-nzfk5" event={"ID":"b1c7fb27-5095-4102-89b3-5b2e10ff6347","Type":"ContainerDied","Data":"37447b1962108057963a9a298b62fbcb5aa50662fc6410b7a4882cb8c516bc32"} Jan 28 18:53:39 crc kubenswrapper[4721]: I0128 18:53:39.763161 4721 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-nzfk5" Jan 28 18:53:39 crc kubenswrapper[4721]: I0128 18:53:39.786902 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"00b26873-8c7a-4ea7-b334-873b01cc5d84","Type":"ContainerStarted","Data":"1541ba11d3bfe84816ca653d1b0c0a8e26d2032af5a58984df537688c7dac5ba"} Jan 28 18:53:39 crc kubenswrapper[4721]: I0128 18:53:39.839951 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"0e740af0-cd0c-4f3e-8be1-facce1656583","Type":"ContainerStarted","Data":"f4bbb1262835c4842d34bcd79fe77e09b28dc0372eb9af51c12abbb46b0aa444"} Jan 28 18:53:39 crc kubenswrapper[4721]: I0128 18:53:39.891433 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-qpxgp"] Jan 28 18:53:39 crc kubenswrapper[4721]: I0128 18:53:39.899764 4721 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-qpxgp"] Jan 28 18:53:39 crc kubenswrapper[4721]: I0128 18:53:39.918317 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-nzfk5"] Jan 28 18:53:39 crc kubenswrapper[4721]: I0128 18:53:39.920507 4721 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-nzfk5"] Jan 28 18:53:40 crc kubenswrapper[4721]: I0128 18:53:40.580751 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-lokistack-distributor-66dfd9bb-gzhlc"] Jan 28 18:53:40 crc kubenswrapper[4721]: I0128 18:53:40.598532 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-lokistack-gateway-7db4f4db8c-t9249"] Jan 28 18:53:40 crc kubenswrapper[4721]: W0128 18:53:40.635016 4721 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podded95a77_cbf2_4db7_b6b4_56fdf518717c.slice/crio-f2862f3fee80360c1e8a6c188b1cd2bf852ba221349bb3cef33ea903ed0b248d WatchSource:0}: Error finding container f2862f3fee80360c1e8a6c188b1cd2bf852ba221349bb3cef33ea903ed0b248d: Status 404 returned error can't find the container with id f2862f3fee80360c1e8a6c188b1cd2bf852ba221349bb3cef33ea903ed0b248d Jan 28 18:53:40 crc kubenswrapper[4721]: I0128 18:53:40.644284 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-sbclw"] Jan 28 18:53:40 crc kubenswrapper[4721]: W0128 18:53:40.653488 4721 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcd76eab6_6d1b_4d6b_9c42_3e667e081ce6.slice/crio-9c511b73ab5c8010760f117169d3ff69798186d887a4d7500e32eaa69835be95 WatchSource:0}: Error finding container 9c511b73ab5c8010760f117169d3ff69798186d887a4d7500e32eaa69835be95: Status 404 returned error can't find the container with id 9c511b73ab5c8010760f117169d3ff69798186d887a4d7500e32eaa69835be95 Jan 28 18:53:40 crc kubenswrapper[4721]: I0128 18:53:40.678310 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-lokistack-querier-795fd8f8cc-4gfwq"] Jan 28 18:53:40 crc kubenswrapper[4721]: I0128 18:53:40.688358 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-lokistack-query-frontend-5cd44666df-cd79j"] Jan 28 18:53:40 crc kubenswrapper[4721]: I0128 18:53:40.699911 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 28 18:53:40 crc kubenswrapper[4721]: I0128 18:53:40.713295 4721 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Jan 28 18:53:40 crc kubenswrapper[4721]: I0128 18:53:40.728632 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/alertmanager-metric-storage-0"] Jan 28 18:53:40 crc kubenswrapper[4721]: W0128 18:53:40.734946 4721 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod600f989b_3ac6_4fe8_9848_6b80319e8c66.slice/crio-37aee4e0a2df230acf857d5b8332b758ed175b7e0c8b950bf3d8e8e0991447c0 WatchSource:0}: Error finding container 37aee4e0a2df230acf857d5b8332b758ed175b7e0c8b950bf3d8e8e0991447c0: Status 404 returned error can't find the container with id 37aee4e0a2df230acf857d5b8332b758ed175b7e0c8b950bf3d8e8e0991447c0 Jan 28 18:53:40 crc kubenswrapper[4721]: W0128 18:53:40.737041 4721 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7be00819_ddfd_47d6_a7fc_430607636883.slice/crio-e89cab0fa991208730a40c66a184d1895930aab21bfea10659395f73e35e2b46 WatchSource:0}: Error finding container e89cab0fa991208730a40c66a184d1895930aab21bfea10659395f73e35e2b46: Status 404 returned error can't find the container with id e89cab0fa991208730a40c66a184d1895930aab21bfea10659395f73e35e2b46 Jan 28 18:53:40 crc kubenswrapper[4721]: W0128 18:53:40.742398 4721 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod95a1b67a_adb0_42f1_9fb8_32b01c443ede.slice/crio-33c18ebb0c76bd63d644ed1eb6f9e7436525600ad54d7bd698b5296731e3a298 WatchSource:0}: Error finding container 33c18ebb0c76bd63d644ed1eb6f9e7436525600ad54d7bd698b5296731e3a298: Status 404 returned error can't find the container with id 33c18ebb0c76bd63d644ed1eb6f9e7436525600ad54d7bd698b5296731e3a298 Jan 28 18:53:40 crc kubenswrapper[4721]: I0128 18:53:40.791350 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-lokistack-index-gateway-0"] Jan 28 18:53:40 crc kubenswrapper[4721]: E0128 18:53:40.796962 4721 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-state-metrics,Image:registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0,Command:[],Args:[--resources=pods --namespaces=openstack],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:http-metrics,HostPort:0,ContainerPort:8080,Protocol:TCP,HostIP:,},ContainerPort{Name:telemetry,HostPort:0,ContainerPort:8081,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-mtrqv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{0 8080 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod kube-state-metrics-0_openstack(5e16ae9a-515f-4c11-a048-84aedad18b0a): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 28 18:53:40 crc kubenswrapper[4721]: E0128 18:53:40.798219 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-state-metrics\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack/kube-state-metrics-0" podUID="5e16ae9a-515f-4c11-a048-84aedad18b0a" Jan 28 18:53:40 crc kubenswrapper[4721]: E0128 18:53:40.820658 4721 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:loki-ingester,Image:registry.redhat.io/openshift-logging/logging-loki-rhel9@sha256:2b491fcb180423632d30811515a439a7a7f41023c1cfe4780647f18969b85a1d,Command:[],Args:[-target=ingester -config.file=/etc/loki/config/config.yaml -runtime-config.file=/etc/loki/config/runtime-config.yaml -config.expand-env=true],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:3100,Protocol:TCP,HostIP:,},ContainerPort{Name:grpclb,HostPort:0,ContainerPort:9095,Protocol:TCP,HostIP:,},ContainerPort{Name:gossip-ring,HostPort:0,ContainerPort:7946,Protocol:TCP,HostIP:,},ContainerPort{Name:healthchecks,HostPort:0,ContainerPort:3101,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:AWS_ACCESS_KEY_ID,Value:,ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:&SecretKeySelector{LocalObjectReference:LocalObjectReference{Name:cloudkitty-loki-s3,},Key:access_key_id,Optional:nil,},},},EnvVar{Name:AWS_ACCESS_KEY_SECRET,Value:,ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:&SecretKeySelector{LocalObjectReference:LocalObjectReference{Name:cloudkitty-loki-s3,},Key:access_key_secret,Optional:nil,},},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:false,MountPath:/etc/loki/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:storage,ReadOnly:false,MountPath:/tmp/loki,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:wal,ReadOnly:false,MountPath:/tmp/wal,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cloudkitty-lokistack-ingester-http,ReadOnly:false,MountPath:/var/run/tls/http/server,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cloudkitty-loki-s3,ReadOnly:false,MountPath:/etc/storage/secrets,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cloudkitty-lokistack-ingester-grpc,ReadOnly:false,MountPath:/var/
run/tls/grpc/server,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cloudkitty-lokistack-ca-bundle,ReadOnly:false,MountPath:/var/run/ca,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9gclp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/loki/api/v1/status/buildinfo,Port:{0 3101 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:2,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:10,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/ready,Port:{0 3101 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cloudkitty-lokistack-ingester-0_openstack(742e65f6-66eb-4334-9328-b77d47d420d0): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 28 18:53:40 crc kubenswrapper[4721]: E0128 18:53:40.821512 4721 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:gateway,Image:registry.redhat.io/openshift-logging/lokistack-gateway-rhel9@sha256:74d61619b9420655da84bc9939e37f76040b437a70e9c96eeb3267f00dfe88ad,Command:[],Args:[--debug.name=lokistack-gateway --web.listen=0.0.0.0:8080 --web.internal.listen=0.0.0.0:8081 --web.healthchecks.url=https://localhost:8080 --log.level=warn --logs.read.endpoint=https://cloudkitty-lokistack-query-frontend-http.openstack.svc.cluster.local:3100 --logs.tail.endpoint=https://cloudkitty-lokistack-query-frontend-http.openstack.svc.cluster.local:3100 --logs.write.endpoint=https://cloudkitty-lokistack-distributor-http.openstack.svc.cluster.local:3100 --logs.write-timeout=4m0s --rbac.config=/etc/lokistack-gateway/rbac.yaml --tenants.config=/etc/lokistack-gateway/tenants.yaml --server.read-timeout=48s --server.write-timeout=6m0s --tls.min-version=VersionTLS12 --tls.server.cert-file=/var/run/tls/http/server/tls.crt --tls.server.key-file=/var/run/tls/http/server/tls.key --tls.healthchecks.server-ca-file=/var/run/ca/server/service-ca.crt --tls.healthchecks.server-name=cloudkitty-lokistack-gateway-http.openstack.svc.cluster.local --tls.internal.server.cert-file=/var/run/tls/http/server/tls.crt --tls.internal.server.key-file=/var/run/tls/http/server/tls.key --tls.min-version=VersionTLS12 
--tls.cipher-suites=TLS_AES_128_GCM_SHA256,TLS_AES_256_GCM_SHA384,TLS_CHACHA20_POLY1305_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256 --logs.tls.ca-file=/var/run/ca/upstream/service-ca.crt --logs.tls.cert-file=/var/run/tls/http/upstream/tls.crt --logs.tls.key-file=/var/run/tls/http/upstream/tls.key --tls.client-auth-type=RequestClientCert],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:8081,Protocol:TCP,HostIP:,},ContainerPort{Name:public,HostPort:0,ContainerPort:8080,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:rbac,ReadOnly:true,MountPath:/etc/lokistack-gateway/rbac.yaml,SubPath:rbac.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tenants,ReadOnly:true,MountPath:/etc/lokistack-gateway/tenants.yaml,SubPath:tenants.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:lokistack-gateway,ReadOnly:true,MountPath:/etc/lokistack-gateway/lokistack-gateway.rego,SubPath:lokistack-gateway.rego,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tls-secret,ReadOnly:true,MountPath:/var/run/tls/http/server,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cloudkitty-lokistack-gateway-client-http,ReadOnly:true,MountPath:/var/run/tls/http/upstream,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cloudkitty-lokistack-ca-bundle,ReadOnly:true,MountPath:/var/run/ca/upstream,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cloudkitty-lokistack-gateway-ca-bundle,ReadOnly:true,MountPath:/var/run/ca/server,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cloudkitty-ca-bundle,ReadOnly:false,MountPath:/var/run/tenants-ca/cloudkitty,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-92bhp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/live,Port:{0 8081 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:2,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:10,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/ready,Port:{0 8081 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:12,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cloudkitty-lokistack-gateway-7db4f4db8c-b6984_openstack(dffa61ba-c98d-446a-a4d0-34e1e15a093b): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 28 18:53:40 crc kubenswrapper[4721]: I0128 18:53:40.823012 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-lokistack-compactor-0"] Jan 28 18:53:40 crc kubenswrapper[4721]: E0128 18:53:40.823034 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gateway\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack/cloudkitty-lokistack-gateway-7db4f4db8c-b6984" podUID="dffa61ba-c98d-446a-a4d0-34e1e15a093b" Jan 28 18:53:40 crc kubenswrapper[4721]: E0128 18:53:40.828973 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"loki-ingester\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack/cloudkitty-lokistack-ingester-0" podUID="742e65f6-66eb-4334-9328-b77d47d420d0" Jan 28 18:53:40 crc kubenswrapper[4721]: I0128 18:53:40.835447 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-lokistack-gateway-7db4f4db8c-b6984"] Jan 28 18:53:40 crc kubenswrapper[4721]: I0128 18:53:40.842095 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 28 18:53:40 crc kubenswrapper[4721]: I0128 18:53:40.850726 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-lokistack-ingester-0"] Jan 28 18:53:40 crc kubenswrapper[4721]: I0128 18:53:40.855395 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-lokistack-compactor-0" event={"ID":"22863ebc-7f06-4697-a494-1e854030c803","Type":"ContainerStarted","Data":"30cff5b09c538ada044113f2bec816e280b093b6aa973b9a4b7248844f2539fc"} Jan 28 18:53:40 crc kubenswrapper[4721]: I0128 18:53:40.857102 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"5e16ae9a-515f-4c11-a048-84aedad18b0a","Type":"ContainerStarted","Data":"c71c75f04702394c98f8ebe01f6610d83b3246ad4240b89d15b62af57d867c9b"} Jan 28 18:53:40 crc kubenswrapper[4721]: E0128 18:53:40.858375 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-state-metrics\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0\\\"\"" pod="openstack/kube-state-metrics-0" podUID="5e16ae9a-515f-4c11-a048-84aedad18b0a" Jan 28 18:53:40 crc kubenswrapper[4721]: I0128 18:53:40.859348 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-lokistack-querier-795fd8f8cc-4gfwq" 
event={"ID":"cd76eab6-6d1b-4d6b-9c42-3e667e081ce6","Type":"ContainerStarted","Data":"9c511b73ab5c8010760f117169d3ff69798186d887a4d7500e32eaa69835be95"} Jan 28 18:53:40 crc kubenswrapper[4721]: I0128 18:53:40.861941 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-lokistack-ingester-0" event={"ID":"742e65f6-66eb-4334-9328-b77d47d420d0","Type":"ContainerStarted","Data":"4917d7323f950015fa56aaba03419454e9bf0eccdd553c6a2f245d292e3650b6"} Jan 28 18:53:40 crc kubenswrapper[4721]: I0128 18:53:40.863128 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-lokistack-query-frontend-5cd44666df-cd79j" event={"ID":"6be2127c-76cf-41fb-99d2-28a4e10a2b03","Type":"ContainerStarted","Data":"87c2fc604bae1f7673984176d1587c267eb98e6484fe2b77d39c076a52da6872"} Jan 28 18:53:40 crc kubenswrapper[4721]: I0128 18:53:40.864512 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"dc3781f4-04ef-40f3-b772-88deb9a9e3b6","Type":"ContainerStarted","Data":"d1c7216022dc45649031a414b476b1f5d1318c7a1ae7fb7a52780ebf8bfb148d"} Jan 28 18:53:40 crc kubenswrapper[4721]: E0128 18:53:40.866433 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"loki-ingester\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift-logging/logging-loki-rhel9@sha256:2b491fcb180423632d30811515a439a7a7f41023c1cfe4780647f18969b85a1d\\\"\"" pod="openstack/cloudkitty-lokistack-ingester-0" podUID="742e65f6-66eb-4334-9328-b77d47d420d0" Jan 28 18:53:40 crc kubenswrapper[4721]: I0128 18:53:40.866714 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-lokistack-gateway-7db4f4db8c-t9249" event={"ID":"ded95a77-cbf2-4db7-b6b4-56fdf518717c","Type":"ContainerStarted","Data":"f2862f3fee80360c1e8a6c188b1cd2bf852ba221349bb3cef33ea903ed0b248d"} Jan 28 18:53:40 crc kubenswrapper[4721]: I0128 18:53:40.869525 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-lokistack-gateway-7db4f4db8c-b6984" event={"ID":"dffa61ba-c98d-446a-a4d0-34e1e15a093b","Type":"ContainerStarted","Data":"e7c916a933e2a5c4938d228a6855f9d99f562e3c09053b7573687db64d5541bb"} Jan 28 18:53:40 crc kubenswrapper[4721]: E0128 18:53:40.870814 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gateway\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift-logging/lokistack-gateway-rhel9@sha256:74d61619b9420655da84bc9939e37f76040b437a70e9c96eeb3267f00dfe88ad\\\"\"" pod="openstack/cloudkitty-lokistack-gateway-7db4f4db8c-b6984" podUID="dffa61ba-c98d-446a-a4d0-34e1e15a093b" Jan 28 18:53:40 crc kubenswrapper[4721]: I0128 18:53:40.870861 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/alertmanager-metric-storage-0" event={"ID":"95a1b67a-adb0-42f1-9fb8-32b01c443ede","Type":"ContainerStarted","Data":"33c18ebb0c76bd63d644ed1eb6f9e7436525600ad54d7bd698b5296731e3a298"} Jan 28 18:53:40 crc kubenswrapper[4721]: I0128 18:53:40.874237 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-lokistack-distributor-66dfd9bb-gzhlc" event={"ID":"600f989b-3ac6-4fe8-9848-6b80319e8c66","Type":"ContainerStarted","Data":"37aee4e0a2df230acf857d5b8332b758ed175b7e0c8b950bf3d8e8e0991447c0"} Jan 28 18:53:40 crc kubenswrapper[4721]: I0128 18:53:40.876615 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-sbclw" 
event={"ID":"c391bae1-d3a9-4ccd-a868-d7263d9b0a28","Type":"ContainerStarted","Data":"dd4d036691d24c67cc257362dcb9395603e8f31d8c3adf1f66eeeb2428612b1f"} Jan 28 18:53:40 crc kubenswrapper[4721]: I0128 18:53:40.884511 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"7be00819-ddfd-47d6-a7fc-430607636883","Type":"ContainerStarted","Data":"e89cab0fa991208730a40c66a184d1895930aab21bfea10659395f73e35e2b46"} Jan 28 18:53:40 crc kubenswrapper[4721]: I0128 18:53:40.908789 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-lokistack-index-gateway-0" event={"ID":"e06ee4ac-7688-41ae-b0f0-13e7cfc042e7","Type":"ContainerStarted","Data":"c4644a2cff7beb09b17d842ed7a4cf384088d2f271d9ef9101114654040a5f6a"} Jan 28 18:53:41 crc kubenswrapper[4721]: I0128 18:53:41.334139 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 28 18:53:41 crc kubenswrapper[4721]: I0128 18:53:41.540834 4721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="40509d95-6418-4f4c-96a3-374874891872" path="/var/lib/kubelet/pods/40509d95-6418-4f4c-96a3-374874891872/volumes" Jan 28 18:53:41 crc kubenswrapper[4721]: I0128 18:53:41.541672 4721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b1c7fb27-5095-4102-89b3-5b2e10ff6347" path="/var/lib/kubelet/pods/b1c7fb27-5095-4102-89b3-5b2e10ff6347/volumes" Jan 28 18:53:41 crc kubenswrapper[4721]: I0128 18:53:41.918291 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"f4e58913-334f-484a-8e7d-e1ac86753dbe","Type":"ContainerStarted","Data":"6a518dfcdc69d207d65b7a5d796b2a09d9a8b4e92c53265e5dc8ad7a8be5cf1c"} Jan 28 18:53:41 crc kubenswrapper[4721]: E0128 18:53:41.920446 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gateway\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift-logging/lokistack-gateway-rhel9@sha256:74d61619b9420655da84bc9939e37f76040b437a70e9c96eeb3267f00dfe88ad\\\"\"" pod="openstack/cloudkitty-lokistack-gateway-7db4f4db8c-b6984" podUID="dffa61ba-c98d-446a-a4d0-34e1e15a093b" Jan 28 18:53:41 crc kubenswrapper[4721]: E0128 18:53:41.922280 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-state-metrics\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0\\\"\"" pod="openstack/kube-state-metrics-0" podUID="5e16ae9a-515f-4c11-a048-84aedad18b0a" Jan 28 18:53:41 crc kubenswrapper[4721]: E0128 18:53:41.922658 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"loki-ingester\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift-logging/logging-loki-rhel9@sha256:2b491fcb180423632d30811515a439a7a7f41023c1cfe4780647f18969b85a1d\\\"\"" pod="openstack/cloudkitty-lokistack-ingester-0" podUID="742e65f6-66eb-4334-9328-b77d47d420d0" Jan 28 18:53:42 crc kubenswrapper[4721]: I0128 18:53:42.091792 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 28 18:53:42 crc kubenswrapper[4721]: I0128 18:53:42.191161 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-djsj9"] Jan 28 18:53:43 crc kubenswrapper[4721]: W0128 18:53:43.009010 4721 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod88eb1b46_3d78_4f1f_b822_aa8562237980.slice/crio-9d0f5c1ec3018c9d5ed9703ad29fda92a1842563db04514e8f148201852a80c4 WatchSource:0}: Error finding container 9d0f5c1ec3018c9d5ed9703ad29fda92a1842563db04514e8f148201852a80c4: Status 404 returned error can't find the container with id 9d0f5c1ec3018c9d5ed9703ad29fda92a1842563db04514e8f148201852a80c4 Jan 28 18:53:43 crc kubenswrapper[4721]: W0128 18:53:43.009952 4721 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod284cf569_7d31_465c_9189_05f80f168989.slice/crio-d16c799a9dd4c5d5cf30dc74437e94875d9176953f14fc3ed9b87857cdcc15dd WatchSource:0}: Error finding container d16c799a9dd4c5d5cf30dc74437e94875d9176953f14fc3ed9b87857cdcc15dd: Status 404 returned error can't find the container with id d16c799a9dd4c5d5cf30dc74437e94875d9176953f14fc3ed9b87857cdcc15dd Jan 28 18:53:43 crc kubenswrapper[4721]: I0128 18:53:43.938188 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"284cf569-7d31-465c-9189-05f80f168989","Type":"ContainerStarted","Data":"d16c799a9dd4c5d5cf30dc74437e94875d9176953f14fc3ed9b87857cdcc15dd"} Jan 28 18:53:43 crc kubenswrapper[4721]: I0128 18:53:43.940374 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-djsj9" event={"ID":"88eb1b46-3d78-4f1f-b822-aa8562237980","Type":"ContainerStarted","Data":"9d0f5c1ec3018c9d5ed9703ad29fda92a1842563db04514e8f148201852a80c4"} Jan 28 18:53:44 crc kubenswrapper[4721]: I0128 18:53:44.951699 4721 generic.go:334] "Generic (PLEG): container finished" podID="0e740af0-cd0c-4f3e-8be1-facce1656583" containerID="f4bbb1262835c4842d34bcd79fe77e09b28dc0372eb9af51c12abbb46b0aa444" exitCode=0 Jan 28 18:53:44 crc kubenswrapper[4721]: I0128 18:53:44.951769 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"0e740af0-cd0c-4f3e-8be1-facce1656583","Type":"ContainerDied","Data":"f4bbb1262835c4842d34bcd79fe77e09b28dc0372eb9af51c12abbb46b0aa444"} Jan 28 18:53:45 crc kubenswrapper[4721]: I0128 18:53:45.963621 4721 generic.go:334] "Generic (PLEG): container finished" podID="00b26873-8c7a-4ea7-b334-873b01cc5d84" containerID="1541ba11d3bfe84816ca653d1b0c0a8e26d2032af5a58984df537688c7dac5ba" exitCode=0 Jan 28 18:53:45 crc kubenswrapper[4721]: I0128 18:53:45.963793 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"00b26873-8c7a-4ea7-b334-873b01cc5d84","Type":"ContainerDied","Data":"1541ba11d3bfe84816ca653d1b0c0a8e26d2032af5a58984df537688c7dac5ba"} Jan 28 18:53:48 crc kubenswrapper[4721]: I0128 18:53:48.996989 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"00b26873-8c7a-4ea7-b334-873b01cc5d84","Type":"ContainerStarted","Data":"1d7783eb149777f212b51ef1b74b59914cea799393e9afd613262dda8be2af01"} Jan 28 18:53:49 crc kubenswrapper[4721]: I0128 18:53:49.010673 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"0e740af0-cd0c-4f3e-8be1-facce1656583","Type":"ContainerStarted","Data":"e7c90d86c6e90ef45e9f3c35d6d2f26045546c3f3741ae031b0ec43c8f96d13b"} Jan 28 18:53:49 crc kubenswrapper[4721]: I0128 18:53:49.038851 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=23.580480361 podStartE2EDuration="45.038829028s" 
podCreationTimestamp="2026-01-28 18:53:04 +0000 UTC" firstStartedPulling="2026-01-28 18:53:17.751242279 +0000 UTC m=+1163.476547839" lastFinishedPulling="2026-01-28 18:53:39.209590946 +0000 UTC m=+1184.934896506" observedRunningTime="2026-01-28 18:53:49.026017857 +0000 UTC m=+1194.751323427" watchObservedRunningTime="2026-01-28 18:53:49.038829028 +0000 UTC m=+1194.764134588" Jan 28 18:53:49 crc kubenswrapper[4721]: I0128 18:53:49.075014 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=11.967772631999999 podStartE2EDuration="46.074987199s" podCreationTimestamp="2026-01-28 18:53:03 +0000 UTC" firstStartedPulling="2026-01-28 18:53:04.964834384 +0000 UTC m=+1150.690139944" lastFinishedPulling="2026-01-28 18:53:39.072048951 +0000 UTC m=+1184.797354511" observedRunningTime="2026-01-28 18:53:49.058239035 +0000 UTC m=+1194.783544605" watchObservedRunningTime="2026-01-28 18:53:49.074987199 +0000 UTC m=+1194.800292789" Jan 28 18:53:50 crc kubenswrapper[4721]: I0128 18:53:50.020147 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-lokistack-querier-795fd8f8cc-4gfwq" event={"ID":"cd76eab6-6d1b-4d6b-9c42-3e667e081ce6","Type":"ContainerStarted","Data":"a25b4f70454c64e25070216a8ed58771e5d9d57e29fdaac9b0896f010a92085f"} Jan 28 18:53:50 crc kubenswrapper[4721]: I0128 18:53:50.020564 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cloudkitty-lokistack-querier-795fd8f8cc-4gfwq" Jan 28 18:53:50 crc kubenswrapper[4721]: I0128 18:53:50.021538 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-djsj9" event={"ID":"88eb1b46-3d78-4f1f-b822-aa8562237980","Type":"ContainerStarted","Data":"e0c97395372dbe4a29019ebdaa31d28040d53ad11b622c546159798d69d06c42"} Jan 28 18:53:50 crc kubenswrapper[4721]: I0128 18:53:50.023160 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-lokistack-distributor-66dfd9bb-gzhlc" event={"ID":"600f989b-3ac6-4fe8-9848-6b80319e8c66","Type":"ContainerStarted","Data":"f19c86325ff2ba1d9c06f183cb115e7622223c35aa16e018299b943e1884c441"} Jan 28 18:53:50 crc kubenswrapper[4721]: I0128 18:53:50.023258 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cloudkitty-lokistack-distributor-66dfd9bb-gzhlc" Jan 28 18:53:50 crc kubenswrapper[4721]: I0128 18:53:50.024803 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"7be00819-ddfd-47d6-a7fc-430607636883","Type":"ContainerStarted","Data":"ecd6606eb4e0e392cd6116d15303fa70a7654ec7af2969f60efaf82ad3eb4c74"} Jan 28 18:53:50 crc kubenswrapper[4721]: I0128 18:53:50.024955 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0" Jan 28 18:53:50 crc kubenswrapper[4721]: I0128 18:53:50.025921 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-lokistack-index-gateway-0" event={"ID":"e06ee4ac-7688-41ae-b0f0-13e7cfc042e7","Type":"ContainerStarted","Data":"08979ea5583902c062603544e1b0cacca8b726c54e098516c70f1f8e3e9c1c3e"} Jan 28 18:53:50 crc kubenswrapper[4721]: I0128 18:53:50.026082 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cloudkitty-lokistack-index-gateway-0" Jan 28 18:53:50 crc kubenswrapper[4721]: I0128 18:53:50.027293 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-lokistack-gateway-7db4f4db8c-t9249" 
event={"ID":"ded95a77-cbf2-4db7-b6b4-56fdf518717c","Type":"ContainerStarted","Data":"574f065b43db01edd85c01e880563537c2692a3c63eb3974edee3ef1cd7b8bab"} Jan 28 18:53:50 crc kubenswrapper[4721]: I0128 18:53:50.027575 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cloudkitty-lokistack-gateway-7db4f4db8c-t9249" Jan 28 18:53:50 crc kubenswrapper[4721]: I0128 18:53:50.029418 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-lokistack-compactor-0" event={"ID":"22863ebc-7f06-4697-a494-1e854030c803","Type":"ContainerStarted","Data":"ab213caa8c58c24d274ca5788d6db2acb429feb26f254027f57ef157ccf699ea"} Jan 28 18:53:50 crc kubenswrapper[4721]: I0128 18:53:50.029844 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cloudkitty-lokistack-compactor-0" Jan 28 18:53:50 crc kubenswrapper[4721]: I0128 18:53:50.041732 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cloudkitty-lokistack-gateway-7db4f4db8c-t9249" Jan 28 18:53:50 crc kubenswrapper[4721]: I0128 18:53:50.045746 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cloudkitty-lokistack-querier-795fd8f8cc-4gfwq" podStartSLOduration=25.110617342 podStartE2EDuration="33.045725741s" podCreationTimestamp="2026-01-28 18:53:17 +0000 UTC" firstStartedPulling="2026-01-28 18:53:40.672336047 +0000 UTC m=+1186.397641607" lastFinishedPulling="2026-01-28 18:53:48.607444446 +0000 UTC m=+1194.332750006" observedRunningTime="2026-01-28 18:53:50.038352711 +0000 UTC m=+1195.763658271" watchObservedRunningTime="2026-01-28 18:53:50.045725741 +0000 UTC m=+1195.771031301" Jan 28 18:53:50 crc kubenswrapper[4721]: I0128 18:53:50.064685 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cloudkitty-lokistack-distributor-66dfd9bb-gzhlc" podStartSLOduration=27.087473234 podStartE2EDuration="34.064659884s" podCreationTimestamp="2026-01-28 18:53:16 +0000 UTC" firstStartedPulling="2026-01-28 18:53:40.740959524 +0000 UTC m=+1186.466265084" lastFinishedPulling="2026-01-28 18:53:47.718146174 +0000 UTC m=+1193.443451734" observedRunningTime="2026-01-28 18:53:50.054495416 +0000 UTC m=+1195.779800986" watchObservedRunningTime="2026-01-28 18:53:50.064659884 +0000 UTC m=+1195.789965444" Jan 28 18:53:50 crc kubenswrapper[4721]: I0128 18:53:50.080268 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cloudkitty-lokistack-index-gateway-0" podStartSLOduration=25.242708535 podStartE2EDuration="33.080245992s" podCreationTimestamp="2026-01-28 18:53:17 +0000 UTC" firstStartedPulling="2026-01-28 18:53:40.764383707 +0000 UTC m=+1186.489689257" lastFinishedPulling="2026-01-28 18:53:48.601921154 +0000 UTC m=+1194.327226714" observedRunningTime="2026-01-28 18:53:50.074644206 +0000 UTC m=+1195.799949776" watchObservedRunningTime="2026-01-28 18:53:50.080245992 +0000 UTC m=+1195.805551552" Jan 28 18:53:50 crc kubenswrapper[4721]: I0128 18:53:50.096524 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cloudkitty-lokistack-compactor-0" podStartSLOduration=25.239136914 podStartE2EDuration="33.09649995s" podCreationTimestamp="2026-01-28 18:53:17 +0000 UTC" firstStartedPulling="2026-01-28 18:53:40.783219707 +0000 UTC m=+1186.508525267" lastFinishedPulling="2026-01-28 18:53:48.640582743 +0000 UTC m=+1194.365888303" observedRunningTime="2026-01-28 18:53:50.094390965 +0000 UTC m=+1195.819696525" 
watchObservedRunningTime="2026-01-28 18:53:50.09649995 +0000 UTC m=+1195.821805510" Jan 28 18:53:50 crc kubenswrapper[4721]: I0128 18:53:50.126106 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=38.466309861 podStartE2EDuration="45.126083407s" podCreationTimestamp="2026-01-28 18:53:05 +0000 UTC" firstStartedPulling="2026-01-28 18:53:40.74085168 +0000 UTC m=+1186.466157240" lastFinishedPulling="2026-01-28 18:53:47.400625226 +0000 UTC m=+1193.125930786" observedRunningTime="2026-01-28 18:53:50.118622223 +0000 UTC m=+1195.843927793" watchObservedRunningTime="2026-01-28 18:53:50.126083407 +0000 UTC m=+1195.851388967" Jan 28 18:53:50 crc kubenswrapper[4721]: I0128 18:53:50.194782 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cloudkitty-lokistack-gateway-7db4f4db8c-t9249" podStartSLOduration=26.796128744 podStartE2EDuration="33.194759446s" podCreationTimestamp="2026-01-28 18:53:17 +0000 UTC" firstStartedPulling="2026-01-28 18:53:40.643751942 +0000 UTC m=+1186.369057502" lastFinishedPulling="2026-01-28 18:53:47.042382644 +0000 UTC m=+1192.767688204" observedRunningTime="2026-01-28 18:53:50.187900662 +0000 UTC m=+1195.913206222" watchObservedRunningTime="2026-01-28 18:53:50.194759446 +0000 UTC m=+1195.920065006" Jan 28 18:53:51 crc kubenswrapper[4721]: I0128 18:53:51.040590 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"f4e58913-334f-484a-8e7d-e1ac86753dbe","Type":"ContainerStarted","Data":"89b115e41161fd171f7a2e0a992fb3d28beeb9a056dc4b0723aa7fb70b87e665"} Jan 28 18:53:51 crc kubenswrapper[4721]: I0128 18:53:51.042452 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-lokistack-query-frontend-5cd44666df-cd79j" event={"ID":"6be2127c-76cf-41fb-99d2-28a4e10a2b03","Type":"ContainerStarted","Data":"2ae231f168dc80be1fd55dc8ed60d013eea7e8067cf43c9f0e6d8c24ba334e83"} Jan 28 18:53:51 crc kubenswrapper[4721]: I0128 18:53:51.042552 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cloudkitty-lokistack-query-frontend-5cd44666df-cd79j" Jan 28 18:53:51 crc kubenswrapper[4721]: I0128 18:53:51.045085 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"284cf569-7d31-465c-9189-05f80f168989","Type":"ContainerStarted","Data":"1950aa8c9c6982aa1cf67aabf117d81a5c4aec360827b1a6236db08323351861"} Jan 28 18:53:51 crc kubenswrapper[4721]: I0128 18:53:51.049416 4721 generic.go:334] "Generic (PLEG): container finished" podID="88eb1b46-3d78-4f1f-b822-aa8562237980" containerID="e0c97395372dbe4a29019ebdaa31d28040d53ad11b622c546159798d69d06c42" exitCode=0 Jan 28 18:53:51 crc kubenswrapper[4721]: I0128 18:53:51.049511 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-djsj9" event={"ID":"88eb1b46-3d78-4f1f-b822-aa8562237980","Type":"ContainerDied","Data":"e0c97395372dbe4a29019ebdaa31d28040d53ad11b622c546159798d69d06c42"} Jan 28 18:53:51 crc kubenswrapper[4721]: I0128 18:53:51.052124 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-sbclw" event={"ID":"c391bae1-d3a9-4ccd-a868-d7263d9b0a28","Type":"ContainerStarted","Data":"5345e7113b78be5a34441d5c84b449fa652a9c62a09ff0094cceb43942b32f46"} Jan 28 18:53:51 crc kubenswrapper[4721]: I0128 18:53:51.052530 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-sbclw" Jan 28 18:53:51 crc 
kubenswrapper[4721]: I0128 18:53:51.066493 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cloudkitty-lokistack-query-frontend-5cd44666df-cd79j" podStartSLOduration=26.107645757 podStartE2EDuration="34.066471269s" podCreationTimestamp="2026-01-28 18:53:17 +0000 UTC" firstStartedPulling="2026-01-28 18:53:40.681623587 +0000 UTC m=+1186.406929147" lastFinishedPulling="2026-01-28 18:53:48.640449099 +0000 UTC m=+1194.365754659" observedRunningTime="2026-01-28 18:53:51.05789507 +0000 UTC m=+1196.783200640" watchObservedRunningTime="2026-01-28 18:53:51.066471269 +0000 UTC m=+1196.791776829" Jan 28 18:53:51 crc kubenswrapper[4721]: I0128 18:53:51.086471 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-sbclw" podStartSLOduration=32.116460762 podStartE2EDuration="40.086451734s" podCreationTimestamp="2026-01-28 18:53:11 +0000 UTC" firstStartedPulling="2026-01-28 18:53:40.637448934 +0000 UTC m=+1186.362754494" lastFinishedPulling="2026-01-28 18:53:48.607439906 +0000 UTC m=+1194.332745466" observedRunningTime="2026-01-28 18:53:51.08377472 +0000 UTC m=+1196.809080310" watchObservedRunningTime="2026-01-28 18:53:51.086451734 +0000 UTC m=+1196.811757294" Jan 28 18:53:53 crc kubenswrapper[4721]: I0128 18:53:53.086253 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/alertmanager-metric-storage-0" event={"ID":"95a1b67a-adb0-42f1-9fb8-32b01c443ede","Type":"ContainerStarted","Data":"148a777a9577ad48cb4694a4b71fbafbdb89f313a36e6a9a6ff343e1d0a369f1"} Jan 28 18:53:53 crc kubenswrapper[4721]: I0128 18:53:53.091929 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"dc3781f4-04ef-40f3-b772-88deb9a9e3b6","Type":"ContainerStarted","Data":"e1472b3e544be64b3e29964dc712e9e0c6c5bd0aeed58e5b9bad95265232217c"} Jan 28 18:53:53 crc kubenswrapper[4721]: I0128 18:53:53.095496 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"dc56a986-671d-4f17-8386-939d0fd9394a","Type":"ContainerStarted","Data":"f7340b42defbd0e6762eaa0362961d4ee3d0113dc7766d3deb6846418878cdd1"} Jan 28 18:53:53 crc kubenswrapper[4721]: I0128 18:53:53.098453 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-djsj9" event={"ID":"88eb1b46-3d78-4f1f-b822-aa8562237980","Type":"ContainerStarted","Data":"8838b6289c86dc56e2eb455d502d2f3a242ae9709573b87a2d20fad3ad1e9cc9"} Jan 28 18:53:53 crc kubenswrapper[4721]: I0128 18:53:53.100354 4721 generic.go:334] "Generic (PLEG): container finished" podID="dafbdcb9-9fbe-40c2-920d-6111bf0e2d88" containerID="1d538e393ca91bbeb837e75b6debff2f56a274b53b10f9112872486504abcbb6" exitCode=0 Jan 28 18:53:53 crc kubenswrapper[4721]: I0128 18:53:53.100512 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-59s7w" event={"ID":"dafbdcb9-9fbe-40c2-920d-6111bf0e2d88","Type":"ContainerDied","Data":"1d538e393ca91bbeb837e75b6debff2f56a274b53b10f9112872486504abcbb6"} Jan 28 18:53:54 crc kubenswrapper[4721]: I0128 18:53:54.118323 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"ec1e1de9-b144-4c34-bb14-4c0382670f45","Type":"ContainerStarted","Data":"1744104dd2c6db657749ff29714a2574a58c6368538f7d3e645044ef7a0b215d"} Jan 28 18:53:54 crc kubenswrapper[4721]: I0128 18:53:54.505677 4721 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0" Jan 28 
18:53:54 crc kubenswrapper[4721]: I0128 18:53:54.505768 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0" Jan 28 18:53:55 crc kubenswrapper[4721]: I0128 18:53:55.130139 4721 generic.go:334] "Generic (PLEG): container finished" podID="aecb4886-3e12-46f5-b2dd-20260e64e4c7" containerID="5e135d6440af40c8c0b7212a6d5dccd74d2442655a3bdd266d811e697bb4d9b1" exitCode=0 Jan 28 18:53:55 crc kubenswrapper[4721]: I0128 18:53:55.130306 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-vc8rk" event={"ID":"aecb4886-3e12-46f5-b2dd-20260e64e4c7","Type":"ContainerDied","Data":"5e135d6440af40c8c0b7212a6d5dccd74d2442655a3bdd266d811e697bb4d9b1"} Jan 28 18:53:55 crc kubenswrapper[4721]: I0128 18:53:55.136135 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"284cf569-7d31-465c-9189-05f80f168989","Type":"ContainerStarted","Data":"21f31315ba9fc4a6197114ace3d43acfda1f78b31554aa7b2dc1487be546fce6"} Jan 28 18:53:55 crc kubenswrapper[4721]: I0128 18:53:55.140777 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-djsj9" event={"ID":"88eb1b46-3d78-4f1f-b822-aa8562237980","Type":"ContainerStarted","Data":"bc93348fb262b1e302988e8da7032d83c9449a5d841bf73a4a3e52cf303cbd57"} Jan 28 18:53:55 crc kubenswrapper[4721]: I0128 18:53:55.141213 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-djsj9" Jan 28 18:53:55 crc kubenswrapper[4721]: I0128 18:53:55.141278 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-djsj9" Jan 28 18:53:55 crc kubenswrapper[4721]: I0128 18:53:55.145465 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-59s7w" event={"ID":"dafbdcb9-9fbe-40c2-920d-6111bf0e2d88","Type":"ContainerStarted","Data":"1e81e6a440865f58a8a00ad6f945396888eecb4ac069b57fdc0548d00edf0fdf"} Jan 28 18:53:55 crc kubenswrapper[4721]: I0128 18:53:55.146240 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-666b6646f7-59s7w" Jan 28 18:53:55 crc kubenswrapper[4721]: I0128 18:53:55.147921 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-lokistack-ingester-0" event={"ID":"742e65f6-66eb-4334-9328-b77d47d420d0","Type":"ContainerStarted","Data":"d561125daad3f57159c930d24ce679e3f68745cb489285b3b997b681635eb3b1"} Jan 28 18:53:55 crc kubenswrapper[4721]: I0128 18:53:55.148314 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cloudkitty-lokistack-ingester-0" Jan 28 18:53:55 crc kubenswrapper[4721]: I0128 18:53:55.150282 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"f4e58913-334f-484a-8e7d-e1ac86753dbe","Type":"ContainerStarted","Data":"16744565829a9bf270f0ec77bbb5a00cbbbb9db4014e8d95dbe554d313c954dd"} Jan 28 18:53:55 crc kubenswrapper[4721]: I0128 18:53:55.202224 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cloudkitty-lokistack-ingester-0" podStartSLOduration=-9223371998.652576 podStartE2EDuration="38.202199738s" podCreationTimestamp="2026-01-28 18:53:17 +0000 UTC" firstStartedPulling="2026-01-28 18:53:40.820491633 +0000 UTC m=+1186.545797193" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:53:55.189609473 +0000 UTC m=+1200.914915053" watchObservedRunningTime="2026-01-28 
18:53:55.202199738 +0000 UTC m=+1200.927505298" Jan 28 18:53:55 crc kubenswrapper[4721]: I0128 18:53:55.230557 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=28.581312159 podStartE2EDuration="40.230532483s" podCreationTimestamp="2026-01-28 18:53:15 +0000 UTC" firstStartedPulling="2026-01-28 18:53:43.014041316 +0000 UTC m=+1188.739346876" lastFinishedPulling="2026-01-28 18:53:54.66326164 +0000 UTC m=+1200.388567200" observedRunningTime="2026-01-28 18:53:55.226485967 +0000 UTC m=+1200.951791537" watchObservedRunningTime="2026-01-28 18:53:55.230532483 +0000 UTC m=+1200.955838043" Jan 28 18:53:55 crc kubenswrapper[4721]: I0128 18:53:55.286876 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-666b6646f7-59s7w" podStartSLOduration=4.822840718 podStartE2EDuration="54.286854556s" podCreationTimestamp="2026-01-28 18:53:01 +0000 UTC" firstStartedPulling="2026-01-28 18:53:02.498591018 +0000 UTC m=+1148.223896578" lastFinishedPulling="2026-01-28 18:53:51.962604856 +0000 UTC m=+1197.687910416" observedRunningTime="2026-01-28 18:53:55.283022157 +0000 UTC m=+1201.008327717" watchObservedRunningTime="2026-01-28 18:53:55.286854556 +0000 UTC m=+1201.012160116" Jan 28 18:53:55 crc kubenswrapper[4721]: I0128 18:53:55.290409 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=31.997829642 podStartE2EDuration="45.290386177s" podCreationTimestamp="2026-01-28 18:53:10 +0000 UTC" firstStartedPulling="2026-01-28 18:53:41.344303318 +0000 UTC m=+1187.069608878" lastFinishedPulling="2026-01-28 18:53:54.636859843 +0000 UTC m=+1200.362165413" observedRunningTime="2026-01-28 18:53:55.259770509 +0000 UTC m=+1200.985076069" watchObservedRunningTime="2026-01-28 18:53:55.290386177 +0000 UTC m=+1201.015691737" Jan 28 18:53:55 crc kubenswrapper[4721]: I0128 18:53:55.311185 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ovs-djsj9" podStartSLOduration=38.741332526 podStartE2EDuration="44.311136497s" podCreationTimestamp="2026-01-28 18:53:11 +0000 UTC" firstStartedPulling="2026-01-28 18:53:43.015161482 +0000 UTC m=+1188.740467042" lastFinishedPulling="2026-01-28 18:53:48.584965443 +0000 UTC m=+1194.310271013" observedRunningTime="2026-01-28 18:53:55.303388684 +0000 UTC m=+1201.028694254" watchObservedRunningTime="2026-01-28 18:53:55.311136497 +0000 UTC m=+1201.036442067" Jan 28 18:53:55 crc kubenswrapper[4721]: I0128 18:53:55.662448 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0" Jan 28 18:53:55 crc kubenswrapper[4721]: I0128 18:53:55.662890 4721 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Jan 28 18:53:56 crc kubenswrapper[4721]: I0128 18:53:56.045711 4721 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0" Jan 28 18:53:56 crc kubenswrapper[4721]: I0128 18:53:56.097413 4721 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0" Jan 28 18:53:56 crc kubenswrapper[4721]: I0128 18:53:56.133432 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0" Jan 28 18:53:56 crc kubenswrapper[4721]: I0128 18:53:56.160143 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-vc8rk" 
event={"ID":"aecb4886-3e12-46f5-b2dd-20260e64e4c7","Type":"ContainerStarted","Data":"1e3b225685548a877b08564af4b407871263c0c268b69d5776a7e29398768945"} Jan 28 18:53:56 crc kubenswrapper[4721]: I0128 18:53:56.161049 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0" Jan 28 18:53:56 crc kubenswrapper[4721]: I0128 18:53:56.196209 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-57d769cc4f-vc8rk" podStartSLOduration=-9223371981.65862 podStartE2EDuration="55.196155106s" podCreationTimestamp="2026-01-28 18:53:01 +0000 UTC" firstStartedPulling="2026-01-28 18:53:02.928304137 +0000 UTC m=+1148.653609697" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:53:56.183929403 +0000 UTC m=+1201.909234963" watchObservedRunningTime="2026-01-28 18:53:56.196155106 +0000 UTC m=+1201.921460666" Jan 28 18:53:56 crc kubenswrapper[4721]: I0128 18:53:56.213517 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0" Jan 28 18:53:56 crc kubenswrapper[4721]: I0128 18:53:56.573072 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-vc8rk"] Jan 28 18:53:56 crc kubenswrapper[4721]: I0128 18:53:56.608514 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7f896c8c65-m5tkl"] Jan 28 18:53:56 crc kubenswrapper[4721]: I0128 18:53:56.610366 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7f896c8c65-m5tkl" Jan 28 18:53:56 crc kubenswrapper[4721]: I0128 18:53:56.615258 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb" Jan 28 18:53:56 crc kubenswrapper[4721]: I0128 18:53:56.643024 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7f896c8c65-m5tkl"] Jan 28 18:53:56 crc kubenswrapper[4721]: I0128 18:53:56.753992 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-dmttf"] Jan 28 18:53:56 crc kubenswrapper[4721]: I0128 18:53:56.755805 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-metrics-dmttf" Jan 28 18:53:56 crc kubenswrapper[4721]: I0128 18:53:56.759525 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config" Jan 28 18:53:56 crc kubenswrapper[4721]: I0128 18:53:56.763297 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b02cc010-e156-405e-aac3-45c2afa254ac-config\") pod \"dnsmasq-dns-7f896c8c65-m5tkl\" (UID: \"b02cc010-e156-405e-aac3-45c2afa254ac\") " pod="openstack/dnsmasq-dns-7f896c8c65-m5tkl" Jan 28 18:53:56 crc kubenswrapper[4721]: I0128 18:53:56.763409 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b02cc010-e156-405e-aac3-45c2afa254ac-ovsdbserver-sb\") pod \"dnsmasq-dns-7f896c8c65-m5tkl\" (UID: \"b02cc010-e156-405e-aac3-45c2afa254ac\") " pod="openstack/dnsmasq-dns-7f896c8c65-m5tkl" Jan 28 18:53:56 crc kubenswrapper[4721]: I0128 18:53:56.763484 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hvjw5\" (UniqueName: \"kubernetes.io/projected/b02cc010-e156-405e-aac3-45c2afa254ac-kube-api-access-hvjw5\") pod \"dnsmasq-dns-7f896c8c65-m5tkl\" (UID: \"b02cc010-e156-405e-aac3-45c2afa254ac\") " pod="openstack/dnsmasq-dns-7f896c8c65-m5tkl" Jan 28 18:53:56 crc kubenswrapper[4721]: I0128 18:53:56.763527 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b02cc010-e156-405e-aac3-45c2afa254ac-dns-svc\") pod \"dnsmasq-dns-7f896c8c65-m5tkl\" (UID: \"b02cc010-e156-405e-aac3-45c2afa254ac\") " pod="openstack/dnsmasq-dns-7f896c8c65-m5tkl" Jan 28 18:53:56 crc kubenswrapper[4721]: I0128 18:53:56.826350 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-dmttf"] Jan 28 18:53:56 crc kubenswrapper[4721]: I0128 18:53:56.866081 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b02cc010-e156-405e-aac3-45c2afa254ac-config\") pod \"dnsmasq-dns-7f896c8c65-m5tkl\" (UID: \"b02cc010-e156-405e-aac3-45c2afa254ac\") " pod="openstack/dnsmasq-dns-7f896c8c65-m5tkl" Jan 28 18:53:56 crc kubenswrapper[4721]: I0128 18:53:56.866201 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bacb5ba4-39a7-4774-818d-67453153a34f-config\") pod \"ovn-controller-metrics-dmttf\" (UID: \"bacb5ba4-39a7-4774-818d-67453153a34f\") " pod="openstack/ovn-controller-metrics-dmttf" Jan 28 18:53:56 crc kubenswrapper[4721]: I0128 18:53:56.866277 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/bacb5ba4-39a7-4774-818d-67453153a34f-ovs-rundir\") pod \"ovn-controller-metrics-dmttf\" (UID: \"bacb5ba4-39a7-4774-818d-67453153a34f\") " pod="openstack/ovn-controller-metrics-dmttf" Jan 28 18:53:56 crc kubenswrapper[4721]: I0128 18:53:56.866311 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b02cc010-e156-405e-aac3-45c2afa254ac-ovsdbserver-sb\") pod \"dnsmasq-dns-7f896c8c65-m5tkl\" (UID: \"b02cc010-e156-405e-aac3-45c2afa254ac\") " 
pod="openstack/dnsmasq-dns-7f896c8c65-m5tkl" Jan 28 18:53:56 crc kubenswrapper[4721]: I0128 18:53:56.866334 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8744s\" (UniqueName: \"kubernetes.io/projected/bacb5ba4-39a7-4774-818d-67453153a34f-kube-api-access-8744s\") pod \"ovn-controller-metrics-dmttf\" (UID: \"bacb5ba4-39a7-4774-818d-67453153a34f\") " pod="openstack/ovn-controller-metrics-dmttf" Jan 28 18:53:56 crc kubenswrapper[4721]: I0128 18:53:56.866364 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/bacb5ba4-39a7-4774-818d-67453153a34f-ovn-rundir\") pod \"ovn-controller-metrics-dmttf\" (UID: \"bacb5ba4-39a7-4774-818d-67453153a34f\") " pod="openstack/ovn-controller-metrics-dmttf" Jan 28 18:53:56 crc kubenswrapper[4721]: I0128 18:53:56.866442 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hvjw5\" (UniqueName: \"kubernetes.io/projected/b02cc010-e156-405e-aac3-45c2afa254ac-kube-api-access-hvjw5\") pod \"dnsmasq-dns-7f896c8c65-m5tkl\" (UID: \"b02cc010-e156-405e-aac3-45c2afa254ac\") " pod="openstack/dnsmasq-dns-7f896c8c65-m5tkl" Jan 28 18:53:56 crc kubenswrapper[4721]: I0128 18:53:56.866486 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b02cc010-e156-405e-aac3-45c2afa254ac-dns-svc\") pod \"dnsmasq-dns-7f896c8c65-m5tkl\" (UID: \"b02cc010-e156-405e-aac3-45c2afa254ac\") " pod="openstack/dnsmasq-dns-7f896c8c65-m5tkl" Jan 28 18:53:56 crc kubenswrapper[4721]: I0128 18:53:56.866543 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bacb5ba4-39a7-4774-818d-67453153a34f-combined-ca-bundle\") pod \"ovn-controller-metrics-dmttf\" (UID: \"bacb5ba4-39a7-4774-818d-67453153a34f\") " pod="openstack/ovn-controller-metrics-dmttf" Jan 28 18:53:56 crc kubenswrapper[4721]: I0128 18:53:56.866615 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/bacb5ba4-39a7-4774-818d-67453153a34f-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-dmttf\" (UID: \"bacb5ba4-39a7-4774-818d-67453153a34f\") " pod="openstack/ovn-controller-metrics-dmttf" Jan 28 18:53:56 crc kubenswrapper[4721]: I0128 18:53:56.867213 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b02cc010-e156-405e-aac3-45c2afa254ac-config\") pod \"dnsmasq-dns-7f896c8c65-m5tkl\" (UID: \"b02cc010-e156-405e-aac3-45c2afa254ac\") " pod="openstack/dnsmasq-dns-7f896c8c65-m5tkl" Jan 28 18:53:56 crc kubenswrapper[4721]: I0128 18:53:56.867713 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b02cc010-e156-405e-aac3-45c2afa254ac-ovsdbserver-sb\") pod \"dnsmasq-dns-7f896c8c65-m5tkl\" (UID: \"b02cc010-e156-405e-aac3-45c2afa254ac\") " pod="openstack/dnsmasq-dns-7f896c8c65-m5tkl" Jan 28 18:53:56 crc kubenswrapper[4721]: I0128 18:53:56.867846 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b02cc010-e156-405e-aac3-45c2afa254ac-dns-svc\") pod \"dnsmasq-dns-7f896c8c65-m5tkl\" (UID: \"b02cc010-e156-405e-aac3-45c2afa254ac\") " 
pod="openstack/dnsmasq-dns-7f896c8c65-m5tkl" Jan 28 18:53:56 crc kubenswrapper[4721]: I0128 18:53:56.900594 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hvjw5\" (UniqueName: \"kubernetes.io/projected/b02cc010-e156-405e-aac3-45c2afa254ac-kube-api-access-hvjw5\") pod \"dnsmasq-dns-7f896c8c65-m5tkl\" (UID: \"b02cc010-e156-405e-aac3-45c2afa254ac\") " pod="openstack/dnsmasq-dns-7f896c8c65-m5tkl" Jan 28 18:53:56 crc kubenswrapper[4721]: I0128 18:53:56.932010 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7f896c8c65-m5tkl" Jan 28 18:53:56 crc kubenswrapper[4721]: I0128 18:53:56.968007 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bacb5ba4-39a7-4774-818d-67453153a34f-combined-ca-bundle\") pod \"ovn-controller-metrics-dmttf\" (UID: \"bacb5ba4-39a7-4774-818d-67453153a34f\") " pod="openstack/ovn-controller-metrics-dmttf" Jan 28 18:53:56 crc kubenswrapper[4721]: I0128 18:53:56.968105 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/bacb5ba4-39a7-4774-818d-67453153a34f-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-dmttf\" (UID: \"bacb5ba4-39a7-4774-818d-67453153a34f\") " pod="openstack/ovn-controller-metrics-dmttf" Jan 28 18:53:56 crc kubenswrapper[4721]: I0128 18:53:56.968820 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bacb5ba4-39a7-4774-818d-67453153a34f-config\") pod \"ovn-controller-metrics-dmttf\" (UID: \"bacb5ba4-39a7-4774-818d-67453153a34f\") " pod="openstack/ovn-controller-metrics-dmttf" Jan 28 18:53:56 crc kubenswrapper[4721]: I0128 18:53:56.968906 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/bacb5ba4-39a7-4774-818d-67453153a34f-ovs-rundir\") pod \"ovn-controller-metrics-dmttf\" (UID: \"bacb5ba4-39a7-4774-818d-67453153a34f\") " pod="openstack/ovn-controller-metrics-dmttf" Jan 28 18:53:56 crc kubenswrapper[4721]: I0128 18:53:56.968935 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8744s\" (UniqueName: \"kubernetes.io/projected/bacb5ba4-39a7-4774-818d-67453153a34f-kube-api-access-8744s\") pod \"ovn-controller-metrics-dmttf\" (UID: \"bacb5ba4-39a7-4774-818d-67453153a34f\") " pod="openstack/ovn-controller-metrics-dmttf" Jan 28 18:53:56 crc kubenswrapper[4721]: I0128 18:53:56.968959 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/bacb5ba4-39a7-4774-818d-67453153a34f-ovn-rundir\") pod \"ovn-controller-metrics-dmttf\" (UID: \"bacb5ba4-39a7-4774-818d-67453153a34f\") " pod="openstack/ovn-controller-metrics-dmttf" Jan 28 18:53:56 crc kubenswrapper[4721]: I0128 18:53:56.969488 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/bacb5ba4-39a7-4774-818d-67453153a34f-ovn-rundir\") pod \"ovn-controller-metrics-dmttf\" (UID: \"bacb5ba4-39a7-4774-818d-67453153a34f\") " pod="openstack/ovn-controller-metrics-dmttf" Jan 28 18:53:56 crc kubenswrapper[4721]: I0128 18:53:56.969554 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-rundir\" (UniqueName: 
\"kubernetes.io/host-path/bacb5ba4-39a7-4774-818d-67453153a34f-ovs-rundir\") pod \"ovn-controller-metrics-dmttf\" (UID: \"bacb5ba4-39a7-4774-818d-67453153a34f\") " pod="openstack/ovn-controller-metrics-dmttf" Jan 28 18:53:56 crc kubenswrapper[4721]: I0128 18:53:56.970478 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bacb5ba4-39a7-4774-818d-67453153a34f-config\") pod \"ovn-controller-metrics-dmttf\" (UID: \"bacb5ba4-39a7-4774-818d-67453153a34f\") " pod="openstack/ovn-controller-metrics-dmttf" Jan 28 18:53:56 crc kubenswrapper[4721]: I0128 18:53:56.974825 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bacb5ba4-39a7-4774-818d-67453153a34f-combined-ca-bundle\") pod \"ovn-controller-metrics-dmttf\" (UID: \"bacb5ba4-39a7-4774-818d-67453153a34f\") " pod="openstack/ovn-controller-metrics-dmttf" Jan 28 18:53:56 crc kubenswrapper[4721]: I0128 18:53:56.976625 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/bacb5ba4-39a7-4774-818d-67453153a34f-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-dmttf\" (UID: \"bacb5ba4-39a7-4774-818d-67453153a34f\") " pod="openstack/ovn-controller-metrics-dmttf" Jan 28 18:53:57 crc kubenswrapper[4721]: I0128 18:53:57.002955 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8744s\" (UniqueName: \"kubernetes.io/projected/bacb5ba4-39a7-4774-818d-67453153a34f-kube-api-access-8744s\") pod \"ovn-controller-metrics-dmttf\" (UID: \"bacb5ba4-39a7-4774-818d-67453153a34f\") " pod="openstack/ovn-controller-metrics-dmttf" Jan 28 18:53:57 crc kubenswrapper[4721]: I0128 18:53:57.045127 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-59s7w"] Jan 28 18:53:57 crc kubenswrapper[4721]: I0128 18:53:57.097269 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-tsjl9"] Jan 28 18:53:57 crc kubenswrapper[4721]: I0128 18:53:57.099148 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-tsjl9" Jan 28 18:53:57 crc kubenswrapper[4721]: I0128 18:53:57.103499 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb" Jan 28 18:53:57 crc kubenswrapper[4721]: I0128 18:53:57.108055 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-tsjl9"] Jan 28 18:53:57 crc kubenswrapper[4721]: I0128 18:53:57.138390 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-metrics-dmttf" Jan 28 18:53:57 crc kubenswrapper[4721]: I0128 18:53:57.206259 4721 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-57d769cc4f-vc8rk" podUID="aecb4886-3e12-46f5-b2dd-20260e64e4c7" containerName="dnsmasq-dns" containerID="cri-o://1e3b225685548a877b08564af4b407871263c0c268b69d5776a7e29398768945" gracePeriod=10 Jan 28 18:53:57 crc kubenswrapper[4721]: I0128 18:53:57.206589 4721 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-666b6646f7-59s7w" podUID="dafbdcb9-9fbe-40c2-920d-6111bf0e2d88" containerName="dnsmasq-dns" containerID="cri-o://1e81e6a440865f58a8a00ad6f945396888eecb4ac069b57fdc0548d00edf0fdf" gracePeriod=10 Jan 28 18:53:57 crc kubenswrapper[4721]: I0128 18:53:57.209162 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-57d769cc4f-vc8rk" Jan 28 18:53:57 crc kubenswrapper[4721]: I0128 18:53:57.285742 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f0d97192-cb28-436d-adc6-a3aafd8aad46-config\") pod \"dnsmasq-dns-86db49b7ff-tsjl9\" (UID: \"f0d97192-cb28-436d-adc6-a3aafd8aad46\") " pod="openstack/dnsmasq-dns-86db49b7ff-tsjl9" Jan 28 18:53:57 crc kubenswrapper[4721]: I0128 18:53:57.285899 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f0d97192-cb28-436d-adc6-a3aafd8aad46-dns-svc\") pod \"dnsmasq-dns-86db49b7ff-tsjl9\" (UID: \"f0d97192-cb28-436d-adc6-a3aafd8aad46\") " pod="openstack/dnsmasq-dns-86db49b7ff-tsjl9" Jan 28 18:53:57 crc kubenswrapper[4721]: I0128 18:53:57.285947 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f0d97192-cb28-436d-adc6-a3aafd8aad46-ovsdbserver-nb\") pod \"dnsmasq-dns-86db49b7ff-tsjl9\" (UID: \"f0d97192-cb28-436d-adc6-a3aafd8aad46\") " pod="openstack/dnsmasq-dns-86db49b7ff-tsjl9" Jan 28 18:53:57 crc kubenswrapper[4721]: I0128 18:53:57.286006 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f0d97192-cb28-436d-adc6-a3aafd8aad46-ovsdbserver-sb\") pod \"dnsmasq-dns-86db49b7ff-tsjl9\" (UID: \"f0d97192-cb28-436d-adc6-a3aafd8aad46\") " pod="openstack/dnsmasq-dns-86db49b7ff-tsjl9" Jan 28 18:53:57 crc kubenswrapper[4721]: I0128 18:53:57.286112 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fd5c7\" (UniqueName: \"kubernetes.io/projected/f0d97192-cb28-436d-adc6-a3aafd8aad46-kube-api-access-fd5c7\") pod \"dnsmasq-dns-86db49b7ff-tsjl9\" (UID: \"f0d97192-cb28-436d-adc6-a3aafd8aad46\") " pod="openstack/dnsmasq-dns-86db49b7ff-tsjl9" Jan 28 18:53:57 crc kubenswrapper[4721]: I0128 18:53:57.388082 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f0d97192-cb28-436d-adc6-a3aafd8aad46-config\") pod \"dnsmasq-dns-86db49b7ff-tsjl9\" (UID: \"f0d97192-cb28-436d-adc6-a3aafd8aad46\") " pod="openstack/dnsmasq-dns-86db49b7ff-tsjl9" Jan 28 18:53:57 crc kubenswrapper[4721]: I0128 18:53:57.388291 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/f0d97192-cb28-436d-adc6-a3aafd8aad46-dns-svc\") pod \"dnsmasq-dns-86db49b7ff-tsjl9\" (UID: \"f0d97192-cb28-436d-adc6-a3aafd8aad46\") " pod="openstack/dnsmasq-dns-86db49b7ff-tsjl9" Jan 28 18:53:57 crc kubenswrapper[4721]: I0128 18:53:57.388340 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f0d97192-cb28-436d-adc6-a3aafd8aad46-ovsdbserver-nb\") pod \"dnsmasq-dns-86db49b7ff-tsjl9\" (UID: \"f0d97192-cb28-436d-adc6-a3aafd8aad46\") " pod="openstack/dnsmasq-dns-86db49b7ff-tsjl9" Jan 28 18:53:57 crc kubenswrapper[4721]: I0128 18:53:57.388393 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f0d97192-cb28-436d-adc6-a3aafd8aad46-ovsdbserver-sb\") pod \"dnsmasq-dns-86db49b7ff-tsjl9\" (UID: \"f0d97192-cb28-436d-adc6-a3aafd8aad46\") " pod="openstack/dnsmasq-dns-86db49b7ff-tsjl9" Jan 28 18:53:57 crc kubenswrapper[4721]: I0128 18:53:57.388537 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fd5c7\" (UniqueName: \"kubernetes.io/projected/f0d97192-cb28-436d-adc6-a3aafd8aad46-kube-api-access-fd5c7\") pod \"dnsmasq-dns-86db49b7ff-tsjl9\" (UID: \"f0d97192-cb28-436d-adc6-a3aafd8aad46\") " pod="openstack/dnsmasq-dns-86db49b7ff-tsjl9" Jan 28 18:53:57 crc kubenswrapper[4721]: I0128 18:53:57.391116 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f0d97192-cb28-436d-adc6-a3aafd8aad46-config\") pod \"dnsmasq-dns-86db49b7ff-tsjl9\" (UID: \"f0d97192-cb28-436d-adc6-a3aafd8aad46\") " pod="openstack/dnsmasq-dns-86db49b7ff-tsjl9" Jan 28 18:53:57 crc kubenswrapper[4721]: I0128 18:53:57.391139 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f0d97192-cb28-436d-adc6-a3aafd8aad46-dns-svc\") pod \"dnsmasq-dns-86db49b7ff-tsjl9\" (UID: \"f0d97192-cb28-436d-adc6-a3aafd8aad46\") " pod="openstack/dnsmasq-dns-86db49b7ff-tsjl9" Jan 28 18:53:57 crc kubenswrapper[4721]: I0128 18:53:57.391343 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f0d97192-cb28-436d-adc6-a3aafd8aad46-ovsdbserver-nb\") pod \"dnsmasq-dns-86db49b7ff-tsjl9\" (UID: \"f0d97192-cb28-436d-adc6-a3aafd8aad46\") " pod="openstack/dnsmasq-dns-86db49b7ff-tsjl9" Jan 28 18:53:57 crc kubenswrapper[4721]: I0128 18:53:57.392076 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f0d97192-cb28-436d-adc6-a3aafd8aad46-ovsdbserver-sb\") pod \"dnsmasq-dns-86db49b7ff-tsjl9\" (UID: \"f0d97192-cb28-436d-adc6-a3aafd8aad46\") " pod="openstack/dnsmasq-dns-86db49b7ff-tsjl9" Jan 28 18:53:57 crc kubenswrapper[4721]: I0128 18:53:57.417015 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fd5c7\" (UniqueName: \"kubernetes.io/projected/f0d97192-cb28-436d-adc6-a3aafd8aad46-kube-api-access-fd5c7\") pod \"dnsmasq-dns-86db49b7ff-tsjl9\" (UID: \"f0d97192-cb28-436d-adc6-a3aafd8aad46\") " pod="openstack/dnsmasq-dns-86db49b7ff-tsjl9" Jan 28 18:53:57 crc kubenswrapper[4721]: I0128 18:53:57.440411 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-tsjl9" Jan 28 18:53:57 crc kubenswrapper[4721]: I0128 18:53:57.451540 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0" Jan 28 18:53:57 crc kubenswrapper[4721]: I0128 18:53:57.451595 4721 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0" Jan 28 18:53:57 crc kubenswrapper[4721]: I0128 18:53:57.521227 4721 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0" Jan 28 18:53:57 crc kubenswrapper[4721]: I0128 18:53:57.647740 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-dmttf"] Jan 28 18:53:57 crc kubenswrapper[4721]: I0128 18:53:57.697349 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7f896c8c65-m5tkl"] Jan 28 18:53:57 crc kubenswrapper[4721]: I0128 18:53:57.868670 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7f896c8c65-m5tkl"] Jan 28 18:53:57 crc kubenswrapper[4721]: I0128 18:53:57.934599 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-698758b865-xxb2g"] Jan 28 18:53:57 crc kubenswrapper[4721]: I0128 18:53:57.936848 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-xxb2g" Jan 28 18:53:57 crc kubenswrapper[4721]: I0128 18:53:57.949621 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-698758b865-xxb2g"] Jan 28 18:53:58 crc kubenswrapper[4721]: I0128 18:53:58.041843 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/69738eb9-4e39-4dae-9c2e-4f0f0e214938-ovsdbserver-nb\") pod \"dnsmasq-dns-698758b865-xxb2g\" (UID: \"69738eb9-4e39-4dae-9c2e-4f0f0e214938\") " pod="openstack/dnsmasq-dns-698758b865-xxb2g" Jan 28 18:53:58 crc kubenswrapper[4721]: I0128 18:53:58.041907 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qs857\" (UniqueName: \"kubernetes.io/projected/69738eb9-4e39-4dae-9c2e-4f0f0e214938-kube-api-access-qs857\") pod \"dnsmasq-dns-698758b865-xxb2g\" (UID: \"69738eb9-4e39-4dae-9c2e-4f0f0e214938\") " pod="openstack/dnsmasq-dns-698758b865-xxb2g" Jan 28 18:53:58 crc kubenswrapper[4721]: I0128 18:53:58.041934 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/69738eb9-4e39-4dae-9c2e-4f0f0e214938-dns-svc\") pod \"dnsmasq-dns-698758b865-xxb2g\" (UID: \"69738eb9-4e39-4dae-9c2e-4f0f0e214938\") " pod="openstack/dnsmasq-dns-698758b865-xxb2g" Jan 28 18:53:58 crc kubenswrapper[4721]: I0128 18:53:58.042078 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/69738eb9-4e39-4dae-9c2e-4f0f0e214938-ovsdbserver-sb\") pod \"dnsmasq-dns-698758b865-xxb2g\" (UID: \"69738eb9-4e39-4dae-9c2e-4f0f0e214938\") " pod="openstack/dnsmasq-dns-698758b865-xxb2g" Jan 28 18:53:58 crc kubenswrapper[4721]: I0128 18:53:58.042102 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/69738eb9-4e39-4dae-9c2e-4f0f0e214938-config\") pod \"dnsmasq-dns-698758b865-xxb2g\" (UID: \"69738eb9-4e39-4dae-9c2e-4f0f0e214938\") 
" pod="openstack/dnsmasq-dns-698758b865-xxb2g" Jan 28 18:53:58 crc kubenswrapper[4721]: I0128 18:53:58.145024 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/69738eb9-4e39-4dae-9c2e-4f0f0e214938-ovsdbserver-sb\") pod \"dnsmasq-dns-698758b865-xxb2g\" (UID: \"69738eb9-4e39-4dae-9c2e-4f0f0e214938\") " pod="openstack/dnsmasq-dns-698758b865-xxb2g" Jan 28 18:53:58 crc kubenswrapper[4721]: I0128 18:53:58.145117 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/69738eb9-4e39-4dae-9c2e-4f0f0e214938-config\") pod \"dnsmasq-dns-698758b865-xxb2g\" (UID: \"69738eb9-4e39-4dae-9c2e-4f0f0e214938\") " pod="openstack/dnsmasq-dns-698758b865-xxb2g" Jan 28 18:53:58 crc kubenswrapper[4721]: I0128 18:53:58.145187 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/69738eb9-4e39-4dae-9c2e-4f0f0e214938-ovsdbserver-nb\") pod \"dnsmasq-dns-698758b865-xxb2g\" (UID: \"69738eb9-4e39-4dae-9c2e-4f0f0e214938\") " pod="openstack/dnsmasq-dns-698758b865-xxb2g" Jan 28 18:53:58 crc kubenswrapper[4721]: I0128 18:53:58.145235 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qs857\" (UniqueName: \"kubernetes.io/projected/69738eb9-4e39-4dae-9c2e-4f0f0e214938-kube-api-access-qs857\") pod \"dnsmasq-dns-698758b865-xxb2g\" (UID: \"69738eb9-4e39-4dae-9c2e-4f0f0e214938\") " pod="openstack/dnsmasq-dns-698758b865-xxb2g" Jan 28 18:53:58 crc kubenswrapper[4721]: I0128 18:53:58.145277 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/69738eb9-4e39-4dae-9c2e-4f0f0e214938-dns-svc\") pod \"dnsmasq-dns-698758b865-xxb2g\" (UID: \"69738eb9-4e39-4dae-9c2e-4f0f0e214938\") " pod="openstack/dnsmasq-dns-698758b865-xxb2g" Jan 28 18:53:58 crc kubenswrapper[4721]: I0128 18:53:58.146582 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/69738eb9-4e39-4dae-9c2e-4f0f0e214938-config\") pod \"dnsmasq-dns-698758b865-xxb2g\" (UID: \"69738eb9-4e39-4dae-9c2e-4f0f0e214938\") " pod="openstack/dnsmasq-dns-698758b865-xxb2g" Jan 28 18:53:58 crc kubenswrapper[4721]: I0128 18:53:58.146596 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/69738eb9-4e39-4dae-9c2e-4f0f0e214938-ovsdbserver-nb\") pod \"dnsmasq-dns-698758b865-xxb2g\" (UID: \"69738eb9-4e39-4dae-9c2e-4f0f0e214938\") " pod="openstack/dnsmasq-dns-698758b865-xxb2g" Jan 28 18:53:58 crc kubenswrapper[4721]: I0128 18:53:58.146959 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/69738eb9-4e39-4dae-9c2e-4f0f0e214938-dns-svc\") pod \"dnsmasq-dns-698758b865-xxb2g\" (UID: \"69738eb9-4e39-4dae-9c2e-4f0f0e214938\") " pod="openstack/dnsmasq-dns-698758b865-xxb2g" Jan 28 18:53:58 crc kubenswrapper[4721]: I0128 18:53:58.147151 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/69738eb9-4e39-4dae-9c2e-4f0f0e214938-ovsdbserver-sb\") pod \"dnsmasq-dns-698758b865-xxb2g\" (UID: \"69738eb9-4e39-4dae-9c2e-4f0f0e214938\") " pod="openstack/dnsmasq-dns-698758b865-xxb2g" Jan 28 18:53:58 crc kubenswrapper[4721]: I0128 18:53:58.178595 4721 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qs857\" (UniqueName: \"kubernetes.io/projected/69738eb9-4e39-4dae-9c2e-4f0f0e214938-kube-api-access-qs857\") pod \"dnsmasq-dns-698758b865-xxb2g\" (UID: \"69738eb9-4e39-4dae-9c2e-4f0f0e214938\") " pod="openstack/dnsmasq-dns-698758b865-xxb2g" Jan 28 18:53:58 crc kubenswrapper[4721]: I0128 18:53:58.217260 4721 generic.go:334] "Generic (PLEG): container finished" podID="dafbdcb9-9fbe-40c2-920d-6111bf0e2d88" containerID="1e81e6a440865f58a8a00ad6f945396888eecb4ac069b57fdc0548d00edf0fdf" exitCode=0 Jan 28 18:53:58 crc kubenswrapper[4721]: I0128 18:53:58.217350 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-59s7w" event={"ID":"dafbdcb9-9fbe-40c2-920d-6111bf0e2d88","Type":"ContainerDied","Data":"1e81e6a440865f58a8a00ad6f945396888eecb4ac069b57fdc0548d00edf0fdf"} Jan 28 18:53:58 crc kubenswrapper[4721]: I0128 18:53:58.218875 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7f896c8c65-m5tkl" event={"ID":"b02cc010-e156-405e-aac3-45c2afa254ac","Type":"ContainerStarted","Data":"4e34ad2b57d83840a71e4d68af4386188b91d2c44a9152989d9dd4205f25bcfa"} Jan 28 18:53:58 crc kubenswrapper[4721]: I0128 18:53:58.220426 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-dmttf" event={"ID":"bacb5ba4-39a7-4774-818d-67453153a34f","Type":"ContainerStarted","Data":"e09ebec3326a44b913b5a633adaf5b134d4dfe9b771550eb29099b5ddd3e8566"} Jan 28 18:53:58 crc kubenswrapper[4721]: I0128 18:53:58.222486 4721 generic.go:334] "Generic (PLEG): container finished" podID="aecb4886-3e12-46f5-b2dd-20260e64e4c7" containerID="1e3b225685548a877b08564af4b407871263c0c268b69d5776a7e29398768945" exitCode=0 Jan 28 18:53:58 crc kubenswrapper[4721]: I0128 18:53:58.223035 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-vc8rk" event={"ID":"aecb4886-3e12-46f5-b2dd-20260e64e4c7","Type":"ContainerDied","Data":"1e3b225685548a877b08564af4b407871263c0c268b69d5776a7e29398768945"} Jan 28 18:53:58 crc kubenswrapper[4721]: I0128 18:53:58.246012 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-tsjl9"] Jan 28 18:53:58 crc kubenswrapper[4721]: W0128 18:53:58.257847 4721 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf0d97192_cb28_436d_adc6_a3aafd8aad46.slice/crio-75c50c6d613af4923a937cf05fe5034952d45aa7b87e2a93cedcbc7e150e9ea3 WatchSource:0}: Error finding container 75c50c6d613af4923a937cf05fe5034952d45aa7b87e2a93cedcbc7e150e9ea3: Status 404 returned error can't find the container with id 75c50c6d613af4923a937cf05fe5034952d45aa7b87e2a93cedcbc7e150e9ea3 Jan 28 18:53:58 crc kubenswrapper[4721]: I0128 18:53:58.293213 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0" Jan 28 18:53:58 crc kubenswrapper[4721]: I0128 18:53:58.324003 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-xxb2g" Jan 28 18:53:58 crc kubenswrapper[4721]: I0128 18:53:58.721197 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"] Jan 28 18:53:58 crc kubenswrapper[4721]: I0128 18:53:58.723891 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-northd-0" Jan 28 18:53:58 crc kubenswrapper[4721]: I0128 18:53:58.727883 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config" Jan 28 18:53:58 crc kubenswrapper[4721]: I0128 18:53:58.729875 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-ppfxh" Jan 28 18:53:58 crc kubenswrapper[4721]: I0128 18:53:58.730048 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts" Jan 28 18:53:58 crc kubenswrapper[4721]: I0128 18:53:58.730238 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs" Jan 28 18:53:58 crc kubenswrapper[4721]: I0128 18:53:58.748809 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Jan 28 18:53:58 crc kubenswrapper[4721]: I0128 18:53:58.862076 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5296300e-265b-4671-a299-e023295c6981-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"5296300e-265b-4671-a299-e023295c6981\") " pod="openstack/ovn-northd-0" Jan 28 18:53:58 crc kubenswrapper[4721]: I0128 18:53:58.862204 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-npvcx\" (UniqueName: \"kubernetes.io/projected/5296300e-265b-4671-a299-e023295c6981-kube-api-access-npvcx\") pod \"ovn-northd-0\" (UID: \"5296300e-265b-4671-a299-e023295c6981\") " pod="openstack/ovn-northd-0" Jan 28 18:53:58 crc kubenswrapper[4721]: I0128 18:53:58.862254 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/5296300e-265b-4671-a299-e023295c6981-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"5296300e-265b-4671-a299-e023295c6981\") " pod="openstack/ovn-northd-0" Jan 28 18:53:58 crc kubenswrapper[4721]: I0128 18:53:58.862296 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5296300e-265b-4671-a299-e023295c6981-config\") pod \"ovn-northd-0\" (UID: \"5296300e-265b-4671-a299-e023295c6981\") " pod="openstack/ovn-northd-0" Jan 28 18:53:58 crc kubenswrapper[4721]: I0128 18:53:58.862489 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/5296300e-265b-4671-a299-e023295c6981-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"5296300e-265b-4671-a299-e023295c6981\") " pod="openstack/ovn-northd-0" Jan 28 18:53:58 crc kubenswrapper[4721]: I0128 18:53:58.862755 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/5296300e-265b-4671-a299-e023295c6981-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"5296300e-265b-4671-a299-e023295c6981\") " pod="openstack/ovn-northd-0" Jan 28 18:53:58 crc kubenswrapper[4721]: I0128 18:53:58.862869 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5296300e-265b-4671-a299-e023295c6981-scripts\") pod \"ovn-northd-0\" (UID: \"5296300e-265b-4671-a299-e023295c6981\") " pod="openstack/ovn-northd-0" Jan 28 18:53:58 crc kubenswrapper[4721]: 
I0128 18:53:58.880089 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-698758b865-xxb2g"] Jan 28 18:53:58 crc kubenswrapper[4721]: I0128 18:53:58.970565 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/5296300e-265b-4671-a299-e023295c6981-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"5296300e-265b-4671-a299-e023295c6981\") " pod="openstack/ovn-northd-0" Jan 28 18:53:58 crc kubenswrapper[4721]: I0128 18:53:58.971007 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5296300e-265b-4671-a299-e023295c6981-scripts\") pod \"ovn-northd-0\" (UID: \"5296300e-265b-4671-a299-e023295c6981\") " pod="openstack/ovn-northd-0" Jan 28 18:53:58 crc kubenswrapper[4721]: I0128 18:53:58.971055 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5296300e-265b-4671-a299-e023295c6981-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"5296300e-265b-4671-a299-e023295c6981\") " pod="openstack/ovn-northd-0" Jan 28 18:53:58 crc kubenswrapper[4721]: I0128 18:53:58.971102 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-npvcx\" (UniqueName: \"kubernetes.io/projected/5296300e-265b-4671-a299-e023295c6981-kube-api-access-npvcx\") pod \"ovn-northd-0\" (UID: \"5296300e-265b-4671-a299-e023295c6981\") " pod="openstack/ovn-northd-0" Jan 28 18:53:58 crc kubenswrapper[4721]: I0128 18:53:58.971151 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/5296300e-265b-4671-a299-e023295c6981-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"5296300e-265b-4671-a299-e023295c6981\") " pod="openstack/ovn-northd-0" Jan 28 18:53:58 crc kubenswrapper[4721]: I0128 18:53:58.971225 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5296300e-265b-4671-a299-e023295c6981-config\") pod \"ovn-northd-0\" (UID: \"5296300e-265b-4671-a299-e023295c6981\") " pod="openstack/ovn-northd-0" Jan 28 18:53:58 crc kubenswrapper[4721]: I0128 18:53:58.971287 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/5296300e-265b-4671-a299-e023295c6981-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"5296300e-265b-4671-a299-e023295c6981\") " pod="openstack/ovn-northd-0" Jan 28 18:53:58 crc kubenswrapper[4721]: I0128 18:53:58.971870 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/5296300e-265b-4671-a299-e023295c6981-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"5296300e-265b-4671-a299-e023295c6981\") " pod="openstack/ovn-northd-0" Jan 28 18:53:58 crc kubenswrapper[4721]: I0128 18:53:58.977559 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5296300e-265b-4671-a299-e023295c6981-scripts\") pod \"ovn-northd-0\" (UID: \"5296300e-265b-4671-a299-e023295c6981\") " pod="openstack/ovn-northd-0" Jan 28 18:53:58 crc kubenswrapper[4721]: I0128 18:53:58.978893 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5296300e-265b-4671-a299-e023295c6981-config\") pod 
\"ovn-northd-0\" (UID: \"5296300e-265b-4671-a299-e023295c6981\") " pod="openstack/ovn-northd-0" Jan 28 18:53:58 crc kubenswrapper[4721]: I0128 18:53:58.985249 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/5296300e-265b-4671-a299-e023295c6981-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"5296300e-265b-4671-a299-e023295c6981\") " pod="openstack/ovn-northd-0" Jan 28 18:53:58 crc kubenswrapper[4721]: I0128 18:53:58.992436 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/5296300e-265b-4671-a299-e023295c6981-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"5296300e-265b-4671-a299-e023295c6981\") " pod="openstack/ovn-northd-0" Jan 28 18:53:59 crc kubenswrapper[4721]: I0128 18:53:59.024880 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5296300e-265b-4671-a299-e023295c6981-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"5296300e-265b-4671-a299-e023295c6981\") " pod="openstack/ovn-northd-0" Jan 28 18:53:59 crc kubenswrapper[4721]: I0128 18:53:59.026163 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-npvcx\" (UniqueName: \"kubernetes.io/projected/5296300e-265b-4671-a299-e023295c6981-kube-api-access-npvcx\") pod \"ovn-northd-0\" (UID: \"5296300e-265b-4671-a299-e023295c6981\") " pod="openstack/ovn-northd-0" Jan 28 18:53:59 crc kubenswrapper[4721]: I0128 18:53:59.051957 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Jan 28 18:53:59 crc kubenswrapper[4721]: I0128 18:53:59.134782 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-storage-0"] Jan 28 18:53:59 crc kubenswrapper[4721]: I0128 18:53:59.151635 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-storage-0" Jan 28 18:53:59 crc kubenswrapper[4721]: I0128 18:53:59.177682 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-conf" Jan 28 18:53:59 crc kubenswrapper[4721]: I0128 18:53:59.178032 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-files" Jan 28 18:53:59 crc kubenswrapper[4721]: I0128 18:53:59.185017 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Jan 28 18:53:59 crc kubenswrapper[4721]: I0128 18:53:59.189003 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-swift-dockercfg-l8276" Jan 28 18:53:59 crc kubenswrapper[4721]: I0128 18:53:59.189256 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-storage-config-data" Jan 28 18:53:59 crc kubenswrapper[4721]: I0128 18:53:59.252162 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-xxb2g" event={"ID":"69738eb9-4e39-4dae-9c2e-4f0f0e214938","Type":"ContainerStarted","Data":"315de4d90767ff47678ac5734a8c6e4bbd69487ffbda1a8c60efadb1a15ba766"} Jan 28 18:53:59 crc kubenswrapper[4721]: I0128 18:53:59.257393 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-tsjl9" event={"ID":"f0d97192-cb28-436d-adc6-a3aafd8aad46","Type":"ContainerStarted","Data":"75c50c6d613af4923a937cf05fe5034952d45aa7b87e2a93cedcbc7e150e9ea3"} Jan 28 18:53:59 crc kubenswrapper[4721]: I0128 18:53:59.282723 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-ef2c0ae3-cc6e-4476-9710-b9510e78a556\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ef2c0ae3-cc6e-4476-9710-b9510e78a556\") pod \"swift-storage-0\" (UID: \"aa657a81-842e-4292-a71e-e208b4c0bd69\") " pod="openstack/swift-storage-0" Jan 28 18:53:59 crc kubenswrapper[4721]: I0128 18:53:59.282826 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aa657a81-842e-4292-a71e-e208b4c0bd69-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"aa657a81-842e-4292-a71e-e208b4c0bd69\") " pod="openstack/swift-storage-0" Jan 28 18:53:59 crc kubenswrapper[4721]: I0128 18:53:59.282866 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/aa657a81-842e-4292-a71e-e208b4c0bd69-etc-swift\") pod \"swift-storage-0\" (UID: \"aa657a81-842e-4292-a71e-e208b4c0bd69\") " pod="openstack/swift-storage-0" Jan 28 18:53:59 crc kubenswrapper[4721]: I0128 18:53:59.282946 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/aa657a81-842e-4292-a71e-e208b4c0bd69-cache\") pod \"swift-storage-0\" (UID: \"aa657a81-842e-4292-a71e-e208b4c0bd69\") " pod="openstack/swift-storage-0" Jan 28 18:53:59 crc kubenswrapper[4721]: I0128 18:53:59.282979 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hk2mq\" (UniqueName: \"kubernetes.io/projected/aa657a81-842e-4292-a71e-e208b4c0bd69-kube-api-access-hk2mq\") pod \"swift-storage-0\" (UID: \"aa657a81-842e-4292-a71e-e208b4c0bd69\") " pod="openstack/swift-storage-0" Jan 28 18:53:59 crc kubenswrapper[4721]: I0128 18:53:59.283139 4721 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/aa657a81-842e-4292-a71e-e208b4c0bd69-lock\") pod \"swift-storage-0\" (UID: \"aa657a81-842e-4292-a71e-e208b4c0bd69\") " pod="openstack/swift-storage-0" Jan 28 18:53:59 crc kubenswrapper[4721]: I0128 18:53:59.385702 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aa657a81-842e-4292-a71e-e208b4c0bd69-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"aa657a81-842e-4292-a71e-e208b4c0bd69\") " pod="openstack/swift-storage-0" Jan 28 18:53:59 crc kubenswrapper[4721]: I0128 18:53:59.386190 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/aa657a81-842e-4292-a71e-e208b4c0bd69-etc-swift\") pod \"swift-storage-0\" (UID: \"aa657a81-842e-4292-a71e-e208b4c0bd69\") " pod="openstack/swift-storage-0" Jan 28 18:53:59 crc kubenswrapper[4721]: I0128 18:53:59.386315 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/aa657a81-842e-4292-a71e-e208b4c0bd69-cache\") pod \"swift-storage-0\" (UID: \"aa657a81-842e-4292-a71e-e208b4c0bd69\") " pod="openstack/swift-storage-0" Jan 28 18:53:59 crc kubenswrapper[4721]: I0128 18:53:59.386352 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hk2mq\" (UniqueName: \"kubernetes.io/projected/aa657a81-842e-4292-a71e-e208b4c0bd69-kube-api-access-hk2mq\") pod \"swift-storage-0\" (UID: \"aa657a81-842e-4292-a71e-e208b4c0bd69\") " pod="openstack/swift-storage-0" Jan 28 18:53:59 crc kubenswrapper[4721]: I0128 18:53:59.386605 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/aa657a81-842e-4292-a71e-e208b4c0bd69-lock\") pod \"swift-storage-0\" (UID: \"aa657a81-842e-4292-a71e-e208b4c0bd69\") " pod="openstack/swift-storage-0" Jan 28 18:53:59 crc kubenswrapper[4721]: I0128 18:53:59.386752 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-ef2c0ae3-cc6e-4476-9710-b9510e78a556\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ef2c0ae3-cc6e-4476-9710-b9510e78a556\") pod \"swift-storage-0\" (UID: \"aa657a81-842e-4292-a71e-e208b4c0bd69\") " pod="openstack/swift-storage-0" Jan 28 18:53:59 crc kubenswrapper[4721]: E0128 18:53:59.388000 4721 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 28 18:53:59 crc kubenswrapper[4721]: E0128 18:53:59.388033 4721 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 28 18:53:59 crc kubenswrapper[4721]: E0128 18:53:59.388088 4721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/aa657a81-842e-4292-a71e-e208b4c0bd69-etc-swift podName:aa657a81-842e-4292-a71e-e208b4c0bd69 nodeName:}" failed. No retries permitted until 2026-01-28 18:53:59.888064915 +0000 UTC m=+1205.613370475 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/aa657a81-842e-4292-a71e-e208b4c0bd69-etc-swift") pod "swift-storage-0" (UID: "aa657a81-842e-4292-a71e-e208b4c0bd69") : configmap "swift-ring-files" not found Jan 28 18:53:59 crc kubenswrapper[4721]: I0128 18:53:59.388337 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/aa657a81-842e-4292-a71e-e208b4c0bd69-cache\") pod \"swift-storage-0\" (UID: \"aa657a81-842e-4292-a71e-e208b4c0bd69\") " pod="openstack/swift-storage-0" Jan 28 18:53:59 crc kubenswrapper[4721]: I0128 18:53:59.389802 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/aa657a81-842e-4292-a71e-e208b4c0bd69-lock\") pod \"swift-storage-0\" (UID: \"aa657a81-842e-4292-a71e-e208b4c0bd69\") " pod="openstack/swift-storage-0" Jan 28 18:53:59 crc kubenswrapper[4721]: I0128 18:53:59.399029 4721 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 28 18:53:59 crc kubenswrapper[4721]: I0128 18:53:59.399078 4721 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-ef2c0ae3-cc6e-4476-9710-b9510e78a556\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ef2c0ae3-cc6e-4476-9710-b9510e78a556\") pod \"swift-storage-0\" (UID: \"aa657a81-842e-4292-a71e-e208b4c0bd69\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/dc8380aad2d7976805697e5c0192c612a2fce19da660e1abdb71e3cd47f96291/globalmount\"" pod="openstack/swift-storage-0" Jan 28 18:53:59 crc kubenswrapper[4721]: I0128 18:53:59.400139 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aa657a81-842e-4292-a71e-e208b4c0bd69-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"aa657a81-842e-4292-a71e-e208b4c0bd69\") " pod="openstack/swift-storage-0" Jan 28 18:53:59 crc kubenswrapper[4721]: I0128 18:53:59.409068 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hk2mq\" (UniqueName: \"kubernetes.io/projected/aa657a81-842e-4292-a71e-e208b4c0bd69-kube-api-access-hk2mq\") pod \"swift-storage-0\" (UID: \"aa657a81-842e-4292-a71e-e208b4c0bd69\") " pod="openstack/swift-storage-0" Jan 28 18:53:59 crc kubenswrapper[4721]: I0128 18:53:59.448709 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-ef2c0ae3-cc6e-4476-9710-b9510e78a556\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ef2c0ae3-cc6e-4476-9710-b9510e78a556\") pod \"swift-storage-0\" (UID: \"aa657a81-842e-4292-a71e-e208b4c0bd69\") " pod="openstack/swift-storage-0" Jan 28 18:53:59 crc kubenswrapper[4721]: I0128 18:53:59.689035 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Jan 28 18:53:59 crc kubenswrapper[4721]: W0128 18:53:59.697233 4721 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5296300e_265b_4671_a299_e023295c6981.slice/crio-9707e1163245b9fd2bf619f1c8ca7c9c1a57f315b1fb4769d857fa681ab74c4c WatchSource:0}: Error finding container 9707e1163245b9fd2bf619f1c8ca7c9c1a57f315b1fb4769d857fa681ab74c4c: Status 404 returned error can't find the container with id 9707e1163245b9fd2bf619f1c8ca7c9c1a57f315b1fb4769d857fa681ab74c4c Jan 28 18:53:59 crc kubenswrapper[4721]: 
I0128 18:53:59.906409 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/aa657a81-842e-4292-a71e-e208b4c0bd69-etc-swift\") pod \"swift-storage-0\" (UID: \"aa657a81-842e-4292-a71e-e208b4c0bd69\") " pod="openstack/swift-storage-0" Jan 28 18:53:59 crc kubenswrapper[4721]: E0128 18:53:59.906734 4721 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 28 18:53:59 crc kubenswrapper[4721]: E0128 18:53:59.907075 4721 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 28 18:53:59 crc kubenswrapper[4721]: E0128 18:53:59.907145 4721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/aa657a81-842e-4292-a71e-e208b4c0bd69-etc-swift podName:aa657a81-842e-4292-a71e-e208b4c0bd69 nodeName:}" failed. No retries permitted until 2026-01-28 18:54:00.90712514 +0000 UTC m=+1206.632430700 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/aa657a81-842e-4292-a71e-e208b4c0bd69-etc-swift") pod "swift-storage-0" (UID: "aa657a81-842e-4292-a71e-e208b4c0bd69") : configmap "swift-ring-files" not found Jan 28 18:54:00 crc kubenswrapper[4721]: I0128 18:54:00.266772 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"5296300e-265b-4671-a299-e023295c6981","Type":"ContainerStarted","Data":"9707e1163245b9fd2bf619f1c8ca7c9c1a57f315b1fb4769d857fa681ab74c4c"} Jan 28 18:54:00 crc kubenswrapper[4721]: I0128 18:54:00.931288 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/aa657a81-842e-4292-a71e-e208b4c0bd69-etc-swift\") pod \"swift-storage-0\" (UID: \"aa657a81-842e-4292-a71e-e208b4c0bd69\") " pod="openstack/swift-storage-0" Jan 28 18:54:00 crc kubenswrapper[4721]: E0128 18:54:00.931569 4721 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 28 18:54:00 crc kubenswrapper[4721]: E0128 18:54:00.931598 4721 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 28 18:54:00 crc kubenswrapper[4721]: E0128 18:54:00.931673 4721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/aa657a81-842e-4292-a71e-e208b4c0bd69-etc-swift podName:aa657a81-842e-4292-a71e-e208b4c0bd69 nodeName:}" failed. No retries permitted until 2026-01-28 18:54:02.931649276 +0000 UTC m=+1208.656954836 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/aa657a81-842e-4292-a71e-e208b4c0bd69-etc-swift") pod "swift-storage-0" (UID: "aa657a81-842e-4292-a71e-e208b4c0bd69") : configmap "swift-ring-files" not found Jan 28 18:54:01 crc kubenswrapper[4721]: I0128 18:54:01.278711 4721 generic.go:334] "Generic (PLEG): container finished" podID="dc3781f4-04ef-40f3-b772-88deb9a9e3b6" containerID="e1472b3e544be64b3e29964dc712e9e0c6c5bd0aeed58e5b9bad95265232217c" exitCode=0 Jan 28 18:54:01 crc kubenswrapper[4721]: I0128 18:54:01.278802 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"dc3781f4-04ef-40f3-b772-88deb9a9e3b6","Type":"ContainerDied","Data":"e1472b3e544be64b3e29964dc712e9e0c6c5bd0aeed58e5b9bad95265232217c"} Jan 28 18:54:01 crc kubenswrapper[4721]: I0128 18:54:01.733346 4721 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-666b6646f7-59s7w" podUID="dafbdcb9-9fbe-40c2-920d-6111bf0e2d88" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.106:5353: connect: connection refused" Jan 28 18:54:02 crc kubenswrapper[4721]: I0128 18:54:02.234023 4721 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-57d769cc4f-vc8rk" podUID="aecb4886-3e12-46f5-b2dd-20260e64e4c7" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.107:5353: connect: connection refused" Jan 28 18:54:02 crc kubenswrapper[4721]: I0128 18:54:02.309602 4721 generic.go:334] "Generic (PLEG): container finished" podID="95a1b67a-adb0-42f1-9fb8-32b01c443ede" containerID="148a777a9577ad48cb4694a4b71fbafbdb89f313a36e6a9a6ff343e1d0a369f1" exitCode=0 Jan 28 18:54:02 crc kubenswrapper[4721]: I0128 18:54:02.309682 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/alertmanager-metric-storage-0" event={"ID":"95a1b67a-adb0-42f1-9fb8-32b01c443ede","Type":"ContainerDied","Data":"148a777a9577ad48cb4694a4b71fbafbdb89f313a36e6a9a6ff343e1d0a369f1"} Jan 28 18:54:02 crc kubenswrapper[4721]: I0128 18:54:02.903947 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-f8mwn"] Jan 28 18:54:02 crc kubenswrapper[4721]: I0128 18:54:02.906071 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-f8mwn" Jan 28 18:54:02 crc kubenswrapper[4721]: I0128 18:54:02.908119 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Jan 28 18:54:02 crc kubenswrapper[4721]: I0128 18:54:02.908674 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-config-data" Jan 28 18:54:02 crc kubenswrapper[4721]: I0128 18:54:02.908894 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-scripts" Jan 28 18:54:02 crc kubenswrapper[4721]: I0128 18:54:02.928450 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-ring-rebalance-f8mwn"] Jan 28 18:54:02 crc kubenswrapper[4721]: E0128 18:54:02.933983 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[combined-ca-bundle dispersionconf etc-swift kube-api-access-dmll7 ring-data-devices scripts swiftconf], unattached volumes=[], failed to process volumes=[combined-ca-bundle dispersionconf etc-swift kube-api-access-dmll7 ring-data-devices scripts swiftconf]: context canceled" pod="openstack/swift-ring-rebalance-f8mwn" podUID="b6c23490-aa3a-4e35-8577-f4bb2581fb39" Jan 28 18:54:02 crc kubenswrapper[4721]: I0128 18:54:02.962757 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-ring-rebalance-f8mwn"] Jan 28 18:54:02 crc kubenswrapper[4721]: I0128 18:54:02.971008 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-7bhzw"] Jan 28 18:54:02 crc kubenswrapper[4721]: I0128 18:54:02.973022 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-7bhzw" Jan 28 18:54:02 crc kubenswrapper[4721]: I0128 18:54:02.974516 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/aa657a81-842e-4292-a71e-e208b4c0bd69-etc-swift\") pod \"swift-storage-0\" (UID: \"aa657a81-842e-4292-a71e-e208b4c0bd69\") " pod="openstack/swift-storage-0" Jan 28 18:54:02 crc kubenswrapper[4721]: I0128 18:54:02.974579 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dmll7\" (UniqueName: \"kubernetes.io/projected/b6c23490-aa3a-4e35-8577-f4bb2581fb39-kube-api-access-dmll7\") pod \"swift-ring-rebalance-f8mwn\" (UID: \"b6c23490-aa3a-4e35-8577-f4bb2581fb39\") " pod="openstack/swift-ring-rebalance-f8mwn" Jan 28 18:54:02 crc kubenswrapper[4721]: I0128 18:54:02.974609 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/b6c23490-aa3a-4e35-8577-f4bb2581fb39-etc-swift\") pod \"swift-ring-rebalance-f8mwn\" (UID: \"b6c23490-aa3a-4e35-8577-f4bb2581fb39\") " pod="openstack/swift-ring-rebalance-f8mwn" Jan 28 18:54:02 crc kubenswrapper[4721]: I0128 18:54:02.974632 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/b6c23490-aa3a-4e35-8577-f4bb2581fb39-ring-data-devices\") pod \"swift-ring-rebalance-f8mwn\" (UID: \"b6c23490-aa3a-4e35-8577-f4bb2581fb39\") " pod="openstack/swift-ring-rebalance-f8mwn" Jan 28 18:54:02 crc kubenswrapper[4721]: E0128 18:54:02.974726 4721 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 28 18:54:02 crc kubenswrapper[4721]: E0128 
18:54:02.974749 4721 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 28 18:54:02 crc kubenswrapper[4721]: E0128 18:54:02.974807 4721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/aa657a81-842e-4292-a71e-e208b4c0bd69-etc-swift podName:aa657a81-842e-4292-a71e-e208b4c0bd69 nodeName:}" failed. No retries permitted until 2026-01-28 18:54:06.974783891 +0000 UTC m=+1212.700089451 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/aa657a81-842e-4292-a71e-e208b4c0bd69-etc-swift") pod "swift-storage-0" (UID: "aa657a81-842e-4292-a71e-e208b4c0bd69") : configmap "swift-ring-files" not found Jan 28 18:54:02 crc kubenswrapper[4721]: I0128 18:54:02.974731 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/b6c23490-aa3a-4e35-8577-f4bb2581fb39-dispersionconf\") pod \"swift-ring-rebalance-f8mwn\" (UID: \"b6c23490-aa3a-4e35-8577-f4bb2581fb39\") " pod="openstack/swift-ring-rebalance-f8mwn" Jan 28 18:54:02 crc kubenswrapper[4721]: I0128 18:54:02.974863 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b6c23490-aa3a-4e35-8577-f4bb2581fb39-scripts\") pod \"swift-ring-rebalance-f8mwn\" (UID: \"b6c23490-aa3a-4e35-8577-f4bb2581fb39\") " pod="openstack/swift-ring-rebalance-f8mwn" Jan 28 18:54:02 crc kubenswrapper[4721]: I0128 18:54:02.974879 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b6c23490-aa3a-4e35-8577-f4bb2581fb39-combined-ca-bundle\") pod \"swift-ring-rebalance-f8mwn\" (UID: \"b6c23490-aa3a-4e35-8577-f4bb2581fb39\") " pod="openstack/swift-ring-rebalance-f8mwn" Jan 28 18:54:02 crc kubenswrapper[4721]: I0128 18:54:02.974898 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/b6c23490-aa3a-4e35-8577-f4bb2581fb39-swiftconf\") pod \"swift-ring-rebalance-f8mwn\" (UID: \"b6c23490-aa3a-4e35-8577-f4bb2581fb39\") " pod="openstack/swift-ring-rebalance-f8mwn" Jan 28 18:54:02 crc kubenswrapper[4721]: I0128 18:54:02.979314 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-7bhzw"] Jan 28 18:54:03 crc kubenswrapper[4721]: I0128 18:54:03.077097 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/d06bcf83-999f-419a-9f4f-4e6544576897-etc-swift\") pod \"swift-ring-rebalance-7bhzw\" (UID: \"d06bcf83-999f-419a-9f4f-4e6544576897\") " pod="openstack/swift-ring-rebalance-7bhzw" Jan 28 18:54:03 crc kubenswrapper[4721]: I0128 18:54:03.077178 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dmll7\" (UniqueName: \"kubernetes.io/projected/b6c23490-aa3a-4e35-8577-f4bb2581fb39-kube-api-access-dmll7\") pod \"swift-ring-rebalance-f8mwn\" (UID: \"b6c23490-aa3a-4e35-8577-f4bb2581fb39\") " pod="openstack/swift-ring-rebalance-f8mwn" Jan 28 18:54:03 crc kubenswrapper[4721]: I0128 18:54:03.077215 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: 
\"kubernetes.io/empty-dir/b6c23490-aa3a-4e35-8577-f4bb2581fb39-etc-swift\") pod \"swift-ring-rebalance-f8mwn\" (UID: \"b6c23490-aa3a-4e35-8577-f4bb2581fb39\") " pod="openstack/swift-ring-rebalance-f8mwn" Jan 28 18:54:03 crc kubenswrapper[4721]: I0128 18:54:03.077246 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/b6c23490-aa3a-4e35-8577-f4bb2581fb39-ring-data-devices\") pod \"swift-ring-rebalance-f8mwn\" (UID: \"b6c23490-aa3a-4e35-8577-f4bb2581fb39\") " pod="openstack/swift-ring-rebalance-f8mwn" Jan 28 18:54:03 crc kubenswrapper[4721]: I0128 18:54:03.077296 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/d06bcf83-999f-419a-9f4f-4e6544576897-dispersionconf\") pod \"swift-ring-rebalance-7bhzw\" (UID: \"d06bcf83-999f-419a-9f4f-4e6544576897\") " pod="openstack/swift-ring-rebalance-7bhzw" Jan 28 18:54:03 crc kubenswrapper[4721]: I0128 18:54:03.077351 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/d06bcf83-999f-419a-9f4f-4e6544576897-swiftconf\") pod \"swift-ring-rebalance-7bhzw\" (UID: \"d06bcf83-999f-419a-9f4f-4e6544576897\") " pod="openstack/swift-ring-rebalance-7bhzw" Jan 28 18:54:03 crc kubenswrapper[4721]: I0128 18:54:03.077375 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/d06bcf83-999f-419a-9f4f-4e6544576897-ring-data-devices\") pod \"swift-ring-rebalance-7bhzw\" (UID: \"d06bcf83-999f-419a-9f4f-4e6544576897\") " pod="openstack/swift-ring-rebalance-7bhzw" Jan 28 18:54:03 crc kubenswrapper[4721]: I0128 18:54:03.077421 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d06bcf83-999f-419a-9f4f-4e6544576897-scripts\") pod \"swift-ring-rebalance-7bhzw\" (UID: \"d06bcf83-999f-419a-9f4f-4e6544576897\") " pod="openstack/swift-ring-rebalance-7bhzw" Jan 28 18:54:03 crc kubenswrapper[4721]: I0128 18:54:03.077453 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g7z4j\" (UniqueName: \"kubernetes.io/projected/d06bcf83-999f-419a-9f4f-4e6544576897-kube-api-access-g7z4j\") pod \"swift-ring-rebalance-7bhzw\" (UID: \"d06bcf83-999f-419a-9f4f-4e6544576897\") " pod="openstack/swift-ring-rebalance-7bhzw" Jan 28 18:54:03 crc kubenswrapper[4721]: I0128 18:54:03.077479 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/b6c23490-aa3a-4e35-8577-f4bb2581fb39-dispersionconf\") pod \"swift-ring-rebalance-f8mwn\" (UID: \"b6c23490-aa3a-4e35-8577-f4bb2581fb39\") " pod="openstack/swift-ring-rebalance-f8mwn" Jan 28 18:54:03 crc kubenswrapper[4721]: I0128 18:54:03.077507 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b6c23490-aa3a-4e35-8577-f4bb2581fb39-scripts\") pod \"swift-ring-rebalance-f8mwn\" (UID: \"b6c23490-aa3a-4e35-8577-f4bb2581fb39\") " pod="openstack/swift-ring-rebalance-f8mwn" Jan 28 18:54:03 crc kubenswrapper[4721]: I0128 18:54:03.077525 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/b6c23490-aa3a-4e35-8577-f4bb2581fb39-combined-ca-bundle\") pod \"swift-ring-rebalance-f8mwn\" (UID: \"b6c23490-aa3a-4e35-8577-f4bb2581fb39\") " pod="openstack/swift-ring-rebalance-f8mwn" Jan 28 18:54:03 crc kubenswrapper[4721]: I0128 18:54:03.077544 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/b6c23490-aa3a-4e35-8577-f4bb2581fb39-swiftconf\") pod \"swift-ring-rebalance-f8mwn\" (UID: \"b6c23490-aa3a-4e35-8577-f4bb2581fb39\") " pod="openstack/swift-ring-rebalance-f8mwn" Jan 28 18:54:03 crc kubenswrapper[4721]: I0128 18:54:03.077595 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d06bcf83-999f-419a-9f4f-4e6544576897-combined-ca-bundle\") pod \"swift-ring-rebalance-7bhzw\" (UID: \"d06bcf83-999f-419a-9f4f-4e6544576897\") " pod="openstack/swift-ring-rebalance-7bhzw" Jan 28 18:54:03 crc kubenswrapper[4721]: I0128 18:54:03.078484 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/b6c23490-aa3a-4e35-8577-f4bb2581fb39-etc-swift\") pod \"swift-ring-rebalance-f8mwn\" (UID: \"b6c23490-aa3a-4e35-8577-f4bb2581fb39\") " pod="openstack/swift-ring-rebalance-f8mwn" Jan 28 18:54:03 crc kubenswrapper[4721]: I0128 18:54:03.078988 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b6c23490-aa3a-4e35-8577-f4bb2581fb39-scripts\") pod \"swift-ring-rebalance-f8mwn\" (UID: \"b6c23490-aa3a-4e35-8577-f4bb2581fb39\") " pod="openstack/swift-ring-rebalance-f8mwn" Jan 28 18:54:03 crc kubenswrapper[4721]: I0128 18:54:03.078998 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/b6c23490-aa3a-4e35-8577-f4bb2581fb39-ring-data-devices\") pod \"swift-ring-rebalance-f8mwn\" (UID: \"b6c23490-aa3a-4e35-8577-f4bb2581fb39\") " pod="openstack/swift-ring-rebalance-f8mwn" Jan 28 18:54:03 crc kubenswrapper[4721]: I0128 18:54:03.084367 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b6c23490-aa3a-4e35-8577-f4bb2581fb39-combined-ca-bundle\") pod \"swift-ring-rebalance-f8mwn\" (UID: \"b6c23490-aa3a-4e35-8577-f4bb2581fb39\") " pod="openstack/swift-ring-rebalance-f8mwn" Jan 28 18:54:03 crc kubenswrapper[4721]: I0128 18:54:03.086549 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/b6c23490-aa3a-4e35-8577-f4bb2581fb39-swiftconf\") pod \"swift-ring-rebalance-f8mwn\" (UID: \"b6c23490-aa3a-4e35-8577-f4bb2581fb39\") " pod="openstack/swift-ring-rebalance-f8mwn" Jan 28 18:54:03 crc kubenswrapper[4721]: I0128 18:54:03.090388 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/b6c23490-aa3a-4e35-8577-f4bb2581fb39-dispersionconf\") pod \"swift-ring-rebalance-f8mwn\" (UID: \"b6c23490-aa3a-4e35-8577-f4bb2581fb39\") " pod="openstack/swift-ring-rebalance-f8mwn" Jan 28 18:54:03 crc kubenswrapper[4721]: I0128 18:54:03.095772 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dmll7\" (UniqueName: \"kubernetes.io/projected/b6c23490-aa3a-4e35-8577-f4bb2581fb39-kube-api-access-dmll7\") pod \"swift-ring-rebalance-f8mwn\" (UID: 
\"b6c23490-aa3a-4e35-8577-f4bb2581fb39\") " pod="openstack/swift-ring-rebalance-f8mwn" Jan 28 18:54:03 crc kubenswrapper[4721]: I0128 18:54:03.180222 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/d06bcf83-999f-419a-9f4f-4e6544576897-dispersionconf\") pod \"swift-ring-rebalance-7bhzw\" (UID: \"d06bcf83-999f-419a-9f4f-4e6544576897\") " pod="openstack/swift-ring-rebalance-7bhzw" Jan 28 18:54:03 crc kubenswrapper[4721]: I0128 18:54:03.180335 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/d06bcf83-999f-419a-9f4f-4e6544576897-swiftconf\") pod \"swift-ring-rebalance-7bhzw\" (UID: \"d06bcf83-999f-419a-9f4f-4e6544576897\") " pod="openstack/swift-ring-rebalance-7bhzw" Jan 28 18:54:03 crc kubenswrapper[4721]: I0128 18:54:03.180370 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/d06bcf83-999f-419a-9f4f-4e6544576897-ring-data-devices\") pod \"swift-ring-rebalance-7bhzw\" (UID: \"d06bcf83-999f-419a-9f4f-4e6544576897\") " pod="openstack/swift-ring-rebalance-7bhzw" Jan 28 18:54:03 crc kubenswrapper[4721]: I0128 18:54:03.180431 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d06bcf83-999f-419a-9f4f-4e6544576897-scripts\") pod \"swift-ring-rebalance-7bhzw\" (UID: \"d06bcf83-999f-419a-9f4f-4e6544576897\") " pod="openstack/swift-ring-rebalance-7bhzw" Jan 28 18:54:03 crc kubenswrapper[4721]: I0128 18:54:03.180481 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g7z4j\" (UniqueName: \"kubernetes.io/projected/d06bcf83-999f-419a-9f4f-4e6544576897-kube-api-access-g7z4j\") pod \"swift-ring-rebalance-7bhzw\" (UID: \"d06bcf83-999f-419a-9f4f-4e6544576897\") " pod="openstack/swift-ring-rebalance-7bhzw" Jan 28 18:54:03 crc kubenswrapper[4721]: I0128 18:54:03.180599 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d06bcf83-999f-419a-9f4f-4e6544576897-combined-ca-bundle\") pod \"swift-ring-rebalance-7bhzw\" (UID: \"d06bcf83-999f-419a-9f4f-4e6544576897\") " pod="openstack/swift-ring-rebalance-7bhzw" Jan 28 18:54:03 crc kubenswrapper[4721]: I0128 18:54:03.180650 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/d06bcf83-999f-419a-9f4f-4e6544576897-etc-swift\") pod \"swift-ring-rebalance-7bhzw\" (UID: \"d06bcf83-999f-419a-9f4f-4e6544576897\") " pod="openstack/swift-ring-rebalance-7bhzw" Jan 28 18:54:03 crc kubenswrapper[4721]: I0128 18:54:03.181290 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/d06bcf83-999f-419a-9f4f-4e6544576897-ring-data-devices\") pod \"swift-ring-rebalance-7bhzw\" (UID: \"d06bcf83-999f-419a-9f4f-4e6544576897\") " pod="openstack/swift-ring-rebalance-7bhzw" Jan 28 18:54:03 crc kubenswrapper[4721]: I0128 18:54:03.181351 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/d06bcf83-999f-419a-9f4f-4e6544576897-etc-swift\") pod \"swift-ring-rebalance-7bhzw\" (UID: \"d06bcf83-999f-419a-9f4f-4e6544576897\") " pod="openstack/swift-ring-rebalance-7bhzw" Jan 28 18:54:03 crc 
kubenswrapper[4721]: I0128 18:54:03.181782 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d06bcf83-999f-419a-9f4f-4e6544576897-scripts\") pod \"swift-ring-rebalance-7bhzw\" (UID: \"d06bcf83-999f-419a-9f4f-4e6544576897\") " pod="openstack/swift-ring-rebalance-7bhzw" Jan 28 18:54:03 crc kubenswrapper[4721]: I0128 18:54:03.184427 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/d06bcf83-999f-419a-9f4f-4e6544576897-swiftconf\") pod \"swift-ring-rebalance-7bhzw\" (UID: \"d06bcf83-999f-419a-9f4f-4e6544576897\") " pod="openstack/swift-ring-rebalance-7bhzw" Jan 28 18:54:03 crc kubenswrapper[4721]: I0128 18:54:03.184848 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d06bcf83-999f-419a-9f4f-4e6544576897-combined-ca-bundle\") pod \"swift-ring-rebalance-7bhzw\" (UID: \"d06bcf83-999f-419a-9f4f-4e6544576897\") " pod="openstack/swift-ring-rebalance-7bhzw" Jan 28 18:54:03 crc kubenswrapper[4721]: I0128 18:54:03.192428 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/d06bcf83-999f-419a-9f4f-4e6544576897-dispersionconf\") pod \"swift-ring-rebalance-7bhzw\" (UID: \"d06bcf83-999f-419a-9f4f-4e6544576897\") " pod="openstack/swift-ring-rebalance-7bhzw" Jan 28 18:54:03 crc kubenswrapper[4721]: I0128 18:54:03.200864 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g7z4j\" (UniqueName: \"kubernetes.io/projected/d06bcf83-999f-419a-9f4f-4e6544576897-kube-api-access-g7z4j\") pod \"swift-ring-rebalance-7bhzw\" (UID: \"d06bcf83-999f-419a-9f4f-4e6544576897\") " pod="openstack/swift-ring-rebalance-7bhzw" Jan 28 18:54:03 crc kubenswrapper[4721]: I0128 18:54:03.300049 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-7bhzw" Jan 28 18:54:03 crc kubenswrapper[4721]: I0128 18:54:03.316815 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-f8mwn" Jan 28 18:54:03 crc kubenswrapper[4721]: I0128 18:54:03.372774 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-f8mwn" Jan 28 18:54:03 crc kubenswrapper[4721]: I0128 18:54:03.485587 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dmll7\" (UniqueName: \"kubernetes.io/projected/b6c23490-aa3a-4e35-8577-f4bb2581fb39-kube-api-access-dmll7\") pod \"b6c23490-aa3a-4e35-8577-f4bb2581fb39\" (UID: \"b6c23490-aa3a-4e35-8577-f4bb2581fb39\") " Jan 28 18:54:03 crc kubenswrapper[4721]: I0128 18:54:03.486235 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/b6c23490-aa3a-4e35-8577-f4bb2581fb39-etc-swift\") pod \"b6c23490-aa3a-4e35-8577-f4bb2581fb39\" (UID: \"b6c23490-aa3a-4e35-8577-f4bb2581fb39\") " Jan 28 18:54:03 crc kubenswrapper[4721]: I0128 18:54:03.486275 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/b6c23490-aa3a-4e35-8577-f4bb2581fb39-swiftconf\") pod \"b6c23490-aa3a-4e35-8577-f4bb2581fb39\" (UID: \"b6c23490-aa3a-4e35-8577-f4bb2581fb39\") " Jan 28 18:54:03 crc kubenswrapper[4721]: I0128 18:54:03.486320 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/b6c23490-aa3a-4e35-8577-f4bb2581fb39-ring-data-devices\") pod \"b6c23490-aa3a-4e35-8577-f4bb2581fb39\" (UID: \"b6c23490-aa3a-4e35-8577-f4bb2581fb39\") " Jan 28 18:54:03 crc kubenswrapper[4721]: I0128 18:54:03.486339 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b6c23490-aa3a-4e35-8577-f4bb2581fb39-combined-ca-bundle\") pod \"b6c23490-aa3a-4e35-8577-f4bb2581fb39\" (UID: \"b6c23490-aa3a-4e35-8577-f4bb2581fb39\") " Jan 28 18:54:03 crc kubenswrapper[4721]: I0128 18:54:03.486393 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b6c23490-aa3a-4e35-8577-f4bb2581fb39-scripts\") pod \"b6c23490-aa3a-4e35-8577-f4bb2581fb39\" (UID: \"b6c23490-aa3a-4e35-8577-f4bb2581fb39\") " Jan 28 18:54:03 crc kubenswrapper[4721]: I0128 18:54:03.486420 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/b6c23490-aa3a-4e35-8577-f4bb2581fb39-dispersionconf\") pod \"b6c23490-aa3a-4e35-8577-f4bb2581fb39\" (UID: \"b6c23490-aa3a-4e35-8577-f4bb2581fb39\") " Jan 28 18:54:03 crc kubenswrapper[4721]: I0128 18:54:03.486922 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b6c23490-aa3a-4e35-8577-f4bb2581fb39-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "b6c23490-aa3a-4e35-8577-f4bb2581fb39" (UID: "b6c23490-aa3a-4e35-8577-f4bb2581fb39"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 18:54:03 crc kubenswrapper[4721]: I0128 18:54:03.487026 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6c23490-aa3a-4e35-8577-f4bb2581fb39-scripts" (OuterVolumeSpecName: "scripts") pod "b6c23490-aa3a-4e35-8577-f4bb2581fb39" (UID: "b6c23490-aa3a-4e35-8577-f4bb2581fb39"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:54:03 crc kubenswrapper[4721]: I0128 18:54:03.486944 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6c23490-aa3a-4e35-8577-f4bb2581fb39-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "b6c23490-aa3a-4e35-8577-f4bb2581fb39" (UID: "b6c23490-aa3a-4e35-8577-f4bb2581fb39"). InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:54:03 crc kubenswrapper[4721]: I0128 18:54:03.487416 4721 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/b6c23490-aa3a-4e35-8577-f4bb2581fb39-etc-swift\") on node \"crc\" DevicePath \"\"" Jan 28 18:54:03 crc kubenswrapper[4721]: I0128 18:54:03.492407 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6c23490-aa3a-4e35-8577-f4bb2581fb39-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "b6c23490-aa3a-4e35-8577-f4bb2581fb39" (UID: "b6c23490-aa3a-4e35-8577-f4bb2581fb39"). InnerVolumeSpecName "swiftconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:54:03 crc kubenswrapper[4721]: I0128 18:54:03.492457 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6c23490-aa3a-4e35-8577-f4bb2581fb39-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b6c23490-aa3a-4e35-8577-f4bb2581fb39" (UID: "b6c23490-aa3a-4e35-8577-f4bb2581fb39"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:54:03 crc kubenswrapper[4721]: I0128 18:54:03.493867 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6c23490-aa3a-4e35-8577-f4bb2581fb39-kube-api-access-dmll7" (OuterVolumeSpecName: "kube-api-access-dmll7") pod "b6c23490-aa3a-4e35-8577-f4bb2581fb39" (UID: "b6c23490-aa3a-4e35-8577-f4bb2581fb39"). InnerVolumeSpecName "kube-api-access-dmll7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:54:03 crc kubenswrapper[4721]: I0128 18:54:03.495208 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6c23490-aa3a-4e35-8577-f4bb2581fb39-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "b6c23490-aa3a-4e35-8577-f4bb2581fb39" (UID: "b6c23490-aa3a-4e35-8577-f4bb2581fb39"). InnerVolumeSpecName "dispersionconf". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:54:03 crc kubenswrapper[4721]: I0128 18:54:03.590105 4721 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/b6c23490-aa3a-4e35-8577-f4bb2581fb39-swiftconf\") on node \"crc\" DevicePath \"\"" Jan 28 18:54:03 crc kubenswrapper[4721]: I0128 18:54:03.590137 4721 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/b6c23490-aa3a-4e35-8577-f4bb2581fb39-ring-data-devices\") on node \"crc\" DevicePath \"\"" Jan 28 18:54:03 crc kubenswrapper[4721]: I0128 18:54:03.590150 4721 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b6c23490-aa3a-4e35-8577-f4bb2581fb39-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 18:54:03 crc kubenswrapper[4721]: I0128 18:54:03.590159 4721 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b6c23490-aa3a-4e35-8577-f4bb2581fb39-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 18:54:03 crc kubenswrapper[4721]: I0128 18:54:03.590187 4721 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/b6c23490-aa3a-4e35-8577-f4bb2581fb39-dispersionconf\") on node \"crc\" DevicePath \"\"" Jan 28 18:54:03 crc kubenswrapper[4721]: I0128 18:54:03.590197 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dmll7\" (UniqueName: \"kubernetes.io/projected/b6c23490-aa3a-4e35-8577-f4bb2581fb39-kube-api-access-dmll7\") on node \"crc\" DevicePath \"\"" Jan 28 18:54:03 crc kubenswrapper[4721]: I0128 18:54:03.803862 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-7bhzw"] Jan 28 18:54:03 crc kubenswrapper[4721]: W0128 18:54:03.809900 4721 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd06bcf83_999f_419a_9f4f_4e6544576897.slice/crio-d3033583097ddc5adca805fee37257f88854f57b2c4e1333946640414861b995 WatchSource:0}: Error finding container d3033583097ddc5adca805fee37257f88854f57b2c4e1333946640414861b995: Status 404 returned error can't find the container with id d3033583097ddc5adca805fee37257f88854f57b2c4e1333946640414861b995 Jan 28 18:54:04 crc kubenswrapper[4721]: I0128 18:54:04.326380 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-f8mwn" Jan 28 18:54:04 crc kubenswrapper[4721]: I0128 18:54:04.326373 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-7bhzw" event={"ID":"d06bcf83-999f-419a-9f4f-4e6544576897","Type":"ContainerStarted","Data":"d3033583097ddc5adca805fee37257f88854f57b2c4e1333946640414861b995"} Jan 28 18:54:04 crc kubenswrapper[4721]: I0128 18:54:04.370959 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-ring-rebalance-f8mwn"] Jan 28 18:54:04 crc kubenswrapper[4721]: I0128 18:54:04.386370 4721 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/swift-ring-rebalance-f8mwn"] Jan 28 18:54:05 crc kubenswrapper[4721]: I0128 18:54:05.541709 4721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6c23490-aa3a-4e35-8577-f4bb2581fb39" path="/var/lib/kubelet/pods/b6c23490-aa3a-4e35-8577-f4bb2581fb39/volumes" Jan 28 18:54:06 crc kubenswrapper[4721]: I0128 18:54:06.731277 4721 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-666b6646f7-59s7w" podUID="dafbdcb9-9fbe-40c2-920d-6111bf0e2d88" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.106:5353: connect: connection refused" Jan 28 18:54:07 crc kubenswrapper[4721]: I0128 18:54:07.064792 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/aa657a81-842e-4292-a71e-e208b4c0bd69-etc-swift\") pod \"swift-storage-0\" (UID: \"aa657a81-842e-4292-a71e-e208b4c0bd69\") " pod="openstack/swift-storage-0" Jan 28 18:54:07 crc kubenswrapper[4721]: E0128 18:54:07.065494 4721 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 28 18:54:07 crc kubenswrapper[4721]: E0128 18:54:07.065659 4721 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 28 18:54:07 crc kubenswrapper[4721]: E0128 18:54:07.065770 4721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/aa657a81-842e-4292-a71e-e208b4c0bd69-etc-swift podName:aa657a81-842e-4292-a71e-e208b4c0bd69 nodeName:}" failed. No retries permitted until 2026-01-28 18:54:15.065735147 +0000 UTC m=+1220.791040707 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/aa657a81-842e-4292-a71e-e208b4c0bd69-etc-swift") pod "swift-storage-0" (UID: "aa657a81-842e-4292-a71e-e208b4c0bd69") : configmap "swift-ring-files" not found Jan 28 18:54:07 crc kubenswrapper[4721]: I0128 18:54:07.231435 4721 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-57d769cc4f-vc8rk" podUID="aecb4886-3e12-46f5-b2dd-20260e64e4c7" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.107:5353: connect: connection refused" Jan 28 18:54:07 crc kubenswrapper[4721]: I0128 18:54:07.386399 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cloudkitty-lokistack-distributor-66dfd9bb-gzhlc" Jan 28 18:54:07 crc kubenswrapper[4721]: I0128 18:54:07.543119 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cloudkitty-lokistack-querier-795fd8f8cc-4gfwq" Jan 28 18:54:07 crc kubenswrapper[4721]: I0128 18:54:07.697743 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cloudkitty-lokistack-query-frontend-5cd44666df-cd79j" Jan 28 18:54:08 crc kubenswrapper[4721]: I0128 18:54:08.887463 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cloudkitty-lokistack-index-gateway-0" Jan 28 18:54:08 crc kubenswrapper[4721]: I0128 18:54:08.928995 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cloudkitty-lokistack-compactor-0" Jan 28 18:54:09 crc kubenswrapper[4721]: I0128 18:54:09.368494 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-lokistack-gateway-7db4f4db8c-b6984" event={"ID":"dffa61ba-c98d-446a-a4d0-34e1e15a093b","Type":"ContainerStarted","Data":"44ce41ada7681a86f9447d85ccca746fd26dcd0d69ce43a43ec00883a66c88e5"} Jan 28 18:54:11 crc kubenswrapper[4721]: I0128 18:54:11.384071 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cloudkitty-lokistack-gateway-7db4f4db8c-b6984" Jan 28 18:54:11 crc kubenswrapper[4721]: I0128 18:54:11.393894 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cloudkitty-lokistack-gateway-7db4f4db8c-b6984" Jan 28 18:54:11 crc kubenswrapper[4721]: I0128 18:54:11.406579 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cloudkitty-lokistack-gateway-7db4f4db8c-b6984" podStartSLOduration=-9223371982.448223 podStartE2EDuration="54.406552873s" podCreationTimestamp="2026-01-28 18:53:17 +0000 UTC" firstStartedPulling="2026-01-28 18:53:40.821320109 +0000 UTC m=+1186.546625669" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:54:11.40323328 +0000 UTC m=+1217.128538840" watchObservedRunningTime="2026-01-28 18:54:11.406552873 +0000 UTC m=+1217.131858433" Jan 28 18:54:11 crc kubenswrapper[4721]: I0128 18:54:11.731655 4721 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-666b6646f7-59s7w" podUID="dafbdcb9-9fbe-40c2-920d-6111bf0e2d88" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.106:5353: connect: connection refused" Jan 28 18:54:12 crc kubenswrapper[4721]: I0128 18:54:12.230392 4721 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-57d769cc4f-vc8rk" podUID="aecb4886-3e12-46f5-b2dd-20260e64e4c7" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.107:5353: connect: connection 
refused" Jan 28 18:54:15 crc kubenswrapper[4721]: I0128 18:54:15.149589 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/aa657a81-842e-4292-a71e-e208b4c0bd69-etc-swift\") pod \"swift-storage-0\" (UID: \"aa657a81-842e-4292-a71e-e208b4c0bd69\") " pod="openstack/swift-storage-0" Jan 28 18:54:15 crc kubenswrapper[4721]: E0128 18:54:15.150341 4721 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 28 18:54:15 crc kubenswrapper[4721]: E0128 18:54:15.150664 4721 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 28 18:54:15 crc kubenswrapper[4721]: E0128 18:54:15.150740 4721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/aa657a81-842e-4292-a71e-e208b4c0bd69-etc-swift podName:aa657a81-842e-4292-a71e-e208b4c0bd69 nodeName:}" failed. No retries permitted until 2026-01-28 18:54:31.150712817 +0000 UTC m=+1236.876018377 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/aa657a81-842e-4292-a71e-e208b4c0bd69-etc-swift") pod "swift-storage-0" (UID: "aa657a81-842e-4292-a71e-e208b4c0bd69") : configmap "swift-ring-files" not found Jan 28 18:54:15 crc kubenswrapper[4721]: I0128 18:54:15.868775 4721 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0" Jan 28 18:54:15 crc kubenswrapper[4721]: I0128 18:54:15.965353 4721 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-cell1-galera-0" podUID="00b26873-8c7a-4ea7-b334-873b01cc5d84" containerName="galera" probeResult="failure" output=< Jan 28 18:54:15 crc kubenswrapper[4721]: wsrep_local_state_comment (Joined) differs from Synced Jan 28 18:54:15 crc kubenswrapper[4721]: > Jan 28 18:54:16 crc kubenswrapper[4721]: I0128 18:54:16.730786 4721 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-666b6646f7-59s7w" podUID="dafbdcb9-9fbe-40c2-920d-6111bf0e2d88" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.106:5353: connect: connection refused" Jan 28 18:54:17 crc kubenswrapper[4721]: I0128 18:54:17.231327 4721 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-57d769cc4f-vc8rk" podUID="aecb4886-3e12-46f5-b2dd-20260e64e4c7" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.107:5353: connect: connection refused" Jan 28 18:54:17 crc kubenswrapper[4721]: I0128 18:54:17.329973 4721 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0" Jan 28 18:54:17 crc kubenswrapper[4721]: I0128 18:54:17.419656 4721 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-galera-0" podUID="0e740af0-cd0c-4f3e-8be1-facce1656583" containerName="galera" probeResult="failure" output=< Jan 28 18:54:17 crc kubenswrapper[4721]: wsrep_local_state_comment (Joined) differs from Synced Jan 28 18:54:17 crc kubenswrapper[4721]: > Jan 28 18:54:18 crc kubenswrapper[4721]: I0128 18:54:18.459763 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-dmttf" event={"ID":"bacb5ba4-39a7-4774-818d-67453153a34f","Type":"ContainerStarted","Data":"e8ce92d16f35fc795cf934d249898453edc8fff6d09120344159bc6f9f5c498f"} Jan 28 18:54:18 crc kubenswrapper[4721]: I0128 18:54:18.462685 4721 
generic.go:334] "Generic (PLEG): container finished" podID="b02cc010-e156-405e-aac3-45c2afa254ac" containerID="c03ee72a8b18b82c44802b661ca7fc04b3039f3c3a94468e3e07d22479fd07b1" exitCode=0 Jan 28 18:54:18 crc kubenswrapper[4721]: I0128 18:54:18.462822 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7f896c8c65-m5tkl" event={"ID":"b02cc010-e156-405e-aac3-45c2afa254ac","Type":"ContainerDied","Data":"c03ee72a8b18b82c44802b661ca7fc04b3039f3c3a94468e3e07d22479fd07b1"} Jan 28 18:54:18 crc kubenswrapper[4721]: I0128 18:54:18.491164 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-metrics-dmttf" podStartSLOduration=22.491133585 podStartE2EDuration="22.491133585s" podCreationTimestamp="2026-01-28 18:53:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:54:18.486381845 +0000 UTC m=+1224.211687425" watchObservedRunningTime="2026-01-28 18:54:18.491133585 +0000 UTC m=+1224.216439145" Jan 28 18:54:18 crc kubenswrapper[4721]: I0128 18:54:18.948424 4721 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cloudkitty-lokistack-ingester-0" podUID="742e65f6-66eb-4334-9328-b77d47d420d0" containerName="loki-ingester" probeResult="failure" output="HTTP probe failed with statuscode: 503" Jan 28 18:54:19 crc kubenswrapper[4721]: I0128 18:54:19.472855 4721 generic.go:334] "Generic (PLEG): container finished" podID="f0d97192-cb28-436d-adc6-a3aafd8aad46" containerID="a0f4e8b7b4c9005f03034e555cd6e1fee8a76df36f415f9caf9100ad3f1b839e" exitCode=0 Jan 28 18:54:19 crc kubenswrapper[4721]: I0128 18:54:19.472976 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-tsjl9" event={"ID":"f0d97192-cb28-436d-adc6-a3aafd8aad46","Type":"ContainerDied","Data":"a0f4e8b7b4c9005f03034e555cd6e1fee8a76df36f415f9caf9100ad3f1b839e"} Jan 28 18:54:20 crc kubenswrapper[4721]: I0128 18:54:20.486638 4721 generic.go:334] "Generic (PLEG): container finished" podID="69738eb9-4e39-4dae-9c2e-4f0f0e214938" containerID="fbe2882cce713417850b2a070e822a818e0dad47466e5ed8f599f66fa217dacb" exitCode=0 Jan 28 18:54:20 crc kubenswrapper[4721]: I0128 18:54:20.487042 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-xxb2g" event={"ID":"69738eb9-4e39-4dae-9c2e-4f0f0e214938","Type":"ContainerDied","Data":"fbe2882cce713417850b2a070e822a818e0dad47466e5ed8f599f66fa217dacb"} Jan 28 18:54:21 crc kubenswrapper[4721]: I0128 18:54:21.721320 4721 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-sbclw" podUID="c391bae1-d3a9-4ccd-a868-d7263d9b0a28" containerName="ovn-controller" probeResult="failure" output=< Jan 28 18:54:21 crc kubenswrapper[4721]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Jan 28 18:54:21 crc kubenswrapper[4721]: > Jan 28 18:54:23 crc kubenswrapper[4721]: I0128 18:54:23.518304 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-vc8rk" event={"ID":"aecb4886-3e12-46f5-b2dd-20260e64e4c7","Type":"ContainerDied","Data":"a40c760cb73f67b96a0da729d622380413a6e5bec210167a9091a79462284907"} Jan 28 18:54:23 crc kubenswrapper[4721]: I0128 18:54:23.518936 4721 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a40c760cb73f67b96a0da729d622380413a6e5bec210167a9091a79462284907" Jan 28 18:54:23 crc kubenswrapper[4721]: I0128 18:54:23.521295 
4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-59s7w" event={"ID":"dafbdcb9-9fbe-40c2-920d-6111bf0e2d88","Type":"ContainerDied","Data":"1b3942323fd7702c2b2a09db8e64a799961984f8dc00591f6e787a8604665da6"} Jan 28 18:54:23 crc kubenswrapper[4721]: I0128 18:54:23.521352 4721 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1b3942323fd7702c2b2a09db8e64a799961984f8dc00591f6e787a8604665da6" Jan 28 18:54:23 crc kubenswrapper[4721]: I0128 18:54:23.522750 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7f896c8c65-m5tkl" event={"ID":"b02cc010-e156-405e-aac3-45c2afa254ac","Type":"ContainerDied","Data":"4e34ad2b57d83840a71e4d68af4386188b91d2c44a9152989d9dd4205f25bcfa"} Jan 28 18:54:23 crc kubenswrapper[4721]: I0128 18:54:23.522793 4721 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4e34ad2b57d83840a71e4d68af4386188b91d2c44a9152989d9dd4205f25bcfa" Jan 28 18:54:23 crc kubenswrapper[4721]: I0128 18:54:23.566592 4721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-59s7w" Jan 28 18:54:23 crc kubenswrapper[4721]: I0128 18:54:23.577667 4721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7f896c8c65-m5tkl" Jan 28 18:54:23 crc kubenswrapper[4721]: I0128 18:54:23.591306 4721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-vc8rk" Jan 28 18:54:23 crc kubenswrapper[4721]: I0128 18:54:23.657651 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xl78z\" (UniqueName: \"kubernetes.io/projected/dafbdcb9-9fbe-40c2-920d-6111bf0e2d88-kube-api-access-xl78z\") pod \"dafbdcb9-9fbe-40c2-920d-6111bf0e2d88\" (UID: \"dafbdcb9-9fbe-40c2-920d-6111bf0e2d88\") " Jan 28 18:54:23 crc kubenswrapper[4721]: I0128 18:54:23.657776 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dafbdcb9-9fbe-40c2-920d-6111bf0e2d88-config\") pod \"dafbdcb9-9fbe-40c2-920d-6111bf0e2d88\" (UID: \"dafbdcb9-9fbe-40c2-920d-6111bf0e2d88\") " Jan 28 18:54:23 crc kubenswrapper[4721]: I0128 18:54:23.657878 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/dafbdcb9-9fbe-40c2-920d-6111bf0e2d88-dns-svc\") pod \"dafbdcb9-9fbe-40c2-920d-6111bf0e2d88\" (UID: \"dafbdcb9-9fbe-40c2-920d-6111bf0e2d88\") " Jan 28 18:54:23 crc kubenswrapper[4721]: I0128 18:54:23.678437 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dafbdcb9-9fbe-40c2-920d-6111bf0e2d88-kube-api-access-xl78z" (OuterVolumeSpecName: "kube-api-access-xl78z") pod "dafbdcb9-9fbe-40c2-920d-6111bf0e2d88" (UID: "dafbdcb9-9fbe-40c2-920d-6111bf0e2d88"). InnerVolumeSpecName "kube-api-access-xl78z". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:54:23 crc kubenswrapper[4721]: I0128 18:54:23.705669 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dafbdcb9-9fbe-40c2-920d-6111bf0e2d88-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "dafbdcb9-9fbe-40c2-920d-6111bf0e2d88" (UID: "dafbdcb9-9fbe-40c2-920d-6111bf0e2d88"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:54:23 crc kubenswrapper[4721]: I0128 18:54:23.706250 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dafbdcb9-9fbe-40c2-920d-6111bf0e2d88-config" (OuterVolumeSpecName: "config") pod "dafbdcb9-9fbe-40c2-920d-6111bf0e2d88" (UID: "dafbdcb9-9fbe-40c2-920d-6111bf0e2d88"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:54:23 crc kubenswrapper[4721]: I0128 18:54:23.760330 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hvjw5\" (UniqueName: \"kubernetes.io/projected/b02cc010-e156-405e-aac3-45c2afa254ac-kube-api-access-hvjw5\") pod \"b02cc010-e156-405e-aac3-45c2afa254ac\" (UID: \"b02cc010-e156-405e-aac3-45c2afa254ac\") " Jan 28 18:54:23 crc kubenswrapper[4721]: I0128 18:54:23.760392 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aecb4886-3e12-46f5-b2dd-20260e64e4c7-config\") pod \"aecb4886-3e12-46f5-b2dd-20260e64e4c7\" (UID: \"aecb4886-3e12-46f5-b2dd-20260e64e4c7\") " Jan 28 18:54:23 crc kubenswrapper[4721]: I0128 18:54:23.760421 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/aecb4886-3e12-46f5-b2dd-20260e64e4c7-dns-svc\") pod \"aecb4886-3e12-46f5-b2dd-20260e64e4c7\" (UID: \"aecb4886-3e12-46f5-b2dd-20260e64e4c7\") " Jan 28 18:54:23 crc kubenswrapper[4721]: I0128 18:54:23.760562 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tfjld\" (UniqueName: \"kubernetes.io/projected/aecb4886-3e12-46f5-b2dd-20260e64e4c7-kube-api-access-tfjld\") pod \"aecb4886-3e12-46f5-b2dd-20260e64e4c7\" (UID: \"aecb4886-3e12-46f5-b2dd-20260e64e4c7\") " Jan 28 18:54:23 crc kubenswrapper[4721]: I0128 18:54:23.760627 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b02cc010-e156-405e-aac3-45c2afa254ac-dns-svc\") pod \"b02cc010-e156-405e-aac3-45c2afa254ac\" (UID: \"b02cc010-e156-405e-aac3-45c2afa254ac\") " Jan 28 18:54:23 crc kubenswrapper[4721]: I0128 18:54:23.760653 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b02cc010-e156-405e-aac3-45c2afa254ac-ovsdbserver-sb\") pod \"b02cc010-e156-405e-aac3-45c2afa254ac\" (UID: \"b02cc010-e156-405e-aac3-45c2afa254ac\") " Jan 28 18:54:23 crc kubenswrapper[4721]: I0128 18:54:23.760736 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b02cc010-e156-405e-aac3-45c2afa254ac-config\") pod \"b02cc010-e156-405e-aac3-45c2afa254ac\" (UID: \"b02cc010-e156-405e-aac3-45c2afa254ac\") " Jan 28 18:54:23 crc kubenswrapper[4721]: I0128 18:54:23.761459 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xl78z\" (UniqueName: \"kubernetes.io/projected/dafbdcb9-9fbe-40c2-920d-6111bf0e2d88-kube-api-access-xl78z\") on node \"crc\" DevicePath \"\"" Jan 28 18:54:23 crc kubenswrapper[4721]: I0128 18:54:23.761485 4721 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dafbdcb9-9fbe-40c2-920d-6111bf0e2d88-config\") on node \"crc\" DevicePath \"\"" Jan 28 18:54:23 crc kubenswrapper[4721]: I0128 18:54:23.761495 4721 reconciler_common.go:293] 
"Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/dafbdcb9-9fbe-40c2-920d-6111bf0e2d88-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 28 18:54:23 crc kubenswrapper[4721]: I0128 18:54:23.765623 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aecb4886-3e12-46f5-b2dd-20260e64e4c7-kube-api-access-tfjld" (OuterVolumeSpecName: "kube-api-access-tfjld") pod "aecb4886-3e12-46f5-b2dd-20260e64e4c7" (UID: "aecb4886-3e12-46f5-b2dd-20260e64e4c7"). InnerVolumeSpecName "kube-api-access-tfjld". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:54:23 crc kubenswrapper[4721]: I0128 18:54:23.766772 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b02cc010-e156-405e-aac3-45c2afa254ac-kube-api-access-hvjw5" (OuterVolumeSpecName: "kube-api-access-hvjw5") pod "b02cc010-e156-405e-aac3-45c2afa254ac" (UID: "b02cc010-e156-405e-aac3-45c2afa254ac"). InnerVolumeSpecName "kube-api-access-hvjw5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:54:23 crc kubenswrapper[4721]: I0128 18:54:23.795001 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b02cc010-e156-405e-aac3-45c2afa254ac-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "b02cc010-e156-405e-aac3-45c2afa254ac" (UID: "b02cc010-e156-405e-aac3-45c2afa254ac"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:54:23 crc kubenswrapper[4721]: I0128 18:54:23.798261 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b02cc010-e156-405e-aac3-45c2afa254ac-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "b02cc010-e156-405e-aac3-45c2afa254ac" (UID: "b02cc010-e156-405e-aac3-45c2afa254ac"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:54:23 crc kubenswrapper[4721]: I0128 18:54:23.807431 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b02cc010-e156-405e-aac3-45c2afa254ac-config" (OuterVolumeSpecName: "config") pod "b02cc010-e156-405e-aac3-45c2afa254ac" (UID: "b02cc010-e156-405e-aac3-45c2afa254ac"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:54:23 crc kubenswrapper[4721]: I0128 18:54:23.815556 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aecb4886-3e12-46f5-b2dd-20260e64e4c7-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "aecb4886-3e12-46f5-b2dd-20260e64e4c7" (UID: "aecb4886-3e12-46f5-b2dd-20260e64e4c7"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:54:23 crc kubenswrapper[4721]: I0128 18:54:23.817283 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aecb4886-3e12-46f5-b2dd-20260e64e4c7-config" (OuterVolumeSpecName: "config") pod "aecb4886-3e12-46f5-b2dd-20260e64e4c7" (UID: "aecb4886-3e12-46f5-b2dd-20260e64e4c7"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:54:23 crc kubenswrapper[4721]: I0128 18:54:23.863787 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hvjw5\" (UniqueName: \"kubernetes.io/projected/b02cc010-e156-405e-aac3-45c2afa254ac-kube-api-access-hvjw5\") on node \"crc\" DevicePath \"\"" Jan 28 18:54:23 crc kubenswrapper[4721]: I0128 18:54:23.863832 4721 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aecb4886-3e12-46f5-b2dd-20260e64e4c7-config\") on node \"crc\" DevicePath \"\"" Jan 28 18:54:23 crc kubenswrapper[4721]: I0128 18:54:23.863842 4721 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/aecb4886-3e12-46f5-b2dd-20260e64e4c7-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 28 18:54:23 crc kubenswrapper[4721]: I0128 18:54:23.863852 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tfjld\" (UniqueName: \"kubernetes.io/projected/aecb4886-3e12-46f5-b2dd-20260e64e4c7-kube-api-access-tfjld\") on node \"crc\" DevicePath \"\"" Jan 28 18:54:23 crc kubenswrapper[4721]: I0128 18:54:23.863863 4721 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b02cc010-e156-405e-aac3-45c2afa254ac-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 28 18:54:23 crc kubenswrapper[4721]: I0128 18:54:23.863874 4721 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b02cc010-e156-405e-aac3-45c2afa254ac-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 28 18:54:23 crc kubenswrapper[4721]: I0128 18:54:23.863883 4721 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b02cc010-e156-405e-aac3-45c2afa254ac-config\") on node \"crc\" DevicePath \"\"" Jan 28 18:54:24 crc kubenswrapper[4721]: I0128 18:54:24.534482 4721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-vc8rk" Jan 28 18:54:24 crc kubenswrapper[4721]: I0128 18:54:24.534605 4721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7f896c8c65-m5tkl" Jan 28 18:54:24 crc kubenswrapper[4721]: I0128 18:54:24.534605 4721 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-59s7w" Jan 28 18:54:24 crc kubenswrapper[4721]: I0128 18:54:24.618207 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7f896c8c65-m5tkl"] Jan 28 18:54:24 crc kubenswrapper[4721]: I0128 18:54:24.628039 4721 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7f896c8c65-m5tkl"] Jan 28 18:54:24 crc kubenswrapper[4721]: I0128 18:54:24.629087 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0" Jan 28 18:54:24 crc kubenswrapper[4721]: I0128 18:54:24.639245 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-59s7w"] Jan 28 18:54:24 crc kubenswrapper[4721]: I0128 18:54:24.660440 4721 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-59s7w"] Jan 28 18:54:24 crc kubenswrapper[4721]: I0128 18:54:24.675245 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-vc8rk"] Jan 28 18:54:24 crc kubenswrapper[4721]: I0128 18:54:24.686951 4721 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-vc8rk"] Jan 28 18:54:24 crc kubenswrapper[4721]: E0128 18:54:24.736215 4721 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-swift-proxy-server:current-podified" Jan 28 18:54:24 crc kubenswrapper[4721]: E0128 18:54:24.736482 4721 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:swift-ring-rebalance,Image:quay.io/podified-antelope-centos9/openstack-swift-proxy-server:current-podified,Command:[/usr/local/bin/swift-ring-tool 
all],Args:[],WorkingDir:/etc/swift,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CM_NAME,Value:swift-ring-files,ValueFrom:nil,},EnvVar{Name:NAMESPACE,Value:openstack,ValueFrom:nil,},EnvVar{Name:OWNER_APIVERSION,Value:swift.openstack.org/v1beta1,ValueFrom:nil,},EnvVar{Name:OWNER_KIND,Value:SwiftRing,ValueFrom:nil,},EnvVar{Name:OWNER_NAME,Value:swift-ring,ValueFrom:nil,},EnvVar{Name:OWNER_UID,Value:c4816136-5409-419d-b21e-d9a4fe2c1c49,ValueFrom:nil,},EnvVar{Name:SWIFT_MIN_PART_HOURS,Value:1,ValueFrom:nil,},EnvVar{Name:SWIFT_PART_POWER,Value:10,ValueFrom:nil,},EnvVar{Name:SWIFT_REPLICAS,Value:1,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/swift-ring-tool,SubPath:swift-ring-tool,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:swiftconf,ReadOnly:true,MountPath:/etc/swift/swift.conf,SubPath:swift.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:etc-swift,ReadOnly:false,MountPath:/etc/swift,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ring-data-devices,ReadOnly:true,MountPath:/var/lib/config-data/ring-devices,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dispersionconf,ReadOnly:true,MountPath:/etc/swift/dispersion.conf,SubPath:dispersion.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-g7z4j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42445,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod swift-ring-rebalance-7bhzw_openstack(d06bcf83-999f-419a-9f4f-4e6544576897): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 28 18:54:24 crc kubenswrapper[4721]: E0128 18:54:24.737649 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"swift-ring-rebalance\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/swift-ring-rebalance-7bhzw" podUID="d06bcf83-999f-419a-9f4f-4e6544576897" Jan 28 18:54:25 crc kubenswrapper[4721]: I0128 18:54:25.541634 4721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aecb4886-3e12-46f5-b2dd-20260e64e4c7" path="/var/lib/kubelet/pods/aecb4886-3e12-46f5-b2dd-20260e64e4c7/volumes" Jan 28 18:54:25 crc kubenswrapper[4721]: I0128 18:54:25.542711 4721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b02cc010-e156-405e-aac3-45c2afa254ac" 
path="/var/lib/kubelet/pods/b02cc010-e156-405e-aac3-45c2afa254ac/volumes" Jan 28 18:54:25 crc kubenswrapper[4721]: I0128 18:54:25.543985 4721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dafbdcb9-9fbe-40c2-920d-6111bf0e2d88" path="/var/lib/kubelet/pods/dafbdcb9-9fbe-40c2-920d-6111bf0e2d88/volumes" Jan 28 18:54:25 crc kubenswrapper[4721]: I0128 18:54:25.547964 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-xxb2g" event={"ID":"69738eb9-4e39-4dae-9c2e-4f0f0e214938","Type":"ContainerStarted","Data":"bf4db4b5723a0cce5ab54a03821e9849b73271571bbc2b763dbe4c63e29bbb93"} Jan 28 18:54:25 crc kubenswrapper[4721]: I0128 18:54:25.548445 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-698758b865-xxb2g" Jan 28 18:54:25 crc kubenswrapper[4721]: I0128 18:54:25.552063 4721 generic.go:334] "Generic (PLEG): container finished" podID="dc56a986-671d-4f17-8386-939d0fd9394a" containerID="f7340b42defbd0e6762eaa0362961d4ee3d0113dc7766d3deb6846418878cdd1" exitCode=0 Jan 28 18:54:25 crc kubenswrapper[4721]: I0128 18:54:25.552096 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"dc56a986-671d-4f17-8386-939d0fd9394a","Type":"ContainerDied","Data":"f7340b42defbd0e6762eaa0362961d4ee3d0113dc7766d3deb6846418878cdd1"} Jan 28 18:54:25 crc kubenswrapper[4721]: I0128 18:54:25.556251 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-tsjl9" event={"ID":"f0d97192-cb28-436d-adc6-a3aafd8aad46","Type":"ContainerStarted","Data":"bc7e174e68ffbb135ab09e3a6e5fb466556062db0ef3a8bf819fd716ee75696a"} Jan 28 18:54:25 crc kubenswrapper[4721]: I0128 18:54:25.556296 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-86db49b7ff-tsjl9" Jan 28 18:54:25 crc kubenswrapper[4721]: E0128 18:54:25.557473 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"swift-ring-rebalance\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-swift-proxy-server:current-podified\\\"\"" pod="openstack/swift-ring-rebalance-7bhzw" podUID="d06bcf83-999f-419a-9f4f-4e6544576897" Jan 28 18:54:25 crc kubenswrapper[4721]: I0128 18:54:25.573253 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-698758b865-xxb2g" podStartSLOduration=28.573229501 podStartE2EDuration="28.573229501s" podCreationTimestamp="2026-01-28 18:53:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:54:25.565717535 +0000 UTC m=+1231.291023105" watchObservedRunningTime="2026-01-28 18:54:25.573229501 +0000 UTC m=+1231.298535061" Jan 28 18:54:25 crc kubenswrapper[4721]: I0128 18:54:25.650902 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-86db49b7ff-tsjl9" podStartSLOduration=28.650874617 podStartE2EDuration="28.650874617s" podCreationTimestamp="2026-01-28 18:53:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:54:25.613846841 +0000 UTC m=+1231.339152401" watchObservedRunningTime="2026-01-28 18:54:25.650874617 +0000 UTC m=+1231.376180167" Jan 28 18:54:25 crc kubenswrapper[4721]: I0128 18:54:25.750446 4721 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/keystone-db-create-gm24k"] Jan 28 18:54:25 crc kubenswrapper[4721]: E0128 18:54:25.751411 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aecb4886-3e12-46f5-b2dd-20260e64e4c7" containerName="dnsmasq-dns" Jan 28 18:54:25 crc kubenswrapper[4721]: I0128 18:54:25.751436 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="aecb4886-3e12-46f5-b2dd-20260e64e4c7" containerName="dnsmasq-dns" Jan 28 18:54:25 crc kubenswrapper[4721]: E0128 18:54:25.751460 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b02cc010-e156-405e-aac3-45c2afa254ac" containerName="init" Jan 28 18:54:25 crc kubenswrapper[4721]: I0128 18:54:25.751469 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="b02cc010-e156-405e-aac3-45c2afa254ac" containerName="init" Jan 28 18:54:25 crc kubenswrapper[4721]: E0128 18:54:25.751488 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dafbdcb9-9fbe-40c2-920d-6111bf0e2d88" containerName="init" Jan 28 18:54:25 crc kubenswrapper[4721]: I0128 18:54:25.751496 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="dafbdcb9-9fbe-40c2-920d-6111bf0e2d88" containerName="init" Jan 28 18:54:25 crc kubenswrapper[4721]: E0128 18:54:25.751510 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dafbdcb9-9fbe-40c2-920d-6111bf0e2d88" containerName="dnsmasq-dns" Jan 28 18:54:25 crc kubenswrapper[4721]: I0128 18:54:25.751517 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="dafbdcb9-9fbe-40c2-920d-6111bf0e2d88" containerName="dnsmasq-dns" Jan 28 18:54:25 crc kubenswrapper[4721]: E0128 18:54:25.751536 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aecb4886-3e12-46f5-b2dd-20260e64e4c7" containerName="init" Jan 28 18:54:25 crc kubenswrapper[4721]: I0128 18:54:25.751543 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="aecb4886-3e12-46f5-b2dd-20260e64e4c7" containerName="init" Jan 28 18:54:25 crc kubenswrapper[4721]: I0128 18:54:25.751762 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="b02cc010-e156-405e-aac3-45c2afa254ac" containerName="init" Jan 28 18:54:25 crc kubenswrapper[4721]: I0128 18:54:25.751781 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="dafbdcb9-9fbe-40c2-920d-6111bf0e2d88" containerName="dnsmasq-dns" Jan 28 18:54:25 crc kubenswrapper[4721]: I0128 18:54:25.751799 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="aecb4886-3e12-46f5-b2dd-20260e64e4c7" containerName="dnsmasq-dns" Jan 28 18:54:25 crc kubenswrapper[4721]: I0128 18:54:25.752839 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-gm24k" Jan 28 18:54:25 crc kubenswrapper[4721]: I0128 18:54:25.759681 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-gm24k"] Jan 28 18:54:25 crc kubenswrapper[4721]: I0128 18:54:25.789028 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0" Jan 28 18:54:25 crc kubenswrapper[4721]: I0128 18:54:25.871396 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-57fd-account-create-update-g9drk"] Jan 28 18:54:25 crc kubenswrapper[4721]: I0128 18:54:25.872723 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-57fd-account-create-update-g9drk" Jan 28 18:54:25 crc kubenswrapper[4721]: I0128 18:54:25.876689 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret" Jan 28 18:54:25 crc kubenswrapper[4721]: I0128 18:54:25.892148 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-57fd-account-create-update-g9drk"] Jan 28 18:54:25 crc kubenswrapper[4721]: I0128 18:54:25.947039 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/80f5f923-3ee7-4416-bba4-03d51578c8c4-operator-scripts\") pod \"keystone-db-create-gm24k\" (UID: \"80f5f923-3ee7-4416-bba4-03d51578c8c4\") " pod="openstack/keystone-db-create-gm24k" Jan 28 18:54:25 crc kubenswrapper[4721]: I0128 18:54:25.947285 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w6lvd\" (UniqueName: \"kubernetes.io/projected/80f5f923-3ee7-4416-bba4-03d51578c8c4-kube-api-access-w6lvd\") pod \"keystone-db-create-gm24k\" (UID: \"80f5f923-3ee7-4416-bba4-03d51578c8c4\") " pod="openstack/keystone-db-create-gm24k" Jan 28 18:54:25 crc kubenswrapper[4721]: I0128 18:54:25.968135 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-kr7q2"] Jan 28 18:54:25 crc kubenswrapper[4721]: I0128 18:54:25.969856 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-kr7q2" Jan 28 18:54:26 crc kubenswrapper[4721]: I0128 18:54:26.015611 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-kr7q2"] Jan 28 18:54:26 crc kubenswrapper[4721]: I0128 18:54:26.050453 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w6lvd\" (UniqueName: \"kubernetes.io/projected/80f5f923-3ee7-4416-bba4-03d51578c8c4-kube-api-access-w6lvd\") pod \"keystone-db-create-gm24k\" (UID: \"80f5f923-3ee7-4416-bba4-03d51578c8c4\") " pod="openstack/keystone-db-create-gm24k" Jan 28 18:54:26 crc kubenswrapper[4721]: I0128 18:54:26.050638 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/80f5f923-3ee7-4416-bba4-03d51578c8c4-operator-scripts\") pod \"keystone-db-create-gm24k\" (UID: \"80f5f923-3ee7-4416-bba4-03d51578c8c4\") " pod="openstack/keystone-db-create-gm24k" Jan 28 18:54:26 crc kubenswrapper[4721]: I0128 18:54:26.050679 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4f959669-d607-4e65-9b7a-50f0a5d73c6a-operator-scripts\") pod \"keystone-57fd-account-create-update-g9drk\" (UID: \"4f959669-d607-4e65-9b7a-50f0a5d73c6a\") " pod="openstack/keystone-57fd-account-create-update-g9drk" Jan 28 18:54:26 crc kubenswrapper[4721]: I0128 18:54:26.050756 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-slwtr\" (UniqueName: \"kubernetes.io/projected/4f959669-d607-4e65-9b7a-50f0a5d73c6a-kube-api-access-slwtr\") pod \"keystone-57fd-account-create-update-g9drk\" (UID: \"4f959669-d607-4e65-9b7a-50f0a5d73c6a\") " pod="openstack/keystone-57fd-account-create-update-g9drk" Jan 28 18:54:26 crc kubenswrapper[4721]: I0128 18:54:26.052861 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/80f5f923-3ee7-4416-bba4-03d51578c8c4-operator-scripts\") pod \"keystone-db-create-gm24k\" (UID: \"80f5f923-3ee7-4416-bba4-03d51578c8c4\") " pod="openstack/keystone-db-create-gm24k" Jan 28 18:54:26 crc kubenswrapper[4721]: I0128 18:54:26.085371 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-9597-account-create-update-7bj94"] Jan 28 18:54:26 crc kubenswrapper[4721]: I0128 18:54:26.087355 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-9597-account-create-update-7bj94" Jan 28 18:54:26 crc kubenswrapper[4721]: I0128 18:54:26.091429 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret" Jan 28 18:54:26 crc kubenswrapper[4721]: I0128 18:54:26.094978 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w6lvd\" (UniqueName: \"kubernetes.io/projected/80f5f923-3ee7-4416-bba4-03d51578c8c4-kube-api-access-w6lvd\") pod \"keystone-db-create-gm24k\" (UID: \"80f5f923-3ee7-4416-bba4-03d51578c8c4\") " pod="openstack/keystone-db-create-gm24k" Jan 28 18:54:26 crc kubenswrapper[4721]: I0128 18:54:26.095641 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-9597-account-create-update-7bj94"] Jan 28 18:54:26 crc kubenswrapper[4721]: I0128 18:54:26.111920 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-gm24k" Jan 28 18:54:26 crc kubenswrapper[4721]: I0128 18:54:26.152789 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-slwtr\" (UniqueName: \"kubernetes.io/projected/4f959669-d607-4e65-9b7a-50f0a5d73c6a-kube-api-access-slwtr\") pod \"keystone-57fd-account-create-update-g9drk\" (UID: \"4f959669-d607-4e65-9b7a-50f0a5d73c6a\") " pod="openstack/keystone-57fd-account-create-update-g9drk" Jan 28 18:54:26 crc kubenswrapper[4721]: I0128 18:54:26.153413 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xr5tt\" (UniqueName: \"kubernetes.io/projected/af20b569-c763-4033-8b7b-df1ce95dcba2-kube-api-access-xr5tt\") pod \"placement-db-create-kr7q2\" (UID: \"af20b569-c763-4033-8b7b-df1ce95dcba2\") " pod="openstack/placement-db-create-kr7q2" Jan 28 18:54:26 crc kubenswrapper[4721]: I0128 18:54:26.153643 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/af20b569-c763-4033-8b7b-df1ce95dcba2-operator-scripts\") pod \"placement-db-create-kr7q2\" (UID: \"af20b569-c763-4033-8b7b-df1ce95dcba2\") " pod="openstack/placement-db-create-kr7q2" Jan 28 18:54:26 crc kubenswrapper[4721]: I0128 18:54:26.155676 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4f959669-d607-4e65-9b7a-50f0a5d73c6a-operator-scripts\") pod \"keystone-57fd-account-create-update-g9drk\" (UID: \"4f959669-d607-4e65-9b7a-50f0a5d73c6a\") " pod="openstack/keystone-57fd-account-create-update-g9drk" Jan 28 18:54:26 crc kubenswrapper[4721]: I0128 18:54:26.156652 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4f959669-d607-4e65-9b7a-50f0a5d73c6a-operator-scripts\") pod \"keystone-57fd-account-create-update-g9drk\" (UID: \"4f959669-d607-4e65-9b7a-50f0a5d73c6a\") " 
pod="openstack/keystone-57fd-account-create-update-g9drk" Jan 28 18:54:26 crc kubenswrapper[4721]: I0128 18:54:26.171975 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-slwtr\" (UniqueName: \"kubernetes.io/projected/4f959669-d607-4e65-9b7a-50f0a5d73c6a-kube-api-access-slwtr\") pod \"keystone-57fd-account-create-update-g9drk\" (UID: \"4f959669-d607-4e65-9b7a-50f0a5d73c6a\") " pod="openstack/keystone-57fd-account-create-update-g9drk" Jan 28 18:54:26 crc kubenswrapper[4721]: I0128 18:54:26.198305 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-57fd-account-create-update-g9drk" Jan 28 18:54:26 crc kubenswrapper[4721]: I0128 18:54:26.258160 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/af20b569-c763-4033-8b7b-df1ce95dcba2-operator-scripts\") pod \"placement-db-create-kr7q2\" (UID: \"af20b569-c763-4033-8b7b-df1ce95dcba2\") " pod="openstack/placement-db-create-kr7q2" Jan 28 18:54:26 crc kubenswrapper[4721]: I0128 18:54:26.258265 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hxzlb\" (UniqueName: \"kubernetes.io/projected/75e7aaa4-cc36-4c7c-b5bf-79cc1ecceef0-kube-api-access-hxzlb\") pod \"placement-9597-account-create-update-7bj94\" (UID: \"75e7aaa4-cc36-4c7c-b5bf-79cc1ecceef0\") " pod="openstack/placement-9597-account-create-update-7bj94" Jan 28 18:54:26 crc kubenswrapper[4721]: I0128 18:54:26.258306 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/75e7aaa4-cc36-4c7c-b5bf-79cc1ecceef0-operator-scripts\") pod \"placement-9597-account-create-update-7bj94\" (UID: \"75e7aaa4-cc36-4c7c-b5bf-79cc1ecceef0\") " pod="openstack/placement-9597-account-create-update-7bj94" Jan 28 18:54:26 crc kubenswrapper[4721]: I0128 18:54:26.258393 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xr5tt\" (UniqueName: \"kubernetes.io/projected/af20b569-c763-4033-8b7b-df1ce95dcba2-kube-api-access-xr5tt\") pod \"placement-db-create-kr7q2\" (UID: \"af20b569-c763-4033-8b7b-df1ce95dcba2\") " pod="openstack/placement-db-create-kr7q2" Jan 28 18:54:26 crc kubenswrapper[4721]: I0128 18:54:26.259116 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/af20b569-c763-4033-8b7b-df1ce95dcba2-operator-scripts\") pod \"placement-db-create-kr7q2\" (UID: \"af20b569-c763-4033-8b7b-df1ce95dcba2\") " pod="openstack/placement-db-create-kr7q2" Jan 28 18:54:26 crc kubenswrapper[4721]: I0128 18:54:26.279450 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xr5tt\" (UniqueName: \"kubernetes.io/projected/af20b569-c763-4033-8b7b-df1ce95dcba2-kube-api-access-xr5tt\") pod \"placement-db-create-kr7q2\" (UID: \"af20b569-c763-4033-8b7b-df1ce95dcba2\") " pod="openstack/placement-db-create-kr7q2" Jan 28 18:54:26 crc kubenswrapper[4721]: I0128 18:54:26.360066 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hxzlb\" (UniqueName: \"kubernetes.io/projected/75e7aaa4-cc36-4c7c-b5bf-79cc1ecceef0-kube-api-access-hxzlb\") pod \"placement-9597-account-create-update-7bj94\" (UID: \"75e7aaa4-cc36-4c7c-b5bf-79cc1ecceef0\") " pod="openstack/placement-9597-account-create-update-7bj94" Jan 
28 18:54:26 crc kubenswrapper[4721]: I0128 18:54:26.360141 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/75e7aaa4-cc36-4c7c-b5bf-79cc1ecceef0-operator-scripts\") pod \"placement-9597-account-create-update-7bj94\" (UID: \"75e7aaa4-cc36-4c7c-b5bf-79cc1ecceef0\") " pod="openstack/placement-9597-account-create-update-7bj94" Jan 28 18:54:26 crc kubenswrapper[4721]: I0128 18:54:26.361150 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/75e7aaa4-cc36-4c7c-b5bf-79cc1ecceef0-operator-scripts\") pod \"placement-9597-account-create-update-7bj94\" (UID: \"75e7aaa4-cc36-4c7c-b5bf-79cc1ecceef0\") " pod="openstack/placement-9597-account-create-update-7bj94" Jan 28 18:54:26 crc kubenswrapper[4721]: I0128 18:54:26.367925 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-lbv9r"] Jan 28 18:54:26 crc kubenswrapper[4721]: I0128 18:54:26.369930 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-lbv9r" Jan 28 18:54:26 crc kubenswrapper[4721]: I0128 18:54:26.377421 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-lbv9r"] Jan 28 18:54:26 crc kubenswrapper[4721]: I0128 18:54:26.399357 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hxzlb\" (UniqueName: \"kubernetes.io/projected/75e7aaa4-cc36-4c7c-b5bf-79cc1ecceef0-kube-api-access-hxzlb\") pod \"placement-9597-account-create-update-7bj94\" (UID: \"75e7aaa4-cc36-4c7c-b5bf-79cc1ecceef0\") " pod="openstack/placement-9597-account-create-update-7bj94" Jan 28 18:54:26 crc kubenswrapper[4721]: I0128 18:54:26.463929 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p945n\" (UniqueName: \"kubernetes.io/projected/eba9db5f-dcb9-460b-abdd-144249ee3c13-kube-api-access-p945n\") pod \"glance-db-create-lbv9r\" (UID: \"eba9db5f-dcb9-460b-abdd-144249ee3c13\") " pod="openstack/glance-db-create-lbv9r" Jan 28 18:54:26 crc kubenswrapper[4721]: I0128 18:54:26.464083 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/eba9db5f-dcb9-460b-abdd-144249ee3c13-operator-scripts\") pod \"glance-db-create-lbv9r\" (UID: \"eba9db5f-dcb9-460b-abdd-144249ee3c13\") " pod="openstack/glance-db-create-lbv9r" Jan 28 18:54:26 crc kubenswrapper[4721]: I0128 18:54:26.476384 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-feb7-account-create-update-hztgg"] Jan 28 18:54:26 crc kubenswrapper[4721]: I0128 18:54:26.478689 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-feb7-account-create-update-hztgg" Jan 28 18:54:26 crc kubenswrapper[4721]: I0128 18:54:26.482519 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret" Jan 28 18:54:26 crc kubenswrapper[4721]: I0128 18:54:26.488246 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-feb7-account-create-update-hztgg"] Jan 28 18:54:26 crc kubenswrapper[4721]: I0128 18:54:26.546995 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-kr7q2" Jan 28 18:54:26 crc kubenswrapper[4721]: I0128 18:54:26.554112 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-9597-account-create-update-7bj94" Jan 28 18:54:26 crc kubenswrapper[4721]: I0128 18:54:26.570751 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5bea9fca-3e0f-4158-ba76-aa184abd2d4c-operator-scripts\") pod \"glance-feb7-account-create-update-hztgg\" (UID: \"5bea9fca-3e0f-4158-ba76-aa184abd2d4c\") " pod="openstack/glance-feb7-account-create-update-hztgg" Jan 28 18:54:26 crc kubenswrapper[4721]: I0128 18:54:26.570820 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p945n\" (UniqueName: \"kubernetes.io/projected/eba9db5f-dcb9-460b-abdd-144249ee3c13-kube-api-access-p945n\") pod \"glance-db-create-lbv9r\" (UID: \"eba9db5f-dcb9-460b-abdd-144249ee3c13\") " pod="openstack/glance-db-create-lbv9r" Jan 28 18:54:26 crc kubenswrapper[4721]: I0128 18:54:26.570883 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rtt2h\" (UniqueName: \"kubernetes.io/projected/5bea9fca-3e0f-4158-ba76-aa184abd2d4c-kube-api-access-rtt2h\") pod \"glance-feb7-account-create-update-hztgg\" (UID: \"5bea9fca-3e0f-4158-ba76-aa184abd2d4c\") " pod="openstack/glance-feb7-account-create-update-hztgg" Jan 28 18:54:26 crc kubenswrapper[4721]: I0128 18:54:26.570958 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/eba9db5f-dcb9-460b-abdd-144249ee3c13-operator-scripts\") pod \"glance-db-create-lbv9r\" (UID: \"eba9db5f-dcb9-460b-abdd-144249ee3c13\") " pod="openstack/glance-db-create-lbv9r" Jan 28 18:54:26 crc kubenswrapper[4721]: I0128 18:54:26.571909 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/eba9db5f-dcb9-460b-abdd-144249ee3c13-operator-scripts\") pod \"glance-db-create-lbv9r\" (UID: \"eba9db5f-dcb9-460b-abdd-144249ee3c13\") " pod="openstack/glance-db-create-lbv9r" Jan 28 18:54:26 crc kubenswrapper[4721]: I0128 18:54:26.580317 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"5296300e-265b-4671-a299-e023295c6981","Type":"ContainerStarted","Data":"49f3fee236d97c43847b89af887b269f38588b73a3165856e846dfd70423fa2f"} Jan 28 18:54:26 crc kubenswrapper[4721]: I0128 18:54:26.586187 4721 generic.go:334] "Generic (PLEG): container finished" podID="ec1e1de9-b144-4c34-bb14-4c0382670f45" containerID="1744104dd2c6db657749ff29714a2574a58c6368538f7d3e645044ef7a0b215d" exitCode=0 Jan 28 18:54:26 crc kubenswrapper[4721]: I0128 18:54:26.586589 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"ec1e1de9-b144-4c34-bb14-4c0382670f45","Type":"ContainerDied","Data":"1744104dd2c6db657749ff29714a2574a58c6368538f7d3e645044ef7a0b215d"} Jan 28 18:54:26 crc kubenswrapper[4721]: I0128 18:54:26.592612 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p945n\" (UniqueName: \"kubernetes.io/projected/eba9db5f-dcb9-460b-abdd-144249ee3c13-kube-api-access-p945n\") pod \"glance-db-create-lbv9r\" (UID: \"eba9db5f-dcb9-460b-abdd-144249ee3c13\") " pod="openstack/glance-db-create-lbv9r" Jan 28 18:54:26 crc kubenswrapper[4721]: I0128 18:54:26.673620 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/5bea9fca-3e0f-4158-ba76-aa184abd2d4c-operator-scripts\") pod \"glance-feb7-account-create-update-hztgg\" (UID: \"5bea9fca-3e0f-4158-ba76-aa184abd2d4c\") " pod="openstack/glance-feb7-account-create-update-hztgg" Jan 28 18:54:26 crc kubenswrapper[4721]: I0128 18:54:26.673707 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rtt2h\" (UniqueName: \"kubernetes.io/projected/5bea9fca-3e0f-4158-ba76-aa184abd2d4c-kube-api-access-rtt2h\") pod \"glance-feb7-account-create-update-hztgg\" (UID: \"5bea9fca-3e0f-4158-ba76-aa184abd2d4c\") " pod="openstack/glance-feb7-account-create-update-hztgg" Jan 28 18:54:26 crc kubenswrapper[4721]: I0128 18:54:26.676020 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5bea9fca-3e0f-4158-ba76-aa184abd2d4c-operator-scripts\") pod \"glance-feb7-account-create-update-hztgg\" (UID: \"5bea9fca-3e0f-4158-ba76-aa184abd2d4c\") " pod="openstack/glance-feb7-account-create-update-hztgg" Jan 28 18:54:26 crc kubenswrapper[4721]: I0128 18:54:26.699251 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rtt2h\" (UniqueName: \"kubernetes.io/projected/5bea9fca-3e0f-4158-ba76-aa184abd2d4c-kube-api-access-rtt2h\") pod \"glance-feb7-account-create-update-hztgg\" (UID: \"5bea9fca-3e0f-4158-ba76-aa184abd2d4c\") " pod="openstack/glance-feb7-account-create-update-hztgg" Jan 28 18:54:26 crc kubenswrapper[4721]: I0128 18:54:26.702627 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-lbv9r" Jan 28 18:54:26 crc kubenswrapper[4721]: I0128 18:54:26.731979 4721 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-666b6646f7-59s7w" podUID="dafbdcb9-9fbe-40c2-920d-6111bf0e2d88" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.106:5353: i/o timeout" Jan 28 18:54:26 crc kubenswrapper[4721]: I0128 18:54:26.749352 4721 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-sbclw" podUID="c391bae1-d3a9-4ccd-a868-d7263d9b0a28" containerName="ovn-controller" probeResult="failure" output=< Jan 28 18:54:26 crc kubenswrapper[4721]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Jan 28 18:54:26 crc kubenswrapper[4721]: > Jan 28 18:54:26 crc kubenswrapper[4721]: I0128 18:54:26.810459 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-feb7-account-create-update-hztgg" Jan 28 18:54:26 crc kubenswrapper[4721]: I0128 18:54:26.853721 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-djsj9" Jan 28 18:54:26 crc kubenswrapper[4721]: I0128 18:54:26.967183 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-djsj9" Jan 28 18:54:27 crc kubenswrapper[4721]: I0128 18:54:27.231089 4721 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-57d769cc4f-vc8rk" podUID="aecb4886-3e12-46f5-b2dd-20260e64e4c7" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.107:5353: i/o timeout" Jan 28 18:54:27 crc kubenswrapper[4721]: I0128 18:54:27.232978 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-sbclw-config-48kml"] Jan 28 18:54:27 crc kubenswrapper[4721]: I0128 18:54:27.234468 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-sbclw-config-48kml" Jan 28 18:54:27 crc kubenswrapper[4721]: I0128 18:54:27.238153 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Jan 28 18:54:27 crc kubenswrapper[4721]: I0128 18:54:27.252877 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-sbclw-config-48kml"] Jan 28 18:54:27 crc kubenswrapper[4721]: I0128 18:54:27.311487 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/8bd7cf35-9697-4283-9f57-a7ed6f9311e6-var-run-ovn\") pod \"ovn-controller-sbclw-config-48kml\" (UID: \"8bd7cf35-9697-4283-9f57-a7ed6f9311e6\") " pod="openstack/ovn-controller-sbclw-config-48kml" Jan 28 18:54:27 crc kubenswrapper[4721]: I0128 18:54:27.311592 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/8bd7cf35-9697-4283-9f57-a7ed6f9311e6-var-run\") pod \"ovn-controller-sbclw-config-48kml\" (UID: \"8bd7cf35-9697-4283-9f57-a7ed6f9311e6\") " pod="openstack/ovn-controller-sbclw-config-48kml" Jan 28 18:54:27 crc kubenswrapper[4721]: I0128 18:54:27.311706 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rrzbg\" (UniqueName: \"kubernetes.io/projected/8bd7cf35-9697-4283-9f57-a7ed6f9311e6-kube-api-access-rrzbg\") pod \"ovn-controller-sbclw-config-48kml\" (UID: \"8bd7cf35-9697-4283-9f57-a7ed6f9311e6\") " pod="openstack/ovn-controller-sbclw-config-48kml" Jan 28 18:54:27 crc kubenswrapper[4721]: I0128 18:54:27.311983 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/8bd7cf35-9697-4283-9f57-a7ed6f9311e6-var-log-ovn\") pod \"ovn-controller-sbclw-config-48kml\" (UID: \"8bd7cf35-9697-4283-9f57-a7ed6f9311e6\") " pod="openstack/ovn-controller-sbclw-config-48kml" Jan 28 18:54:27 crc kubenswrapper[4721]: I0128 18:54:27.312044 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/8bd7cf35-9697-4283-9f57-a7ed6f9311e6-additional-scripts\") pod \"ovn-controller-sbclw-config-48kml\" (UID: \"8bd7cf35-9697-4283-9f57-a7ed6f9311e6\") " pod="openstack/ovn-controller-sbclw-config-48kml" Jan 28 18:54:27 crc kubenswrapper[4721]: I0128 18:54:27.312091 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8bd7cf35-9697-4283-9f57-a7ed6f9311e6-scripts\") pod \"ovn-controller-sbclw-config-48kml\" (UID: \"8bd7cf35-9697-4283-9f57-a7ed6f9311e6\") " pod="openstack/ovn-controller-sbclw-config-48kml" Jan 28 18:54:27 crc kubenswrapper[4721]: I0128 18:54:27.415886 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/8bd7cf35-9697-4283-9f57-a7ed6f9311e6-var-run-ovn\") pod \"ovn-controller-sbclw-config-48kml\" (UID: \"8bd7cf35-9697-4283-9f57-a7ed6f9311e6\") " pod="openstack/ovn-controller-sbclw-config-48kml" Jan 28 18:54:27 crc kubenswrapper[4721]: I0128 18:54:27.415965 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/8bd7cf35-9697-4283-9f57-a7ed6f9311e6-var-run\") pod 
\"ovn-controller-sbclw-config-48kml\" (UID: \"8bd7cf35-9697-4283-9f57-a7ed6f9311e6\") " pod="openstack/ovn-controller-sbclw-config-48kml" Jan 28 18:54:27 crc kubenswrapper[4721]: I0128 18:54:27.416035 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rrzbg\" (UniqueName: \"kubernetes.io/projected/8bd7cf35-9697-4283-9f57-a7ed6f9311e6-kube-api-access-rrzbg\") pod \"ovn-controller-sbclw-config-48kml\" (UID: \"8bd7cf35-9697-4283-9f57-a7ed6f9311e6\") " pod="openstack/ovn-controller-sbclw-config-48kml" Jan 28 18:54:27 crc kubenswrapper[4721]: I0128 18:54:27.416587 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/8bd7cf35-9697-4283-9f57-a7ed6f9311e6-var-log-ovn\") pod \"ovn-controller-sbclw-config-48kml\" (UID: \"8bd7cf35-9697-4283-9f57-a7ed6f9311e6\") " pod="openstack/ovn-controller-sbclw-config-48kml" Jan 28 18:54:27 crc kubenswrapper[4721]: I0128 18:54:27.416671 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/8bd7cf35-9697-4283-9f57-a7ed6f9311e6-additional-scripts\") pod \"ovn-controller-sbclw-config-48kml\" (UID: \"8bd7cf35-9697-4283-9f57-a7ed6f9311e6\") " pod="openstack/ovn-controller-sbclw-config-48kml" Jan 28 18:54:27 crc kubenswrapper[4721]: I0128 18:54:27.416721 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8bd7cf35-9697-4283-9f57-a7ed6f9311e6-scripts\") pod \"ovn-controller-sbclw-config-48kml\" (UID: \"8bd7cf35-9697-4283-9f57-a7ed6f9311e6\") " pod="openstack/ovn-controller-sbclw-config-48kml" Jan 28 18:54:27 crc kubenswrapper[4721]: I0128 18:54:27.417107 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/8bd7cf35-9697-4283-9f57-a7ed6f9311e6-var-run-ovn\") pod \"ovn-controller-sbclw-config-48kml\" (UID: \"8bd7cf35-9697-4283-9f57-a7ed6f9311e6\") " pod="openstack/ovn-controller-sbclw-config-48kml" Jan 28 18:54:27 crc kubenswrapper[4721]: I0128 18:54:27.417487 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/8bd7cf35-9697-4283-9f57-a7ed6f9311e6-var-log-ovn\") pod \"ovn-controller-sbclw-config-48kml\" (UID: \"8bd7cf35-9697-4283-9f57-a7ed6f9311e6\") " pod="openstack/ovn-controller-sbclw-config-48kml" Jan 28 18:54:27 crc kubenswrapper[4721]: I0128 18:54:27.417604 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/8bd7cf35-9697-4283-9f57-a7ed6f9311e6-var-run\") pod \"ovn-controller-sbclw-config-48kml\" (UID: \"8bd7cf35-9697-4283-9f57-a7ed6f9311e6\") " pod="openstack/ovn-controller-sbclw-config-48kml" Jan 28 18:54:27 crc kubenswrapper[4721]: I0128 18:54:27.419030 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/8bd7cf35-9697-4283-9f57-a7ed6f9311e6-additional-scripts\") pod \"ovn-controller-sbclw-config-48kml\" (UID: \"8bd7cf35-9697-4283-9f57-a7ed6f9311e6\") " pod="openstack/ovn-controller-sbclw-config-48kml" Jan 28 18:54:27 crc kubenswrapper[4721]: I0128 18:54:27.421469 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8bd7cf35-9697-4283-9f57-a7ed6f9311e6-scripts\") pod 
\"ovn-controller-sbclw-config-48kml\" (UID: \"8bd7cf35-9697-4283-9f57-a7ed6f9311e6\") " pod="openstack/ovn-controller-sbclw-config-48kml" Jan 28 18:54:27 crc kubenswrapper[4721]: I0128 18:54:27.444469 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rrzbg\" (UniqueName: \"kubernetes.io/projected/8bd7cf35-9697-4283-9f57-a7ed6f9311e6-kube-api-access-rrzbg\") pod \"ovn-controller-sbclw-config-48kml\" (UID: \"8bd7cf35-9697-4283-9f57-a7ed6f9311e6\") " pod="openstack/ovn-controller-sbclw-config-48kml" Jan 28 18:54:27 crc kubenswrapper[4721]: I0128 18:54:27.587348 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-gm24k"] Jan 28 18:54:27 crc kubenswrapper[4721]: I0128 18:54:27.593830 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-sbclw-config-48kml" Jan 28 18:54:27 crc kubenswrapper[4721]: I0128 18:54:27.606082 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"dc3781f4-04ef-40f3-b772-88deb9a9e3b6","Type":"ContainerStarted","Data":"365b89612323010992f8c935a6d68a6f1a2b9b8026b23b3f9697702e022b7a58"} Jan 28 18:54:27 crc kubenswrapper[4721]: I0128 18:54:27.607440 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-57fd-account-create-update-g9drk"] Jan 28 18:54:27 crc kubenswrapper[4721]: I0128 18:54:27.609087 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"dc56a986-671d-4f17-8386-939d0fd9394a","Type":"ContainerStarted","Data":"8a6691f251788d9eb7d62515df6e29d4135993aa59e6dedb624092f45bfc4fed"} Jan 28 18:54:27 crc kubenswrapper[4721]: I0128 18:54:27.610632 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Jan 28 18:54:27 crc kubenswrapper[4721]: I0128 18:54:27.628560 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"5296300e-265b-4671-a299-e023295c6981","Type":"ContainerStarted","Data":"fbdde6d6ff14229460a6018e4c6dd84956e134fb6cac858a6d0fe057495da380"} Jan 28 18:54:27 crc kubenswrapper[4721]: I0128 18:54:27.636027 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/alertmanager-metric-storage-0" event={"ID":"95a1b67a-adb0-42f1-9fb8-32b01c443ede","Type":"ContainerStarted","Data":"1bdd011a0b869969fca96403c768e3c4ccda3d0311ad9a84199999450c4aeb71"} Jan 28 18:54:27 crc kubenswrapper[4721]: I0128 18:54:27.658087 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=41.696026632 podStartE2EDuration="1m26.65805957s" podCreationTimestamp="2026-01-28 18:53:01 +0000 UTC" firstStartedPulling="2026-01-28 18:53:04.040866896 +0000 UTC m=+1149.766172456" lastFinishedPulling="2026-01-28 18:53:49.002899834 +0000 UTC m=+1194.728205394" observedRunningTime="2026-01-28 18:54:27.653982842 +0000 UTC m=+1233.379288402" watchObservedRunningTime="2026-01-28 18:54:27.65805957 +0000 UTC m=+1233.383365140" Jan 28 18:54:27 crc kubenswrapper[4721]: I0128 18:54:27.718832 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-lbv9r"] Jan 28 18:54:27 crc kubenswrapper[4721]: I0128 18:54:27.731298 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-kr7q2"] Jan 28 18:54:27 crc kubenswrapper[4721]: I0128 18:54:27.745161 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/placement-9597-account-create-update-7bj94"] Jan 28 18:54:28 crc kubenswrapper[4721]: I0128 18:54:28.480195 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-feb7-account-create-update-hztgg"] Jan 28 18:54:28 crc kubenswrapper[4721]: I0128 18:54:28.649003 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-9597-account-create-update-7bj94" event={"ID":"75e7aaa4-cc36-4c7c-b5bf-79cc1ecceef0","Type":"ContainerStarted","Data":"bccddefc8eece815f05991a96b7da8c7a1f6a279c1db7387d2375ef54f7dbbcb"} Jan 28 18:54:28 crc kubenswrapper[4721]: I0128 18:54:28.650428 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-feb7-account-create-update-hztgg" event={"ID":"5bea9fca-3e0f-4158-ba76-aa184abd2d4c","Type":"ContainerStarted","Data":"3beca7410ba245b3dcf5c7bcfdc7aa5cd28166c1cc6a91a29c567ddc4c982cea"} Jan 28 18:54:28 crc kubenswrapper[4721]: I0128 18:54:28.652919 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-lbv9r" event={"ID":"eba9db5f-dcb9-460b-abdd-144249ee3c13","Type":"ContainerStarted","Data":"5b724888183e52abca8df0535a830bd3b054b46471f344aa692c047e3290f3bf"} Jan 28 18:54:28 crc kubenswrapper[4721]: I0128 18:54:28.665539 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-kr7q2" event={"ID":"af20b569-c763-4033-8b7b-df1ce95dcba2","Type":"ContainerStarted","Data":"029637a173a312dfb09543d1feb4b5a32fde97d125386751707b9ab4b4d6df69"} Jan 28 18:54:28 crc kubenswrapper[4721]: I0128 18:54:28.669235 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-57fd-account-create-update-g9drk" event={"ID":"4f959669-d607-4e65-9b7a-50f0a5d73c6a","Type":"ContainerStarted","Data":"8088b8e91e24f6fc79a04dc0f3b4feebeb6bf4affbc10c64223d86b5dc5db14a"} Jan 28 18:54:28 crc kubenswrapper[4721]: I0128 18:54:28.671535 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-gm24k" event={"ID":"80f5f923-3ee7-4416-bba4-03d51578c8c4","Type":"ContainerStarted","Data":"bd63b9018df23448b7fe9500c6fea51cac0abe6ecc043e9c94ea9d136f066fdb"} Jan 28 18:54:28 crc kubenswrapper[4721]: I0128 18:54:28.671956 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0" Jan 28 18:54:28 crc kubenswrapper[4721]: I0128 18:54:28.700634 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=7.085453191 podStartE2EDuration="30.700610418s" podCreationTimestamp="2026-01-28 18:53:58 +0000 UTC" firstStartedPulling="2026-01-28 18:53:59.703331352 +0000 UTC m=+1205.428636912" lastFinishedPulling="2026-01-28 18:54:23.318488569 +0000 UTC m=+1229.043794139" observedRunningTime="2026-01-28 18:54:28.69620248 +0000 UTC m=+1234.421508070" watchObservedRunningTime="2026-01-28 18:54:28.700610418 +0000 UTC m=+1234.425915988" Jan 28 18:54:28 crc kubenswrapper[4721]: I0128 18:54:28.881811 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-sbclw-config-48kml"] Jan 28 18:54:28 crc kubenswrapper[4721]: I0128 18:54:28.947328 4721 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cloudkitty-lokistack-ingester-0" podUID="742e65f6-66eb-4334-9328-b77d47d420d0" containerName="loki-ingester" probeResult="failure" output="HTTP probe failed with statuscode: 503" Jan 28 18:54:29 crc kubenswrapper[4721]: I0128 18:54:29.687015 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/keystone-db-create-gm24k" event={"ID":"80f5f923-3ee7-4416-bba4-03d51578c8c4","Type":"ContainerStarted","Data":"bf47bae6ef3c1b70abd14a4b919bb993808a56697a57f141e8443ce15d6f7e9c"} Jan 28 18:54:29 crc kubenswrapper[4721]: I0128 18:54:29.690683 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-9597-account-create-update-7bj94" event={"ID":"75e7aaa4-cc36-4c7c-b5bf-79cc1ecceef0","Type":"ContainerStarted","Data":"e49939efe79b61e57415a7d5e53c0f9cdab0563733da9d4be04b586be0385837"} Jan 28 18:54:29 crc kubenswrapper[4721]: I0128 18:54:29.692943 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-sbclw-config-48kml" event={"ID":"8bd7cf35-9697-4283-9f57-a7ed6f9311e6","Type":"ContainerStarted","Data":"69191ff4d12c50cb29dd3d36bedd882c7f4237dd0947319af69db39fb1da9b88"} Jan 28 18:54:29 crc kubenswrapper[4721]: I0128 18:54:29.696026 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"ec1e1de9-b144-4c34-bb14-4c0382670f45","Type":"ContainerStarted","Data":"00a91f1b683af04ea057b66dac35f2c915e65c757dfb533f455f370c35f0e79a"} Jan 28 18:54:29 crc kubenswrapper[4721]: I0128 18:54:29.696460 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Jan 28 18:54:29 crc kubenswrapper[4721]: I0128 18:54:29.708274 4721 generic.go:334] "Generic (PLEG): container finished" podID="af20b569-c763-4033-8b7b-df1ce95dcba2" containerID="4c1f69bbe56a4bfefb6258d0b5d89ef49a79ac04222a53855a2000ea0e47f913" exitCode=0 Jan 28 18:54:29 crc kubenswrapper[4721]: I0128 18:54:29.708420 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-kr7q2" event={"ID":"af20b569-c763-4033-8b7b-df1ce95dcba2","Type":"ContainerDied","Data":"4c1f69bbe56a4bfefb6258d0b5d89ef49a79ac04222a53855a2000ea0e47f913"} Jan 28 18:54:29 crc kubenswrapper[4721]: I0128 18:54:29.712822 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-57fd-account-create-update-g9drk" event={"ID":"4f959669-d607-4e65-9b7a-50f0a5d73c6a","Type":"ContainerStarted","Data":"0998ac1f150838c3b179689502137d9643af4a583bef0e57d4c847266deaeb80"} Jan 28 18:54:29 crc kubenswrapper[4721]: I0128 18:54:29.714733 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-create-gm24k" podStartSLOduration=4.714702971 podStartE2EDuration="4.714702971s" podCreationTimestamp="2026-01-28 18:54:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:54:29.703688024 +0000 UTC m=+1235.428993584" watchObservedRunningTime="2026-01-28 18:54:29.714702971 +0000 UTC m=+1235.440008531" Jan 28 18:54:29 crc kubenswrapper[4721]: I0128 18:54:29.769131 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=-9223371948.085678 podStartE2EDuration="1m28.769097834s" podCreationTimestamp="2026-01-28 18:53:01 +0000 UTC" firstStartedPulling="2026-01-28 18:53:03.6930084 +0000 UTC m=+1149.418313960" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:54:29.745869663 +0000 UTC m=+1235.471175233" watchObservedRunningTime="2026-01-28 18:54:29.769097834 +0000 UTC m=+1235.494403394" Jan 28 18:54:29 crc kubenswrapper[4721]: I0128 18:54:29.774361 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/placement-9597-account-create-update-7bj94" podStartSLOduration=3.774334479 podStartE2EDuration="3.774334479s" podCreationTimestamp="2026-01-28 18:54:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:54:29.760842814 +0000 UTC m=+1235.486148374" watchObservedRunningTime="2026-01-28 18:54:29.774334479 +0000 UTC m=+1235.499640039" Jan 28 18:54:29 crc kubenswrapper[4721]: I0128 18:54:29.791323 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-57fd-account-create-update-g9drk" podStartSLOduration=4.791289873 podStartE2EDuration="4.791289873s" podCreationTimestamp="2026-01-28 18:54:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:54:29.778506491 +0000 UTC m=+1235.503812051" watchObservedRunningTime="2026-01-28 18:54:29.791289873 +0000 UTC m=+1235.516595463" Jan 28 18:54:30 crc kubenswrapper[4721]: I0128 18:54:30.725350 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"5e16ae9a-515f-4c11-a048-84aedad18b0a","Type":"ContainerStarted","Data":"07b630721084084b0f3264478c598ab08923b8a2ea289aed886aa6302d705158"} Jan 28 18:54:30 crc kubenswrapper[4721]: I0128 18:54:30.726002 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Jan 28 18:54:30 crc kubenswrapper[4721]: I0128 18:54:30.729662 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-lbv9r" event={"ID":"eba9db5f-dcb9-460b-abdd-144249ee3c13","Type":"ContainerStarted","Data":"06307089ab5efe0f0f5f4ca6a469540d89bf820eb634359963970b1808cd407e"} Jan 28 18:54:30 crc kubenswrapper[4721]: I0128 18:54:30.732805 4721 generic.go:334] "Generic (PLEG): container finished" podID="4f959669-d607-4e65-9b7a-50f0a5d73c6a" containerID="0998ac1f150838c3b179689502137d9643af4a583bef0e57d4c847266deaeb80" exitCode=0 Jan 28 18:54:30 crc kubenswrapper[4721]: I0128 18:54:30.732886 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-57fd-account-create-update-g9drk" event={"ID":"4f959669-d607-4e65-9b7a-50f0a5d73c6a","Type":"ContainerDied","Data":"0998ac1f150838c3b179689502137d9643af4a583bef0e57d4c847266deaeb80"} Jan 28 18:54:30 crc kubenswrapper[4721]: I0128 18:54:30.735966 4721 generic.go:334] "Generic (PLEG): container finished" podID="80f5f923-3ee7-4416-bba4-03d51578c8c4" containerID="bf47bae6ef3c1b70abd14a4b919bb993808a56697a57f141e8443ce15d6f7e9c" exitCode=0 Jan 28 18:54:30 crc kubenswrapper[4721]: I0128 18:54:30.736080 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-gm24k" event={"ID":"80f5f923-3ee7-4416-bba4-03d51578c8c4","Type":"ContainerDied","Data":"bf47bae6ef3c1b70abd14a4b919bb993808a56697a57f141e8443ce15d6f7e9c"} Jan 28 18:54:30 crc kubenswrapper[4721]: I0128 18:54:30.738112 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-sbclw-config-48kml" event={"ID":"8bd7cf35-9697-4283-9f57-a7ed6f9311e6","Type":"ContainerStarted","Data":"cfc627ad0fc78c84a9e728d559b682afa1e87bec17599343ae60c1a5843ca673"} Jan 28 18:54:30 crc kubenswrapper[4721]: I0128 18:54:30.741616 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" 
event={"ID":"dc3781f4-04ef-40f3-b772-88deb9a9e3b6","Type":"ContainerStarted","Data":"1e0b53a2f639a3be2e058e566db6a36ccce965fc3410944527bfcc44b65816a5"} Jan 28 18:54:30 crc kubenswrapper[4721]: I0128 18:54:30.747681 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-feb7-account-create-update-hztgg" event={"ID":"5bea9fca-3e0f-4158-ba76-aa184abd2d4c","Type":"ContainerStarted","Data":"d90feaace03cfa5caa58fbf53981257232daa585103c8a7a6929b4e7b58b3581"} Jan 28 18:54:30 crc kubenswrapper[4721]: I0128 18:54:30.757908 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=36.177641897 podStartE2EDuration="1m23.757877449s" podCreationTimestamp="2026-01-28 18:53:07 +0000 UTC" firstStartedPulling="2026-01-28 18:53:40.796721949 +0000 UTC m=+1186.522027509" lastFinishedPulling="2026-01-28 18:54:28.376957501 +0000 UTC m=+1234.102263061" observedRunningTime="2026-01-28 18:54:30.753352436 +0000 UTC m=+1236.478658016" watchObservedRunningTime="2026-01-28 18:54:30.757877449 +0000 UTC m=+1236.483183009" Jan 28 18:54:30 crc kubenswrapper[4721]: I0128 18:54:30.828087 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-feb7-account-create-update-hztgg" podStartSLOduration=4.8280622300000005 podStartE2EDuration="4.82806223s" podCreationTimestamp="2026-01-28 18:54:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:54:30.820697238 +0000 UTC m=+1236.546002808" watchObservedRunningTime="2026-01-28 18:54:30.82806223 +0000 UTC m=+1236.553367790" Jan 28 18:54:30 crc kubenswrapper[4721]: I0128 18:54:30.862764 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-sbclw-config-48kml" podStartSLOduration=3.862727243 podStartE2EDuration="3.862727243s" podCreationTimestamp="2026-01-28 18:54:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:54:30.848761743 +0000 UTC m=+1236.574067313" watchObservedRunningTime="2026-01-28 18:54:30.862727243 +0000 UTC m=+1236.588032813" Jan 28 18:54:30 crc kubenswrapper[4721]: I0128 18:54:30.875356 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-create-lbv9r" podStartSLOduration=4.875330829 podStartE2EDuration="4.875330829s" podCreationTimestamp="2026-01-28 18:54:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:54:30.874501293 +0000 UTC m=+1236.599806873" watchObservedRunningTime="2026-01-28 18:54:30.875330829 +0000 UTC m=+1236.600636389" Jan 28 18:54:31 crc kubenswrapper[4721]: I0128 18:54:31.237778 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/aa657a81-842e-4292-a71e-e208b4c0bd69-etc-swift\") pod \"swift-storage-0\" (UID: \"aa657a81-842e-4292-a71e-e208b4c0bd69\") " pod="openstack/swift-storage-0" Jan 28 18:54:31 crc kubenswrapper[4721]: E0128 18:54:31.238830 4721 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 28 18:54:31 crc kubenswrapper[4721]: E0128 18:54:31.238861 4721 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap 
"swift-ring-files" not found Jan 28 18:54:31 crc kubenswrapper[4721]: E0128 18:54:31.238930 4721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/aa657a81-842e-4292-a71e-e208b4c0bd69-etc-swift podName:aa657a81-842e-4292-a71e-e208b4c0bd69 nodeName:}" failed. No retries permitted until 2026-01-28 18:55:03.238906855 +0000 UTC m=+1268.964212415 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/aa657a81-842e-4292-a71e-e208b4c0bd69-etc-swift") pod "swift-storage-0" (UID: "aa657a81-842e-4292-a71e-e208b4c0bd69") : configmap "swift-ring-files" not found Jan 28 18:54:31 crc kubenswrapper[4721]: I0128 18:54:31.357364 4721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-kr7q2" Jan 28 18:54:31 crc kubenswrapper[4721]: I0128 18:54:31.442075 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/af20b569-c763-4033-8b7b-df1ce95dcba2-operator-scripts\") pod \"af20b569-c763-4033-8b7b-df1ce95dcba2\" (UID: \"af20b569-c763-4033-8b7b-df1ce95dcba2\") " Jan 28 18:54:31 crc kubenswrapper[4721]: I0128 18:54:31.442440 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xr5tt\" (UniqueName: \"kubernetes.io/projected/af20b569-c763-4033-8b7b-df1ce95dcba2-kube-api-access-xr5tt\") pod \"af20b569-c763-4033-8b7b-df1ce95dcba2\" (UID: \"af20b569-c763-4033-8b7b-df1ce95dcba2\") " Jan 28 18:54:31 crc kubenswrapper[4721]: I0128 18:54:31.442983 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/af20b569-c763-4033-8b7b-df1ce95dcba2-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "af20b569-c763-4033-8b7b-df1ce95dcba2" (UID: "af20b569-c763-4033-8b7b-df1ce95dcba2"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:54:31 crc kubenswrapper[4721]: I0128 18:54:31.443282 4721 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/af20b569-c763-4033-8b7b-df1ce95dcba2-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 18:54:31 crc kubenswrapper[4721]: I0128 18:54:31.451479 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/af20b569-c763-4033-8b7b-df1ce95dcba2-kube-api-access-xr5tt" (OuterVolumeSpecName: "kube-api-access-xr5tt") pod "af20b569-c763-4033-8b7b-df1ce95dcba2" (UID: "af20b569-c763-4033-8b7b-df1ce95dcba2"). InnerVolumeSpecName "kube-api-access-xr5tt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:54:31 crc kubenswrapper[4721]: I0128 18:54:31.545629 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xr5tt\" (UniqueName: \"kubernetes.io/projected/af20b569-c763-4033-8b7b-df1ce95dcba2-kube-api-access-xr5tt\") on node \"crc\" DevicePath \"\"" Jan 28 18:54:31 crc kubenswrapper[4721]: I0128 18:54:31.726517 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-sbclw" Jan 28 18:54:31 crc kubenswrapper[4721]: I0128 18:54:31.760144 4721 generic.go:334] "Generic (PLEG): container finished" podID="5bea9fca-3e0f-4158-ba76-aa184abd2d4c" containerID="d90feaace03cfa5caa58fbf53981257232daa585103c8a7a6929b4e7b58b3581" exitCode=0 Jan 28 18:54:31 crc kubenswrapper[4721]: I0128 18:54:31.760372 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-feb7-account-create-update-hztgg" event={"ID":"5bea9fca-3e0f-4158-ba76-aa184abd2d4c","Type":"ContainerDied","Data":"d90feaace03cfa5caa58fbf53981257232daa585103c8a7a6929b4e7b58b3581"} Jan 28 18:54:31 crc kubenswrapper[4721]: I0128 18:54:31.769377 4721 generic.go:334] "Generic (PLEG): container finished" podID="eba9db5f-dcb9-460b-abdd-144249ee3c13" containerID="06307089ab5efe0f0f5f4ca6a469540d89bf820eb634359963970b1808cd407e" exitCode=0 Jan 28 18:54:31 crc kubenswrapper[4721]: I0128 18:54:31.769499 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-lbv9r" event={"ID":"eba9db5f-dcb9-460b-abdd-144249ee3c13","Type":"ContainerDied","Data":"06307089ab5efe0f0f5f4ca6a469540d89bf820eb634359963970b1808cd407e"} Jan 28 18:54:31 crc kubenswrapper[4721]: I0128 18:54:31.772051 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-kr7q2" event={"ID":"af20b569-c763-4033-8b7b-df1ce95dcba2","Type":"ContainerDied","Data":"029637a173a312dfb09543d1feb4b5a32fde97d125386751707b9ab4b4d6df69"} Jan 28 18:54:31 crc kubenswrapper[4721]: I0128 18:54:31.772105 4721 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="029637a173a312dfb09543d1feb4b5a32fde97d125386751707b9ab4b4d6df69" Jan 28 18:54:31 crc kubenswrapper[4721]: I0128 18:54:31.772313 4721 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-kr7q2" Jan 28 18:54:31 crc kubenswrapper[4721]: I0128 18:54:31.778344 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/alertmanager-metric-storage-0" event={"ID":"95a1b67a-adb0-42f1-9fb8-32b01c443ede","Type":"ContainerStarted","Data":"d0aa02161cf7153e3f90f60bd45230cbf6d899c0cbcfdd64572ac7f49c6d6825"} Jan 28 18:54:31 crc kubenswrapper[4721]: I0128 18:54:31.779309 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/alertmanager-metric-storage-0" Jan 28 18:54:31 crc kubenswrapper[4721]: I0128 18:54:31.781972 4721 generic.go:334] "Generic (PLEG): container finished" podID="75e7aaa4-cc36-4c7c-b5bf-79cc1ecceef0" containerID="e49939efe79b61e57415a7d5e53c0f9cdab0563733da9d4be04b586be0385837" exitCode=0 Jan 28 18:54:31 crc kubenswrapper[4721]: I0128 18:54:31.782426 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-9597-account-create-update-7bj94" event={"ID":"75e7aaa4-cc36-4c7c-b5bf-79cc1ecceef0","Type":"ContainerDied","Data":"e49939efe79b61e57415a7d5e53c0f9cdab0563733da9d4be04b586be0385837"} Jan 28 18:54:31 crc kubenswrapper[4721]: I0128 18:54:31.784524 4721 generic.go:334] "Generic (PLEG): container finished" podID="8bd7cf35-9697-4283-9f57-a7ed6f9311e6" containerID="cfc627ad0fc78c84a9e728d559b682afa1e87bec17599343ae60c1a5843ca673" exitCode=0 Jan 28 18:54:31 crc kubenswrapper[4721]: I0128 18:54:31.784788 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-sbclw-config-48kml" event={"ID":"8bd7cf35-9697-4283-9f57-a7ed6f9311e6","Type":"ContainerDied","Data":"cfc627ad0fc78c84a9e728d559b682afa1e87bec17599343ae60c1a5843ca673"} Jan 28 18:54:31 crc kubenswrapper[4721]: I0128 18:54:31.792804 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/alertmanager-metric-storage-0" Jan 28 18:54:31 crc kubenswrapper[4721]: I0128 18:54:31.949153 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/alertmanager-metric-storage-0" podStartSLOduration=39.847222597 podStartE2EDuration="1m23.949124503s" podCreationTimestamp="2026-01-28 18:53:08 +0000 UTC" firstStartedPulling="2026-01-28 18:53:40.763976554 +0000 UTC m=+1186.489282114" lastFinishedPulling="2026-01-28 18:54:24.86587846 +0000 UTC m=+1230.591184020" observedRunningTime="2026-01-28 18:54:31.871585379 +0000 UTC m=+1237.596890939" watchObservedRunningTime="2026-01-28 18:54:31.949124503 +0000 UTC m=+1237.674430063" Jan 28 18:54:32 crc kubenswrapper[4721]: I0128 18:54:32.388063 4721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-57fd-account-create-update-g9drk" Jan 28 18:54:32 crc kubenswrapper[4721]: I0128 18:54:32.396686 4721 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-gm24k" Jan 28 18:54:32 crc kubenswrapper[4721]: I0128 18:54:32.443402 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-86db49b7ff-tsjl9" Jan 28 18:54:32 crc kubenswrapper[4721]: I0128 18:54:32.465279 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w6lvd\" (UniqueName: \"kubernetes.io/projected/80f5f923-3ee7-4416-bba4-03d51578c8c4-kube-api-access-w6lvd\") pod \"80f5f923-3ee7-4416-bba4-03d51578c8c4\" (UID: \"80f5f923-3ee7-4416-bba4-03d51578c8c4\") " Jan 28 18:54:32 crc kubenswrapper[4721]: I0128 18:54:32.465377 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4f959669-d607-4e65-9b7a-50f0a5d73c6a-operator-scripts\") pod \"4f959669-d607-4e65-9b7a-50f0a5d73c6a\" (UID: \"4f959669-d607-4e65-9b7a-50f0a5d73c6a\") " Jan 28 18:54:32 crc kubenswrapper[4721]: I0128 18:54:32.465485 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-slwtr\" (UniqueName: \"kubernetes.io/projected/4f959669-d607-4e65-9b7a-50f0a5d73c6a-kube-api-access-slwtr\") pod \"4f959669-d607-4e65-9b7a-50f0a5d73c6a\" (UID: \"4f959669-d607-4e65-9b7a-50f0a5d73c6a\") " Jan 28 18:54:32 crc kubenswrapper[4721]: I0128 18:54:32.465594 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/80f5f923-3ee7-4416-bba4-03d51578c8c4-operator-scripts\") pod \"80f5f923-3ee7-4416-bba4-03d51578c8c4\" (UID: \"80f5f923-3ee7-4416-bba4-03d51578c8c4\") " Jan 28 18:54:32 crc kubenswrapper[4721]: I0128 18:54:32.467219 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/80f5f923-3ee7-4416-bba4-03d51578c8c4-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "80f5f923-3ee7-4416-bba4-03d51578c8c4" (UID: "80f5f923-3ee7-4416-bba4-03d51578c8c4"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:54:32 crc kubenswrapper[4721]: I0128 18:54:32.467300 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4f959669-d607-4e65-9b7a-50f0a5d73c6a-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "4f959669-d607-4e65-9b7a-50f0a5d73c6a" (UID: "4f959669-d607-4e65-9b7a-50f0a5d73c6a"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:54:32 crc kubenswrapper[4721]: I0128 18:54:32.474553 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4f959669-d607-4e65-9b7a-50f0a5d73c6a-kube-api-access-slwtr" (OuterVolumeSpecName: "kube-api-access-slwtr") pod "4f959669-d607-4e65-9b7a-50f0a5d73c6a" (UID: "4f959669-d607-4e65-9b7a-50f0a5d73c6a"). InnerVolumeSpecName "kube-api-access-slwtr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:54:32 crc kubenswrapper[4721]: I0128 18:54:32.481582 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/80f5f923-3ee7-4416-bba4-03d51578c8c4-kube-api-access-w6lvd" (OuterVolumeSpecName: "kube-api-access-w6lvd") pod "80f5f923-3ee7-4416-bba4-03d51578c8c4" (UID: "80f5f923-3ee7-4416-bba4-03d51578c8c4"). InnerVolumeSpecName "kube-api-access-w6lvd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:54:32 crc kubenswrapper[4721]: I0128 18:54:32.568019 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w6lvd\" (UniqueName: \"kubernetes.io/projected/80f5f923-3ee7-4416-bba4-03d51578c8c4-kube-api-access-w6lvd\") on node \"crc\" DevicePath \"\"" Jan 28 18:54:32 crc kubenswrapper[4721]: I0128 18:54:32.568449 4721 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4f959669-d607-4e65-9b7a-50f0a5d73c6a-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 18:54:32 crc kubenswrapper[4721]: I0128 18:54:32.568467 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-slwtr\" (UniqueName: \"kubernetes.io/projected/4f959669-d607-4e65-9b7a-50f0a5d73c6a-kube-api-access-slwtr\") on node \"crc\" DevicePath \"\"" Jan 28 18:54:32 crc kubenswrapper[4721]: I0128 18:54:32.568478 4721 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/80f5f923-3ee7-4416-bba4-03d51578c8c4-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 18:54:32 crc kubenswrapper[4721]: I0128 18:54:32.799450 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-57fd-account-create-update-g9drk" event={"ID":"4f959669-d607-4e65-9b7a-50f0a5d73c6a","Type":"ContainerDied","Data":"8088b8e91e24f6fc79a04dc0f3b4feebeb6bf4affbc10c64223d86b5dc5db14a"} Jan 28 18:54:32 crc kubenswrapper[4721]: I0128 18:54:32.799500 4721 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8088b8e91e24f6fc79a04dc0f3b4feebeb6bf4affbc10c64223d86b5dc5db14a" Jan 28 18:54:32 crc kubenswrapper[4721]: I0128 18:54:32.799558 4721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-57fd-account-create-update-g9drk" Jan 28 18:54:32 crc kubenswrapper[4721]: I0128 18:54:32.801677 4721 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-gm24k" Jan 28 18:54:32 crc kubenswrapper[4721]: I0128 18:54:32.805162 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-gm24k" event={"ID":"80f5f923-3ee7-4416-bba4-03d51578c8c4","Type":"ContainerDied","Data":"bd63b9018df23448b7fe9500c6fea51cac0abe6ecc043e9c94ea9d136f066fdb"} Jan 28 18:54:32 crc kubenswrapper[4721]: I0128 18:54:32.805240 4721 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bd63b9018df23448b7fe9500c6fea51cac0abe6ecc043e9c94ea9d136f066fdb" Jan 28 18:54:33 crc kubenswrapper[4721]: I0128 18:54:33.036074 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-944t9"] Jan 28 18:54:33 crc kubenswrapper[4721]: E0128 18:54:33.036500 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="af20b569-c763-4033-8b7b-df1ce95dcba2" containerName="mariadb-database-create" Jan 28 18:54:33 crc kubenswrapper[4721]: I0128 18:54:33.036513 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="af20b569-c763-4033-8b7b-df1ce95dcba2" containerName="mariadb-database-create" Jan 28 18:54:33 crc kubenswrapper[4721]: E0128 18:54:33.036532 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4f959669-d607-4e65-9b7a-50f0a5d73c6a" containerName="mariadb-account-create-update" Jan 28 18:54:33 crc kubenswrapper[4721]: I0128 18:54:33.036538 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="4f959669-d607-4e65-9b7a-50f0a5d73c6a" containerName="mariadb-account-create-update" Jan 28 18:54:33 crc kubenswrapper[4721]: E0128 18:54:33.036562 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="80f5f923-3ee7-4416-bba4-03d51578c8c4" containerName="mariadb-database-create" Jan 28 18:54:33 crc kubenswrapper[4721]: I0128 18:54:33.036568 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="80f5f923-3ee7-4416-bba4-03d51578c8c4" containerName="mariadb-database-create" Jan 28 18:54:33 crc kubenswrapper[4721]: I0128 18:54:33.036766 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="4f959669-d607-4e65-9b7a-50f0a5d73c6a" containerName="mariadb-account-create-update" Jan 28 18:54:33 crc kubenswrapper[4721]: I0128 18:54:33.036783 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="af20b569-c763-4033-8b7b-df1ce95dcba2" containerName="mariadb-database-create" Jan 28 18:54:33 crc kubenswrapper[4721]: I0128 18:54:33.036794 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="80f5f923-3ee7-4416-bba4-03d51578c8c4" containerName="mariadb-database-create" Jan 28 18:54:33 crc kubenswrapper[4721]: I0128 18:54:33.037491 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-944t9" Jan 28 18:54:33 crc kubenswrapper[4721]: I0128 18:54:33.040567 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret" Jan 28 18:54:33 crc kubenswrapper[4721]: I0128 18:54:33.065082 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-944t9"] Jan 28 18:54:33 crc kubenswrapper[4721]: I0128 18:54:33.081001 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/22d4a6a6-8702-48ce-92a5-ddaee1395c4d-operator-scripts\") pod \"root-account-create-update-944t9\" (UID: \"22d4a6a6-8702-48ce-92a5-ddaee1395c4d\") " pod="openstack/root-account-create-update-944t9" Jan 28 18:54:33 crc kubenswrapper[4721]: I0128 18:54:33.081143 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8s7wx\" (UniqueName: \"kubernetes.io/projected/22d4a6a6-8702-48ce-92a5-ddaee1395c4d-kube-api-access-8s7wx\") pod \"root-account-create-update-944t9\" (UID: \"22d4a6a6-8702-48ce-92a5-ddaee1395c4d\") " pod="openstack/root-account-create-update-944t9" Jan 28 18:54:33 crc kubenswrapper[4721]: I0128 18:54:33.183524 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/22d4a6a6-8702-48ce-92a5-ddaee1395c4d-operator-scripts\") pod \"root-account-create-update-944t9\" (UID: \"22d4a6a6-8702-48ce-92a5-ddaee1395c4d\") " pod="openstack/root-account-create-update-944t9" Jan 28 18:54:33 crc kubenswrapper[4721]: I0128 18:54:33.183720 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8s7wx\" (UniqueName: \"kubernetes.io/projected/22d4a6a6-8702-48ce-92a5-ddaee1395c4d-kube-api-access-8s7wx\") pod \"root-account-create-update-944t9\" (UID: \"22d4a6a6-8702-48ce-92a5-ddaee1395c4d\") " pod="openstack/root-account-create-update-944t9" Jan 28 18:54:33 crc kubenswrapper[4721]: I0128 18:54:33.184973 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/22d4a6a6-8702-48ce-92a5-ddaee1395c4d-operator-scripts\") pod \"root-account-create-update-944t9\" (UID: \"22d4a6a6-8702-48ce-92a5-ddaee1395c4d\") " pod="openstack/root-account-create-update-944t9" Jan 28 18:54:33 crc kubenswrapper[4721]: I0128 18:54:33.222788 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8s7wx\" (UniqueName: \"kubernetes.io/projected/22d4a6a6-8702-48ce-92a5-ddaee1395c4d-kube-api-access-8s7wx\") pod \"root-account-create-update-944t9\" (UID: \"22d4a6a6-8702-48ce-92a5-ddaee1395c4d\") " pod="openstack/root-account-create-update-944t9" Jan 28 18:54:33 crc kubenswrapper[4721]: I0128 18:54:33.296830 4721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-lbv9r" Jan 28 18:54:33 crc kubenswrapper[4721]: I0128 18:54:33.328125 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-698758b865-xxb2g" Jan 28 18:54:33 crc kubenswrapper[4721]: I0128 18:54:33.361109 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-944t9" Jan 28 18:54:33 crc kubenswrapper[4721]: I0128 18:54:33.398105 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/eba9db5f-dcb9-460b-abdd-144249ee3c13-operator-scripts\") pod \"eba9db5f-dcb9-460b-abdd-144249ee3c13\" (UID: \"eba9db5f-dcb9-460b-abdd-144249ee3c13\") " Jan 28 18:54:33 crc kubenswrapper[4721]: I0128 18:54:33.398262 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p945n\" (UniqueName: \"kubernetes.io/projected/eba9db5f-dcb9-460b-abdd-144249ee3c13-kube-api-access-p945n\") pod \"eba9db5f-dcb9-460b-abdd-144249ee3c13\" (UID: \"eba9db5f-dcb9-460b-abdd-144249ee3c13\") " Jan 28 18:54:33 crc kubenswrapper[4721]: I0128 18:54:33.407796 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eba9db5f-dcb9-460b-abdd-144249ee3c13-kube-api-access-p945n" (OuterVolumeSpecName: "kube-api-access-p945n") pod "eba9db5f-dcb9-460b-abdd-144249ee3c13" (UID: "eba9db5f-dcb9-460b-abdd-144249ee3c13"). InnerVolumeSpecName "kube-api-access-p945n". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:54:33 crc kubenswrapper[4721]: I0128 18:54:33.427446 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eba9db5f-dcb9-460b-abdd-144249ee3c13-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "eba9db5f-dcb9-460b-abdd-144249ee3c13" (UID: "eba9db5f-dcb9-460b-abdd-144249ee3c13"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:54:33 crc kubenswrapper[4721]: I0128 18:54:33.445246 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-tsjl9"] Jan 28 18:54:33 crc kubenswrapper[4721]: I0128 18:54:33.445647 4721 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-86db49b7ff-tsjl9" podUID="f0d97192-cb28-436d-adc6-a3aafd8aad46" containerName="dnsmasq-dns" containerID="cri-o://bc7e174e68ffbb135ab09e3a6e5fb466556062db0ef3a8bf819fd716ee75696a" gracePeriod=10 Jan 28 18:54:33 crc kubenswrapper[4721]: I0128 18:54:33.501879 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p945n\" (UniqueName: \"kubernetes.io/projected/eba9db5f-dcb9-460b-abdd-144249ee3c13-kube-api-access-p945n\") on node \"crc\" DevicePath \"\"" Jan 28 18:54:33 crc kubenswrapper[4721]: I0128 18:54:33.501925 4721 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/eba9db5f-dcb9-460b-abdd-144249ee3c13-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 18:54:33 crc kubenswrapper[4721]: I0128 18:54:33.565001 4721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-sbclw-config-48kml" Jan 28 18:54:33 crc kubenswrapper[4721]: I0128 18:54:33.607786 4721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-9597-account-create-update-7bj94" Jan 28 18:54:33 crc kubenswrapper[4721]: I0128 18:54:33.614893 4721 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-feb7-account-create-update-hztgg" Jan 28 18:54:33 crc kubenswrapper[4721]: I0128 18:54:33.711225 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/8bd7cf35-9697-4283-9f57-a7ed6f9311e6-var-run\") pod \"8bd7cf35-9697-4283-9f57-a7ed6f9311e6\" (UID: \"8bd7cf35-9697-4283-9f57-a7ed6f9311e6\") " Jan 28 18:54:33 crc kubenswrapper[4721]: I0128 18:54:33.711367 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8bd7cf35-9697-4283-9f57-a7ed6f9311e6-var-run" (OuterVolumeSpecName: "var-run") pod "8bd7cf35-9697-4283-9f57-a7ed6f9311e6" (UID: "8bd7cf35-9697-4283-9f57-a7ed6f9311e6"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 18:54:33 crc kubenswrapper[4721]: I0128 18:54:33.711400 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/8bd7cf35-9697-4283-9f57-a7ed6f9311e6-additional-scripts\") pod \"8bd7cf35-9697-4283-9f57-a7ed6f9311e6\" (UID: \"8bd7cf35-9697-4283-9f57-a7ed6f9311e6\") " Jan 28 18:54:33 crc kubenswrapper[4721]: I0128 18:54:33.711465 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hxzlb\" (UniqueName: \"kubernetes.io/projected/75e7aaa4-cc36-4c7c-b5bf-79cc1ecceef0-kube-api-access-hxzlb\") pod \"75e7aaa4-cc36-4c7c-b5bf-79cc1ecceef0\" (UID: \"75e7aaa4-cc36-4c7c-b5bf-79cc1ecceef0\") " Jan 28 18:54:33 crc kubenswrapper[4721]: I0128 18:54:33.711597 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/8bd7cf35-9697-4283-9f57-a7ed6f9311e6-var-run-ovn\") pod \"8bd7cf35-9697-4283-9f57-a7ed6f9311e6\" (UID: \"8bd7cf35-9697-4283-9f57-a7ed6f9311e6\") " Jan 28 18:54:33 crc kubenswrapper[4721]: I0128 18:54:33.711626 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rrzbg\" (UniqueName: \"kubernetes.io/projected/8bd7cf35-9697-4283-9f57-a7ed6f9311e6-kube-api-access-rrzbg\") pod \"8bd7cf35-9697-4283-9f57-a7ed6f9311e6\" (UID: \"8bd7cf35-9697-4283-9f57-a7ed6f9311e6\") " Jan 28 18:54:33 crc kubenswrapper[4721]: I0128 18:54:33.711673 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5bea9fca-3e0f-4158-ba76-aa184abd2d4c-operator-scripts\") pod \"5bea9fca-3e0f-4158-ba76-aa184abd2d4c\" (UID: \"5bea9fca-3e0f-4158-ba76-aa184abd2d4c\") " Jan 28 18:54:33 crc kubenswrapper[4721]: I0128 18:54:33.711705 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8bd7cf35-9697-4283-9f57-a7ed6f9311e6-scripts\") pod \"8bd7cf35-9697-4283-9f57-a7ed6f9311e6\" (UID: \"8bd7cf35-9697-4283-9f57-a7ed6f9311e6\") " Jan 28 18:54:33 crc kubenswrapper[4721]: I0128 18:54:33.711761 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rtt2h\" (UniqueName: \"kubernetes.io/projected/5bea9fca-3e0f-4158-ba76-aa184abd2d4c-kube-api-access-rtt2h\") pod \"5bea9fca-3e0f-4158-ba76-aa184abd2d4c\" (UID: \"5bea9fca-3e0f-4158-ba76-aa184abd2d4c\") " Jan 28 18:54:33 crc kubenswrapper[4721]: I0128 18:54:33.711943 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/75e7aaa4-cc36-4c7c-b5bf-79cc1ecceef0-operator-scripts\") pod \"75e7aaa4-cc36-4c7c-b5bf-79cc1ecceef0\" (UID: \"75e7aaa4-cc36-4c7c-b5bf-79cc1ecceef0\") " Jan 28 18:54:33 crc kubenswrapper[4721]: I0128 18:54:33.712083 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/8bd7cf35-9697-4283-9f57-a7ed6f9311e6-var-log-ovn\") pod \"8bd7cf35-9697-4283-9f57-a7ed6f9311e6\" (UID: \"8bd7cf35-9697-4283-9f57-a7ed6f9311e6\") " Jan 28 18:54:33 crc kubenswrapper[4721]: I0128 18:54:33.712493 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8bd7cf35-9697-4283-9f57-a7ed6f9311e6-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "8bd7cf35-9697-4283-9f57-a7ed6f9311e6" (UID: "8bd7cf35-9697-4283-9f57-a7ed6f9311e6"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:54:33 crc kubenswrapper[4721]: I0128 18:54:33.712583 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8bd7cf35-9697-4283-9f57-a7ed6f9311e6-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "8bd7cf35-9697-4283-9f57-a7ed6f9311e6" (UID: "8bd7cf35-9697-4283-9f57-a7ed6f9311e6"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 18:54:33 crc kubenswrapper[4721]: I0128 18:54:33.712808 4721 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/8bd7cf35-9697-4283-9f57-a7ed6f9311e6-additional-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 18:54:33 crc kubenswrapper[4721]: I0128 18:54:33.712825 4721 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/8bd7cf35-9697-4283-9f57-a7ed6f9311e6-var-run-ovn\") on node \"crc\" DevicePath \"\"" Jan 28 18:54:33 crc kubenswrapper[4721]: I0128 18:54:33.712836 4721 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/8bd7cf35-9697-4283-9f57-a7ed6f9311e6-var-run\") on node \"crc\" DevicePath \"\"" Jan 28 18:54:33 crc kubenswrapper[4721]: I0128 18:54:33.713431 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/75e7aaa4-cc36-4c7c-b5bf-79cc1ecceef0-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "75e7aaa4-cc36-4c7c-b5bf-79cc1ecceef0" (UID: "75e7aaa4-cc36-4c7c-b5bf-79cc1ecceef0"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:54:33 crc kubenswrapper[4721]: I0128 18:54:33.713483 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8bd7cf35-9697-4283-9f57-a7ed6f9311e6-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "8bd7cf35-9697-4283-9f57-a7ed6f9311e6" (UID: "8bd7cf35-9697-4283-9f57-a7ed6f9311e6"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 18:54:33 crc kubenswrapper[4721]: I0128 18:54:33.713572 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5bea9fca-3e0f-4158-ba76-aa184abd2d4c-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "5bea9fca-3e0f-4158-ba76-aa184abd2d4c" (UID: "5bea9fca-3e0f-4158-ba76-aa184abd2d4c"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:54:33 crc kubenswrapper[4721]: I0128 18:54:33.713756 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8bd7cf35-9697-4283-9f57-a7ed6f9311e6-scripts" (OuterVolumeSpecName: "scripts") pod "8bd7cf35-9697-4283-9f57-a7ed6f9311e6" (UID: "8bd7cf35-9697-4283-9f57-a7ed6f9311e6"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:54:33 crc kubenswrapper[4721]: I0128 18:54:33.719118 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8bd7cf35-9697-4283-9f57-a7ed6f9311e6-kube-api-access-rrzbg" (OuterVolumeSpecName: "kube-api-access-rrzbg") pod "8bd7cf35-9697-4283-9f57-a7ed6f9311e6" (UID: "8bd7cf35-9697-4283-9f57-a7ed6f9311e6"). InnerVolumeSpecName "kube-api-access-rrzbg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:54:33 crc kubenswrapper[4721]: I0128 18:54:33.719438 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/75e7aaa4-cc36-4c7c-b5bf-79cc1ecceef0-kube-api-access-hxzlb" (OuterVolumeSpecName: "kube-api-access-hxzlb") pod "75e7aaa4-cc36-4c7c-b5bf-79cc1ecceef0" (UID: "75e7aaa4-cc36-4c7c-b5bf-79cc1ecceef0"). InnerVolumeSpecName "kube-api-access-hxzlb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:54:33 crc kubenswrapper[4721]: I0128 18:54:33.721882 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5bea9fca-3e0f-4158-ba76-aa184abd2d4c-kube-api-access-rtt2h" (OuterVolumeSpecName: "kube-api-access-rtt2h") pod "5bea9fca-3e0f-4158-ba76-aa184abd2d4c" (UID: "5bea9fca-3e0f-4158-ba76-aa184abd2d4c"). InnerVolumeSpecName "kube-api-access-rtt2h". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:54:33 crc kubenswrapper[4721]: I0128 18:54:33.817459 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hxzlb\" (UniqueName: \"kubernetes.io/projected/75e7aaa4-cc36-4c7c-b5bf-79cc1ecceef0-kube-api-access-hxzlb\") on node \"crc\" DevicePath \"\"" Jan 28 18:54:33 crc kubenswrapper[4721]: I0128 18:54:33.817509 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rrzbg\" (UniqueName: \"kubernetes.io/projected/8bd7cf35-9697-4283-9f57-a7ed6f9311e6-kube-api-access-rrzbg\") on node \"crc\" DevicePath \"\"" Jan 28 18:54:33 crc kubenswrapper[4721]: I0128 18:54:33.817526 4721 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5bea9fca-3e0f-4158-ba76-aa184abd2d4c-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 18:54:33 crc kubenswrapper[4721]: I0128 18:54:33.817538 4721 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8bd7cf35-9697-4283-9f57-a7ed6f9311e6-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 18:54:33 crc kubenswrapper[4721]: I0128 18:54:33.817552 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rtt2h\" (UniqueName: \"kubernetes.io/projected/5bea9fca-3e0f-4158-ba76-aa184abd2d4c-kube-api-access-rtt2h\") on node \"crc\" DevicePath \"\"" Jan 28 18:54:33 crc kubenswrapper[4721]: I0128 18:54:33.817563 4721 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/75e7aaa4-cc36-4c7c-b5bf-79cc1ecceef0-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 18:54:33 crc kubenswrapper[4721]: I0128 18:54:33.817580 4721 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/8bd7cf35-9697-4283-9f57-a7ed6f9311e6-var-log-ovn\") on node \"crc\" DevicePath \"\"" Jan 28 18:54:33 crc kubenswrapper[4721]: I0128 18:54:33.822894 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-sbclw-config-48kml" event={"ID":"8bd7cf35-9697-4283-9f57-a7ed6f9311e6","Type":"ContainerDied","Data":"69191ff4d12c50cb29dd3d36bedd882c7f4237dd0947319af69db39fb1da9b88"} Jan 28 18:54:33 crc kubenswrapper[4721]: I0128 18:54:33.822939 4721 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="69191ff4d12c50cb29dd3d36bedd882c7f4237dd0947319af69db39fb1da9b88" Jan 28 18:54:33 crc kubenswrapper[4721]: I0128 18:54:33.823006 4721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-sbclw-config-48kml" Jan 28 18:54:33 crc kubenswrapper[4721]: I0128 18:54:33.828885 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-feb7-account-create-update-hztgg" event={"ID":"5bea9fca-3e0f-4158-ba76-aa184abd2d4c","Type":"ContainerDied","Data":"3beca7410ba245b3dcf5c7bcfdc7aa5cd28166c1cc6a91a29c567ddc4c982cea"} Jan 28 18:54:33 crc kubenswrapper[4721]: I0128 18:54:33.828924 4721 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3beca7410ba245b3dcf5c7bcfdc7aa5cd28166c1cc6a91a29c567ddc4c982cea" Jan 28 18:54:33 crc kubenswrapper[4721]: I0128 18:54:33.828980 4721 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-feb7-account-create-update-hztgg" Jan 28 18:54:33 crc kubenswrapper[4721]: I0128 18:54:33.851867 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-lbv9r" event={"ID":"eba9db5f-dcb9-460b-abdd-144249ee3c13","Type":"ContainerDied","Data":"5b724888183e52abca8df0535a830bd3b054b46471f344aa692c047e3290f3bf"} Jan 28 18:54:33 crc kubenswrapper[4721]: I0128 18:54:33.851943 4721 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5b724888183e52abca8df0535a830bd3b054b46471f344aa692c047e3290f3bf" Jan 28 18:54:33 crc kubenswrapper[4721]: I0128 18:54:33.852154 4721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-lbv9r" Jan 28 18:54:33 crc kubenswrapper[4721]: I0128 18:54:33.861704 4721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-9597-account-create-update-7bj94" Jan 28 18:54:33 crc kubenswrapper[4721]: I0128 18:54:33.862007 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-9597-account-create-update-7bj94" event={"ID":"75e7aaa4-cc36-4c7c-b5bf-79cc1ecceef0","Type":"ContainerDied","Data":"bccddefc8eece815f05991a96b7da8c7a1f6a279c1db7387d2375ef54f7dbbcb"} Jan 28 18:54:33 crc kubenswrapper[4721]: I0128 18:54:33.862055 4721 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bccddefc8eece815f05991a96b7da8c7a1f6a279c1db7387d2375ef54f7dbbcb" Jan 28 18:54:34 crc kubenswrapper[4721]: I0128 18:54:34.014238 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-sbclw-config-48kml"] Jan 28 18:54:34 crc kubenswrapper[4721]: I0128 18:54:34.031128 4721 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-sbclw-config-48kml"] Jan 28 18:54:34 crc kubenswrapper[4721]: I0128 18:54:34.137915 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-944t9"] Jan 28 18:54:34 crc kubenswrapper[4721]: W0128 18:54:34.145842 4721 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod22d4a6a6_8702_48ce_92a5_ddaee1395c4d.slice/crio-fa6ae3c4adec6786004f2e98baa2c975feccd8b6feee3e18e4b79843eea6bb79 WatchSource:0}: Error finding container fa6ae3c4adec6786004f2e98baa2c975feccd8b6feee3e18e4b79843eea6bb79: Status 404 returned error can't find the container with id fa6ae3c4adec6786004f2e98baa2c975feccd8b6feee3e18e4b79843eea6bb79 Jan 28 18:54:34 crc kubenswrapper[4721]: I0128 18:54:34.881351 4721 generic.go:334] "Generic (PLEG): container finished" podID="f0d97192-cb28-436d-adc6-a3aafd8aad46" containerID="bc7e174e68ffbb135ab09e3a6e5fb466556062db0ef3a8bf819fd716ee75696a" exitCode=0 Jan 28 18:54:34 crc kubenswrapper[4721]: I0128 18:54:34.881416 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-tsjl9" event={"ID":"f0d97192-cb28-436d-adc6-a3aafd8aad46","Type":"ContainerDied","Data":"bc7e174e68ffbb135ab09e3a6e5fb466556062db0ef3a8bf819fd716ee75696a"} Jan 28 18:54:34 crc kubenswrapper[4721]: I0128 18:54:34.884071 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-944t9" event={"ID":"22d4a6a6-8702-48ce-92a5-ddaee1395c4d","Type":"ContainerStarted","Data":"fa6ae3c4adec6786004f2e98baa2c975feccd8b6feee3e18e4b79843eea6bb79"} Jan 28 18:54:35 crc kubenswrapper[4721]: I0128 18:54:35.542049 4721 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8bd7cf35-9697-4283-9f57-a7ed6f9311e6" path="/var/lib/kubelet/pods/8bd7cf35-9697-4283-9f57-a7ed6f9311e6/volumes" Jan 28 18:54:35 crc kubenswrapper[4721]: I0128 18:54:35.894758 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-944t9" event={"ID":"22d4a6a6-8702-48ce-92a5-ddaee1395c4d","Type":"ContainerStarted","Data":"a4fe84c6a4aa9a1c38dc456aea3839d4b65cef37f826cd761a73edfe11338e19"} Jan 28 18:54:35 crc kubenswrapper[4721]: I0128 18:54:35.911967 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/root-account-create-update-944t9" podStartSLOduration=2.9119380120000002 podStartE2EDuration="2.911938012s" podCreationTimestamp="2026-01-28 18:54:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:54:35.910810737 +0000 UTC m=+1241.636116317" watchObservedRunningTime="2026-01-28 18:54:35.911938012 +0000 UTC m=+1241.637243562" Jan 28 18:54:36 crc kubenswrapper[4721]: I0128 18:54:36.606639 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-j284c"] Jan 28 18:54:36 crc kubenswrapper[4721]: E0128 18:54:36.607600 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eba9db5f-dcb9-460b-abdd-144249ee3c13" containerName="mariadb-database-create" Jan 28 18:54:36 crc kubenswrapper[4721]: I0128 18:54:36.607742 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="eba9db5f-dcb9-460b-abdd-144249ee3c13" containerName="mariadb-database-create" Jan 28 18:54:36 crc kubenswrapper[4721]: E0128 18:54:36.607870 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5bea9fca-3e0f-4158-ba76-aa184abd2d4c" containerName="mariadb-account-create-update" Jan 28 18:54:36 crc kubenswrapper[4721]: I0128 18:54:36.607988 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="5bea9fca-3e0f-4158-ba76-aa184abd2d4c" containerName="mariadb-account-create-update" Jan 28 18:54:36 crc kubenswrapper[4721]: E0128 18:54:36.608148 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="75e7aaa4-cc36-4c7c-b5bf-79cc1ecceef0" containerName="mariadb-account-create-update" Jan 28 18:54:36 crc kubenswrapper[4721]: I0128 18:54:36.608316 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="75e7aaa4-cc36-4c7c-b5bf-79cc1ecceef0" containerName="mariadb-account-create-update" Jan 28 18:54:36 crc kubenswrapper[4721]: E0128 18:54:36.608479 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8bd7cf35-9697-4283-9f57-a7ed6f9311e6" containerName="ovn-config" Jan 28 18:54:36 crc kubenswrapper[4721]: I0128 18:54:36.608573 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="8bd7cf35-9697-4283-9f57-a7ed6f9311e6" containerName="ovn-config" Jan 28 18:54:36 crc kubenswrapper[4721]: I0128 18:54:36.608959 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="eba9db5f-dcb9-460b-abdd-144249ee3c13" containerName="mariadb-database-create" Jan 28 18:54:36 crc kubenswrapper[4721]: I0128 18:54:36.609098 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="5bea9fca-3e0f-4158-ba76-aa184abd2d4c" containerName="mariadb-account-create-update" Jan 28 18:54:36 crc kubenswrapper[4721]: I0128 18:54:36.609271 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="8bd7cf35-9697-4283-9f57-a7ed6f9311e6" containerName="ovn-config" Jan 28 18:54:36 crc kubenswrapper[4721]: 
I0128 18:54:36.609445 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="75e7aaa4-cc36-4c7c-b5bf-79cc1ecceef0" containerName="mariadb-account-create-update" Jan 28 18:54:36 crc kubenswrapper[4721]: I0128 18:54:36.610500 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-j284c" Jan 28 18:54:36 crc kubenswrapper[4721]: I0128 18:54:36.615108 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-dfbkx" Jan 28 18:54:36 crc kubenswrapper[4721]: I0128 18:54:36.616321 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-config-data" Jan 28 18:54:36 crc kubenswrapper[4721]: I0128 18:54:36.627673 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-j284c"] Jan 28 18:54:36 crc kubenswrapper[4721]: I0128 18:54:36.683996 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7b2b2524-50e6-4d73-bdb9-8770b642481e-config-data\") pod \"glance-db-sync-j284c\" (UID: \"7b2b2524-50e6-4d73-bdb9-8770b642481e\") " pod="openstack/glance-db-sync-j284c" Jan 28 18:54:36 crc kubenswrapper[4721]: I0128 18:54:36.684205 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/7b2b2524-50e6-4d73-bdb9-8770b642481e-db-sync-config-data\") pod \"glance-db-sync-j284c\" (UID: \"7b2b2524-50e6-4d73-bdb9-8770b642481e\") " pod="openstack/glance-db-sync-j284c" Jan 28 18:54:36 crc kubenswrapper[4721]: I0128 18:54:36.684372 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mxl8v\" (UniqueName: \"kubernetes.io/projected/7b2b2524-50e6-4d73-bdb9-8770b642481e-kube-api-access-mxl8v\") pod \"glance-db-sync-j284c\" (UID: \"7b2b2524-50e6-4d73-bdb9-8770b642481e\") " pod="openstack/glance-db-sync-j284c" Jan 28 18:54:36 crc kubenswrapper[4721]: I0128 18:54:36.684446 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7b2b2524-50e6-4d73-bdb9-8770b642481e-combined-ca-bundle\") pod \"glance-db-sync-j284c\" (UID: \"7b2b2524-50e6-4d73-bdb9-8770b642481e\") " pod="openstack/glance-db-sync-j284c" Jan 28 18:54:36 crc kubenswrapper[4721]: I0128 18:54:36.787184 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7b2b2524-50e6-4d73-bdb9-8770b642481e-config-data\") pod \"glance-db-sync-j284c\" (UID: \"7b2b2524-50e6-4d73-bdb9-8770b642481e\") " pod="openstack/glance-db-sync-j284c" Jan 28 18:54:36 crc kubenswrapper[4721]: I0128 18:54:36.787630 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/7b2b2524-50e6-4d73-bdb9-8770b642481e-db-sync-config-data\") pod \"glance-db-sync-j284c\" (UID: \"7b2b2524-50e6-4d73-bdb9-8770b642481e\") " pod="openstack/glance-db-sync-j284c" Jan 28 18:54:36 crc kubenswrapper[4721]: I0128 18:54:36.787835 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mxl8v\" (UniqueName: \"kubernetes.io/projected/7b2b2524-50e6-4d73-bdb9-8770b642481e-kube-api-access-mxl8v\") pod \"glance-db-sync-j284c\" (UID: \"7b2b2524-50e6-4d73-bdb9-8770b642481e\") " 
pod="openstack/glance-db-sync-j284c" Jan 28 18:54:36 crc kubenswrapper[4721]: I0128 18:54:36.787905 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7b2b2524-50e6-4d73-bdb9-8770b642481e-combined-ca-bundle\") pod \"glance-db-sync-j284c\" (UID: \"7b2b2524-50e6-4d73-bdb9-8770b642481e\") " pod="openstack/glance-db-sync-j284c" Jan 28 18:54:36 crc kubenswrapper[4721]: I0128 18:54:36.796916 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7b2b2524-50e6-4d73-bdb9-8770b642481e-combined-ca-bundle\") pod \"glance-db-sync-j284c\" (UID: \"7b2b2524-50e6-4d73-bdb9-8770b642481e\") " pod="openstack/glance-db-sync-j284c" Jan 28 18:54:36 crc kubenswrapper[4721]: I0128 18:54:36.804032 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7b2b2524-50e6-4d73-bdb9-8770b642481e-config-data\") pod \"glance-db-sync-j284c\" (UID: \"7b2b2524-50e6-4d73-bdb9-8770b642481e\") " pod="openstack/glance-db-sync-j284c" Jan 28 18:54:36 crc kubenswrapper[4721]: I0128 18:54:36.806623 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/7b2b2524-50e6-4d73-bdb9-8770b642481e-db-sync-config-data\") pod \"glance-db-sync-j284c\" (UID: \"7b2b2524-50e6-4d73-bdb9-8770b642481e\") " pod="openstack/glance-db-sync-j284c" Jan 28 18:54:36 crc kubenswrapper[4721]: I0128 18:54:36.810320 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mxl8v\" (UniqueName: \"kubernetes.io/projected/7b2b2524-50e6-4d73-bdb9-8770b642481e-kube-api-access-mxl8v\") pod \"glance-db-sync-j284c\" (UID: \"7b2b2524-50e6-4d73-bdb9-8770b642481e\") " pod="openstack/glance-db-sync-j284c" Jan 28 18:54:36 crc kubenswrapper[4721]: I0128 18:54:36.936413 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-j284c" Jan 28 18:54:37 crc kubenswrapper[4721]: I0128 18:54:37.867447 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Jan 28 18:54:38 crc kubenswrapper[4721]: I0128 18:54:38.417549 4721 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-tsjl9" Jan 28 18:54:38 crc kubenswrapper[4721]: I0128 18:54:38.524261 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f0d97192-cb28-436d-adc6-a3aafd8aad46-config\") pod \"f0d97192-cb28-436d-adc6-a3aafd8aad46\" (UID: \"f0d97192-cb28-436d-adc6-a3aafd8aad46\") " Jan 28 18:54:38 crc kubenswrapper[4721]: I0128 18:54:38.526335 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f0d97192-cb28-436d-adc6-a3aafd8aad46-ovsdbserver-sb\") pod \"f0d97192-cb28-436d-adc6-a3aafd8aad46\" (UID: \"f0d97192-cb28-436d-adc6-a3aafd8aad46\") " Jan 28 18:54:38 crc kubenswrapper[4721]: I0128 18:54:38.526466 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fd5c7\" (UniqueName: \"kubernetes.io/projected/f0d97192-cb28-436d-adc6-a3aafd8aad46-kube-api-access-fd5c7\") pod \"f0d97192-cb28-436d-adc6-a3aafd8aad46\" (UID: \"f0d97192-cb28-436d-adc6-a3aafd8aad46\") " Jan 28 18:54:38 crc kubenswrapper[4721]: I0128 18:54:38.526536 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f0d97192-cb28-436d-adc6-a3aafd8aad46-ovsdbserver-nb\") pod \"f0d97192-cb28-436d-adc6-a3aafd8aad46\" (UID: \"f0d97192-cb28-436d-adc6-a3aafd8aad46\") " Jan 28 18:54:38 crc kubenswrapper[4721]: I0128 18:54:38.526568 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f0d97192-cb28-436d-adc6-a3aafd8aad46-dns-svc\") pod \"f0d97192-cb28-436d-adc6-a3aafd8aad46\" (UID: \"f0d97192-cb28-436d-adc6-a3aafd8aad46\") " Jan 28 18:54:38 crc kubenswrapper[4721]: I0128 18:54:38.530670 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f0d97192-cb28-436d-adc6-a3aafd8aad46-kube-api-access-fd5c7" (OuterVolumeSpecName: "kube-api-access-fd5c7") pod "f0d97192-cb28-436d-adc6-a3aafd8aad46" (UID: "f0d97192-cb28-436d-adc6-a3aafd8aad46"). InnerVolumeSpecName "kube-api-access-fd5c7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:54:38 crc kubenswrapper[4721]: I0128 18:54:38.592465 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f0d97192-cb28-436d-adc6-a3aafd8aad46-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "f0d97192-cb28-436d-adc6-a3aafd8aad46" (UID: "f0d97192-cb28-436d-adc6-a3aafd8aad46"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:54:38 crc kubenswrapper[4721]: I0128 18:54:38.592767 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f0d97192-cb28-436d-adc6-a3aafd8aad46-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "f0d97192-cb28-436d-adc6-a3aafd8aad46" (UID: "f0d97192-cb28-436d-adc6-a3aafd8aad46"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:54:38 crc kubenswrapper[4721]: I0128 18:54:38.607265 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f0d97192-cb28-436d-adc6-a3aafd8aad46-config" (OuterVolumeSpecName: "config") pod "f0d97192-cb28-436d-adc6-a3aafd8aad46" (UID: "f0d97192-cb28-436d-adc6-a3aafd8aad46"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:54:38 crc kubenswrapper[4721]: I0128 18:54:38.624872 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f0d97192-cb28-436d-adc6-a3aafd8aad46-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "f0d97192-cb28-436d-adc6-a3aafd8aad46" (UID: "f0d97192-cb28-436d-adc6-a3aafd8aad46"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:54:38 crc kubenswrapper[4721]: I0128 18:54:38.631716 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fd5c7\" (UniqueName: \"kubernetes.io/projected/f0d97192-cb28-436d-adc6-a3aafd8aad46-kube-api-access-fd5c7\") on node \"crc\" DevicePath \"\"" Jan 28 18:54:38 crc kubenswrapper[4721]: I0128 18:54:38.631753 4721 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f0d97192-cb28-436d-adc6-a3aafd8aad46-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 28 18:54:38 crc kubenswrapper[4721]: I0128 18:54:38.631764 4721 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f0d97192-cb28-436d-adc6-a3aafd8aad46-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 28 18:54:38 crc kubenswrapper[4721]: I0128 18:54:38.631774 4721 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f0d97192-cb28-436d-adc6-a3aafd8aad46-config\") on node \"crc\" DevicePath \"\"" Jan 28 18:54:38 crc kubenswrapper[4721]: I0128 18:54:38.631782 4721 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f0d97192-cb28-436d-adc6-a3aafd8aad46-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 28 18:54:40 crc kubenswrapper[4721]: I0128 18:54:38.926667 4721 generic.go:334] "Generic (PLEG): container finished" podID="22d4a6a6-8702-48ce-92a5-ddaee1395c4d" containerID="a4fe84c6a4aa9a1c38dc456aea3839d4b65cef37f826cd761a73edfe11338e19" exitCode=0 Jan 28 18:54:40 crc kubenswrapper[4721]: I0128 18:54:38.926742 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-944t9" event={"ID":"22d4a6a6-8702-48ce-92a5-ddaee1395c4d","Type":"ContainerDied","Data":"a4fe84c6a4aa9a1c38dc456aea3839d4b65cef37f826cd761a73edfe11338e19"} Jan 28 18:54:40 crc kubenswrapper[4721]: I0128 18:54:38.929871 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-tsjl9" event={"ID":"f0d97192-cb28-436d-adc6-a3aafd8aad46","Type":"ContainerDied","Data":"75c50c6d613af4923a937cf05fe5034952d45aa7b87e2a93cedcbc7e150e9ea3"} Jan 28 18:54:40 crc kubenswrapper[4721]: I0128 18:54:38.929916 4721 scope.go:117] "RemoveContainer" containerID="bc7e174e68ffbb135ab09e3a6e5fb466556062db0ef3a8bf819fd716ee75696a" Jan 28 18:54:40 crc kubenswrapper[4721]: I0128 18:54:38.929951 4721 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-tsjl9" Jan 28 18:54:40 crc kubenswrapper[4721]: I0128 18:54:38.951313 4721 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cloudkitty-lokistack-ingester-0" podUID="742e65f6-66eb-4334-9328-b77d47d420d0" containerName="loki-ingester" probeResult="failure" output="HTTP probe failed with statuscode: 503" Jan 28 18:54:40 crc kubenswrapper[4721]: I0128 18:54:38.969463 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-tsjl9"] Jan 28 18:54:40 crc kubenswrapper[4721]: I0128 18:54:38.984908 4721 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-tsjl9"] Jan 28 18:54:40 crc kubenswrapper[4721]: I0128 18:54:39.045939 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-j284c"] Jan 28 18:54:40 crc kubenswrapper[4721]: I0128 18:54:39.054061 4721 scope.go:117] "RemoveContainer" containerID="a0f4e8b7b4c9005f03034e555cd6e1fee8a76df36f415f9caf9100ad3f1b839e" Jan 28 18:54:40 crc kubenswrapper[4721]: W0128 18:54:39.066274 4721 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7b2b2524_50e6_4d73_bdb9_8770b642481e.slice/crio-e95ccae744258def981dbab17f77366df750160a9e686762c1f8fe4eb373774c WatchSource:0}: Error finding container e95ccae744258def981dbab17f77366df750160a9e686762c1f8fe4eb373774c: Status 404 returned error can't find the container with id e95ccae744258def981dbab17f77366df750160a9e686762c1f8fe4eb373774c Jan 28 18:54:40 crc kubenswrapper[4721]: I0128 18:54:39.122255 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0" Jan 28 18:54:40 crc kubenswrapper[4721]: I0128 18:54:39.551081 4721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f0d97192-cb28-436d-adc6-a3aafd8aad46" path="/var/lib/kubelet/pods/f0d97192-cb28-436d-adc6-a3aafd8aad46/volumes" Jan 28 18:54:40 crc kubenswrapper[4721]: I0128 18:54:39.940049 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-j284c" event={"ID":"7b2b2524-50e6-4d73-bdb9-8770b642481e","Type":"ContainerStarted","Data":"e95ccae744258def981dbab17f77366df750160a9e686762c1f8fe4eb373774c"} Jan 28 18:54:40 crc kubenswrapper[4721]: I0128 18:54:39.943751 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"dc3781f4-04ef-40f3-b772-88deb9a9e3b6","Type":"ContainerStarted","Data":"ef9e2874d303519c1d9d8f1228aa76c719e69f1ebbaff4752a1ee9bb9fc16828"} Jan 28 18:54:40 crc kubenswrapper[4721]: I0128 18:54:39.999374 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/prometheus-metric-storage-0" podStartSLOduration=34.607333096 podStartE2EDuration="1m32.999352169s" podCreationTimestamp="2026-01-28 18:53:07 +0000 UTC" firstStartedPulling="2026-01-28 18:53:40.669945331 +0000 UTC m=+1186.395250891" lastFinishedPulling="2026-01-28 18:54:39.061964404 +0000 UTC m=+1244.787269964" observedRunningTime="2026-01-28 18:54:39.99653933 +0000 UTC m=+1245.721844900" watchObservedRunningTime="2026-01-28 18:54:39.999352169 +0000 UTC m=+1245.724657729" Jan 28 18:54:40 crc kubenswrapper[4721]: I0128 18:54:40.752296 4721 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-944t9" Jan 28 18:54:40 crc kubenswrapper[4721]: I0128 18:54:40.894537 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/22d4a6a6-8702-48ce-92a5-ddaee1395c4d-operator-scripts\") pod \"22d4a6a6-8702-48ce-92a5-ddaee1395c4d\" (UID: \"22d4a6a6-8702-48ce-92a5-ddaee1395c4d\") " Jan 28 18:54:40 crc kubenswrapper[4721]: I0128 18:54:40.894739 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8s7wx\" (UniqueName: \"kubernetes.io/projected/22d4a6a6-8702-48ce-92a5-ddaee1395c4d-kube-api-access-8s7wx\") pod \"22d4a6a6-8702-48ce-92a5-ddaee1395c4d\" (UID: \"22d4a6a6-8702-48ce-92a5-ddaee1395c4d\") " Jan 28 18:54:40 crc kubenswrapper[4721]: I0128 18:54:40.895240 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22d4a6a6-8702-48ce-92a5-ddaee1395c4d-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "22d4a6a6-8702-48ce-92a5-ddaee1395c4d" (UID: "22d4a6a6-8702-48ce-92a5-ddaee1395c4d"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:54:40 crc kubenswrapper[4721]: I0128 18:54:40.895404 4721 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/22d4a6a6-8702-48ce-92a5-ddaee1395c4d-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 18:54:40 crc kubenswrapper[4721]: I0128 18:54:40.911967 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22d4a6a6-8702-48ce-92a5-ddaee1395c4d-kube-api-access-8s7wx" (OuterVolumeSpecName: "kube-api-access-8s7wx") pod "22d4a6a6-8702-48ce-92a5-ddaee1395c4d" (UID: "22d4a6a6-8702-48ce-92a5-ddaee1395c4d"). InnerVolumeSpecName "kube-api-access-8s7wx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:54:40 crc kubenswrapper[4721]: I0128 18:54:40.959473 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-944t9" event={"ID":"22d4a6a6-8702-48ce-92a5-ddaee1395c4d","Type":"ContainerDied","Data":"fa6ae3c4adec6786004f2e98baa2c975feccd8b6feee3e18e4b79843eea6bb79"} Jan 28 18:54:40 crc kubenswrapper[4721]: I0128 18:54:40.959538 4721 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fa6ae3c4adec6786004f2e98baa2c975feccd8b6feee3e18e4b79843eea6bb79" Jan 28 18:54:40 crc kubenswrapper[4721]: I0128 18:54:40.959773 4721 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-944t9" Jan 28 18:54:40 crc kubenswrapper[4721]: I0128 18:54:40.997383 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8s7wx\" (UniqueName: \"kubernetes.io/projected/22d4a6a6-8702-48ce-92a5-ddaee1395c4d-kube-api-access-8s7wx\") on node \"crc\" DevicePath \"\"" Jan 28 18:54:41 crc kubenswrapper[4721]: I0128 18:54:41.974252 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-7bhzw" event={"ID":"d06bcf83-999f-419a-9f4f-4e6544576897","Type":"ContainerStarted","Data":"f8a010946fc4ca83ba26f0c6cbb1e4ecbcec583b97d5f5919401a331f6006efe"} Jan 28 18:54:41 crc kubenswrapper[4721]: I0128 18:54:41.998537 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-ring-rebalance-7bhzw" podStartSLOduration=2.691182695 podStartE2EDuration="39.998509558s" podCreationTimestamp="2026-01-28 18:54:02 +0000 UTC" firstStartedPulling="2026-01-28 18:54:03.812659125 +0000 UTC m=+1209.537964685" lastFinishedPulling="2026-01-28 18:54:41.119985988 +0000 UTC m=+1246.845291548" observedRunningTime="2026-01-28 18:54:41.997889228 +0000 UTC m=+1247.723194798" watchObservedRunningTime="2026-01-28 18:54:41.998509558 +0000 UTC m=+1247.723815118" Jan 28 18:54:42 crc kubenswrapper[4721]: I0128 18:54:42.441282 4721 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-86db49b7ff-tsjl9" podUID="f0d97192-cb28-436d-adc6-a3aafd8aad46" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.130:5353: i/o timeout" Jan 28 18:54:42 crc kubenswrapper[4721]: I0128 18:54:42.967401 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Jan 28 18:54:43 crc kubenswrapper[4721]: I0128 18:54:43.379526 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Jan 28 18:54:43 crc kubenswrapper[4721]: I0128 18:54:43.394206 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-create-2mm9f"] Jan 28 18:54:43 crc kubenswrapper[4721]: E0128 18:54:43.394665 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f0d97192-cb28-436d-adc6-a3aafd8aad46" containerName="init" Jan 28 18:54:43 crc kubenswrapper[4721]: I0128 18:54:43.394683 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="f0d97192-cb28-436d-adc6-a3aafd8aad46" containerName="init" Jan 28 18:54:43 crc kubenswrapper[4721]: E0128 18:54:43.394712 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="22d4a6a6-8702-48ce-92a5-ddaee1395c4d" containerName="mariadb-account-create-update" Jan 28 18:54:43 crc kubenswrapper[4721]: I0128 18:54:43.394719 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="22d4a6a6-8702-48ce-92a5-ddaee1395c4d" containerName="mariadb-account-create-update" Jan 28 18:54:43 crc kubenswrapper[4721]: E0128 18:54:43.394740 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f0d97192-cb28-436d-adc6-a3aafd8aad46" containerName="dnsmasq-dns" Jan 28 18:54:43 crc kubenswrapper[4721]: I0128 18:54:43.394747 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="f0d97192-cb28-436d-adc6-a3aafd8aad46" containerName="dnsmasq-dns" Jan 28 18:54:43 crc kubenswrapper[4721]: I0128 18:54:43.394910 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="22d4a6a6-8702-48ce-92a5-ddaee1395c4d" containerName="mariadb-account-create-update" Jan 28 18:54:43 crc 
kubenswrapper[4721]: I0128 18:54:43.394924 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="f0d97192-cb28-436d-adc6-a3aafd8aad46" containerName="dnsmasq-dns" Jan 28 18:54:43 crc kubenswrapper[4721]: I0128 18:54:43.395758 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-2mm9f" Jan 28 18:54:43 crc kubenswrapper[4721]: I0128 18:54:43.427276 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-2mm9f"] Jan 28 18:54:43 crc kubenswrapper[4721]: I0128 18:54:43.463479 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x7vdp\" (UniqueName: \"kubernetes.io/projected/cb21066a-3041-482d-a9bc-1e630bca568a-kube-api-access-x7vdp\") pod \"barbican-db-create-2mm9f\" (UID: \"cb21066a-3041-482d-a9bc-1e630bca568a\") " pod="openstack/barbican-db-create-2mm9f" Jan 28 18:54:43 crc kubenswrapper[4721]: I0128 18:54:43.463573 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cb21066a-3041-482d-a9bc-1e630bca568a-operator-scripts\") pod \"barbican-db-create-2mm9f\" (UID: \"cb21066a-3041-482d-a9bc-1e630bca568a\") " pod="openstack/barbican-db-create-2mm9f" Jan 28 18:54:43 crc kubenswrapper[4721]: I0128 18:54:43.548753 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-774d-account-create-update-gkltd"] Jan 28 18:54:43 crc kubenswrapper[4721]: I0128 18:54:43.551158 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-774d-account-create-update-gkltd" Jan 28 18:54:43 crc kubenswrapper[4721]: I0128 18:54:43.556074 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret" Jan 28 18:54:43 crc kubenswrapper[4721]: I0128 18:54:43.563994 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-774d-account-create-update-gkltd"] Jan 28 18:54:43 crc kubenswrapper[4721]: I0128 18:54:43.571909 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x7vdp\" (UniqueName: \"kubernetes.io/projected/cb21066a-3041-482d-a9bc-1e630bca568a-kube-api-access-x7vdp\") pod \"barbican-db-create-2mm9f\" (UID: \"cb21066a-3041-482d-a9bc-1e630bca568a\") " pod="openstack/barbican-db-create-2mm9f" Jan 28 18:54:43 crc kubenswrapper[4721]: I0128 18:54:43.571969 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cb21066a-3041-482d-a9bc-1e630bca568a-operator-scripts\") pod \"barbican-db-create-2mm9f\" (UID: \"cb21066a-3041-482d-a9bc-1e630bca568a\") " pod="openstack/barbican-db-create-2mm9f" Jan 28 18:54:43 crc kubenswrapper[4721]: I0128 18:54:43.572685 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cb21066a-3041-482d-a9bc-1e630bca568a-operator-scripts\") pod \"barbican-db-create-2mm9f\" (UID: \"cb21066a-3041-482d-a9bc-1e630bca568a\") " pod="openstack/barbican-db-create-2mm9f" Jan 28 18:54:43 crc kubenswrapper[4721]: I0128 18:54:43.616316 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x7vdp\" (UniqueName: \"kubernetes.io/projected/cb21066a-3041-482d-a9bc-1e630bca568a-kube-api-access-x7vdp\") pod \"barbican-db-create-2mm9f\" (UID: \"cb21066a-3041-482d-a9bc-1e630bca568a\") " 
pod="openstack/barbican-db-create-2mm9f" Jan 28 18:54:43 crc kubenswrapper[4721]: I0128 18:54:43.673754 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/639fd412-92fd-4dc3-bc89-c75178b7d83e-operator-scripts\") pod \"barbican-774d-account-create-update-gkltd\" (UID: \"639fd412-92fd-4dc3-bc89-c75178b7d83e\") " pod="openstack/barbican-774d-account-create-update-gkltd" Jan 28 18:54:43 crc kubenswrapper[4721]: I0128 18:54:43.673800 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sjjzh\" (UniqueName: \"kubernetes.io/projected/639fd412-92fd-4dc3-bc89-c75178b7d83e-kube-api-access-sjjzh\") pod \"barbican-774d-account-create-update-gkltd\" (UID: \"639fd412-92fd-4dc3-bc89-c75178b7d83e\") " pod="openstack/barbican-774d-account-create-update-gkltd" Jan 28 18:54:43 crc kubenswrapper[4721]: I0128 18:54:43.717825 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-2mm9f" Jan 28 18:54:43 crc kubenswrapper[4721]: I0128 18:54:43.719451 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-efa1-account-create-update-kvj9r"] Jan 28 18:54:43 crc kubenswrapper[4721]: I0128 18:54:43.721819 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-efa1-account-create-update-kvj9r" Jan 28 18:54:43 crc kubenswrapper[4721]: I0128 18:54:43.737391 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret" Jan 28 18:54:43 crc kubenswrapper[4721]: I0128 18:54:43.748440 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-efa1-account-create-update-kvj9r"] Jan 28 18:54:43 crc kubenswrapper[4721]: I0128 18:54:43.777066 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5de37217-5d22-4a9e-9e27-f1ed05b2d63e-operator-scripts\") pod \"cinder-efa1-account-create-update-kvj9r\" (UID: \"5de37217-5d22-4a9e-9e27-f1ed05b2d63e\") " pod="openstack/cinder-efa1-account-create-update-kvj9r" Jan 28 18:54:43 crc kubenswrapper[4721]: I0128 18:54:43.777155 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/639fd412-92fd-4dc3-bc89-c75178b7d83e-operator-scripts\") pod \"barbican-774d-account-create-update-gkltd\" (UID: \"639fd412-92fd-4dc3-bc89-c75178b7d83e\") " pod="openstack/barbican-774d-account-create-update-gkltd" Jan 28 18:54:43 crc kubenswrapper[4721]: I0128 18:54:43.777265 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sjjzh\" (UniqueName: \"kubernetes.io/projected/639fd412-92fd-4dc3-bc89-c75178b7d83e-kube-api-access-sjjzh\") pod \"barbican-774d-account-create-update-gkltd\" (UID: \"639fd412-92fd-4dc3-bc89-c75178b7d83e\") " pod="openstack/barbican-774d-account-create-update-gkltd" Jan 28 18:54:43 crc kubenswrapper[4721]: I0128 18:54:43.777309 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t9swm\" (UniqueName: \"kubernetes.io/projected/5de37217-5d22-4a9e-9e27-f1ed05b2d63e-kube-api-access-t9swm\") pod \"cinder-efa1-account-create-update-kvj9r\" (UID: \"5de37217-5d22-4a9e-9e27-f1ed05b2d63e\") " pod="openstack/cinder-efa1-account-create-update-kvj9r" Jan 28 18:54:43 
crc kubenswrapper[4721]: I0128 18:54:43.801558 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/639fd412-92fd-4dc3-bc89-c75178b7d83e-operator-scripts\") pod \"barbican-774d-account-create-update-gkltd\" (UID: \"639fd412-92fd-4dc3-bc89-c75178b7d83e\") " pod="openstack/barbican-774d-account-create-update-gkltd" Jan 28 18:54:43 crc kubenswrapper[4721]: I0128 18:54:43.805999 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sjjzh\" (UniqueName: \"kubernetes.io/projected/639fd412-92fd-4dc3-bc89-c75178b7d83e-kube-api-access-sjjzh\") pod \"barbican-774d-account-create-update-gkltd\" (UID: \"639fd412-92fd-4dc3-bc89-c75178b7d83e\") " pod="openstack/barbican-774d-account-create-update-gkltd" Jan 28 18:54:43 crc kubenswrapper[4721]: I0128 18:54:43.840894 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-7tqqv"] Jan 28 18:54:43 crc kubenswrapper[4721]: I0128 18:54:43.842830 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-7tqqv" Jan 28 18:54:43 crc kubenswrapper[4721]: I0128 18:54:43.879919 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5de37217-5d22-4a9e-9e27-f1ed05b2d63e-operator-scripts\") pod \"cinder-efa1-account-create-update-kvj9r\" (UID: \"5de37217-5d22-4a9e-9e27-f1ed05b2d63e\") " pod="openstack/cinder-efa1-account-create-update-kvj9r" Jan 28 18:54:43 crc kubenswrapper[4721]: I0128 18:54:43.880256 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t9swm\" (UniqueName: \"kubernetes.io/projected/5de37217-5d22-4a9e-9e27-f1ed05b2d63e-kube-api-access-t9swm\") pod \"cinder-efa1-account-create-update-kvj9r\" (UID: \"5de37217-5d22-4a9e-9e27-f1ed05b2d63e\") " pod="openstack/cinder-efa1-account-create-update-kvj9r" Jan 28 18:54:43 crc kubenswrapper[4721]: I0128 18:54:43.882244 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5de37217-5d22-4a9e-9e27-f1ed05b2d63e-operator-scripts\") pod \"cinder-efa1-account-create-update-kvj9r\" (UID: \"5de37217-5d22-4a9e-9e27-f1ed05b2d63e\") " pod="openstack/cinder-efa1-account-create-update-kvj9r" Jan 28 18:54:43 crc kubenswrapper[4721]: I0128 18:54:43.882313 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-7tqqv"] Jan 28 18:54:43 crc kubenswrapper[4721]: I0128 18:54:43.888651 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-774d-account-create-update-gkltd" Jan 28 18:54:43 crc kubenswrapper[4721]: I0128 18:54:43.928773 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-wppp8"] Jan 28 18:54:43 crc kubenswrapper[4721]: I0128 18:54:43.930839 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-wppp8" Jan 28 18:54:43 crc kubenswrapper[4721]: I0128 18:54:43.940734 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 28 18:54:43 crc kubenswrapper[4721]: I0128 18:54:43.942440 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t9swm\" (UniqueName: \"kubernetes.io/projected/5de37217-5d22-4a9e-9e27-f1ed05b2d63e-kube-api-access-t9swm\") pod \"cinder-efa1-account-create-update-kvj9r\" (UID: \"5de37217-5d22-4a9e-9e27-f1ed05b2d63e\") " pod="openstack/cinder-efa1-account-create-update-kvj9r" Jan 28 18:54:43 crc kubenswrapper[4721]: I0128 18:54:43.942652 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 28 18:54:43 crc kubenswrapper[4721]: I0128 18:54:43.942709 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-gfv9p" Jan 28 18:54:43 crc kubenswrapper[4721]: I0128 18:54:43.943577 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 28 18:54:43 crc kubenswrapper[4721]: I0128 18:54:43.976715 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-wppp8"] Jan 28 18:54:43 crc kubenswrapper[4721]: I0128 18:54:43.982566 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4ebde931-976a-4436-a92f-a5a5d44fdc11-operator-scripts\") pod \"cinder-db-create-7tqqv\" (UID: \"4ebde931-976a-4436-a92f-a5a5d44fdc11\") " pod="openstack/cinder-db-create-7tqqv" Jan 28 18:54:43 crc kubenswrapper[4721]: I0128 18:54:43.982726 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/06674c33-d387-4999-9e87-d72f80b98173-combined-ca-bundle\") pod \"keystone-db-sync-wppp8\" (UID: \"06674c33-d387-4999-9e87-d72f80b98173\") " pod="openstack/keystone-db-sync-wppp8" Jan 28 18:54:43 crc kubenswrapper[4721]: I0128 18:54:43.982799 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/06674c33-d387-4999-9e87-d72f80b98173-config-data\") pod \"keystone-db-sync-wppp8\" (UID: \"06674c33-d387-4999-9e87-d72f80b98173\") " pod="openstack/keystone-db-sync-wppp8" Jan 28 18:54:43 crc kubenswrapper[4721]: I0128 18:54:43.982892 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dbb52\" (UniqueName: \"kubernetes.io/projected/06674c33-d387-4999-9e87-d72f80b98173-kube-api-access-dbb52\") pod \"keystone-db-sync-wppp8\" (UID: \"06674c33-d387-4999-9e87-d72f80b98173\") " pod="openstack/keystone-db-sync-wppp8" Jan 28 18:54:43 crc kubenswrapper[4721]: I0128 18:54:43.982986 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kwzn2\" (UniqueName: \"kubernetes.io/projected/4ebde931-976a-4436-a92f-a5a5d44fdc11-kube-api-access-kwzn2\") pod \"cinder-db-create-7tqqv\" (UID: \"4ebde931-976a-4436-a92f-a5a5d44fdc11\") " pod="openstack/cinder-db-create-7tqqv" Jan 28 18:54:44 crc kubenswrapper[4721]: I0128 18:54:44.008426 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-62wbn"] Jan 28 18:54:44 crc kubenswrapper[4721]: I0128 18:54:44.010323 4721 util.go:30] "No sandbox 
for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-62wbn" Jan 28 18:54:44 crc kubenswrapper[4721]: I0128 18:54:44.062389 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-62wbn"] Jan 28 18:54:44 crc kubenswrapper[4721]: I0128 18:54:44.089614 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kwzn2\" (UniqueName: \"kubernetes.io/projected/4ebde931-976a-4436-a92f-a5a5d44fdc11-kube-api-access-kwzn2\") pod \"cinder-db-create-7tqqv\" (UID: \"4ebde931-976a-4436-a92f-a5a5d44fdc11\") " pod="openstack/cinder-db-create-7tqqv" Jan 28 18:54:44 crc kubenswrapper[4721]: I0128 18:54:44.089697 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-skkch\" (UniqueName: \"kubernetes.io/projected/d93a26f9-04bd-4215-a6fb-230626a1e376-kube-api-access-skkch\") pod \"neutron-db-create-62wbn\" (UID: \"d93a26f9-04bd-4215-a6fb-230626a1e376\") " pod="openstack/neutron-db-create-62wbn" Jan 28 18:54:44 crc kubenswrapper[4721]: I0128 18:54:44.089745 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4ebde931-976a-4436-a92f-a5a5d44fdc11-operator-scripts\") pod \"cinder-db-create-7tqqv\" (UID: \"4ebde931-976a-4436-a92f-a5a5d44fdc11\") " pod="openstack/cinder-db-create-7tqqv" Jan 28 18:54:44 crc kubenswrapper[4721]: I0128 18:54:44.089817 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/06674c33-d387-4999-9e87-d72f80b98173-combined-ca-bundle\") pod \"keystone-db-sync-wppp8\" (UID: \"06674c33-d387-4999-9e87-d72f80b98173\") " pod="openstack/keystone-db-sync-wppp8" Jan 28 18:54:44 crc kubenswrapper[4721]: I0128 18:54:44.089862 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/06674c33-d387-4999-9e87-d72f80b98173-config-data\") pod \"keystone-db-sync-wppp8\" (UID: \"06674c33-d387-4999-9e87-d72f80b98173\") " pod="openstack/keystone-db-sync-wppp8" Jan 28 18:54:44 crc kubenswrapper[4721]: I0128 18:54:44.094827 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d93a26f9-04bd-4215-a6fb-230626a1e376-operator-scripts\") pod \"neutron-db-create-62wbn\" (UID: \"d93a26f9-04bd-4215-a6fb-230626a1e376\") " pod="openstack/neutron-db-create-62wbn" Jan 28 18:54:44 crc kubenswrapper[4721]: I0128 18:54:44.094909 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dbb52\" (UniqueName: \"kubernetes.io/projected/06674c33-d387-4999-9e87-d72f80b98173-kube-api-access-dbb52\") pod \"keystone-db-sync-wppp8\" (UID: \"06674c33-d387-4999-9e87-d72f80b98173\") " pod="openstack/keystone-db-sync-wppp8" Jan 28 18:54:44 crc kubenswrapper[4721]: I0128 18:54:44.095542 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4ebde931-976a-4436-a92f-a5a5d44fdc11-operator-scripts\") pod \"cinder-db-create-7tqqv\" (UID: \"4ebde931-976a-4436-a92f-a5a5d44fdc11\") " pod="openstack/cinder-db-create-7tqqv" Jan 28 18:54:44 crc kubenswrapper[4721]: I0128 18:54:44.107481 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/06674c33-d387-4999-9e87-d72f80b98173-combined-ca-bundle\") pod \"keystone-db-sync-wppp8\" (UID: \"06674c33-d387-4999-9e87-d72f80b98173\") " pod="openstack/keystone-db-sync-wppp8" Jan 28 18:54:44 crc kubenswrapper[4721]: I0128 18:54:44.120395 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/06674c33-d387-4999-9e87-d72f80b98173-config-data\") pod \"keystone-db-sync-wppp8\" (UID: \"06674c33-d387-4999-9e87-d72f80b98173\") " pod="openstack/keystone-db-sync-wppp8" Jan 28 18:54:44 crc kubenswrapper[4721]: I0128 18:54:44.132078 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kwzn2\" (UniqueName: \"kubernetes.io/projected/4ebde931-976a-4436-a92f-a5a5d44fdc11-kube-api-access-kwzn2\") pod \"cinder-db-create-7tqqv\" (UID: \"4ebde931-976a-4436-a92f-a5a5d44fdc11\") " pod="openstack/cinder-db-create-7tqqv" Jan 28 18:54:44 crc kubenswrapper[4721]: I0128 18:54:44.142999 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dbb52\" (UniqueName: \"kubernetes.io/projected/06674c33-d387-4999-9e87-d72f80b98173-kube-api-access-dbb52\") pod \"keystone-db-sync-wppp8\" (UID: \"06674c33-d387-4999-9e87-d72f80b98173\") " pod="openstack/keystone-db-sync-wppp8" Jan 28 18:54:44 crc kubenswrapper[4721]: I0128 18:54:44.147075 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cloudkitty-db-create-hs5gk"] Jan 28 18:54:44 crc kubenswrapper[4721]: I0128 18:54:44.148793 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-db-create-hs5gk" Jan 28 18:54:44 crc kubenswrapper[4721]: I0128 18:54:44.197810 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0104f7d3-7be4-411b-8ca7-89c72b31b43d-operator-scripts\") pod \"cloudkitty-db-create-hs5gk\" (UID: \"0104f7d3-7be4-411b-8ca7-89c72b31b43d\") " pod="openstack/cloudkitty-db-create-hs5gk" Jan 28 18:54:44 crc kubenswrapper[4721]: I0128 18:54:44.197914 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d93a26f9-04bd-4215-a6fb-230626a1e376-operator-scripts\") pod \"neutron-db-create-62wbn\" (UID: \"d93a26f9-04bd-4215-a6fb-230626a1e376\") " pod="openstack/neutron-db-create-62wbn" Jan 28 18:54:44 crc kubenswrapper[4721]: I0128 18:54:44.198047 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-skkch\" (UniqueName: \"kubernetes.io/projected/d93a26f9-04bd-4215-a6fb-230626a1e376-kube-api-access-skkch\") pod \"neutron-db-create-62wbn\" (UID: \"d93a26f9-04bd-4215-a6fb-230626a1e376\") " pod="openstack/neutron-db-create-62wbn" Jan 28 18:54:44 crc kubenswrapper[4721]: I0128 18:54:44.198130 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4cfgr\" (UniqueName: \"kubernetes.io/projected/0104f7d3-7be4-411b-8ca7-89c72b31b43d-kube-api-access-4cfgr\") pod \"cloudkitty-db-create-hs5gk\" (UID: \"0104f7d3-7be4-411b-8ca7-89c72b31b43d\") " pod="openstack/cloudkitty-db-create-hs5gk" Jan 28 18:54:44 crc kubenswrapper[4721]: I0128 18:54:44.201598 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d93a26f9-04bd-4215-a6fb-230626a1e376-operator-scripts\") pod 
\"neutron-db-create-62wbn\" (UID: \"d93a26f9-04bd-4215-a6fb-230626a1e376\") " pod="openstack/neutron-db-create-62wbn" Jan 28 18:54:44 crc kubenswrapper[4721]: I0128 18:54:44.202386 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-db-create-hs5gk"] Jan 28 18:54:44 crc kubenswrapper[4721]: I0128 18:54:44.204824 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-efa1-account-create-update-kvj9r" Jan 28 18:54:44 crc kubenswrapper[4721]: I0128 18:54:44.228314 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-7tqqv" Jan 28 18:54:44 crc kubenswrapper[4721]: I0128 18:54:44.233289 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-869a-account-create-update-ndnwg"] Jan 28 18:54:44 crc kubenswrapper[4721]: I0128 18:54:44.244534 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-869a-account-create-update-ndnwg" Jan 28 18:54:44 crc kubenswrapper[4721]: I0128 18:54:44.249689 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/prometheus-metric-storage-0" Jan 28 18:54:44 crc kubenswrapper[4721]: I0128 18:54:44.256358 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-skkch\" (UniqueName: \"kubernetes.io/projected/d93a26f9-04bd-4215-a6fb-230626a1e376-kube-api-access-skkch\") pod \"neutron-db-create-62wbn\" (UID: \"d93a26f9-04bd-4215-a6fb-230626a1e376\") " pod="openstack/neutron-db-create-62wbn" Jan 28 18:54:44 crc kubenswrapper[4721]: I0128 18:54:44.258991 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Jan 28 18:54:44 crc kubenswrapper[4721]: I0128 18:54:44.301365 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/42933b28-6c8f-4536-9be5-69b88a0d1390-operator-scripts\") pod \"neutron-869a-account-create-update-ndnwg\" (UID: \"42933b28-6c8f-4536-9be5-69b88a0d1390\") " pod="openstack/neutron-869a-account-create-update-ndnwg" Jan 28 18:54:44 crc kubenswrapper[4721]: I0128 18:54:44.303495 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4cfgr\" (UniqueName: \"kubernetes.io/projected/0104f7d3-7be4-411b-8ca7-89c72b31b43d-kube-api-access-4cfgr\") pod \"cloudkitty-db-create-hs5gk\" (UID: \"0104f7d3-7be4-411b-8ca7-89c72b31b43d\") " pod="openstack/cloudkitty-db-create-hs5gk" Jan 28 18:54:44 crc kubenswrapper[4721]: I0128 18:54:44.303614 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7pln9\" (UniqueName: \"kubernetes.io/projected/42933b28-6c8f-4536-9be5-69b88a0d1390-kube-api-access-7pln9\") pod \"neutron-869a-account-create-update-ndnwg\" (UID: \"42933b28-6c8f-4536-9be5-69b88a0d1390\") " pod="openstack/neutron-869a-account-create-update-ndnwg" Jan 28 18:54:44 crc kubenswrapper[4721]: I0128 18:54:44.307527 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0104f7d3-7be4-411b-8ca7-89c72b31b43d-operator-scripts\") pod \"cloudkitty-db-create-hs5gk\" (UID: \"0104f7d3-7be4-411b-8ca7-89c72b31b43d\") " pod="openstack/cloudkitty-db-create-hs5gk" Jan 28 18:54:44 crc kubenswrapper[4721]: I0128 18:54:44.308748 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0104f7d3-7be4-411b-8ca7-89c72b31b43d-operator-scripts\") pod \"cloudkitty-db-create-hs5gk\" (UID: \"0104f7d3-7be4-411b-8ca7-89c72b31b43d\") " pod="openstack/cloudkitty-db-create-hs5gk" Jan 28 18:54:44 crc kubenswrapper[4721]: I0128 18:54:44.349325 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4cfgr\" (UniqueName: \"kubernetes.io/projected/0104f7d3-7be4-411b-8ca7-89c72b31b43d-kube-api-access-4cfgr\") pod \"cloudkitty-db-create-hs5gk\" (UID: \"0104f7d3-7be4-411b-8ca7-89c72b31b43d\") " pod="openstack/cloudkitty-db-create-hs5gk" Jan 28 18:54:44 crc kubenswrapper[4721]: I0128 18:54:44.365491 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-869a-account-create-update-ndnwg"] Jan 28 18:54:44 crc kubenswrapper[4721]: I0128 18:54:44.390952 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cloudkitty-0f57-account-create-update-qgx95"] Jan 28 18:54:44 crc kubenswrapper[4721]: I0128 18:54:44.393019 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-0f57-account-create-update-qgx95" Jan 28 18:54:44 crc kubenswrapper[4721]: I0128 18:54:44.395711 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-0f57-account-create-update-qgx95"] Jan 28 18:54:44 crc kubenswrapper[4721]: I0128 18:54:44.401002 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-db-secret" Jan 28 18:54:44 crc kubenswrapper[4721]: I0128 18:54:44.410188 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ngcr7\" (UniqueName: \"kubernetes.io/projected/cd381740-3e1d-456b-b5f1-e19f679513da-kube-api-access-ngcr7\") pod \"cloudkitty-0f57-account-create-update-qgx95\" (UID: \"cd381740-3e1d-456b-b5f1-e19f679513da\") " pod="openstack/cloudkitty-0f57-account-create-update-qgx95" Jan 28 18:54:44 crc kubenswrapper[4721]: I0128 18:54:44.410346 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/42933b28-6c8f-4536-9be5-69b88a0d1390-operator-scripts\") pod \"neutron-869a-account-create-update-ndnwg\" (UID: \"42933b28-6c8f-4536-9be5-69b88a0d1390\") " pod="openstack/neutron-869a-account-create-update-ndnwg" Jan 28 18:54:44 crc kubenswrapper[4721]: I0128 18:54:44.410514 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cd381740-3e1d-456b-b5f1-e19f679513da-operator-scripts\") pod \"cloudkitty-0f57-account-create-update-qgx95\" (UID: \"cd381740-3e1d-456b-b5f1-e19f679513da\") " pod="openstack/cloudkitty-0f57-account-create-update-qgx95" Jan 28 18:54:44 crc kubenswrapper[4721]: I0128 18:54:44.410555 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7pln9\" (UniqueName: \"kubernetes.io/projected/42933b28-6c8f-4536-9be5-69b88a0d1390-kube-api-access-7pln9\") pod \"neutron-869a-account-create-update-ndnwg\" (UID: \"42933b28-6c8f-4536-9be5-69b88a0d1390\") " pod="openstack/neutron-869a-account-create-update-ndnwg" Jan 28 18:54:44 crc kubenswrapper[4721]: I0128 18:54:44.411836 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/42933b28-6c8f-4536-9be5-69b88a0d1390-operator-scripts\") pod 
\"neutron-869a-account-create-update-ndnwg\" (UID: \"42933b28-6c8f-4536-9be5-69b88a0d1390\") " pod="openstack/neutron-869a-account-create-update-ndnwg" Jan 28 18:54:44 crc kubenswrapper[4721]: I0128 18:54:44.429066 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-wppp8" Jan 28 18:54:44 crc kubenswrapper[4721]: I0128 18:54:44.442565 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-62wbn" Jan 28 18:54:44 crc kubenswrapper[4721]: I0128 18:54:44.455727 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7pln9\" (UniqueName: \"kubernetes.io/projected/42933b28-6c8f-4536-9be5-69b88a0d1390-kube-api-access-7pln9\") pod \"neutron-869a-account-create-update-ndnwg\" (UID: \"42933b28-6c8f-4536-9be5-69b88a0d1390\") " pod="openstack/neutron-869a-account-create-update-ndnwg" Jan 28 18:54:44 crc kubenswrapper[4721]: I0128 18:54:44.495506 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-db-create-hs5gk" Jan 28 18:54:44 crc kubenswrapper[4721]: I0128 18:54:44.503663 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-944t9"] Jan 28 18:54:44 crc kubenswrapper[4721]: I0128 18:54:44.513060 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cd381740-3e1d-456b-b5f1-e19f679513da-operator-scripts\") pod \"cloudkitty-0f57-account-create-update-qgx95\" (UID: \"cd381740-3e1d-456b-b5f1-e19f679513da\") " pod="openstack/cloudkitty-0f57-account-create-update-qgx95" Jan 28 18:54:44 crc kubenswrapper[4721]: I0128 18:54:44.513214 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ngcr7\" (UniqueName: \"kubernetes.io/projected/cd381740-3e1d-456b-b5f1-e19f679513da-kube-api-access-ngcr7\") pod \"cloudkitty-0f57-account-create-update-qgx95\" (UID: \"cd381740-3e1d-456b-b5f1-e19f679513da\") " pod="openstack/cloudkitty-0f57-account-create-update-qgx95" Jan 28 18:54:44 crc kubenswrapper[4721]: I0128 18:54:44.513916 4721 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-944t9"] Jan 28 18:54:44 crc kubenswrapper[4721]: I0128 18:54:44.514620 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cd381740-3e1d-456b-b5f1-e19f679513da-operator-scripts\") pod \"cloudkitty-0f57-account-create-update-qgx95\" (UID: \"cd381740-3e1d-456b-b5f1-e19f679513da\") " pod="openstack/cloudkitty-0f57-account-create-update-qgx95" Jan 28 18:54:44 crc kubenswrapper[4721]: I0128 18:54:44.543959 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ngcr7\" (UniqueName: \"kubernetes.io/projected/cd381740-3e1d-456b-b5f1-e19f679513da-kube-api-access-ngcr7\") pod \"cloudkitty-0f57-account-create-update-qgx95\" (UID: \"cd381740-3e1d-456b-b5f1-e19f679513da\") " pod="openstack/cloudkitty-0f57-account-create-update-qgx95" Jan 28 18:54:44 crc kubenswrapper[4721]: I0128 18:54:44.577800 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-869a-account-create-update-ndnwg" Jan 28 18:54:44 crc kubenswrapper[4721]: I0128 18:54:44.677604 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-2mm9f"] Jan 28 18:54:44 crc kubenswrapper[4721]: I0128 18:54:44.722029 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-0f57-account-create-update-qgx95" Jan 28 18:54:44 crc kubenswrapper[4721]: I0128 18:54:44.845688 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-774d-account-create-update-gkltd"] Jan 28 18:54:44 crc kubenswrapper[4721]: W0128 18:54:44.940158 4721 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod639fd412_92fd_4dc3_bc89_c75178b7d83e.slice/crio-52443d1b51adcabfb10c8fe7f939697d9cd1719f303980fc1a7861b26a13c3b3 WatchSource:0}: Error finding container 52443d1b51adcabfb10c8fe7f939697d9cd1719f303980fc1a7861b26a13c3b3: Status 404 returned error can't find the container with id 52443d1b51adcabfb10c8fe7f939697d9cd1719f303980fc1a7861b26a13c3b3 Jan 28 18:54:45 crc kubenswrapper[4721]: I0128 18:54:45.108144 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-2mm9f" event={"ID":"cb21066a-3041-482d-a9bc-1e630bca568a","Type":"ContainerStarted","Data":"38857dff6c66b1a54135b9a803dc3eeb995bad8abb09df20cc5be1d1c3daa5d1"} Jan 28 18:54:45 crc kubenswrapper[4721]: I0128 18:54:45.111095 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-774d-account-create-update-gkltd" event={"ID":"639fd412-92fd-4dc3-bc89-c75178b7d83e","Type":"ContainerStarted","Data":"52443d1b51adcabfb10c8fe7f939697d9cd1719f303980fc1a7861b26a13c3b3"} Jan 28 18:54:45 crc kubenswrapper[4721]: I0128 18:54:45.227666 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-7tqqv"] Jan 28 18:54:45 crc kubenswrapper[4721]: I0128 18:54:45.251517 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-efa1-account-create-update-kvj9r"] Jan 28 18:54:45 crc kubenswrapper[4721]: W0128 18:54:45.258959 4721 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4ebde931_976a_4436_a92f_a5a5d44fdc11.slice/crio-fd94c2d1cf0dc6262565413a1d7fa0e47602afcc179e6429a6e88ff9f73dff6c WatchSource:0}: Error finding container fd94c2d1cf0dc6262565413a1d7fa0e47602afcc179e6429a6e88ff9f73dff6c: Status 404 returned error can't find the container with id fd94c2d1cf0dc6262565413a1d7fa0e47602afcc179e6429a6e88ff9f73dff6c Jan 28 18:54:45 crc kubenswrapper[4721]: W0128 18:54:45.259875 4721 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5de37217_5d22_4a9e_9e27_f1ed05b2d63e.slice/crio-70a8afd5a503d479076427edc620a497df76fb20f38c4bacdaae38b660cfdfb1 WatchSource:0}: Error finding container 70a8afd5a503d479076427edc620a497df76fb20f38c4bacdaae38b660cfdfb1: Status 404 returned error can't find the container with id 70a8afd5a503d479076427edc620a497df76fb20f38c4bacdaae38b660cfdfb1 Jan 28 18:54:45 crc kubenswrapper[4721]: I0128 18:54:45.265909 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-62wbn"] Jan 28 18:54:45 crc kubenswrapper[4721]: W0128 18:54:45.266858 4721 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd93a26f9_04bd_4215_a6fb_230626a1e376.slice/crio-26eedaa9609760f32ec7bb3d5fbcd63f466b075f084246212415f8e2d2d3e970 WatchSource:0}: Error finding container 26eedaa9609760f32ec7bb3d5fbcd63f466b075f084246212415f8e2d2d3e970: Status 404 returned error can't find the container with id 26eedaa9609760f32ec7bb3d5fbcd63f466b075f084246212415f8e2d2d3e970 Jan 28 18:54:45 crc kubenswrapper[4721]: I0128 18:54:45.442461 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-wppp8"] Jan 28 18:54:45 crc kubenswrapper[4721]: I0128 18:54:45.542570 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-869a-account-create-update-ndnwg"] Jan 28 18:54:45 crc kubenswrapper[4721]: I0128 18:54:45.587708 4721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22d4a6a6-8702-48ce-92a5-ddaee1395c4d" path="/var/lib/kubelet/pods/22d4a6a6-8702-48ce-92a5-ddaee1395c4d/volumes" Jan 28 18:54:45 crc kubenswrapper[4721]: I0128 18:54:45.608402 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-db-create-hs5gk"] Jan 28 18:54:45 crc kubenswrapper[4721]: E0128 18:54:45.760979 4721 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.66:60674->38.102.83.66:37489: write tcp 38.102.83.66:60674->38.102.83.66:37489: write: broken pipe Jan 28 18:54:45 crc kubenswrapper[4721]: I0128 18:54:45.761027 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-0f57-account-create-update-qgx95"] Jan 28 18:54:46 crc kubenswrapper[4721]: I0128 18:54:46.134661 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-7tqqv" event={"ID":"4ebde931-976a-4436-a92f-a5a5d44fdc11","Type":"ContainerStarted","Data":"578547330f0c76fc1a97308f98cc4ad2453a550a0535fd61e65ddb32941cf36a"} Jan 28 18:54:46 crc kubenswrapper[4721]: I0128 18:54:46.134983 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-7tqqv" event={"ID":"4ebde931-976a-4436-a92f-a5a5d44fdc11","Type":"ContainerStarted","Data":"fd94c2d1cf0dc6262565413a1d7fa0e47602afcc179e6429a6e88ff9f73dff6c"} Jan 28 18:54:46 crc kubenswrapper[4721]: I0128 18:54:46.144215 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-774d-account-create-update-gkltd" event={"ID":"639fd412-92fd-4dc3-bc89-c75178b7d83e","Type":"ContainerStarted","Data":"835c4a8ecea505868024d79d258e3fc5477ba3ab2a7f824d022af4baa82da044"} Jan 28 18:54:46 crc kubenswrapper[4721]: I0128 18:54:46.159912 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-62wbn" event={"ID":"d93a26f9-04bd-4215-a6fb-230626a1e376","Type":"ContainerStarted","Data":"142f6c127468dd0656340d9b6dc3a67d1a2a8ffc34f5655e80d62fda449184c9"} Jan 28 18:54:46 crc kubenswrapper[4721]: I0128 18:54:46.159971 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-62wbn" event={"ID":"d93a26f9-04bd-4215-a6fb-230626a1e376","Type":"ContainerStarted","Data":"26eedaa9609760f32ec7bb3d5fbcd63f466b075f084246212415f8e2d2d3e970"} Jan 28 18:54:46 crc kubenswrapper[4721]: I0128 18:54:46.171278 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-869a-account-create-update-ndnwg" event={"ID":"42933b28-6c8f-4536-9be5-69b88a0d1390","Type":"ContainerStarted","Data":"c7077e7c9e4bdf4e599efc5a76515d8694253c1594c6eb16efab278925b313bf"} Jan 28 18:54:46 crc kubenswrapper[4721]: I0128 18:54:46.180450 4721 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-0f57-account-create-update-qgx95" event={"ID":"cd381740-3e1d-456b-b5f1-e19f679513da","Type":"ContainerStarted","Data":"925aab1ae8cb0560960678f0207ad696a04fa9018bbd29c34d265e83fa4b6c76"} Jan 28 18:54:46 crc kubenswrapper[4721]: I0128 18:54:46.189276 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-wppp8" event={"ID":"06674c33-d387-4999-9e87-d72f80b98173","Type":"ContainerStarted","Data":"b0295f47bcaf51526f9dae2f813797fefd6c572e0ded6454304ef1106eb01b9e"} Jan 28 18:54:46 crc kubenswrapper[4721]: I0128 18:54:46.191356 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-db-create-hs5gk" event={"ID":"0104f7d3-7be4-411b-8ca7-89c72b31b43d","Type":"ContainerStarted","Data":"d44e08ef60ded79964316ab15eabcdf28b4c796a7cfa973db0744165e4162fa4"} Jan 28 18:54:46 crc kubenswrapper[4721]: I0128 18:54:46.205536 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-774d-account-create-update-gkltd" podStartSLOduration=3.205509071 podStartE2EDuration="3.205509071s" podCreationTimestamp="2026-01-28 18:54:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:54:46.196953552 +0000 UTC m=+1251.922259112" watchObservedRunningTime="2026-01-28 18:54:46.205509071 +0000 UTC m=+1251.930814631" Jan 28 18:54:46 crc kubenswrapper[4721]: I0128 18:54:46.207427 4721 generic.go:334] "Generic (PLEG): container finished" podID="cb21066a-3041-482d-a9bc-1e630bca568a" containerID="f8fe8f273067fd5aa26440f82d24220fb75b283b9e4944509f492473a1e565ec" exitCode=0 Jan 28 18:54:46 crc kubenswrapper[4721]: I0128 18:54:46.207537 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-2mm9f" event={"ID":"cb21066a-3041-482d-a9bc-1e630bca568a","Type":"ContainerDied","Data":"f8fe8f273067fd5aa26440f82d24220fb75b283b9e4944509f492473a1e565ec"} Jan 28 18:54:46 crc kubenswrapper[4721]: I0128 18:54:46.218885 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-create-7tqqv" podStartSLOduration=3.218860173 podStartE2EDuration="3.218860173s" podCreationTimestamp="2026-01-28 18:54:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:54:46.172630876 +0000 UTC m=+1251.897936436" watchObservedRunningTime="2026-01-28 18:54:46.218860173 +0000 UTC m=+1251.944165733" Jan 28 18:54:46 crc kubenswrapper[4721]: I0128 18:54:46.220362 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-efa1-account-create-update-kvj9r" event={"ID":"5de37217-5d22-4a9e-9e27-f1ed05b2d63e","Type":"ContainerStarted","Data":"9f20de7824405b55e7c6fecfaa65eb0693cfc52624e4ca92b0e8158a5fdeef9f"} Jan 28 18:54:46 crc kubenswrapper[4721]: I0128 18:54:46.220401 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-efa1-account-create-update-kvj9r" event={"ID":"5de37217-5d22-4a9e-9e27-f1ed05b2d63e","Type":"ContainerStarted","Data":"70a8afd5a503d479076427edc620a497df76fb20f38c4bacdaae38b660cfdfb1"} Jan 28 18:54:46 crc kubenswrapper[4721]: I0128 18:54:46.254942 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-869a-account-create-update-ndnwg" podStartSLOduration=2.254912529 podStartE2EDuration="2.254912529s" podCreationTimestamp="2026-01-28 18:54:44 +0000 
UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:54:46.232968176 +0000 UTC m=+1251.958273736" watchObservedRunningTime="2026-01-28 18:54:46.254912529 +0000 UTC m=+1251.980218089" Jan 28 18:54:46 crc kubenswrapper[4721]: I0128 18:54:46.299939 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-create-62wbn" podStartSLOduration=3.299912926 podStartE2EDuration="3.299912926s" podCreationTimestamp="2026-01-28 18:54:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:54:46.26512226 +0000 UTC m=+1251.990427820" watchObservedRunningTime="2026-01-28 18:54:46.299912926 +0000 UTC m=+1252.025218486" Jan 28 18:54:46 crc kubenswrapper[4721]: I0128 18:54:46.386598 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-efa1-account-create-update-kvj9r" podStartSLOduration=3.386566607 podStartE2EDuration="3.386566607s" podCreationTimestamp="2026-01-28 18:54:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:54:46.313798114 +0000 UTC m=+1252.039103684" watchObservedRunningTime="2026-01-28 18:54:46.386566607 +0000 UTC m=+1252.111872167" Jan 28 18:54:47 crc kubenswrapper[4721]: I0128 18:54:47.233203 4721 generic.go:334] "Generic (PLEG): container finished" podID="cd381740-3e1d-456b-b5f1-e19f679513da" containerID="80711faad5b6f4456cf84b27a0bbf117f4e397d1e6a89e93ec3e42813d189999" exitCode=0 Jan 28 18:54:47 crc kubenswrapper[4721]: I0128 18:54:47.233268 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-0f57-account-create-update-qgx95" event={"ID":"cd381740-3e1d-456b-b5f1-e19f679513da","Type":"ContainerDied","Data":"80711faad5b6f4456cf84b27a0bbf117f4e397d1e6a89e93ec3e42813d189999"} Jan 28 18:54:47 crc kubenswrapper[4721]: I0128 18:54:47.236719 4721 generic.go:334] "Generic (PLEG): container finished" podID="0104f7d3-7be4-411b-8ca7-89c72b31b43d" containerID="1131293acabf7e063141eb04cb26bff8b8ff33f86e13d3617a9d84a5744a2a25" exitCode=0 Jan 28 18:54:47 crc kubenswrapper[4721]: I0128 18:54:47.236813 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-db-create-hs5gk" event={"ID":"0104f7d3-7be4-411b-8ca7-89c72b31b43d","Type":"ContainerDied","Data":"1131293acabf7e063141eb04cb26bff8b8ff33f86e13d3617a9d84a5744a2a25"} Jan 28 18:54:47 crc kubenswrapper[4721]: I0128 18:54:47.238876 4721 generic.go:334] "Generic (PLEG): container finished" podID="42933b28-6c8f-4536-9be5-69b88a0d1390" containerID="2dfc24c236e278cc4c79f641c3e0e465eebffb8e1ee210da08df670eec2a3c49" exitCode=0 Jan 28 18:54:47 crc kubenswrapper[4721]: I0128 18:54:47.238948 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-869a-account-create-update-ndnwg" event={"ID":"42933b28-6c8f-4536-9be5-69b88a0d1390","Type":"ContainerDied","Data":"2dfc24c236e278cc4c79f641c3e0e465eebffb8e1ee210da08df670eec2a3c49"} Jan 28 18:54:47 crc kubenswrapper[4721]: I0128 18:54:47.242484 4721 generic.go:334] "Generic (PLEG): container finished" podID="5de37217-5d22-4a9e-9e27-f1ed05b2d63e" containerID="9f20de7824405b55e7c6fecfaa65eb0693cfc52624e4ca92b0e8158a5fdeef9f" exitCode=0 Jan 28 18:54:47 crc kubenswrapper[4721]: I0128 18:54:47.242564 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/cinder-efa1-account-create-update-kvj9r" event={"ID":"5de37217-5d22-4a9e-9e27-f1ed05b2d63e","Type":"ContainerDied","Data":"9f20de7824405b55e7c6fecfaa65eb0693cfc52624e4ca92b0e8158a5fdeef9f"} Jan 28 18:54:47 crc kubenswrapper[4721]: I0128 18:54:47.244505 4721 generic.go:334] "Generic (PLEG): container finished" podID="4ebde931-976a-4436-a92f-a5a5d44fdc11" containerID="578547330f0c76fc1a97308f98cc4ad2453a550a0535fd61e65ddb32941cf36a" exitCode=0 Jan 28 18:54:47 crc kubenswrapper[4721]: I0128 18:54:47.244567 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-7tqqv" event={"ID":"4ebde931-976a-4436-a92f-a5a5d44fdc11","Type":"ContainerDied","Data":"578547330f0c76fc1a97308f98cc4ad2453a550a0535fd61e65ddb32941cf36a"} Jan 28 18:54:47 crc kubenswrapper[4721]: I0128 18:54:47.246313 4721 generic.go:334] "Generic (PLEG): container finished" podID="639fd412-92fd-4dc3-bc89-c75178b7d83e" containerID="835c4a8ecea505868024d79d258e3fc5477ba3ab2a7f824d022af4baa82da044" exitCode=0 Jan 28 18:54:47 crc kubenswrapper[4721]: I0128 18:54:47.246400 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-774d-account-create-update-gkltd" event={"ID":"639fd412-92fd-4dc3-bc89-c75178b7d83e","Type":"ContainerDied","Data":"835c4a8ecea505868024d79d258e3fc5477ba3ab2a7f824d022af4baa82da044"} Jan 28 18:54:47 crc kubenswrapper[4721]: I0128 18:54:47.247950 4721 generic.go:334] "Generic (PLEG): container finished" podID="d93a26f9-04bd-4215-a6fb-230626a1e376" containerID="142f6c127468dd0656340d9b6dc3a67d1a2a8ffc34f5655e80d62fda449184c9" exitCode=0 Jan 28 18:54:47 crc kubenswrapper[4721]: I0128 18:54:47.248205 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-62wbn" event={"ID":"d93a26f9-04bd-4215-a6fb-230626a1e376","Type":"ContainerDied","Data":"142f6c127468dd0656340d9b6dc3a67d1a2a8ffc34f5655e80d62fda449184c9"} Jan 28 18:54:47 crc kubenswrapper[4721]: I0128 18:54:47.720405 4721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-2mm9f" Jan 28 18:54:47 crc kubenswrapper[4721]: I0128 18:54:47.767271 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cb21066a-3041-482d-a9bc-1e630bca568a-operator-scripts\") pod \"cb21066a-3041-482d-a9bc-1e630bca568a\" (UID: \"cb21066a-3041-482d-a9bc-1e630bca568a\") " Jan 28 18:54:47 crc kubenswrapper[4721]: I0128 18:54:47.767717 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7vdp\" (UniqueName: \"kubernetes.io/projected/cb21066a-3041-482d-a9bc-1e630bca568a-kube-api-access-x7vdp\") pod \"cb21066a-3041-482d-a9bc-1e630bca568a\" (UID: \"cb21066a-3041-482d-a9bc-1e630bca568a\") " Jan 28 18:54:47 crc kubenswrapper[4721]: I0128 18:54:47.775935 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cb21066a-3041-482d-a9bc-1e630bca568a-kube-api-access-x7vdp" (OuterVolumeSpecName: "kube-api-access-x7vdp") pod "cb21066a-3041-482d-a9bc-1e630bca568a" (UID: "cb21066a-3041-482d-a9bc-1e630bca568a"). InnerVolumeSpecName "kube-api-access-x7vdp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:54:47 crc kubenswrapper[4721]: I0128 18:54:47.776041 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cb21066a-3041-482d-a9bc-1e630bca568a-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "cb21066a-3041-482d-a9bc-1e630bca568a" (UID: "cb21066a-3041-482d-a9bc-1e630bca568a"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:54:47 crc kubenswrapper[4721]: I0128 18:54:47.870749 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7vdp\" (UniqueName: \"kubernetes.io/projected/cb21066a-3041-482d-a9bc-1e630bca568a-kube-api-access-x7vdp\") on node \"crc\" DevicePath \"\"" Jan 28 18:54:47 crc kubenswrapper[4721]: I0128 18:54:47.870793 4721 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cb21066a-3041-482d-a9bc-1e630bca568a-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 18:54:48 crc kubenswrapper[4721]: I0128 18:54:48.266188 4721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-2mm9f" Jan 28 18:54:48 crc kubenswrapper[4721]: I0128 18:54:48.266206 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-2mm9f" event={"ID":"cb21066a-3041-482d-a9bc-1e630bca568a","Type":"ContainerDied","Data":"38857dff6c66b1a54135b9a803dc3eeb995bad8abb09df20cc5be1d1c3daa5d1"} Jan 28 18:54:48 crc kubenswrapper[4721]: I0128 18:54:48.266308 4721 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="38857dff6c66b1a54135b9a803dc3eeb995bad8abb09df20cc5be1d1c3daa5d1" Jan 28 18:54:48 crc kubenswrapper[4721]: I0128 18:54:48.743419 4721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-869a-account-create-update-ndnwg" Jan 28 18:54:48 crc kubenswrapper[4721]: I0128 18:54:48.812189 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7pln9\" (UniqueName: \"kubernetes.io/projected/42933b28-6c8f-4536-9be5-69b88a0d1390-kube-api-access-7pln9\") pod \"42933b28-6c8f-4536-9be5-69b88a0d1390\" (UID: \"42933b28-6c8f-4536-9be5-69b88a0d1390\") " Jan 28 18:54:48 crc kubenswrapper[4721]: I0128 18:54:48.812449 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/42933b28-6c8f-4536-9be5-69b88a0d1390-operator-scripts\") pod \"42933b28-6c8f-4536-9be5-69b88a0d1390\" (UID: \"42933b28-6c8f-4536-9be5-69b88a0d1390\") " Jan 28 18:54:48 crc kubenswrapper[4721]: I0128 18:54:48.812902 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/42933b28-6c8f-4536-9be5-69b88a0d1390-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "42933b28-6c8f-4536-9be5-69b88a0d1390" (UID: "42933b28-6c8f-4536-9be5-69b88a0d1390"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:54:48 crc kubenswrapper[4721]: I0128 18:54:48.813821 4721 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/42933b28-6c8f-4536-9be5-69b88a0d1390-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 18:54:48 crc kubenswrapper[4721]: I0128 18:54:48.817082 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/42933b28-6c8f-4536-9be5-69b88a0d1390-kube-api-access-7pln9" (OuterVolumeSpecName: "kube-api-access-7pln9") pod "42933b28-6c8f-4536-9be5-69b88a0d1390" (UID: "42933b28-6c8f-4536-9be5-69b88a0d1390"). InnerVolumeSpecName "kube-api-access-7pln9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:54:48 crc kubenswrapper[4721]: I0128 18:54:48.921614 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7pln9\" (UniqueName: \"kubernetes.io/projected/42933b28-6c8f-4536-9be5-69b88a0d1390-kube-api-access-7pln9\") on node \"crc\" DevicePath \"\"" Jan 28 18:54:48 crc kubenswrapper[4721]: I0128 18:54:48.946596 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cloudkitty-lokistack-ingester-0" Jan 28 18:54:49 crc kubenswrapper[4721]: I0128 18:54:49.000471 4721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-774d-account-create-update-gkltd" Jan 28 18:54:49 crc kubenswrapper[4721]: I0128 18:54:49.010044 4721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-62wbn" Jan 28 18:54:49 crc kubenswrapper[4721]: I0128 18:54:49.132215 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sjjzh\" (UniqueName: \"kubernetes.io/projected/639fd412-92fd-4dc3-bc89-c75178b7d83e-kube-api-access-sjjzh\") pod \"639fd412-92fd-4dc3-bc89-c75178b7d83e\" (UID: \"639fd412-92fd-4dc3-bc89-c75178b7d83e\") " Jan 28 18:54:49 crc kubenswrapper[4721]: I0128 18:54:49.132412 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/639fd412-92fd-4dc3-bc89-c75178b7d83e-operator-scripts\") pod \"639fd412-92fd-4dc3-bc89-c75178b7d83e\" (UID: \"639fd412-92fd-4dc3-bc89-c75178b7d83e\") " Jan 28 18:54:49 crc kubenswrapper[4721]: I0128 18:54:49.132599 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d93a26f9-04bd-4215-a6fb-230626a1e376-operator-scripts\") pod \"d93a26f9-04bd-4215-a6fb-230626a1e376\" (UID: \"d93a26f9-04bd-4215-a6fb-230626a1e376\") " Jan 28 18:54:49 crc kubenswrapper[4721]: I0128 18:54:49.132638 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-skkch\" (UniqueName: \"kubernetes.io/projected/d93a26f9-04bd-4215-a6fb-230626a1e376-kube-api-access-skkch\") pod \"d93a26f9-04bd-4215-a6fb-230626a1e376\" (UID: \"d93a26f9-04bd-4215-a6fb-230626a1e376\") " Jan 28 18:54:49 crc kubenswrapper[4721]: I0128 18:54:49.140237 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/639fd412-92fd-4dc3-bc89-c75178b7d83e-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "639fd412-92fd-4dc3-bc89-c75178b7d83e" (UID: "639fd412-92fd-4dc3-bc89-c75178b7d83e"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:54:49 crc kubenswrapper[4721]: I0128 18:54:49.140792 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d93a26f9-04bd-4215-a6fb-230626a1e376-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "d93a26f9-04bd-4215-a6fb-230626a1e376" (UID: "d93a26f9-04bd-4215-a6fb-230626a1e376"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:54:49 crc kubenswrapper[4721]: I0128 18:54:49.143830 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/639fd412-92fd-4dc3-bc89-c75178b7d83e-kube-api-access-sjjzh" (OuterVolumeSpecName: "kube-api-access-sjjzh") pod "639fd412-92fd-4dc3-bc89-c75178b7d83e" (UID: "639fd412-92fd-4dc3-bc89-c75178b7d83e"). InnerVolumeSpecName "kube-api-access-sjjzh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:54:49 crc kubenswrapper[4721]: I0128 18:54:49.143951 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d93a26f9-04bd-4215-a6fb-230626a1e376-kube-api-access-skkch" (OuterVolumeSpecName: "kube-api-access-skkch") pod "d93a26f9-04bd-4215-a6fb-230626a1e376" (UID: "d93a26f9-04bd-4215-a6fb-230626a1e376"). InnerVolumeSpecName "kube-api-access-skkch". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:54:49 crc kubenswrapper[4721]: I0128 18:54:49.244267 4721 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/639fd412-92fd-4dc3-bc89-c75178b7d83e-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 18:54:49 crc kubenswrapper[4721]: I0128 18:54:49.244332 4721 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d93a26f9-04bd-4215-a6fb-230626a1e376-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 18:54:49 crc kubenswrapper[4721]: I0128 18:54:49.244350 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-skkch\" (UniqueName: \"kubernetes.io/projected/d93a26f9-04bd-4215-a6fb-230626a1e376-kube-api-access-skkch\") on node \"crc\" DevicePath \"\"" Jan 28 18:54:49 crc kubenswrapper[4721]: I0128 18:54:49.244366 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sjjzh\" (UniqueName: \"kubernetes.io/projected/639fd412-92fd-4dc3-bc89-c75178b7d83e-kube-api-access-sjjzh\") on node \"crc\" DevicePath \"\"" Jan 28 18:54:49 crc kubenswrapper[4721]: I0128 18:54:49.333886 4721 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-774d-account-create-update-gkltd" Jan 28 18:54:49 crc kubenswrapper[4721]: I0128 18:54:49.334789 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-774d-account-create-update-gkltd" event={"ID":"639fd412-92fd-4dc3-bc89-c75178b7d83e","Type":"ContainerDied","Data":"52443d1b51adcabfb10c8fe7f939697d9cd1719f303980fc1a7861b26a13c3b3"} Jan 28 18:54:49 crc kubenswrapper[4721]: I0128 18:54:49.334878 4721 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="52443d1b51adcabfb10c8fe7f939697d9cd1719f303980fc1a7861b26a13c3b3" Jan 28 18:54:49 crc kubenswrapper[4721]: I0128 18:54:49.345420 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-62wbn" event={"ID":"d93a26f9-04bd-4215-a6fb-230626a1e376","Type":"ContainerDied","Data":"26eedaa9609760f32ec7bb3d5fbcd63f466b075f084246212415f8e2d2d3e970"} Jan 28 18:54:49 crc kubenswrapper[4721]: I0128 18:54:49.345467 4721 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="26eedaa9609760f32ec7bb3d5fbcd63f466b075f084246212415f8e2d2d3e970" Jan 28 18:54:49 crc kubenswrapper[4721]: I0128 18:54:49.345532 4721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-62wbn" Jan 28 18:54:49 crc kubenswrapper[4721]: I0128 18:54:49.353661 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-869a-account-create-update-ndnwg" event={"ID":"42933b28-6c8f-4536-9be5-69b88a0d1390","Type":"ContainerDied","Data":"c7077e7c9e4bdf4e599efc5a76515d8694253c1594c6eb16efab278925b313bf"} Jan 28 18:54:49 crc kubenswrapper[4721]: I0128 18:54:49.353710 4721 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c7077e7c9e4bdf4e599efc5a76515d8694253c1594c6eb16efab278925b313bf" Jan 28 18:54:49 crc kubenswrapper[4721]: I0128 18:54:49.353751 4721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-869a-account-create-update-ndnwg" Jan 28 18:54:49 crc kubenswrapper[4721]: I0128 18:54:49.396502 4721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-db-create-hs5gk" Jan 28 18:54:49 crc kubenswrapper[4721]: I0128 18:54:49.402338 4721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-efa1-account-create-update-kvj9r" Jan 28 18:54:49 crc kubenswrapper[4721]: I0128 18:54:49.438134 4721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-0f57-account-create-update-qgx95" Jan 28 18:54:49 crc kubenswrapper[4721]: I0128 18:54:49.477021 4721 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-7tqqv" Jan 28 18:54:49 crc kubenswrapper[4721]: I0128 18:54:49.511686 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-c9jld"] Jan 28 18:54:49 crc kubenswrapper[4721]: E0128 18:54:49.512902 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cd381740-3e1d-456b-b5f1-e19f679513da" containerName="mariadb-account-create-update" Jan 28 18:54:49 crc kubenswrapper[4721]: I0128 18:54:49.513122 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="cd381740-3e1d-456b-b5f1-e19f679513da" containerName="mariadb-account-create-update" Jan 28 18:54:49 crc kubenswrapper[4721]: E0128 18:54:49.513217 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d93a26f9-04bd-4215-a6fb-230626a1e376" containerName="mariadb-database-create" Jan 28 18:54:49 crc kubenswrapper[4721]: I0128 18:54:49.513291 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="d93a26f9-04bd-4215-a6fb-230626a1e376" containerName="mariadb-database-create" Jan 28 18:54:49 crc kubenswrapper[4721]: E0128 18:54:49.513360 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="42933b28-6c8f-4536-9be5-69b88a0d1390" containerName="mariadb-account-create-update" Jan 28 18:54:49 crc kubenswrapper[4721]: I0128 18:54:49.513797 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="42933b28-6c8f-4536-9be5-69b88a0d1390" containerName="mariadb-account-create-update" Jan 28 18:54:49 crc kubenswrapper[4721]: E0128 18:54:49.513972 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0104f7d3-7be4-411b-8ca7-89c72b31b43d" containerName="mariadb-database-create" Jan 28 18:54:49 crc kubenswrapper[4721]: I0128 18:54:49.514114 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="0104f7d3-7be4-411b-8ca7-89c72b31b43d" containerName="mariadb-database-create" Jan 28 18:54:49 crc kubenswrapper[4721]: E0128 18:54:49.515112 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4ebde931-976a-4436-a92f-a5a5d44fdc11" containerName="mariadb-database-create" Jan 28 18:54:49 crc kubenswrapper[4721]: I0128 18:54:49.515326 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ebde931-976a-4436-a92f-a5a5d44fdc11" containerName="mariadb-database-create" Jan 28 18:54:49 crc kubenswrapper[4721]: E0128 18:54:49.515509 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="639fd412-92fd-4dc3-bc89-c75178b7d83e" containerName="mariadb-account-create-update" Jan 28 18:54:49 crc kubenswrapper[4721]: I0128 18:54:49.515954 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="639fd412-92fd-4dc3-bc89-c75178b7d83e" containerName="mariadb-account-create-update" Jan 28 18:54:49 crc kubenswrapper[4721]: E0128 18:54:49.516053 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5de37217-5d22-4a9e-9e27-f1ed05b2d63e" containerName="mariadb-account-create-update" Jan 28 18:54:49 crc kubenswrapper[4721]: I0128 18:54:49.516235 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="5de37217-5d22-4a9e-9e27-f1ed05b2d63e" containerName="mariadb-account-create-update" Jan 28 18:54:49 crc kubenswrapper[4721]: E0128 18:54:49.516321 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cb21066a-3041-482d-a9bc-1e630bca568a" containerName="mariadb-database-create" Jan 28 18:54:49 crc kubenswrapper[4721]: I0128 18:54:49.516388 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="cb21066a-3041-482d-a9bc-1e630bca568a" 
containerName="mariadb-database-create" Jan 28 18:54:49 crc kubenswrapper[4721]: I0128 18:54:49.516844 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="639fd412-92fd-4dc3-bc89-c75178b7d83e" containerName="mariadb-account-create-update" Jan 28 18:54:49 crc kubenswrapper[4721]: I0128 18:54:49.517007 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="5de37217-5d22-4a9e-9e27-f1ed05b2d63e" containerName="mariadb-account-create-update" Jan 28 18:54:49 crc kubenswrapper[4721]: I0128 18:54:49.517234 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="d93a26f9-04bd-4215-a6fb-230626a1e376" containerName="mariadb-database-create" Jan 28 18:54:49 crc kubenswrapper[4721]: I0128 18:54:49.517330 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="0104f7d3-7be4-411b-8ca7-89c72b31b43d" containerName="mariadb-database-create" Jan 28 18:54:49 crc kubenswrapper[4721]: I0128 18:54:49.517412 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="cd381740-3e1d-456b-b5f1-e19f679513da" containerName="mariadb-account-create-update" Jan 28 18:54:49 crc kubenswrapper[4721]: I0128 18:54:49.517506 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="4ebde931-976a-4436-a92f-a5a5d44fdc11" containerName="mariadb-database-create" Jan 28 18:54:49 crc kubenswrapper[4721]: I0128 18:54:49.517587 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="42933b28-6c8f-4536-9be5-69b88a0d1390" containerName="mariadb-account-create-update" Jan 28 18:54:49 crc kubenswrapper[4721]: I0128 18:54:49.517672 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="cb21066a-3041-482d-a9bc-1e630bca568a" containerName="mariadb-database-create" Jan 28 18:54:49 crc kubenswrapper[4721]: I0128 18:54:49.519415 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-c9jld" Jan 28 18:54:49 crc kubenswrapper[4721]: I0128 18:54:49.524981 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-mariadb-root-db-secret" Jan 28 18:54:49 crc kubenswrapper[4721]: I0128 18:54:49.551395 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4cfgr\" (UniqueName: \"kubernetes.io/projected/0104f7d3-7be4-411b-8ca7-89c72b31b43d-kube-api-access-4cfgr\") pod \"0104f7d3-7be4-411b-8ca7-89c72b31b43d\" (UID: \"0104f7d3-7be4-411b-8ca7-89c72b31b43d\") " Jan 28 18:54:49 crc kubenswrapper[4721]: I0128 18:54:49.551472 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cd381740-3e1d-456b-b5f1-e19f679513da-operator-scripts\") pod \"cd381740-3e1d-456b-b5f1-e19f679513da\" (UID: \"cd381740-3e1d-456b-b5f1-e19f679513da\") " Jan 28 18:54:49 crc kubenswrapper[4721]: I0128 18:54:49.551629 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5de37217-5d22-4a9e-9e27-f1ed05b2d63e-operator-scripts\") pod \"5de37217-5d22-4a9e-9e27-f1ed05b2d63e\" (UID: \"5de37217-5d22-4a9e-9e27-f1ed05b2d63e\") " Jan 28 18:54:49 crc kubenswrapper[4721]: I0128 18:54:49.551809 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t9swm\" (UniqueName: \"kubernetes.io/projected/5de37217-5d22-4a9e-9e27-f1ed05b2d63e-kube-api-access-t9swm\") pod \"5de37217-5d22-4a9e-9e27-f1ed05b2d63e\" (UID: \"5de37217-5d22-4a9e-9e27-f1ed05b2d63e\") " Jan 28 18:54:49 crc kubenswrapper[4721]: I0128 18:54:49.551925 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0104f7d3-7be4-411b-8ca7-89c72b31b43d-operator-scripts\") pod \"0104f7d3-7be4-411b-8ca7-89c72b31b43d\" (UID: \"0104f7d3-7be4-411b-8ca7-89c72b31b43d\") " Jan 28 18:54:49 crc kubenswrapper[4721]: I0128 18:54:49.551960 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4ebde931-976a-4436-a92f-a5a5d44fdc11-operator-scripts\") pod \"4ebde931-976a-4436-a92f-a5a5d44fdc11\" (UID: \"4ebde931-976a-4436-a92f-a5a5d44fdc11\") " Jan 28 18:54:49 crc kubenswrapper[4721]: I0128 18:54:49.552020 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kwzn2\" (UniqueName: \"kubernetes.io/projected/4ebde931-976a-4436-a92f-a5a5d44fdc11-kube-api-access-kwzn2\") pod \"4ebde931-976a-4436-a92f-a5a5d44fdc11\" (UID: \"4ebde931-976a-4436-a92f-a5a5d44fdc11\") " Jan 28 18:54:49 crc kubenswrapper[4721]: I0128 18:54:49.552061 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cd381740-3e1d-456b-b5f1-e19f679513da-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "cd381740-3e1d-456b-b5f1-e19f679513da" (UID: "cd381740-3e1d-456b-b5f1-e19f679513da"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:54:49 crc kubenswrapper[4721]: I0128 18:54:49.552078 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngcr7\" (UniqueName: \"kubernetes.io/projected/cd381740-3e1d-456b-b5f1-e19f679513da-kube-api-access-ngcr7\") pod \"cd381740-3e1d-456b-b5f1-e19f679513da\" (UID: \"cd381740-3e1d-456b-b5f1-e19f679513da\") " Jan 28 18:54:49 crc kubenswrapper[4721]: I0128 18:54:49.553355 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4ebde931-976a-4436-a92f-a5a5d44fdc11-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "4ebde931-976a-4436-a92f-a5a5d44fdc11" (UID: "4ebde931-976a-4436-a92f-a5a5d44fdc11"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:54:49 crc kubenswrapper[4721]: I0128 18:54:49.553625 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0104f7d3-7be4-411b-8ca7-89c72b31b43d-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "0104f7d3-7be4-411b-8ca7-89c72b31b43d" (UID: "0104f7d3-7be4-411b-8ca7-89c72b31b43d"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:54:49 crc kubenswrapper[4721]: I0128 18:54:49.555743 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5de37217-5d22-4a9e-9e27-f1ed05b2d63e-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "5de37217-5d22-4a9e-9e27-f1ed05b2d63e" (UID: "5de37217-5d22-4a9e-9e27-f1ed05b2d63e"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:54:49 crc kubenswrapper[4721]: I0128 18:54:49.557630 4721 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0104f7d3-7be4-411b-8ca7-89c72b31b43d-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 18:54:49 crc kubenswrapper[4721]: I0128 18:54:49.557669 4721 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4ebde931-976a-4436-a92f-a5a5d44fdc11-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 18:54:49 crc kubenswrapper[4721]: I0128 18:54:49.557682 4721 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cd381740-3e1d-456b-b5f1-e19f679513da-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 18:54:49 crc kubenswrapper[4721]: I0128 18:54:49.557694 4721 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5de37217-5d22-4a9e-9e27-f1ed05b2d63e-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 18:54:49 crc kubenswrapper[4721]: I0128 18:54:49.563764 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4ebde931-976a-4436-a92f-a5a5d44fdc11-kube-api-access-kwzn2" (OuterVolumeSpecName: "kube-api-access-kwzn2") pod "4ebde931-976a-4436-a92f-a5a5d44fdc11" (UID: "4ebde931-976a-4436-a92f-a5a5d44fdc11"). InnerVolumeSpecName "kube-api-access-kwzn2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:54:49 crc kubenswrapper[4721]: I0128 18:54:49.565147 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5de37217-5d22-4a9e-9e27-f1ed05b2d63e-kube-api-access-t9swm" (OuterVolumeSpecName: "kube-api-access-t9swm") pod "5de37217-5d22-4a9e-9e27-f1ed05b2d63e" (UID: "5de37217-5d22-4a9e-9e27-f1ed05b2d63e"). InnerVolumeSpecName "kube-api-access-t9swm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:54:49 crc kubenswrapper[4721]: I0128 18:54:49.569685 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0104f7d3-7be4-411b-8ca7-89c72b31b43d-kube-api-access-4cfgr" (OuterVolumeSpecName: "kube-api-access-4cfgr") pod "0104f7d3-7be4-411b-8ca7-89c72b31b43d" (UID: "0104f7d3-7be4-411b-8ca7-89c72b31b43d"). InnerVolumeSpecName "kube-api-access-4cfgr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:54:49 crc kubenswrapper[4721]: I0128 18:54:49.571046 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd381740-3e1d-456b-b5f1-e19f679513da-kube-api-access-ngcr7" (OuterVolumeSpecName: "kube-api-access-ngcr7") pod "cd381740-3e1d-456b-b5f1-e19f679513da" (UID: "cd381740-3e1d-456b-b5f1-e19f679513da"). InnerVolumeSpecName "kube-api-access-ngcr7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:54:49 crc kubenswrapper[4721]: I0128 18:54:49.582634 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-c9jld"] Jan 28 18:54:49 crc kubenswrapper[4721]: I0128 18:54:49.659396 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1d9296aa-fff6-4aa4-afb6-56acc232bbc7-operator-scripts\") pod \"root-account-create-update-c9jld\" (UID: \"1d9296aa-fff6-4aa4-afb6-56acc232bbc7\") " pod="openstack/root-account-create-update-c9jld" Jan 28 18:54:49 crc kubenswrapper[4721]: I0128 18:54:49.660042 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mc8v5\" (UniqueName: \"kubernetes.io/projected/1d9296aa-fff6-4aa4-afb6-56acc232bbc7-kube-api-access-mc8v5\") pod \"root-account-create-update-c9jld\" (UID: \"1d9296aa-fff6-4aa4-afb6-56acc232bbc7\") " pod="openstack/root-account-create-update-c9jld" Jan 28 18:54:49 crc kubenswrapper[4721]: I0128 18:54:49.660422 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kwzn2\" (UniqueName: \"kubernetes.io/projected/4ebde931-976a-4436-a92f-a5a5d44fdc11-kube-api-access-kwzn2\") on node \"crc\" DevicePath \"\"" Jan 28 18:54:49 crc kubenswrapper[4721]: I0128 18:54:49.660448 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngcr7\" (UniqueName: \"kubernetes.io/projected/cd381740-3e1d-456b-b5f1-e19f679513da-kube-api-access-ngcr7\") on node \"crc\" DevicePath \"\"" Jan 28 18:54:49 crc kubenswrapper[4721]: I0128 18:54:49.660461 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4cfgr\" (UniqueName: \"kubernetes.io/projected/0104f7d3-7be4-411b-8ca7-89c72b31b43d-kube-api-access-4cfgr\") on node \"crc\" DevicePath \"\"" Jan 28 18:54:49 crc kubenswrapper[4721]: I0128 18:54:49.660474 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t9swm\" (UniqueName: 
\"kubernetes.io/projected/5de37217-5d22-4a9e-9e27-f1ed05b2d63e-kube-api-access-t9swm\") on node \"crc\" DevicePath \"\"" Jan 28 18:54:49 crc kubenswrapper[4721]: I0128 18:54:49.764020 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1d9296aa-fff6-4aa4-afb6-56acc232bbc7-operator-scripts\") pod \"root-account-create-update-c9jld\" (UID: \"1d9296aa-fff6-4aa4-afb6-56acc232bbc7\") " pod="openstack/root-account-create-update-c9jld" Jan 28 18:54:49 crc kubenswrapper[4721]: I0128 18:54:49.764246 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1d9296aa-fff6-4aa4-afb6-56acc232bbc7-operator-scripts\") pod \"root-account-create-update-c9jld\" (UID: \"1d9296aa-fff6-4aa4-afb6-56acc232bbc7\") " pod="openstack/root-account-create-update-c9jld" Jan 28 18:54:49 crc kubenswrapper[4721]: I0128 18:54:49.764645 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mc8v5\" (UniqueName: \"kubernetes.io/projected/1d9296aa-fff6-4aa4-afb6-56acc232bbc7-kube-api-access-mc8v5\") pod \"root-account-create-update-c9jld\" (UID: \"1d9296aa-fff6-4aa4-afb6-56acc232bbc7\") " pod="openstack/root-account-create-update-c9jld" Jan 28 18:54:49 crc kubenswrapper[4721]: I0128 18:54:49.800074 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mc8v5\" (UniqueName: \"kubernetes.io/projected/1d9296aa-fff6-4aa4-afb6-56acc232bbc7-kube-api-access-mc8v5\") pod \"root-account-create-update-c9jld\" (UID: \"1d9296aa-fff6-4aa4-afb6-56acc232bbc7\") " pod="openstack/root-account-create-update-c9jld" Jan 28 18:54:49 crc kubenswrapper[4721]: I0128 18:54:49.857696 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-c9jld" Jan 28 18:54:50 crc kubenswrapper[4721]: I0128 18:54:50.381314 4721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-efa1-account-create-update-kvj9r" Jan 28 18:54:50 crc kubenswrapper[4721]: I0128 18:54:50.381889 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-efa1-account-create-update-kvj9r" event={"ID":"5de37217-5d22-4a9e-9e27-f1ed05b2d63e","Type":"ContainerDied","Data":"70a8afd5a503d479076427edc620a497df76fb20f38c4bacdaae38b660cfdfb1"} Jan 28 18:54:50 crc kubenswrapper[4721]: I0128 18:54:50.381946 4721 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="70a8afd5a503d479076427edc620a497df76fb20f38c4bacdaae38b660cfdfb1" Jan 28 18:54:50 crc kubenswrapper[4721]: I0128 18:54:50.385992 4721 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-7tqqv" Jan 28 18:54:50 crc kubenswrapper[4721]: I0128 18:54:50.386071 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-7tqqv" event={"ID":"4ebde931-976a-4436-a92f-a5a5d44fdc11","Type":"ContainerDied","Data":"fd94c2d1cf0dc6262565413a1d7fa0e47602afcc179e6429a6e88ff9f73dff6c"} Jan 28 18:54:50 crc kubenswrapper[4721]: I0128 18:54:50.386133 4721 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fd94c2d1cf0dc6262565413a1d7fa0e47602afcc179e6429a6e88ff9f73dff6c" Jan 28 18:54:50 crc kubenswrapper[4721]: I0128 18:54:50.389742 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-0f57-account-create-update-qgx95" event={"ID":"cd381740-3e1d-456b-b5f1-e19f679513da","Type":"ContainerDied","Data":"925aab1ae8cb0560960678f0207ad696a04fa9018bbd29c34d265e83fa4b6c76"} Jan 28 18:54:50 crc kubenswrapper[4721]: I0128 18:54:50.389863 4721 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="925aab1ae8cb0560960678f0207ad696a04fa9018bbd29c34d265e83fa4b6c76" Jan 28 18:54:50 crc kubenswrapper[4721]: I0128 18:54:50.389984 4721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-0f57-account-create-update-qgx95" Jan 28 18:54:50 crc kubenswrapper[4721]: I0128 18:54:50.399036 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-db-create-hs5gk" event={"ID":"0104f7d3-7be4-411b-8ca7-89c72b31b43d","Type":"ContainerDied","Data":"d44e08ef60ded79964316ab15eabcdf28b4c796a7cfa973db0744165e4162fa4"} Jan 28 18:54:50 crc kubenswrapper[4721]: I0128 18:54:50.399080 4721 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d44e08ef60ded79964316ab15eabcdf28b4c796a7cfa973db0744165e4162fa4" Jan 28 18:54:50 crc kubenswrapper[4721]: I0128 18:54:50.399101 4721 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cloudkitty-db-create-hs5gk" Jan 28 18:54:54 crc kubenswrapper[4721]: I0128 18:54:54.248996 4721 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/prometheus-metric-storage-0" Jan 28 18:54:54 crc kubenswrapper[4721]: I0128 18:54:54.254832 4721 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/prometheus-metric-storage-0" Jan 28 18:54:54 crc kubenswrapper[4721]: I0128 18:54:54.452036 4721 generic.go:334] "Generic (PLEG): container finished" podID="d06bcf83-999f-419a-9f4f-4e6544576897" containerID="f8a010946fc4ca83ba26f0c6cbb1e4ecbcec583b97d5f5919401a331f6006efe" exitCode=0 Jan 28 18:54:54 crc kubenswrapper[4721]: I0128 18:54:54.452118 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-7bhzw" event={"ID":"d06bcf83-999f-419a-9f4f-4e6544576897","Type":"ContainerDied","Data":"f8a010946fc4ca83ba26f0c6cbb1e4ecbcec583b97d5f5919401a331f6006efe"} Jan 28 18:54:54 crc kubenswrapper[4721]: I0128 18:54:54.457812 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/prometheus-metric-storage-0" Jan 28 18:54:57 crc kubenswrapper[4721]: I0128 18:54:57.857560 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 28 18:54:57 crc kubenswrapper[4721]: I0128 18:54:57.858459 4721 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="dc3781f4-04ef-40f3-b772-88deb9a9e3b6" containerName="prometheus" containerID="cri-o://365b89612323010992f8c935a6d68a6f1a2b9b8026b23b3f9697702e022b7a58" gracePeriod=600 Jan 28 18:54:57 crc kubenswrapper[4721]: I0128 18:54:57.858555 4721 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="dc3781f4-04ef-40f3-b772-88deb9a9e3b6" containerName="thanos-sidecar" containerID="cri-o://ef9e2874d303519c1d9d8f1228aa76c719e69f1ebbaff4752a1ee9bb9fc16828" gracePeriod=600 Jan 28 18:54:57 crc kubenswrapper[4721]: I0128 18:54:57.858610 4721 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="dc3781f4-04ef-40f3-b772-88deb9a9e3b6" containerName="config-reloader" containerID="cri-o://1e0b53a2f639a3be2e058e566db6a36ccce965fc3410944527bfcc44b65816a5" gracePeriod=600 Jan 28 18:54:58 crc kubenswrapper[4721]: I0128 18:54:58.504523 4721 generic.go:334] "Generic (PLEG): container finished" podID="dc3781f4-04ef-40f3-b772-88deb9a9e3b6" containerID="ef9e2874d303519c1d9d8f1228aa76c719e69f1ebbaff4752a1ee9bb9fc16828" exitCode=0 Jan 28 18:54:58 crc kubenswrapper[4721]: I0128 18:54:58.504552 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"dc3781f4-04ef-40f3-b772-88deb9a9e3b6","Type":"ContainerDied","Data":"ef9e2874d303519c1d9d8f1228aa76c719e69f1ebbaff4752a1ee9bb9fc16828"} Jan 28 18:54:58 crc kubenswrapper[4721]: I0128 18:54:58.504626 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"dc3781f4-04ef-40f3-b772-88deb9a9e3b6","Type":"ContainerDied","Data":"1e0b53a2f639a3be2e058e566db6a36ccce965fc3410944527bfcc44b65816a5"} Jan 28 18:54:58 crc kubenswrapper[4721]: I0128 18:54:58.504577 4721 generic.go:334] "Generic (PLEG): container finished" podID="dc3781f4-04ef-40f3-b772-88deb9a9e3b6" containerID="1e0b53a2f639a3be2e058e566db6a36ccce965fc3410944527bfcc44b65816a5" 
exitCode=0 Jan 28 18:54:58 crc kubenswrapper[4721]: I0128 18:54:58.504672 4721 generic.go:334] "Generic (PLEG): container finished" podID="dc3781f4-04ef-40f3-b772-88deb9a9e3b6" containerID="365b89612323010992f8c935a6d68a6f1a2b9b8026b23b3f9697702e022b7a58" exitCode=0 Jan 28 18:54:58 crc kubenswrapper[4721]: I0128 18:54:58.504701 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"dc3781f4-04ef-40f3-b772-88deb9a9e3b6","Type":"ContainerDied","Data":"365b89612323010992f8c935a6d68a6f1a2b9b8026b23b3f9697702e022b7a58"} Jan 28 18:54:59 crc kubenswrapper[4721]: I0128 18:54:59.247146 4721 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/prometheus-metric-storage-0" podUID="dc3781f4-04ef-40f3-b772-88deb9a9e3b6" containerName="prometheus" probeResult="failure" output="Get \"http://10.217.0.115:9090/-/ready\": dial tcp 10.217.0.115:9090: connect: connection refused" Jan 28 18:55:01 crc kubenswrapper[4721]: I0128 18:55:01.225706 4721 patch_prober.go:28] interesting pod/machine-config-daemon-76rx2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 18:55:01 crc kubenswrapper[4721]: I0128 18:55:01.226815 4721 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-76rx2" podUID="6e3427a4-9a03-4a08-bf7f-7a5e96290ad6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 18:55:03 crc kubenswrapper[4721]: I0128 18:55:03.259591 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/aa657a81-842e-4292-a71e-e208b4c0bd69-etc-swift\") pod \"swift-storage-0\" (UID: \"aa657a81-842e-4292-a71e-e208b4c0bd69\") " pod="openstack/swift-storage-0" Jan 28 18:55:03 crc kubenswrapper[4721]: I0128 18:55:03.268268 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/aa657a81-842e-4292-a71e-e208b4c0bd69-etc-swift\") pod \"swift-storage-0\" (UID: \"aa657a81-842e-4292-a71e-e208b4c0bd69\") " pod="openstack/swift-storage-0" Jan 28 18:55:03 crc kubenswrapper[4721]: I0128 18:55:03.457577 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-storage-0" Jan 28 18:55:04 crc kubenswrapper[4721]: I0128 18:55:04.247506 4721 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/prometheus-metric-storage-0" podUID="dc3781f4-04ef-40f3-b772-88deb9a9e3b6" containerName="prometheus" probeResult="failure" output="Get \"http://10.217.0.115:9090/-/ready\": dial tcp 10.217.0.115:9090: connect: connection refused" Jan 28 18:55:04 crc kubenswrapper[4721]: E0128 18:55:04.866258 4721 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-glance-api:current-podified" Jan 28 18:55:04 crc kubenswrapper[4721]: E0128 18:55:04.866874 4721 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:glance-db-sync,Image:quay.io/podified-antelope-centos9/openstack-glance-api:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/glance/glance.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mxl8v,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42415,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42415,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod glance-db-sync-j284c_openstack(7b2b2524-50e6-4d73-bdb9-8770b642481e): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 28 18:55:04 crc kubenswrapper[4721]: E0128 18:55:04.868271 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"glance-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/glance-db-sync-j284c" podUID="7b2b2524-50e6-4d73-bdb9-8770b642481e" Jan 28 18:55:05 crc kubenswrapper[4721]: I0128 18:55:05.072244 4721 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-7bhzw" Jan 28 18:55:05 crc kubenswrapper[4721]: I0128 18:55:05.097001 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/d06bcf83-999f-419a-9f4f-4e6544576897-swiftconf\") pod \"d06bcf83-999f-419a-9f4f-4e6544576897\" (UID: \"d06bcf83-999f-419a-9f4f-4e6544576897\") " Jan 28 18:55:05 crc kubenswrapper[4721]: I0128 18:55:05.120568 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g7z4j\" (UniqueName: \"kubernetes.io/projected/d06bcf83-999f-419a-9f4f-4e6544576897-kube-api-access-g7z4j\") pod \"d06bcf83-999f-419a-9f4f-4e6544576897\" (UID: \"d06bcf83-999f-419a-9f4f-4e6544576897\") " Jan 28 18:55:05 crc kubenswrapper[4721]: I0128 18:55:05.120678 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/d06bcf83-999f-419a-9f4f-4e6544576897-dispersionconf\") pod \"d06bcf83-999f-419a-9f4f-4e6544576897\" (UID: \"d06bcf83-999f-419a-9f4f-4e6544576897\") " Jan 28 18:55:05 crc kubenswrapper[4721]: I0128 18:55:05.120809 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/d06bcf83-999f-419a-9f4f-4e6544576897-ring-data-devices\") pod \"d06bcf83-999f-419a-9f4f-4e6544576897\" (UID: \"d06bcf83-999f-419a-9f4f-4e6544576897\") " Jan 28 18:55:05 crc kubenswrapper[4721]: I0128 18:55:05.120869 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/d06bcf83-999f-419a-9f4f-4e6544576897-etc-swift\") pod \"d06bcf83-999f-419a-9f4f-4e6544576897\" (UID: \"d06bcf83-999f-419a-9f4f-4e6544576897\") " Jan 28 18:55:05 crc kubenswrapper[4721]: I0128 18:55:05.120918 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d06bcf83-999f-419a-9f4f-4e6544576897-combined-ca-bundle\") pod \"d06bcf83-999f-419a-9f4f-4e6544576897\" (UID: \"d06bcf83-999f-419a-9f4f-4e6544576897\") " Jan 28 18:55:05 crc kubenswrapper[4721]: I0128 18:55:05.120998 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d06bcf83-999f-419a-9f4f-4e6544576897-scripts\") pod \"d06bcf83-999f-419a-9f4f-4e6544576897\" (UID: \"d06bcf83-999f-419a-9f4f-4e6544576897\") " Jan 28 18:55:05 crc kubenswrapper[4721]: I0128 18:55:05.122353 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d06bcf83-999f-419a-9f4f-4e6544576897-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "d06bcf83-999f-419a-9f4f-4e6544576897" (UID: "d06bcf83-999f-419a-9f4f-4e6544576897"). InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:55:05 crc kubenswrapper[4721]: I0128 18:55:05.129674 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d06bcf83-999f-419a-9f4f-4e6544576897-kube-api-access-g7z4j" (OuterVolumeSpecName: "kube-api-access-g7z4j") pod "d06bcf83-999f-419a-9f4f-4e6544576897" (UID: "d06bcf83-999f-419a-9f4f-4e6544576897"). InnerVolumeSpecName "kube-api-access-g7z4j". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:55:05 crc kubenswrapper[4721]: I0128 18:55:05.138917 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d06bcf83-999f-419a-9f4f-4e6544576897-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "d06bcf83-999f-419a-9f4f-4e6544576897" (UID: "d06bcf83-999f-419a-9f4f-4e6544576897"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 18:55:05 crc kubenswrapper[4721]: I0128 18:55:05.144217 4721 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/d06bcf83-999f-419a-9f4f-4e6544576897-ring-data-devices\") on node \"crc\" DevicePath \"\"" Jan 28 18:55:05 crc kubenswrapper[4721]: I0128 18:55:05.153000 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d06bcf83-999f-419a-9f4f-4e6544576897-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "d06bcf83-999f-419a-9f4f-4e6544576897" (UID: "d06bcf83-999f-419a-9f4f-4e6544576897"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:55:05 crc kubenswrapper[4721]: I0128 18:55:05.178103 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d06bcf83-999f-419a-9f4f-4e6544576897-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d06bcf83-999f-419a-9f4f-4e6544576897" (UID: "d06bcf83-999f-419a-9f4f-4e6544576897"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:55:05 crc kubenswrapper[4721]: I0128 18:55:05.180215 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d06bcf83-999f-419a-9f4f-4e6544576897-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "d06bcf83-999f-419a-9f4f-4e6544576897" (UID: "d06bcf83-999f-419a-9f4f-4e6544576897"). InnerVolumeSpecName "swiftconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:55:05 crc kubenswrapper[4721]: I0128 18:55:05.189368 4721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Jan 28 18:55:05 crc kubenswrapper[4721]: I0128 18:55:05.207519 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d06bcf83-999f-419a-9f4f-4e6544576897-scripts" (OuterVolumeSpecName: "scripts") pod "d06bcf83-999f-419a-9f4f-4e6544576897" (UID: "d06bcf83-999f-419a-9f4f-4e6544576897"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:55:05 crc kubenswrapper[4721]: I0128 18:55:05.247403 4721 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d06bcf83-999f-419a-9f4f-4e6544576897-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 18:55:05 crc kubenswrapper[4721]: I0128 18:55:05.247453 4721 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d06bcf83-999f-419a-9f4f-4e6544576897-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 18:55:05 crc kubenswrapper[4721]: I0128 18:55:05.247471 4721 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/d06bcf83-999f-419a-9f4f-4e6544576897-swiftconf\") on node \"crc\" DevicePath \"\"" Jan 28 18:55:05 crc kubenswrapper[4721]: I0128 18:55:05.247486 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g7z4j\" (UniqueName: \"kubernetes.io/projected/d06bcf83-999f-419a-9f4f-4e6544576897-kube-api-access-g7z4j\") on node \"crc\" DevicePath \"\"" Jan 28 18:55:05 crc kubenswrapper[4721]: I0128 18:55:05.247502 4721 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/d06bcf83-999f-419a-9f4f-4e6544576897-dispersionconf\") on node \"crc\" DevicePath \"\"" Jan 28 18:55:05 crc kubenswrapper[4721]: I0128 18:55:05.247512 4721 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/d06bcf83-999f-419a-9f4f-4e6544576897-etc-swift\") on node \"crc\" DevicePath \"\"" Jan 28 18:55:05 crc kubenswrapper[4721]: I0128 18:55:05.348325 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/dc3781f4-04ef-40f3-b772-88deb9a9e3b6-web-config\") pod \"dc3781f4-04ef-40f3-b772-88deb9a9e3b6\" (UID: \"dc3781f4-04ef-40f3-b772-88deb9a9e3b6\") " Jan 28 18:55:05 crc kubenswrapper[4721]: I0128 18:55:05.348395 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/dc3781f4-04ef-40f3-b772-88deb9a9e3b6-tls-assets\") pod \"dc3781f4-04ef-40f3-b772-88deb9a9e3b6\" (UID: \"dc3781f4-04ef-40f3-b772-88deb9a9e3b6\") " Jan 28 18:55:05 crc kubenswrapper[4721]: I0128 18:55:05.348557 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/dc3781f4-04ef-40f3-b772-88deb9a9e3b6-prometheus-metric-storage-rulefiles-0\") pod \"dc3781f4-04ef-40f3-b772-88deb9a9e3b6\" (UID: \"dc3781f4-04ef-40f3-b772-88deb9a9e3b6\") " Jan 28 18:55:05 crc kubenswrapper[4721]: I0128 18:55:05.348609 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/dc3781f4-04ef-40f3-b772-88deb9a9e3b6-prometheus-metric-storage-rulefiles-1\") pod \"dc3781f4-04ef-40f3-b772-88deb9a9e3b6\" (UID: \"dc3781f4-04ef-40f3-b772-88deb9a9e3b6\") " Jan 28 18:55:05 crc kubenswrapper[4721]: I0128 18:55:05.348800 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/dc3781f4-04ef-40f3-b772-88deb9a9e3b6-prometheus-metric-storage-rulefiles-2\") pod \"dc3781f4-04ef-40f3-b772-88deb9a9e3b6\" (UID: \"dc3781f4-04ef-40f3-b772-88deb9a9e3b6\") " Jan 28 18:55:05 crc 
kubenswrapper[4721]: I0128 18:55:05.348830 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/dc3781f4-04ef-40f3-b772-88deb9a9e3b6-thanos-prometheus-http-client-file\") pod \"dc3781f4-04ef-40f3-b772-88deb9a9e3b6\" (UID: \"dc3781f4-04ef-40f3-b772-88deb9a9e3b6\") " Jan 28 18:55:05 crc kubenswrapper[4721]: I0128 18:55:05.348927 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/dc3781f4-04ef-40f3-b772-88deb9a9e3b6-config\") pod \"dc3781f4-04ef-40f3-b772-88deb9a9e3b6\" (UID: \"dc3781f4-04ef-40f3-b772-88deb9a9e3b6\") " Jan 28 18:55:05 crc kubenswrapper[4721]: I0128 18:55:05.349237 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dc3781f4-04ef-40f3-b772-88deb9a9e3b6-prometheus-metric-storage-rulefiles-0" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-0") pod "dc3781f4-04ef-40f3-b772-88deb9a9e3b6" (UID: "dc3781f4-04ef-40f3-b772-88deb9a9e3b6"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:55:05 crc kubenswrapper[4721]: I0128 18:55:05.349260 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dc3781f4-04ef-40f3-b772-88deb9a9e3b6-prometheus-metric-storage-rulefiles-1" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-1") pod "dc3781f4-04ef-40f3-b772-88deb9a9e3b6" (UID: "dc3781f4-04ef-40f3-b772-88deb9a9e3b6"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-1". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:55:05 crc kubenswrapper[4721]: I0128 18:55:05.349268 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dc3781f4-04ef-40f3-b772-88deb9a9e3b6-prometheus-metric-storage-rulefiles-2" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-2") pod "dc3781f4-04ef-40f3-b772-88deb9a9e3b6" (UID: "dc3781f4-04ef-40f3-b772-88deb9a9e3b6"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-2". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:55:05 crc kubenswrapper[4721]: I0128 18:55:05.350123 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-db\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-65f8c88a-05fd-41aa-a62f-62b8de64d97f\") pod \"dc3781f4-04ef-40f3-b772-88deb9a9e3b6\" (UID: \"dc3781f4-04ef-40f3-b772-88deb9a9e3b6\") " Jan 28 18:55:05 crc kubenswrapper[4721]: I0128 18:55:05.350330 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/dc3781f4-04ef-40f3-b772-88deb9a9e3b6-config-out\") pod \"dc3781f4-04ef-40f3-b772-88deb9a9e3b6\" (UID: \"dc3781f4-04ef-40f3-b772-88deb9a9e3b6\") " Jan 28 18:55:05 crc kubenswrapper[4721]: I0128 18:55:05.350477 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z979p\" (UniqueName: \"kubernetes.io/projected/dc3781f4-04ef-40f3-b772-88deb9a9e3b6-kube-api-access-z979p\") pod \"dc3781f4-04ef-40f3-b772-88deb9a9e3b6\" (UID: \"dc3781f4-04ef-40f3-b772-88deb9a9e3b6\") " Jan 28 18:55:05 crc kubenswrapper[4721]: I0128 18:55:05.355636 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dc3781f4-04ef-40f3-b772-88deb9a9e3b6-thanos-prometheus-http-client-file" (OuterVolumeSpecName: "thanos-prometheus-http-client-file") pod "dc3781f4-04ef-40f3-b772-88deb9a9e3b6" (UID: "dc3781f4-04ef-40f3-b772-88deb9a9e3b6"). InnerVolumeSpecName "thanos-prometheus-http-client-file". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:55:05 crc kubenswrapper[4721]: I0128 18:55:05.356968 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dc3781f4-04ef-40f3-b772-88deb9a9e3b6-config-out" (OuterVolumeSpecName: "config-out") pod "dc3781f4-04ef-40f3-b772-88deb9a9e3b6" (UID: "dc3781f4-04ef-40f3-b772-88deb9a9e3b6"). InnerVolumeSpecName "config-out". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 18:55:05 crc kubenswrapper[4721]: I0128 18:55:05.357118 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dc3781f4-04ef-40f3-b772-88deb9a9e3b6-kube-api-access-z979p" (OuterVolumeSpecName: "kube-api-access-z979p") pod "dc3781f4-04ef-40f3-b772-88deb9a9e3b6" (UID: "dc3781f4-04ef-40f3-b772-88deb9a9e3b6"). InnerVolumeSpecName "kube-api-access-z979p". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:55:05 crc kubenswrapper[4721]: I0128 18:55:05.357424 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dc3781f4-04ef-40f3-b772-88deb9a9e3b6-tls-assets" (OuterVolumeSpecName: "tls-assets") pod "dc3781f4-04ef-40f3-b772-88deb9a9e3b6" (UID: "dc3781f4-04ef-40f3-b772-88deb9a9e3b6"). InnerVolumeSpecName "tls-assets". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:55:05 crc kubenswrapper[4721]: I0128 18:55:05.358881 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dc3781f4-04ef-40f3-b772-88deb9a9e3b6-config" (OuterVolumeSpecName: "config") pod "dc3781f4-04ef-40f3-b772-88deb9a9e3b6" (UID: "dc3781f4-04ef-40f3-b772-88deb9a9e3b6"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:55:05 crc kubenswrapper[4721]: I0128 18:55:05.359003 4721 reconciler_common.go:293] "Volume detached for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/dc3781f4-04ef-40f3-b772-88deb9a9e3b6-config-out\") on node \"crc\" DevicePath \"\"" Jan 28 18:55:05 crc kubenswrapper[4721]: I0128 18:55:05.359041 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z979p\" (UniqueName: \"kubernetes.io/projected/dc3781f4-04ef-40f3-b772-88deb9a9e3b6-kube-api-access-z979p\") on node \"crc\" DevicePath \"\"" Jan 28 18:55:05 crc kubenswrapper[4721]: I0128 18:55:05.359056 4721 reconciler_common.go:293] "Volume detached for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/dc3781f4-04ef-40f3-b772-88deb9a9e3b6-tls-assets\") on node \"crc\" DevicePath \"\"" Jan 28 18:55:05 crc kubenswrapper[4721]: I0128 18:55:05.359069 4721 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/dc3781f4-04ef-40f3-b772-88deb9a9e3b6-prometheus-metric-storage-rulefiles-0\") on node \"crc\" DevicePath \"\"" Jan 28 18:55:05 crc kubenswrapper[4721]: I0128 18:55:05.359082 4721 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/dc3781f4-04ef-40f3-b772-88deb9a9e3b6-prometheus-metric-storage-rulefiles-1\") on node \"crc\" DevicePath \"\"" Jan 28 18:55:05 crc kubenswrapper[4721]: I0128 18:55:05.359093 4721 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/dc3781f4-04ef-40f3-b772-88deb9a9e3b6-prometheus-metric-storage-rulefiles-2\") on node \"crc\" DevicePath \"\"" Jan 28 18:55:05 crc kubenswrapper[4721]: I0128 18:55:05.359105 4721 reconciler_common.go:293] "Volume detached for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/dc3781f4-04ef-40f3-b772-88deb9a9e3b6-thanos-prometheus-http-client-file\") on node \"crc\" DevicePath \"\"" Jan 28 18:55:05 crc kubenswrapper[4721]: I0128 18:55:05.379446 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dc3781f4-04ef-40f3-b772-88deb9a9e3b6-web-config" (OuterVolumeSpecName: "web-config") pod "dc3781f4-04ef-40f3-b772-88deb9a9e3b6" (UID: "dc3781f4-04ef-40f3-b772-88deb9a9e3b6"). InnerVolumeSpecName "web-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:55:05 crc kubenswrapper[4721]: I0128 18:55:05.383647 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-65f8c88a-05fd-41aa-a62f-62b8de64d97f" (OuterVolumeSpecName: "prometheus-metric-storage-db") pod "dc3781f4-04ef-40f3-b772-88deb9a9e3b6" (UID: "dc3781f4-04ef-40f3-b772-88deb9a9e3b6"). InnerVolumeSpecName "pvc-65f8c88a-05fd-41aa-a62f-62b8de64d97f". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 28 18:55:05 crc kubenswrapper[4721]: I0128 18:55:05.422353 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-c9jld"] Jan 28 18:55:05 crc kubenswrapper[4721]: I0128 18:55:05.435662 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-mariadb-root-db-secret" Jan 28 18:55:05 crc kubenswrapper[4721]: I0128 18:55:05.461635 4721 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/dc3781f4-04ef-40f3-b772-88deb9a9e3b6-config\") on node \"crc\" DevicePath \"\"" Jan 28 18:55:05 crc kubenswrapper[4721]: I0128 18:55:05.461703 4721 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-65f8c88a-05fd-41aa-a62f-62b8de64d97f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-65f8c88a-05fd-41aa-a62f-62b8de64d97f\") on node \"crc\" " Jan 28 18:55:05 crc kubenswrapper[4721]: I0128 18:55:05.461719 4721 reconciler_common.go:293] "Volume detached for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/dc3781f4-04ef-40f3-b772-88deb9a9e3b6-web-config\") on node \"crc\" DevicePath \"\"" Jan 28 18:55:05 crc kubenswrapper[4721]: I0128 18:55:05.489242 4721 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Jan 28 18:55:05 crc kubenswrapper[4721]: I0128 18:55:05.489574 4721 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-65f8c88a-05fd-41aa-a62f-62b8de64d97f" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-65f8c88a-05fd-41aa-a62f-62b8de64d97f") on node "crc" Jan 28 18:55:05 crc kubenswrapper[4721]: I0128 18:55:05.565005 4721 reconciler_common.go:293] "Volume detached for volume \"pvc-65f8c88a-05fd-41aa-a62f-62b8de64d97f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-65f8c88a-05fd-41aa-a62f-62b8de64d97f\") on node \"crc\" DevicePath \"\"" Jan 28 18:55:05 crc kubenswrapper[4721]: I0128 18:55:05.575905 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Jan 28 18:55:05 crc kubenswrapper[4721]: W0128 18:55:05.576080 4721 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podaa657a81_842e_4292_a71e_e208b4c0bd69.slice/crio-3c1106d63c165631ecbf95aba6950f0d47c54dfe2acaac0b6e233bac20d3654c WatchSource:0}: Error finding container 3c1106d63c165631ecbf95aba6950f0d47c54dfe2acaac0b6e233bac20d3654c: Status 404 returned error can't find the container with id 3c1106d63c165631ecbf95aba6950f0d47c54dfe2acaac0b6e233bac20d3654c Jan 28 18:55:05 crc kubenswrapper[4721]: I0128 18:55:05.600365 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"dc3781f4-04ef-40f3-b772-88deb9a9e3b6","Type":"ContainerDied","Data":"d1c7216022dc45649031a414b476b1f5d1318c7a1ae7fb7a52780ebf8bfb148d"} Jan 28 18:55:05 crc kubenswrapper[4721]: I0128 18:55:05.600434 4721 scope.go:117] "RemoveContainer" containerID="ef9e2874d303519c1d9d8f1228aa76c719e69f1ebbaff4752a1ee9bb9fc16828" Jan 28 18:55:05 crc kubenswrapper[4721]: I0128 18:55:05.600584 4721 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Jan 28 18:55:05 crc kubenswrapper[4721]: I0128 18:55:05.606047 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-wppp8" event={"ID":"06674c33-d387-4999-9e87-d72f80b98173","Type":"ContainerStarted","Data":"87b52b27d9e18cb3bfef076ebb8b401f3b1d2e0cec15367a10090eed3dafb376"} Jan 28 18:55:05 crc kubenswrapper[4721]: I0128 18:55:05.614331 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"aa657a81-842e-4292-a71e-e208b4c0bd69","Type":"ContainerStarted","Data":"3c1106d63c165631ecbf95aba6950f0d47c54dfe2acaac0b6e233bac20d3654c"} Jan 28 18:55:05 crc kubenswrapper[4721]: I0128 18:55:05.627830 4721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-7bhzw" Jan 28 18:55:05 crc kubenswrapper[4721]: I0128 18:55:05.627832 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-7bhzw" event={"ID":"d06bcf83-999f-419a-9f4f-4e6544576897","Type":"ContainerDied","Data":"d3033583097ddc5adca805fee37257f88854f57b2c4e1333946640414861b995"} Jan 28 18:55:05 crc kubenswrapper[4721]: I0128 18:55:05.628124 4721 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d3033583097ddc5adca805fee37257f88854f57b2c4e1333946640414861b995" Jan 28 18:55:05 crc kubenswrapper[4721]: I0128 18:55:05.631960 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-c9jld" event={"ID":"1d9296aa-fff6-4aa4-afb6-56acc232bbc7","Type":"ContainerStarted","Data":"e5453f86f220ebc97098e705a48b19114920fe7f4dea993b075711e910f84943"} Jan 28 18:55:05 crc kubenswrapper[4721]: I0128 18:55:05.637604 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-sync-wppp8" podStartSLOduration=3.170312402 podStartE2EDuration="22.637531573s" podCreationTimestamp="2026-01-28 18:54:43 +0000 UTC" firstStartedPulling="2026-01-28 18:54:45.451878636 +0000 UTC m=+1251.177184196" lastFinishedPulling="2026-01-28 18:55:04.919097807 +0000 UTC m=+1270.644403367" observedRunningTime="2026-01-28 18:55:05.623026566 +0000 UTC m=+1271.348332136" watchObservedRunningTime="2026-01-28 18:55:05.637531573 +0000 UTC m=+1271.362837153" Jan 28 18:55:05 crc kubenswrapper[4721]: I0128 18:55:05.645303 4721 scope.go:117] "RemoveContainer" containerID="1e0b53a2f639a3be2e058e566db6a36ccce965fc3410944527bfcc44b65816a5" Jan 28 18:55:05 crc kubenswrapper[4721]: E0128 18:55:05.645610 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"glance-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-glance-api:current-podified\\\"\"" pod="openstack/glance-db-sync-j284c" podUID="7b2b2524-50e6-4d73-bdb9-8770b642481e" Jan 28 18:55:05 crc kubenswrapper[4721]: I0128 18:55:05.654929 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 28 18:55:05 crc kubenswrapper[4721]: I0128 18:55:05.669778 4721 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 28 18:55:05 crc kubenswrapper[4721]: I0128 18:55:05.698025 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 28 18:55:05 crc kubenswrapper[4721]: E0128 18:55:05.699110 4721 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="dc3781f4-04ef-40f3-b772-88deb9a9e3b6" containerName="init-config-reloader" Jan 28 18:55:05 crc kubenswrapper[4721]: I0128 18:55:05.699189 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="dc3781f4-04ef-40f3-b772-88deb9a9e3b6" containerName="init-config-reloader" Jan 28 18:55:05 crc kubenswrapper[4721]: E0128 18:55:05.699361 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dc3781f4-04ef-40f3-b772-88deb9a9e3b6" containerName="prometheus" Jan 28 18:55:05 crc kubenswrapper[4721]: I0128 18:55:05.699379 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="dc3781f4-04ef-40f3-b772-88deb9a9e3b6" containerName="prometheus" Jan 28 18:55:05 crc kubenswrapper[4721]: E0128 18:55:05.699409 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dc3781f4-04ef-40f3-b772-88deb9a9e3b6" containerName="thanos-sidecar" Jan 28 18:55:05 crc kubenswrapper[4721]: I0128 18:55:05.699442 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="dc3781f4-04ef-40f3-b772-88deb9a9e3b6" containerName="thanos-sidecar" Jan 28 18:55:05 crc kubenswrapper[4721]: E0128 18:55:05.699464 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d06bcf83-999f-419a-9f4f-4e6544576897" containerName="swift-ring-rebalance" Jan 28 18:55:05 crc kubenswrapper[4721]: I0128 18:55:05.699474 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="d06bcf83-999f-419a-9f4f-4e6544576897" containerName="swift-ring-rebalance" Jan 28 18:55:05 crc kubenswrapper[4721]: E0128 18:55:05.699484 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dc3781f4-04ef-40f3-b772-88deb9a9e3b6" containerName="config-reloader" Jan 28 18:55:05 crc kubenswrapper[4721]: I0128 18:55:05.699492 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="dc3781f4-04ef-40f3-b772-88deb9a9e3b6" containerName="config-reloader" Jan 28 18:55:05 crc kubenswrapper[4721]: I0128 18:55:05.698050 4721 scope.go:117] "RemoveContainer" containerID="365b89612323010992f8c935a6d68a6f1a2b9b8026b23b3f9697702e022b7a58" Jan 28 18:55:05 crc kubenswrapper[4721]: I0128 18:55:05.704553 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="d06bcf83-999f-419a-9f4f-4e6544576897" containerName="swift-ring-rebalance" Jan 28 18:55:05 crc kubenswrapper[4721]: I0128 18:55:05.704611 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="dc3781f4-04ef-40f3-b772-88deb9a9e3b6" containerName="thanos-sidecar" Jan 28 18:55:05 crc kubenswrapper[4721]: I0128 18:55:05.704678 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="dc3781f4-04ef-40f3-b772-88deb9a9e3b6" containerName="config-reloader" Jan 28 18:55:05 crc kubenswrapper[4721]: I0128 18:55:05.704705 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="dc3781f4-04ef-40f3-b772-88deb9a9e3b6" containerName="prometheus" Jan 28 18:55:05 crc kubenswrapper[4721]: I0128 18:55:05.712951 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Jan 28 18:55:05 crc kubenswrapper[4721]: I0128 18:55:05.724086 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-prometheus-dockercfg-zmptf" Jan 28 18:55:05 crc kubenswrapper[4721]: I0128 18:55:05.724380 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-thanos-prometheus-http-client-file" Jan 28 18:55:05 crc kubenswrapper[4721]: I0128 18:55:05.724517 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-web-config" Jan 28 18:55:05 crc kubenswrapper[4721]: I0128 18:55:05.724663 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-1" Jan 28 18:55:05 crc kubenswrapper[4721]: I0128 18:55:05.726589 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage" Jan 28 18:55:05 crc kubenswrapper[4721]: I0128 18:55:05.726617 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-2" Jan 28 18:55:05 crc kubenswrapper[4721]: I0128 18:55:05.726749 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-0" Jan 28 18:55:05 crc kubenswrapper[4721]: I0128 18:55:05.726876 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-metric-storage-prometheus-svc" Jan 28 18:55:05 crc kubenswrapper[4721]: I0128 18:55:05.727840 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-tls-assets-0" Jan 28 18:55:05 crc kubenswrapper[4721]: I0128 18:55:05.733928 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 28 18:55:05 crc kubenswrapper[4721]: I0128 18:55:05.787501 4721 scope.go:117] "RemoveContainer" containerID="e1472b3e544be64b3e29964dc712e9e0c6c5bd0aeed58e5b9bad95265232217c" Jan 28 18:55:05 crc kubenswrapper[4721]: I0128 18:55:05.880228 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/8ac81a5a-78b3-43c6-964f-300e126ba4ca-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"8ac81a5a-78b3-43c6-964f-300e126ba4ca\") " pod="openstack/prometheus-metric-storage-0" Jan 28 18:55:05 crc kubenswrapper[4721]: I0128 18:55:05.880297 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/8ac81a5a-78b3-43c6-964f-300e126ba4ca-config\") pod \"prometheus-metric-storage-0\" (UID: \"8ac81a5a-78b3-43c6-964f-300e126ba4ca\") " pod="openstack/prometheus-metric-storage-0" Jan 28 18:55:05 crc kubenswrapper[4721]: I0128 18:55:05.880336 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/8ac81a5a-78b3-43c6-964f-300e126ba4ca-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"8ac81a5a-78b3-43c6-964f-300e126ba4ca\") " pod="openstack/prometheus-metric-storage-0" Jan 28 18:55:05 crc kubenswrapper[4721]: I0128 18:55:05.880552 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: 
\"kubernetes.io/projected/8ac81a5a-78b3-43c6-964f-300e126ba4ca-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"8ac81a5a-78b3-43c6-964f-300e126ba4ca\") " pod="openstack/prometheus-metric-storage-0" Jan 28 18:55:05 crc kubenswrapper[4721]: I0128 18:55:05.880687 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/8ac81a5a-78b3-43c6-964f-300e126ba4ca-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"8ac81a5a-78b3-43c6-964f-300e126ba4ca\") " pod="openstack/prometheus-metric-storage-0" Jan 28 18:55:05 crc kubenswrapper[4721]: I0128 18:55:05.880726 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/8ac81a5a-78b3-43c6-964f-300e126ba4ca-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"8ac81a5a-78b3-43c6-964f-300e126ba4ca\") " pod="openstack/prometheus-metric-storage-0" Jan 28 18:55:05 crc kubenswrapper[4721]: I0128 18:55:05.880751 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vj72v\" (UniqueName: \"kubernetes.io/projected/8ac81a5a-78b3-43c6-964f-300e126ba4ca-kube-api-access-vj72v\") pod \"prometheus-metric-storage-0\" (UID: \"8ac81a5a-78b3-43c6-964f-300e126ba4ca\") " pod="openstack/prometheus-metric-storage-0" Jan 28 18:55:05 crc kubenswrapper[4721]: I0128 18:55:05.880790 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-65f8c88a-05fd-41aa-a62f-62b8de64d97f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-65f8c88a-05fd-41aa-a62f-62b8de64d97f\") pod \"prometheus-metric-storage-0\" (UID: \"8ac81a5a-78b3-43c6-964f-300e126ba4ca\") " pod="openstack/prometheus-metric-storage-0" Jan 28 18:55:05 crc kubenswrapper[4721]: I0128 18:55:05.880815 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ac81a5a-78b3-43c6-964f-300e126ba4ca-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"8ac81a5a-78b3-43c6-964f-300e126ba4ca\") " pod="openstack/prometheus-metric-storage-0" Jan 28 18:55:05 crc kubenswrapper[4721]: I0128 18:55:05.880853 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/8ac81a5a-78b3-43c6-964f-300e126ba4ca-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"8ac81a5a-78b3-43c6-964f-300e126ba4ca\") " pod="openstack/prometheus-metric-storage-0" Jan 28 18:55:05 crc kubenswrapper[4721]: I0128 18:55:05.880871 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/8ac81a5a-78b3-43c6-964f-300e126ba4ca-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"8ac81a5a-78b3-43c6-964f-300e126ba4ca\") " pod="openstack/prometheus-metric-storage-0" Jan 28 18:55:05 crc kubenswrapper[4721]: I0128 18:55:05.880908 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: 
\"kubernetes.io/secret/8ac81a5a-78b3-43c6-964f-300e126ba4ca-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"8ac81a5a-78b3-43c6-964f-300e126ba4ca\") " pod="openstack/prometheus-metric-storage-0" Jan 28 18:55:05 crc kubenswrapper[4721]: I0128 18:55:05.880946 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/8ac81a5a-78b3-43c6-964f-300e126ba4ca-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"8ac81a5a-78b3-43c6-964f-300e126ba4ca\") " pod="openstack/prometheus-metric-storage-0" Jan 28 18:55:05 crc kubenswrapper[4721]: I0128 18:55:05.983675 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/8ac81a5a-78b3-43c6-964f-300e126ba4ca-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"8ac81a5a-78b3-43c6-964f-300e126ba4ca\") " pod="openstack/prometheus-metric-storage-0" Jan 28 18:55:05 crc kubenswrapper[4721]: I0128 18:55:05.983859 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/8ac81a5a-78b3-43c6-964f-300e126ba4ca-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"8ac81a5a-78b3-43c6-964f-300e126ba4ca\") " pod="openstack/prometheus-metric-storage-0" Jan 28 18:55:05 crc kubenswrapper[4721]: I0128 18:55:05.983905 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/8ac81a5a-78b3-43c6-964f-300e126ba4ca-config\") pod \"prometheus-metric-storage-0\" (UID: \"8ac81a5a-78b3-43c6-964f-300e126ba4ca\") " pod="openstack/prometheus-metric-storage-0" Jan 28 18:55:05 crc kubenswrapper[4721]: I0128 18:55:05.983958 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/8ac81a5a-78b3-43c6-964f-300e126ba4ca-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"8ac81a5a-78b3-43c6-964f-300e126ba4ca\") " pod="openstack/prometheus-metric-storage-0" Jan 28 18:55:05 crc kubenswrapper[4721]: I0128 18:55:05.984009 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/8ac81a5a-78b3-43c6-964f-300e126ba4ca-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"8ac81a5a-78b3-43c6-964f-300e126ba4ca\") " pod="openstack/prometheus-metric-storage-0" Jan 28 18:55:05 crc kubenswrapper[4721]: I0128 18:55:05.984065 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/8ac81a5a-78b3-43c6-964f-300e126ba4ca-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"8ac81a5a-78b3-43c6-964f-300e126ba4ca\") " pod="openstack/prometheus-metric-storage-0" Jan 28 18:55:05 crc kubenswrapper[4721]: I0128 18:55:05.984097 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/8ac81a5a-78b3-43c6-964f-300e126ba4ca-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: 
\"8ac81a5a-78b3-43c6-964f-300e126ba4ca\") " pod="openstack/prometheus-metric-storage-0" Jan 28 18:55:05 crc kubenswrapper[4721]: I0128 18:55:05.984126 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vj72v\" (UniqueName: \"kubernetes.io/projected/8ac81a5a-78b3-43c6-964f-300e126ba4ca-kube-api-access-vj72v\") pod \"prometheus-metric-storage-0\" (UID: \"8ac81a5a-78b3-43c6-964f-300e126ba4ca\") " pod="openstack/prometheus-metric-storage-0" Jan 28 18:55:05 crc kubenswrapper[4721]: I0128 18:55:05.984193 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-65f8c88a-05fd-41aa-a62f-62b8de64d97f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-65f8c88a-05fd-41aa-a62f-62b8de64d97f\") pod \"prometheus-metric-storage-0\" (UID: \"8ac81a5a-78b3-43c6-964f-300e126ba4ca\") " pod="openstack/prometheus-metric-storage-0" Jan 28 18:55:05 crc kubenswrapper[4721]: I0128 18:55:05.984240 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ac81a5a-78b3-43c6-964f-300e126ba4ca-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"8ac81a5a-78b3-43c6-964f-300e126ba4ca\") " pod="openstack/prometheus-metric-storage-0" Jan 28 18:55:05 crc kubenswrapper[4721]: I0128 18:55:05.984285 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/8ac81a5a-78b3-43c6-964f-300e126ba4ca-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"8ac81a5a-78b3-43c6-964f-300e126ba4ca\") " pod="openstack/prometheus-metric-storage-0" Jan 28 18:55:05 crc kubenswrapper[4721]: I0128 18:55:05.984305 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/8ac81a5a-78b3-43c6-964f-300e126ba4ca-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"8ac81a5a-78b3-43c6-964f-300e126ba4ca\") " pod="openstack/prometheus-metric-storage-0" Jan 28 18:55:05 crc kubenswrapper[4721]: I0128 18:55:05.984332 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/8ac81a5a-78b3-43c6-964f-300e126ba4ca-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"8ac81a5a-78b3-43c6-964f-300e126ba4ca\") " pod="openstack/prometheus-metric-storage-0" Jan 28 18:55:05 crc kubenswrapper[4721]: I0128 18:55:05.984819 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/8ac81a5a-78b3-43c6-964f-300e126ba4ca-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"8ac81a5a-78b3-43c6-964f-300e126ba4ca\") " pod="openstack/prometheus-metric-storage-0" Jan 28 18:55:05 crc kubenswrapper[4721]: I0128 18:55:05.985652 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/8ac81a5a-78b3-43c6-964f-300e126ba4ca-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"8ac81a5a-78b3-43c6-964f-300e126ba4ca\") " pod="openstack/prometheus-metric-storage-0" Jan 28 18:55:05 crc kubenswrapper[4721]: I0128 18:55:05.985869 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/8ac81a5a-78b3-43c6-964f-300e126ba4ca-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"8ac81a5a-78b3-43c6-964f-300e126ba4ca\") " pod="openstack/prometheus-metric-storage-0" Jan 28 18:55:05 crc kubenswrapper[4721]: I0128 18:55:05.987736 4721 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 28 18:55:05 crc kubenswrapper[4721]: I0128 18:55:05.987784 4721 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-65f8c88a-05fd-41aa-a62f-62b8de64d97f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-65f8c88a-05fd-41aa-a62f-62b8de64d97f\") pod \"prometheus-metric-storage-0\" (UID: \"8ac81a5a-78b3-43c6-964f-300e126ba4ca\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/d38c0b121d5295a147080ad18debad98481eaf07feef18cd6048e41a66022495/globalmount\"" pod="openstack/prometheus-metric-storage-0" Jan 28 18:55:05 crc kubenswrapper[4721]: I0128 18:55:05.996507 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/8ac81a5a-78b3-43c6-964f-300e126ba4ca-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"8ac81a5a-78b3-43c6-964f-300e126ba4ca\") " pod="openstack/prometheus-metric-storage-0" Jan 28 18:55:05 crc kubenswrapper[4721]: I0128 18:55:05.997672 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/8ac81a5a-78b3-43c6-964f-300e126ba4ca-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"8ac81a5a-78b3-43c6-964f-300e126ba4ca\") " pod="openstack/prometheus-metric-storage-0" Jan 28 18:55:06 crc kubenswrapper[4721]: I0128 18:55:06.000660 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/8ac81a5a-78b3-43c6-964f-300e126ba4ca-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"8ac81a5a-78b3-43c6-964f-300e126ba4ca\") " pod="openstack/prometheus-metric-storage-0" Jan 28 18:55:06 crc kubenswrapper[4721]: I0128 18:55:06.001098 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/8ac81a5a-78b3-43c6-964f-300e126ba4ca-config\") pod \"prometheus-metric-storage-0\" (UID: \"8ac81a5a-78b3-43c6-964f-300e126ba4ca\") " pod="openstack/prometheus-metric-storage-0" Jan 28 18:55:06 crc kubenswrapper[4721]: I0128 18:55:06.003459 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/8ac81a5a-78b3-43c6-964f-300e126ba4ca-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"8ac81a5a-78b3-43c6-964f-300e126ba4ca\") " pod="openstack/prometheus-metric-storage-0" Jan 28 18:55:06 crc kubenswrapper[4721]: I0128 18:55:06.008481 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ac81a5a-78b3-43c6-964f-300e126ba4ca-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"8ac81a5a-78b3-43c6-964f-300e126ba4ca\") " pod="openstack/prometheus-metric-storage-0" Jan 28 18:55:06 crc kubenswrapper[4721]: I0128 18:55:06.008783 4721 
Jan 28 18:55:06 crc kubenswrapper[4721]: I0128 18:55:06.012932 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vj72v\" (UniqueName: \"kubernetes.io/projected/8ac81a5a-78b3-43c6-964f-300e126ba4ca-kube-api-access-vj72v\") pod \"prometheus-metric-storage-0\" (UID: \"8ac81a5a-78b3-43c6-964f-300e126ba4ca\") " pod="openstack/prometheus-metric-storage-0"
Jan 28 18:55:06 crc kubenswrapper[4721]: I0128 18:55:06.013875 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/8ac81a5a-78b3-43c6-964f-300e126ba4ca-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"8ac81a5a-78b3-43c6-964f-300e126ba4ca\") " pod="openstack/prometheus-metric-storage-0"
Jan 28 18:55:06 crc kubenswrapper[4721]: I0128 18:55:06.048079 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-65f8c88a-05fd-41aa-a62f-62b8de64d97f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-65f8c88a-05fd-41aa-a62f-62b8de64d97f\") pod \"prometheus-metric-storage-0\" (UID: \"8ac81a5a-78b3-43c6-964f-300e126ba4ca\") " pod="openstack/prometheus-metric-storage-0"
Jan 28 18:55:06 crc kubenswrapper[4721]: I0128 18:55:06.059757 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0"
Jan 28 18:55:06 crc kubenswrapper[4721]: I0128 18:55:06.577921 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"]
Jan 28 18:55:06 crc kubenswrapper[4721]: I0128 18:55:06.648072 4721 generic.go:334] "Generic (PLEG): container finished" podID="1d9296aa-fff6-4aa4-afb6-56acc232bbc7" containerID="6b1302db28f921c465f5629bcda6656cba736c2d2ead364062d9e7d8636b730d" exitCode=0
Jan 28 18:55:06 crc kubenswrapper[4721]: I0128 18:55:06.648271 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-c9jld" event={"ID":"1d9296aa-fff6-4aa4-afb6-56acc232bbc7","Type":"ContainerDied","Data":"6b1302db28f921c465f5629bcda6656cba736c2d2ead364062d9e7d8636b730d"}
Jan 28 18:55:07 crc kubenswrapper[4721]: I0128 18:55:07.547786 4721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dc3781f4-04ef-40f3-b772-88deb9a9e3b6" path="/var/lib/kubelet/pods/dc3781f4-04ef-40f3-b772-88deb9a9e3b6/volumes"
Jan 28 18:55:07 crc kubenswrapper[4721]: I0128 18:55:07.663119 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"8ac81a5a-78b3-43c6-964f-300e126ba4ca","Type":"ContainerStarted","Data":"a34c68e9c74069bcf2e3001e176a77cdf7015a88dfec73d04916ab73cc8bf513"}
Jan 28 18:55:07 crc kubenswrapper[4721]: I0128 18:55:07.668190 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"aa657a81-842e-4292-a71e-e208b4c0bd69","Type":"ContainerStarted","Data":"1c74268ee3a9a2d207b3b4fb100a899886b3f450c7f55a877d15120a6b01f053"}
Jan 28 18:55:07 crc kubenswrapper[4721]: I0128 18:55:07.668250 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"aa657a81-842e-4292-a71e-e208b4c0bd69","Type":"ContainerStarted","Data":"608b7f3a1f9c11e7f8c0d054ffa4315a6d649fc0bda7b56a296e98fc756d4280"}
Jan 28 18:55:07 crc kubenswrapper[4721]: I0128 18:55:07.668267 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"aa657a81-842e-4292-a71e-e208b4c0bd69","Type":"ContainerStarted","Data":"9e66626bf474a6b0be2de6f7cbcc4e417c12b01c378cf57fddca94bced2f7b6e"}
Jan 28 18:55:08 crc kubenswrapper[4721]: I0128 18:55:08.162965 4721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-c9jld"
Jan 28 18:55:08 crc kubenswrapper[4721]: I0128 18:55:08.252929 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1d9296aa-fff6-4aa4-afb6-56acc232bbc7-operator-scripts\") pod \"1d9296aa-fff6-4aa4-afb6-56acc232bbc7\" (UID: \"1d9296aa-fff6-4aa4-afb6-56acc232bbc7\") "
Jan 28 18:55:08 crc kubenswrapper[4721]: I0128 18:55:08.253157 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mc8v5\" (UniqueName: \"kubernetes.io/projected/1d9296aa-fff6-4aa4-afb6-56acc232bbc7-kube-api-access-mc8v5\") pod \"1d9296aa-fff6-4aa4-afb6-56acc232bbc7\" (UID: \"1d9296aa-fff6-4aa4-afb6-56acc232bbc7\") "
Jan 28 18:55:08 crc kubenswrapper[4721]: I0128 18:55:08.253940 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1d9296aa-fff6-4aa4-afb6-56acc232bbc7-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "1d9296aa-fff6-4aa4-afb6-56acc232bbc7" (UID: "1d9296aa-fff6-4aa4-afb6-56acc232bbc7"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 28 18:55:08 crc kubenswrapper[4721]: I0128 18:55:08.254618 4721 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1d9296aa-fff6-4aa4-afb6-56acc232bbc7-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 28 18:55:08 crc kubenswrapper[4721]: I0128 18:55:08.259461 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d9296aa-fff6-4aa4-afb6-56acc232bbc7-kube-api-access-mc8v5" (OuterVolumeSpecName: "kube-api-access-mc8v5") pod "1d9296aa-fff6-4aa4-afb6-56acc232bbc7" (UID: "1d9296aa-fff6-4aa4-afb6-56acc232bbc7"). InnerVolumeSpecName "kube-api-access-mc8v5". PluginName "kubernetes.io/projected", VolumeGidValue ""
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:55:08 crc kubenswrapper[4721]: I0128 18:55:08.356718 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mc8v5\" (UniqueName: \"kubernetes.io/projected/1d9296aa-fff6-4aa4-afb6-56acc232bbc7-kube-api-access-mc8v5\") on node \"crc\" DevicePath \"\"" Jan 28 18:55:08 crc kubenswrapper[4721]: I0128 18:55:08.682628 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"aa657a81-842e-4292-a71e-e208b4c0bd69","Type":"ContainerStarted","Data":"08ee4a9475d68c4fd3b51c4c2997a6f2ecf138cc5fc1dd9bf399c83ae1333413"} Jan 28 18:55:08 crc kubenswrapper[4721]: I0128 18:55:08.684663 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-c9jld" event={"ID":"1d9296aa-fff6-4aa4-afb6-56acc232bbc7","Type":"ContainerDied","Data":"e5453f86f220ebc97098e705a48b19114920fe7f4dea993b075711e910f84943"} Jan 28 18:55:08 crc kubenswrapper[4721]: I0128 18:55:08.684687 4721 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e5453f86f220ebc97098e705a48b19114920fe7f4dea993b075711e910f84943" Jan 28 18:55:08 crc kubenswrapper[4721]: I0128 18:55:08.684752 4721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-c9jld" Jan 28 18:55:09 crc kubenswrapper[4721]: I0128 18:55:09.695726 4721 generic.go:334] "Generic (PLEG): container finished" podID="06674c33-d387-4999-9e87-d72f80b98173" containerID="87b52b27d9e18cb3bfef076ebb8b401f3b1d2e0cec15367a10090eed3dafb376" exitCode=0 Jan 28 18:55:09 crc kubenswrapper[4721]: I0128 18:55:09.695805 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-wppp8" event={"ID":"06674c33-d387-4999-9e87-d72f80b98173","Type":"ContainerDied","Data":"87b52b27d9e18cb3bfef076ebb8b401f3b1d2e0cec15367a10090eed3dafb376"} Jan 28 18:55:09 crc kubenswrapper[4721]: I0128 18:55:09.699971 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"aa657a81-842e-4292-a71e-e208b4c0bd69","Type":"ContainerStarted","Data":"01e6466ae03579a441ecec6e8cc53d5025711bf69b19f6bc78beb844cee02e13"} Jan 28 18:55:10 crc kubenswrapper[4721]: I0128 18:55:10.712832 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"8ac81a5a-78b3-43c6-964f-300e126ba4ca","Type":"ContainerStarted","Data":"7bd63c1583bd53bad529b278e0de22f5f62fb284dcbd8a13c6e4d933b1fd074b"} Jan 28 18:55:10 crc kubenswrapper[4721]: I0128 18:55:10.721116 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"aa657a81-842e-4292-a71e-e208b4c0bd69","Type":"ContainerStarted","Data":"c4f296713e5eaf01e31e8c1d0f1ec82a7457a9178feb02b782e3581c594f4615"} Jan 28 18:55:10 crc kubenswrapper[4721]: I0128 18:55:10.721189 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"aa657a81-842e-4292-a71e-e208b4c0bd69","Type":"ContainerStarted","Data":"548a99f6dc9820f0b23917fa0be669929f3a5d1a23b67a4164baa5a08363b6ae"} Jan 28 18:55:10 crc kubenswrapper[4721]: I0128 18:55:10.721202 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"aa657a81-842e-4292-a71e-e208b4c0bd69","Type":"ContainerStarted","Data":"3aaa43b2685b8937507e1e52e2bcd68eeff96464437a157f84eadf32a04920e4"} Jan 28 18:55:11 crc kubenswrapper[4721]: I0128 18:55:11.037976 4721 util.go:48] "No ready sandbox for 
pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-wppp8" Jan 28 18:55:11 crc kubenswrapper[4721]: I0128 18:55:11.117622 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/06674c33-d387-4999-9e87-d72f80b98173-config-data\") pod \"06674c33-d387-4999-9e87-d72f80b98173\" (UID: \"06674c33-d387-4999-9e87-d72f80b98173\") " Jan 28 18:55:11 crc kubenswrapper[4721]: I0128 18:55:11.117841 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/06674c33-d387-4999-9e87-d72f80b98173-combined-ca-bundle\") pod \"06674c33-d387-4999-9e87-d72f80b98173\" (UID: \"06674c33-d387-4999-9e87-d72f80b98173\") " Jan 28 18:55:11 crc kubenswrapper[4721]: I0128 18:55:11.117998 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbb52\" (UniqueName: \"kubernetes.io/projected/06674c33-d387-4999-9e87-d72f80b98173-kube-api-access-dbb52\") pod \"06674c33-d387-4999-9e87-d72f80b98173\" (UID: \"06674c33-d387-4999-9e87-d72f80b98173\") " Jan 28 18:55:11 crc kubenswrapper[4721]: I0128 18:55:11.122377 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/06674c33-d387-4999-9e87-d72f80b98173-kube-api-access-dbb52" (OuterVolumeSpecName: "kube-api-access-dbb52") pod "06674c33-d387-4999-9e87-d72f80b98173" (UID: "06674c33-d387-4999-9e87-d72f80b98173"). InnerVolumeSpecName "kube-api-access-dbb52". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:55:11 crc kubenswrapper[4721]: I0128 18:55:11.146424 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/06674c33-d387-4999-9e87-d72f80b98173-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "06674c33-d387-4999-9e87-d72f80b98173" (UID: "06674c33-d387-4999-9e87-d72f80b98173"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:55:11 crc kubenswrapper[4721]: I0128 18:55:11.178034 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/06674c33-d387-4999-9e87-d72f80b98173-config-data" (OuterVolumeSpecName: "config-data") pod "06674c33-d387-4999-9e87-d72f80b98173" (UID: "06674c33-d387-4999-9e87-d72f80b98173"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:55:11 crc kubenswrapper[4721]: I0128 18:55:11.221204 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbb52\" (UniqueName: \"kubernetes.io/projected/06674c33-d387-4999-9e87-d72f80b98173-kube-api-access-dbb52\") on node \"crc\" DevicePath \"\"" Jan 28 18:55:11 crc kubenswrapper[4721]: I0128 18:55:11.221250 4721 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/06674c33-d387-4999-9e87-d72f80b98173-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 18:55:11 crc kubenswrapper[4721]: I0128 18:55:11.221262 4721 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/06674c33-d387-4999-9e87-d72f80b98173-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 18:55:11 crc kubenswrapper[4721]: I0128 18:55:11.737281 4721 util.go:48] "No ready sandbox for pod can be found. 
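Teardown of the completed root-account-create-update-c9jld and keystone-db-sync-wppp8 pods follows a fixed per-volume order: "UnmountVolume started", the plugin's TearDown, then "Volume detached" once the reconciler's actual state is updated; when every volume is gone, the pod directory itself is removed (the earlier "Cleaned up orphaned pod volumes dir" line). Schematically, with invented helper names:

```go
// Schematic of the teardown order above; helper names are invented.
package main

import "fmt"

func tearDownPod(podUID string, volumes []string) {
	for _, v := range volumes {
		fmt.Printf("UnmountVolume started for volume %q\n", v)
		fmt.Printf("UnmountVolume.TearDown succeeded for volume %q\n", v)
		fmt.Printf("Volume detached for volume %q on node \"crc\"\n", v)
	}
	fmt.Printf("Cleaned up orphaned pod volumes dir /var/lib/kubelet/pods/%s/volumes\n", podUID)
}

func main() {
	tearDownPod("06674c33-d387-4999-9e87-d72f80b98173",
		[]string{"kube-api-access-dbb52", "combined-ca-bundle", "config-data"})
}
```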
Need to start a new one" pod="openstack/keystone-db-sync-wppp8" Jan 28 18:55:11 crc kubenswrapper[4721]: I0128 18:55:11.738889 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-wppp8" event={"ID":"06674c33-d387-4999-9e87-d72f80b98173","Type":"ContainerDied","Data":"b0295f47bcaf51526f9dae2f813797fefd6c572e0ded6454304ef1106eb01b9e"} Jan 28 18:55:11 crc kubenswrapper[4721]: I0128 18:55:11.738933 4721 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b0295f47bcaf51526f9dae2f813797fefd6c572e0ded6454304ef1106eb01b9e" Jan 28 18:55:11 crc kubenswrapper[4721]: I0128 18:55:11.995591 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-f877ddd87-fc8zl"] Jan 28 18:55:12 crc kubenswrapper[4721]: E0128 18:55:12.006922 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="06674c33-d387-4999-9e87-d72f80b98173" containerName="keystone-db-sync" Jan 28 18:55:12 crc kubenswrapper[4721]: I0128 18:55:12.007267 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="06674c33-d387-4999-9e87-d72f80b98173" containerName="keystone-db-sync" Jan 28 18:55:12 crc kubenswrapper[4721]: E0128 18:55:12.007388 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1d9296aa-fff6-4aa4-afb6-56acc232bbc7" containerName="mariadb-account-create-update" Jan 28 18:55:12 crc kubenswrapper[4721]: I0128 18:55:12.007446 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="1d9296aa-fff6-4aa4-afb6-56acc232bbc7" containerName="mariadb-account-create-update" Jan 28 18:55:12 crc kubenswrapper[4721]: I0128 18:55:12.007756 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="1d9296aa-fff6-4aa4-afb6-56acc232bbc7" containerName="mariadb-account-create-update" Jan 28 18:55:12 crc kubenswrapper[4721]: I0128 18:55:12.007857 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="06674c33-d387-4999-9e87-d72f80b98173" containerName="keystone-db-sync" Jan 28 18:55:12 crc kubenswrapper[4721]: I0128 18:55:12.010767 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-f877ddd87-fc8zl" Jan 28 18:55:12 crc kubenswrapper[4721]: I0128 18:55:12.025778 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-f877ddd87-fc8zl"] Jan 28 18:55:12 crc kubenswrapper[4721]: I0128 18:55:12.041262 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-76fdw"] Jan 28 18:55:12 crc kubenswrapper[4721]: I0128 18:55:12.043182 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-76fdw" Jan 28 18:55:12 crc kubenswrapper[4721]: I0128 18:55:12.062436 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 28 18:55:12 crc kubenswrapper[4721]: I0128 18:55:12.062501 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 28 18:55:12 crc kubenswrapper[4721]: I0128 18:55:12.062734 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-gfv9p" Jan 28 18:55:12 crc kubenswrapper[4721]: I0128 18:55:12.063430 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Jan 28 18:55:12 crc kubenswrapper[4721]: I0128 18:55:12.063587 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 28 18:55:12 crc kubenswrapper[4721]: I0128 18:55:12.123361 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-76fdw"] Jan 28 18:55:12 crc kubenswrapper[4721]: I0128 18:55:12.141145 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cmxxg\" (UniqueName: \"kubernetes.io/projected/58139529-dfb2-4f83-bd8f-ed6645cf0e6d-kube-api-access-cmxxg\") pod \"keystone-bootstrap-76fdw\" (UID: \"58139529-dfb2-4f83-bd8f-ed6645cf0e6d\") " pod="openstack/keystone-bootstrap-76fdw" Jan 28 18:55:12 crc kubenswrapper[4721]: I0128 18:55:12.141578 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/58139529-dfb2-4f83-bd8f-ed6645cf0e6d-scripts\") pod \"keystone-bootstrap-76fdw\" (UID: \"58139529-dfb2-4f83-bd8f-ed6645cf0e6d\") " pod="openstack/keystone-bootstrap-76fdw" Jan 28 18:55:12 crc kubenswrapper[4721]: I0128 18:55:12.141730 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0945ec68-90fe-4d65-9910-372bd3c9e88b-dns-svc\") pod \"dnsmasq-dns-f877ddd87-fc8zl\" (UID: \"0945ec68-90fe-4d65-9910-372bd3c9e88b\") " pod="openstack/dnsmasq-dns-f877ddd87-fc8zl" Jan 28 18:55:12 crc kubenswrapper[4721]: I0128 18:55:12.141802 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0945ec68-90fe-4d65-9910-372bd3c9e88b-ovsdbserver-nb\") pod \"dnsmasq-dns-f877ddd87-fc8zl\" (UID: \"0945ec68-90fe-4d65-9910-372bd3c9e88b\") " pod="openstack/dnsmasq-dns-f877ddd87-fc8zl" Jan 28 18:55:12 crc kubenswrapper[4721]: I0128 18:55:12.141866 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v4rz2\" (UniqueName: \"kubernetes.io/projected/0945ec68-90fe-4d65-9910-372bd3c9e88b-kube-api-access-v4rz2\") pod \"dnsmasq-dns-f877ddd87-fc8zl\" (UID: \"0945ec68-90fe-4d65-9910-372bd3c9e88b\") " pod="openstack/dnsmasq-dns-f877ddd87-fc8zl" Jan 28 18:55:12 crc kubenswrapper[4721]: I0128 18:55:12.141935 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0945ec68-90fe-4d65-9910-372bd3c9e88b-config\") pod \"dnsmasq-dns-f877ddd87-fc8zl\" (UID: \"0945ec68-90fe-4d65-9910-372bd3c9e88b\") " pod="openstack/dnsmasq-dns-f877ddd87-fc8zl" Jan 28 18:55:12 crc kubenswrapper[4721]: I0128 18:55:12.142015 4721 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0945ec68-90fe-4d65-9910-372bd3c9e88b-ovsdbserver-sb\") pod \"dnsmasq-dns-f877ddd87-fc8zl\" (UID: \"0945ec68-90fe-4d65-9910-372bd3c9e88b\") " pod="openstack/dnsmasq-dns-f877ddd87-fc8zl" Jan 28 18:55:12 crc kubenswrapper[4721]: I0128 18:55:12.142119 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/58139529-dfb2-4f83-bd8f-ed6645cf0e6d-credential-keys\") pod \"keystone-bootstrap-76fdw\" (UID: \"58139529-dfb2-4f83-bd8f-ed6645cf0e6d\") " pod="openstack/keystone-bootstrap-76fdw" Jan 28 18:55:12 crc kubenswrapper[4721]: I0128 18:55:12.142262 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/58139529-dfb2-4f83-bd8f-ed6645cf0e6d-fernet-keys\") pod \"keystone-bootstrap-76fdw\" (UID: \"58139529-dfb2-4f83-bd8f-ed6645cf0e6d\") " pod="openstack/keystone-bootstrap-76fdw" Jan 28 18:55:12 crc kubenswrapper[4721]: I0128 18:55:12.142333 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/58139529-dfb2-4f83-bd8f-ed6645cf0e6d-config-data\") pod \"keystone-bootstrap-76fdw\" (UID: \"58139529-dfb2-4f83-bd8f-ed6645cf0e6d\") " pod="openstack/keystone-bootstrap-76fdw" Jan 28 18:55:12 crc kubenswrapper[4721]: I0128 18:55:12.142422 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/58139529-dfb2-4f83-bd8f-ed6645cf0e6d-combined-ca-bundle\") pod \"keystone-bootstrap-76fdw\" (UID: \"58139529-dfb2-4f83-bd8f-ed6645cf0e6d\") " pod="openstack/keystone-bootstrap-76fdw" Jan 28 18:55:12 crc kubenswrapper[4721]: I0128 18:55:12.232312 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-g5v9q"] Jan 28 18:55:12 crc kubenswrapper[4721]: I0128 18:55:12.233924 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-g5v9q" Jan 28 18:55:12 crc kubenswrapper[4721]: I0128 18:55:12.237491 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Jan 28 18:55:12 crc kubenswrapper[4721]: I0128 18:55:12.239385 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-qmz2x" Jan 28 18:55:12 crc kubenswrapper[4721]: I0128 18:55:12.239471 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Jan 28 18:55:12 crc kubenswrapper[4721]: I0128 18:55:12.244467 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cmxxg\" (UniqueName: \"kubernetes.io/projected/58139529-dfb2-4f83-bd8f-ed6645cf0e6d-kube-api-access-cmxxg\") pod \"keystone-bootstrap-76fdw\" (UID: \"58139529-dfb2-4f83-bd8f-ed6645cf0e6d\") " pod="openstack/keystone-bootstrap-76fdw" Jan 28 18:55:12 crc kubenswrapper[4721]: I0128 18:55:12.244561 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/58139529-dfb2-4f83-bd8f-ed6645cf0e6d-scripts\") pod \"keystone-bootstrap-76fdw\" (UID: \"58139529-dfb2-4f83-bd8f-ed6645cf0e6d\") " pod="openstack/keystone-bootstrap-76fdw" Jan 28 18:55:12 crc kubenswrapper[4721]: I0128 18:55:12.244611 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0945ec68-90fe-4d65-9910-372bd3c9e88b-dns-svc\") pod \"dnsmasq-dns-f877ddd87-fc8zl\" (UID: \"0945ec68-90fe-4d65-9910-372bd3c9e88b\") " pod="openstack/dnsmasq-dns-f877ddd87-fc8zl" Jan 28 18:55:12 crc kubenswrapper[4721]: I0128 18:55:12.244631 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0945ec68-90fe-4d65-9910-372bd3c9e88b-ovsdbserver-nb\") pod \"dnsmasq-dns-f877ddd87-fc8zl\" (UID: \"0945ec68-90fe-4d65-9910-372bd3c9e88b\") " pod="openstack/dnsmasq-dns-f877ddd87-fc8zl" Jan 28 18:55:12 crc kubenswrapper[4721]: I0128 18:55:12.244658 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v4rz2\" (UniqueName: \"kubernetes.io/projected/0945ec68-90fe-4d65-9910-372bd3c9e88b-kube-api-access-v4rz2\") pod \"dnsmasq-dns-f877ddd87-fc8zl\" (UID: \"0945ec68-90fe-4d65-9910-372bd3c9e88b\") " pod="openstack/dnsmasq-dns-f877ddd87-fc8zl" Jan 28 18:55:12 crc kubenswrapper[4721]: I0128 18:55:12.244695 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0945ec68-90fe-4d65-9910-372bd3c9e88b-config\") pod \"dnsmasq-dns-f877ddd87-fc8zl\" (UID: \"0945ec68-90fe-4d65-9910-372bd3c9e88b\") " pod="openstack/dnsmasq-dns-f877ddd87-fc8zl" Jan 28 18:55:12 crc kubenswrapper[4721]: I0128 18:55:12.244745 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0945ec68-90fe-4d65-9910-372bd3c9e88b-ovsdbserver-sb\") pod \"dnsmasq-dns-f877ddd87-fc8zl\" (UID: \"0945ec68-90fe-4d65-9910-372bd3c9e88b\") " pod="openstack/dnsmasq-dns-f877ddd87-fc8zl" Jan 28 18:55:12 crc kubenswrapper[4721]: I0128 18:55:12.244786 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/58139529-dfb2-4f83-bd8f-ed6645cf0e6d-credential-keys\") pod \"keystone-bootstrap-76fdw\" (UID: 
\"58139529-dfb2-4f83-bd8f-ed6645cf0e6d\") " pod="openstack/keystone-bootstrap-76fdw" Jan 28 18:55:12 crc kubenswrapper[4721]: I0128 18:55:12.244831 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/58139529-dfb2-4f83-bd8f-ed6645cf0e6d-fernet-keys\") pod \"keystone-bootstrap-76fdw\" (UID: \"58139529-dfb2-4f83-bd8f-ed6645cf0e6d\") " pod="openstack/keystone-bootstrap-76fdw" Jan 28 18:55:12 crc kubenswrapper[4721]: I0128 18:55:12.244852 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/58139529-dfb2-4f83-bd8f-ed6645cf0e6d-config-data\") pod \"keystone-bootstrap-76fdw\" (UID: \"58139529-dfb2-4f83-bd8f-ed6645cf0e6d\") " pod="openstack/keystone-bootstrap-76fdw" Jan 28 18:55:12 crc kubenswrapper[4721]: I0128 18:55:12.244910 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/58139529-dfb2-4f83-bd8f-ed6645cf0e6d-combined-ca-bundle\") pod \"keystone-bootstrap-76fdw\" (UID: \"58139529-dfb2-4f83-bd8f-ed6645cf0e6d\") " pod="openstack/keystone-bootstrap-76fdw" Jan 28 18:55:12 crc kubenswrapper[4721]: I0128 18:55:12.250044 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/58139529-dfb2-4f83-bd8f-ed6645cf0e6d-combined-ca-bundle\") pod \"keystone-bootstrap-76fdw\" (UID: \"58139529-dfb2-4f83-bd8f-ed6645cf0e6d\") " pod="openstack/keystone-bootstrap-76fdw" Jan 28 18:55:12 crc kubenswrapper[4721]: I0128 18:55:12.252455 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0945ec68-90fe-4d65-9910-372bd3c9e88b-config\") pod \"dnsmasq-dns-f877ddd87-fc8zl\" (UID: \"0945ec68-90fe-4d65-9910-372bd3c9e88b\") " pod="openstack/dnsmasq-dns-f877ddd87-fc8zl" Jan 28 18:55:12 crc kubenswrapper[4721]: I0128 18:55:12.253024 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0945ec68-90fe-4d65-9910-372bd3c9e88b-dns-svc\") pod \"dnsmasq-dns-f877ddd87-fc8zl\" (UID: \"0945ec68-90fe-4d65-9910-372bd3c9e88b\") " pod="openstack/dnsmasq-dns-f877ddd87-fc8zl" Jan 28 18:55:12 crc kubenswrapper[4721]: I0128 18:55:12.253623 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0945ec68-90fe-4d65-9910-372bd3c9e88b-ovsdbserver-sb\") pod \"dnsmasq-dns-f877ddd87-fc8zl\" (UID: \"0945ec68-90fe-4d65-9910-372bd3c9e88b\") " pod="openstack/dnsmasq-dns-f877ddd87-fc8zl" Jan 28 18:55:12 crc kubenswrapper[4721]: I0128 18:55:12.255130 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0945ec68-90fe-4d65-9910-372bd3c9e88b-ovsdbserver-nb\") pod \"dnsmasq-dns-f877ddd87-fc8zl\" (UID: \"0945ec68-90fe-4d65-9910-372bd3c9e88b\") " pod="openstack/dnsmasq-dns-f877ddd87-fc8zl" Jan 28 18:55:12 crc kubenswrapper[4721]: I0128 18:55:12.259828 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-g5v9q"] Jan 28 18:55:12 crc kubenswrapper[4721]: I0128 18:55:12.271008 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/58139529-dfb2-4f83-bd8f-ed6645cf0e6d-fernet-keys\") pod \"keystone-bootstrap-76fdw\" (UID: 
\"58139529-dfb2-4f83-bd8f-ed6645cf0e6d\") " pod="openstack/keystone-bootstrap-76fdw" Jan 28 18:55:12 crc kubenswrapper[4721]: I0128 18:55:12.275613 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/58139529-dfb2-4f83-bd8f-ed6645cf0e6d-scripts\") pod \"keystone-bootstrap-76fdw\" (UID: \"58139529-dfb2-4f83-bd8f-ed6645cf0e6d\") " pod="openstack/keystone-bootstrap-76fdw" Jan 28 18:55:12 crc kubenswrapper[4721]: I0128 18:55:12.280800 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cmxxg\" (UniqueName: \"kubernetes.io/projected/58139529-dfb2-4f83-bd8f-ed6645cf0e6d-kube-api-access-cmxxg\") pod \"keystone-bootstrap-76fdw\" (UID: \"58139529-dfb2-4f83-bd8f-ed6645cf0e6d\") " pod="openstack/keystone-bootstrap-76fdw" Jan 28 18:55:12 crc kubenswrapper[4721]: I0128 18:55:12.287371 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/58139529-dfb2-4f83-bd8f-ed6645cf0e6d-config-data\") pod \"keystone-bootstrap-76fdw\" (UID: \"58139529-dfb2-4f83-bd8f-ed6645cf0e6d\") " pod="openstack/keystone-bootstrap-76fdw" Jan 28 18:55:12 crc kubenswrapper[4721]: I0128 18:55:12.288108 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/58139529-dfb2-4f83-bd8f-ed6645cf0e6d-credential-keys\") pod \"keystone-bootstrap-76fdw\" (UID: \"58139529-dfb2-4f83-bd8f-ed6645cf0e6d\") " pod="openstack/keystone-bootstrap-76fdw" Jan 28 18:55:12 crc kubenswrapper[4721]: I0128 18:55:12.311970 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v4rz2\" (UniqueName: \"kubernetes.io/projected/0945ec68-90fe-4d65-9910-372bd3c9e88b-kube-api-access-v4rz2\") pod \"dnsmasq-dns-f877ddd87-fc8zl\" (UID: \"0945ec68-90fe-4d65-9910-372bd3c9e88b\") " pod="openstack/dnsmasq-dns-f877ddd87-fc8zl" Jan 28 18:55:12 crc kubenswrapper[4721]: I0128 18:55:12.315248 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-sync-spxh4"] Jan 28 18:55:12 crc kubenswrapper[4721]: I0128 18:55:12.318858 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-spxh4" Jan 28 18:55:12 crc kubenswrapper[4721]: I0128 18:55:12.335300 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-2g228" Jan 28 18:55:12 crc kubenswrapper[4721]: I0128 18:55:12.335805 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Jan 28 18:55:12 crc kubenswrapper[4721]: I0128 18:55:12.335907 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Jan 28 18:55:12 crc kubenswrapper[4721]: I0128 18:55:12.348713 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/d03058f5-d416-467a-b33c-36de7e5b6008-config\") pod \"neutron-db-sync-g5v9q\" (UID: \"d03058f5-d416-467a-b33c-36de7e5b6008\") " pod="openstack/neutron-db-sync-g5v9q" Jan 28 18:55:12 crc kubenswrapper[4721]: I0128 18:55:12.348854 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xhsct\" (UniqueName: \"kubernetes.io/projected/d03058f5-d416-467a-b33c-36de7e5b6008-kube-api-access-xhsct\") pod \"neutron-db-sync-g5v9q\" (UID: \"d03058f5-d416-467a-b33c-36de7e5b6008\") " pod="openstack/neutron-db-sync-g5v9q" Jan 28 18:55:12 crc kubenswrapper[4721]: I0128 18:55:12.348985 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d03058f5-d416-467a-b33c-36de7e5b6008-combined-ca-bundle\") pod \"neutron-db-sync-g5v9q\" (UID: \"d03058f5-d416-467a-b33c-36de7e5b6008\") " pod="openstack/neutron-db-sync-g5v9q" Jan 28 18:55:12 crc kubenswrapper[4721]: I0128 18:55:12.350408 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-spxh4"] Jan 28 18:55:12 crc kubenswrapper[4721]: I0128 18:55:12.431122 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-f877ddd87-fc8zl" Jan 28 18:55:12 crc kubenswrapper[4721]: I0128 18:55:12.443869 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-76fdw" Jan 28 18:55:12 crc kubenswrapper[4721]: I0128 18:55:12.452715 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d2jrb\" (UniqueName: \"kubernetes.io/projected/b4e7a8f9-bf9f-4093-86d5-b7f5f6d925d7-kube-api-access-d2jrb\") pod \"cinder-db-sync-spxh4\" (UID: \"b4e7a8f9-bf9f-4093-86d5-b7f5f6d925d7\") " pod="openstack/cinder-db-sync-spxh4" Jan 28 18:55:12 crc kubenswrapper[4721]: I0128 18:55:12.452785 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/d03058f5-d416-467a-b33c-36de7e5b6008-config\") pod \"neutron-db-sync-g5v9q\" (UID: \"d03058f5-d416-467a-b33c-36de7e5b6008\") " pod="openstack/neutron-db-sync-g5v9q" Jan 28 18:55:12 crc kubenswrapper[4721]: I0128 18:55:12.452832 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b4e7a8f9-bf9f-4093-86d5-b7f5f6d925d7-config-data\") pod \"cinder-db-sync-spxh4\" (UID: \"b4e7a8f9-bf9f-4093-86d5-b7f5f6d925d7\") " pod="openstack/cinder-db-sync-spxh4" Jan 28 18:55:12 crc kubenswrapper[4721]: I0128 18:55:12.452847 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/b4e7a8f9-bf9f-4093-86d5-b7f5f6d925d7-db-sync-config-data\") pod \"cinder-db-sync-spxh4\" (UID: \"b4e7a8f9-bf9f-4093-86d5-b7f5f6d925d7\") " pod="openstack/cinder-db-sync-spxh4" Jan 28 18:55:12 crc kubenswrapper[4721]: I0128 18:55:12.452886 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xhsct\" (UniqueName: \"kubernetes.io/projected/d03058f5-d416-467a-b33c-36de7e5b6008-kube-api-access-xhsct\") pod \"neutron-db-sync-g5v9q\" (UID: \"d03058f5-d416-467a-b33c-36de7e5b6008\") " pod="openstack/neutron-db-sync-g5v9q" Jan 28 18:55:12 crc kubenswrapper[4721]: I0128 18:55:12.452923 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b4e7a8f9-bf9f-4093-86d5-b7f5f6d925d7-combined-ca-bundle\") pod \"cinder-db-sync-spxh4\" (UID: \"b4e7a8f9-bf9f-4093-86d5-b7f5f6d925d7\") " pod="openstack/cinder-db-sync-spxh4" Jan 28 18:55:12 crc kubenswrapper[4721]: I0128 18:55:12.452960 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/b4e7a8f9-bf9f-4093-86d5-b7f5f6d925d7-etc-machine-id\") pod \"cinder-db-sync-spxh4\" (UID: \"b4e7a8f9-bf9f-4093-86d5-b7f5f6d925d7\") " pod="openstack/cinder-db-sync-spxh4" Jan 28 18:55:12 crc kubenswrapper[4721]: I0128 18:55:12.452985 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b4e7a8f9-bf9f-4093-86d5-b7f5f6d925d7-scripts\") pod \"cinder-db-sync-spxh4\" (UID: \"b4e7a8f9-bf9f-4093-86d5-b7f5f6d925d7\") " pod="openstack/cinder-db-sync-spxh4" Jan 28 18:55:12 crc kubenswrapper[4721]: I0128 18:55:12.453009 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d03058f5-d416-467a-b33c-36de7e5b6008-combined-ca-bundle\") pod \"neutron-db-sync-g5v9q\" (UID: \"d03058f5-d416-467a-b33c-36de7e5b6008\") " pod="openstack/neutron-db-sync-g5v9q" Jan 28 
Jan 28 18:55:12 crc kubenswrapper[4721]: I0128 18:55:12.485013 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/d03058f5-d416-467a-b33c-36de7e5b6008-config\") pod \"neutron-db-sync-g5v9q\" (UID: \"d03058f5-d416-467a-b33c-36de7e5b6008\") " pod="openstack/neutron-db-sync-g5v9q"
Jan 28 18:55:12 crc kubenswrapper[4721]: I0128 18:55:12.490445 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"]
Jan 28 18:55:12 crc kubenswrapper[4721]: I0128 18:55:12.504951 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 28 18:55:12 crc kubenswrapper[4721]: I0128 18:55:12.510024 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts"
Jan 28 18:55:12 crc kubenswrapper[4721]: I0128 18:55:12.510284 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data"
Jan 28 18:55:12 crc kubenswrapper[4721]: I0128 18:55:12.526306 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xhsct\" (UniqueName: \"kubernetes.io/projected/d03058f5-d416-467a-b33c-36de7e5b6008-kube-api-access-xhsct\") pod \"neutron-db-sync-g5v9q\" (UID: \"d03058f5-d416-467a-b33c-36de7e5b6008\") " pod="openstack/neutron-db-sync-g5v9q"
Jan 28 18:55:12 crc kubenswrapper[4721]: I0128 18:55:12.544556 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Jan 28 18:55:12 crc kubenswrapper[4721]: I0128 18:55:12.554924 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b4e7a8f9-bf9f-4093-86d5-b7f5f6d925d7-config-data\") pod \"cinder-db-sync-spxh4\" (UID: \"b4e7a8f9-bf9f-4093-86d5-b7f5f6d925d7\") " pod="openstack/cinder-db-sync-spxh4"
Jan 28 18:55:12 crc kubenswrapper[4721]: I0128 18:55:12.554985 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/b4e7a8f9-bf9f-4093-86d5-b7f5f6d925d7-db-sync-config-data\") pod \"cinder-db-sync-spxh4\" (UID: \"b4e7a8f9-bf9f-4093-86d5-b7f5f6d925d7\") " pod="openstack/cinder-db-sync-spxh4"
Jan 28 18:55:12 crc kubenswrapper[4721]: I0128 18:55:12.555046 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a423fddb-4a71-416a-8138-63d58b0350fb-log-httpd\") pod \"ceilometer-0\" (UID: \"a423fddb-4a71-416a-8138-63d58b0350fb\") " pod="openstack/ceilometer-0"
Jan 28 18:55:12 crc kubenswrapper[4721]: I0128 18:55:12.555105 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a423fddb-4a71-416a-8138-63d58b0350fb-config-data\") pod \"ceilometer-0\" (UID: \"a423fddb-4a71-416a-8138-63d58b0350fb\") " pod="openstack/ceilometer-0"
Jan 28 18:55:12 crc kubenswrapper[4721]: I0128 18:55:12.555139 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a423fddb-4a71-416a-8138-63d58b0350fb-run-httpd\") pod \"ceilometer-0\" (UID: \"a423fddb-4a71-416a-8138-63d58b0350fb\") " pod="openstack/ceilometer-0"
Jan 28 18:55:12 crc kubenswrapper[4721]: I0128 18:55:12.555166 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7xmnw\" (UniqueName: \"kubernetes.io/projected/a423fddb-4a71-416a-8138-63d58b0350fb-kube-api-access-7xmnw\") pod \"ceilometer-0\" (UID: \"a423fddb-4a71-416a-8138-63d58b0350fb\") " pod="openstack/ceilometer-0"
Jan 28 18:55:12 crc kubenswrapper[4721]: I0128 18:55:12.555210 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b4e7a8f9-bf9f-4093-86d5-b7f5f6d925d7-combined-ca-bundle\") pod \"cinder-db-sync-spxh4\" (UID: \"b4e7a8f9-bf9f-4093-86d5-b7f5f6d925d7\") " pod="openstack/cinder-db-sync-spxh4"
Jan 28 18:55:12 crc kubenswrapper[4721]: I0128 18:55:12.555261 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a423fddb-4a71-416a-8138-63d58b0350fb-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"a423fddb-4a71-416a-8138-63d58b0350fb\") " pod="openstack/ceilometer-0"
Jan 28 18:55:12 crc kubenswrapper[4721]: I0128 18:55:12.555288 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/b4e7a8f9-bf9f-4093-86d5-b7f5f6d925d7-etc-machine-id\") pod \"cinder-db-sync-spxh4\" (UID: \"b4e7a8f9-bf9f-4093-86d5-b7f5f6d925d7\") " pod="openstack/cinder-db-sync-spxh4"
Jan 28 18:55:12 crc kubenswrapper[4721]: I0128 18:55:12.555317 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a423fddb-4a71-416a-8138-63d58b0350fb-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"a423fddb-4a71-416a-8138-63d58b0350fb\") " pod="openstack/ceilometer-0"
Jan 28 18:55:12 crc kubenswrapper[4721]: I0128 18:55:12.555346 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b4e7a8f9-bf9f-4093-86d5-b7f5f6d925d7-scripts\") pod \"cinder-db-sync-spxh4\" (UID: \"b4e7a8f9-bf9f-4093-86d5-b7f5f6d925d7\") " pod="openstack/cinder-db-sync-spxh4"
Jan 28 18:55:12 crc kubenswrapper[4721]: I0128 18:55:12.555440 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d2jrb\" (UniqueName: \"kubernetes.io/projected/b4e7a8f9-bf9f-4093-86d5-b7f5f6d925d7-kube-api-access-d2jrb\") pod \"cinder-db-sync-spxh4\" (UID: \"b4e7a8f9-bf9f-4093-86d5-b7f5f6d925d7\") " pod="openstack/cinder-db-sync-spxh4"
Jan 28 18:55:12 crc kubenswrapper[4721]: I0128 18:55:12.555524 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a423fddb-4a71-416a-8138-63d58b0350fb-scripts\") pod \"ceilometer-0\" (UID: \"a423fddb-4a71-416a-8138-63d58b0350fb\") " pod="openstack/ceilometer-0"
Jan 28 18:55:12 crc kubenswrapper[4721]: I0128 18:55:12.555671 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/b4e7a8f9-bf9f-4093-86d5-b7f5f6d925d7-etc-machine-id\") pod \"cinder-db-sync-spxh4\" (UID: \"b4e7a8f9-bf9f-4093-86d5-b7f5f6d925d7\") " pod="openstack/cinder-db-sync-spxh4"
Jan 28 18:55:12 crc kubenswrapper[4721]: I0128 18:55:12.591266 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-f877ddd87-fc8zl"]
Jan 28 18:55:12 crc kubenswrapper[4721]: I0128 18:55:12.610347 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b4e7a8f9-bf9f-4093-86d5-b7f5f6d925d7-scripts\") pod \"cinder-db-sync-spxh4\" (UID: \"b4e7a8f9-bf9f-4093-86d5-b7f5f6d925d7\") " pod="openstack/cinder-db-sync-spxh4" Jan 28 18:55:12 crc kubenswrapper[4721]: I0128 18:55:12.622274 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-gdk5z"] Jan 28 18:55:12 crc kubenswrapper[4721]: I0128 18:55:12.623823 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-gdk5z" Jan 28 18:55:12 crc kubenswrapper[4721]: I0128 18:55:12.645955 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Jan 28 18:55:12 crc kubenswrapper[4721]: I0128 18:55:12.646259 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Jan 28 18:55:12 crc kubenswrapper[4721]: I0128 18:55:12.650688 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-rv5sq" Jan 28 18:55:12 crc kubenswrapper[4721]: I0128 18:55:12.662496 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a423fddb-4a71-416a-8138-63d58b0350fb-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"a423fddb-4a71-416a-8138-63d58b0350fb\") " pod="openstack/ceilometer-0" Jan 28 18:55:12 crc kubenswrapper[4721]: I0128 18:55:12.662671 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4ceee9a0-8f8f-46cc-a090-f31b224fe8a9-logs\") pod \"placement-db-sync-gdk5z\" (UID: \"4ceee9a0-8f8f-46cc-a090-f31b224fe8a9\") " pod="openstack/placement-db-sync-gdk5z" Jan 28 18:55:12 crc kubenswrapper[4721]: I0128 18:55:12.662763 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a423fddb-4a71-416a-8138-63d58b0350fb-scripts\") pod \"ceilometer-0\" (UID: \"a423fddb-4a71-416a-8138-63d58b0350fb\") " pod="openstack/ceilometer-0" Jan 28 18:55:12 crc kubenswrapper[4721]: I0128 18:55:12.663248 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d2jrb\" (UniqueName: \"kubernetes.io/projected/b4e7a8f9-bf9f-4093-86d5-b7f5f6d925d7-kube-api-access-d2jrb\") pod \"cinder-db-sync-spxh4\" (UID: \"b4e7a8f9-bf9f-4093-86d5-b7f5f6d925d7\") " pod="openstack/cinder-db-sync-spxh4" Jan 28 18:55:12 crc kubenswrapper[4721]: I0128 18:55:12.673595 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4ceee9a0-8f8f-46cc-a090-f31b224fe8a9-config-data\") pod \"placement-db-sync-gdk5z\" (UID: \"4ceee9a0-8f8f-46cc-a090-f31b224fe8a9\") " pod="openstack/placement-db-sync-gdk5z" Jan 28 18:55:12 crc kubenswrapper[4721]: I0128 18:55:12.673738 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4ceee9a0-8f8f-46cc-a090-f31b224fe8a9-combined-ca-bundle\") pod \"placement-db-sync-gdk5z\" (UID: 
\"4ceee9a0-8f8f-46cc-a090-f31b224fe8a9\") " pod="openstack/placement-db-sync-gdk5z" Jan 28 18:55:12 crc kubenswrapper[4721]: I0128 18:55:12.673790 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a423fddb-4a71-416a-8138-63d58b0350fb-log-httpd\") pod \"ceilometer-0\" (UID: \"a423fddb-4a71-416a-8138-63d58b0350fb\") " pod="openstack/ceilometer-0" Jan 28 18:55:12 crc kubenswrapper[4721]: I0128 18:55:12.673822 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4ceee9a0-8f8f-46cc-a090-f31b224fe8a9-scripts\") pod \"placement-db-sync-gdk5z\" (UID: \"4ceee9a0-8f8f-46cc-a090-f31b224fe8a9\") " pod="openstack/placement-db-sync-gdk5z" Jan 28 18:55:12 crc kubenswrapper[4721]: I0128 18:55:12.673847 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a423fddb-4a71-416a-8138-63d58b0350fb-config-data\") pod \"ceilometer-0\" (UID: \"a423fddb-4a71-416a-8138-63d58b0350fb\") " pod="openstack/ceilometer-0" Jan 28 18:55:12 crc kubenswrapper[4721]: I0128 18:55:12.673903 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9h95x\" (UniqueName: \"kubernetes.io/projected/4ceee9a0-8f8f-46cc-a090-f31b224fe8a9-kube-api-access-9h95x\") pod \"placement-db-sync-gdk5z\" (UID: \"4ceee9a0-8f8f-46cc-a090-f31b224fe8a9\") " pod="openstack/placement-db-sync-gdk5z" Jan 28 18:55:12 crc kubenswrapper[4721]: I0128 18:55:12.673938 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a423fddb-4a71-416a-8138-63d58b0350fb-run-httpd\") pod \"ceilometer-0\" (UID: \"a423fddb-4a71-416a-8138-63d58b0350fb\") " pod="openstack/ceilometer-0" Jan 28 18:55:12 crc kubenswrapper[4721]: I0128 18:55:12.673978 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7xmnw\" (UniqueName: \"kubernetes.io/projected/a423fddb-4a71-416a-8138-63d58b0350fb-kube-api-access-7xmnw\") pod \"ceilometer-0\" (UID: \"a423fddb-4a71-416a-8138-63d58b0350fb\") " pod="openstack/ceilometer-0" Jan 28 18:55:12 crc kubenswrapper[4721]: I0128 18:55:12.674093 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a423fddb-4a71-416a-8138-63d58b0350fb-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"a423fddb-4a71-416a-8138-63d58b0350fb\") " pod="openstack/ceilometer-0" Jan 28 18:55:12 crc kubenswrapper[4721]: I0128 18:55:12.675051 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a423fddb-4a71-416a-8138-63d58b0350fb-log-httpd\") pod \"ceilometer-0\" (UID: \"a423fddb-4a71-416a-8138-63d58b0350fb\") " pod="openstack/ceilometer-0" Jan 28 18:55:12 crc kubenswrapper[4721]: I0128 18:55:12.676247 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a423fddb-4a71-416a-8138-63d58b0350fb-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"a423fddb-4a71-416a-8138-63d58b0350fb\") " pod="openstack/ceilometer-0" Jan 28 18:55:12 crc kubenswrapper[4721]: I0128 18:55:12.678572 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/a423fddb-4a71-416a-8138-63d58b0350fb-run-httpd\") pod \"ceilometer-0\" (UID: \"a423fddb-4a71-416a-8138-63d58b0350fb\") " pod="openstack/ceilometer-0" Jan 28 18:55:12 crc kubenswrapper[4721]: I0128 18:55:12.681321 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b4e7a8f9-bf9f-4093-86d5-b7f5f6d925d7-config-data\") pod \"cinder-db-sync-spxh4\" (UID: \"b4e7a8f9-bf9f-4093-86d5-b7f5f6d925d7\") " pod="openstack/cinder-db-sync-spxh4" Jan 28 18:55:12 crc kubenswrapper[4721]: I0128 18:55:12.681725 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/b4e7a8f9-bf9f-4093-86d5-b7f5f6d925d7-db-sync-config-data\") pod \"cinder-db-sync-spxh4\" (UID: \"b4e7a8f9-bf9f-4093-86d5-b7f5f6d925d7\") " pod="openstack/cinder-db-sync-spxh4" Jan 28 18:55:12 crc kubenswrapper[4721]: I0128 18:55:12.709441 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b4e7a8f9-bf9f-4093-86d5-b7f5f6d925d7-combined-ca-bundle\") pod \"cinder-db-sync-spxh4\" (UID: \"b4e7a8f9-bf9f-4093-86d5-b7f5f6d925d7\") " pod="openstack/cinder-db-sync-spxh4" Jan 28 18:55:12 crc kubenswrapper[4721]: I0128 18:55:12.723375 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-g5v9q" Jan 28 18:55:12 crc kubenswrapper[4721]: I0128 18:55:12.733496 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a423fddb-4a71-416a-8138-63d58b0350fb-scripts\") pod \"ceilometer-0\" (UID: \"a423fddb-4a71-416a-8138-63d58b0350fb\") " pod="openstack/ceilometer-0" Jan 28 18:55:12 crc kubenswrapper[4721]: I0128 18:55:12.734101 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-gdk5z"] Jan 28 18:55:12 crc kubenswrapper[4721]: I0128 18:55:12.738497 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a423fddb-4a71-416a-8138-63d58b0350fb-config-data\") pod \"ceilometer-0\" (UID: \"a423fddb-4a71-416a-8138-63d58b0350fb\") " pod="openstack/ceilometer-0" Jan 28 18:55:12 crc kubenswrapper[4721]: I0128 18:55:12.741202 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7xmnw\" (UniqueName: \"kubernetes.io/projected/a423fddb-4a71-416a-8138-63d58b0350fb-kube-api-access-7xmnw\") pod \"ceilometer-0\" (UID: \"a423fddb-4a71-416a-8138-63d58b0350fb\") " pod="openstack/ceilometer-0" Jan 28 18:55:12 crc kubenswrapper[4721]: I0128 18:55:12.770472 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-spxh4" Jan 28 18:55:12 crc kubenswrapper[4721]: I0128 18:55:12.776582 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a423fddb-4a71-416a-8138-63d58b0350fb-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"a423fddb-4a71-416a-8138-63d58b0350fb\") " pod="openstack/ceilometer-0" Jan 28 18:55:12 crc kubenswrapper[4721]: I0128 18:55:12.780401 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4ceee9a0-8f8f-46cc-a090-f31b224fe8a9-config-data\") pod \"placement-db-sync-gdk5z\" (UID: \"4ceee9a0-8f8f-46cc-a090-f31b224fe8a9\") " pod="openstack/placement-db-sync-gdk5z" Jan 28 18:55:12 crc kubenswrapper[4721]: I0128 18:55:12.780496 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4ceee9a0-8f8f-46cc-a090-f31b224fe8a9-combined-ca-bundle\") pod \"placement-db-sync-gdk5z\" (UID: \"4ceee9a0-8f8f-46cc-a090-f31b224fe8a9\") " pod="openstack/placement-db-sync-gdk5z" Jan 28 18:55:12 crc kubenswrapper[4721]: I0128 18:55:12.780563 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4ceee9a0-8f8f-46cc-a090-f31b224fe8a9-scripts\") pod \"placement-db-sync-gdk5z\" (UID: \"4ceee9a0-8f8f-46cc-a090-f31b224fe8a9\") " pod="openstack/placement-db-sync-gdk5z" Jan 28 18:55:12 crc kubenswrapper[4721]: I0128 18:55:12.780620 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9h95x\" (UniqueName: \"kubernetes.io/projected/4ceee9a0-8f8f-46cc-a090-f31b224fe8a9-kube-api-access-9h95x\") pod \"placement-db-sync-gdk5z\" (UID: \"4ceee9a0-8f8f-46cc-a090-f31b224fe8a9\") " pod="openstack/placement-db-sync-gdk5z" Jan 28 18:55:12 crc kubenswrapper[4721]: I0128 18:55:12.780897 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4ceee9a0-8f8f-46cc-a090-f31b224fe8a9-logs\") pod \"placement-db-sync-gdk5z\" (UID: \"4ceee9a0-8f8f-46cc-a090-f31b224fe8a9\") " pod="openstack/placement-db-sync-gdk5z" Jan 28 18:55:12 crc kubenswrapper[4721]: I0128 18:55:12.782200 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4ceee9a0-8f8f-46cc-a090-f31b224fe8a9-logs\") pod \"placement-db-sync-gdk5z\" (UID: \"4ceee9a0-8f8f-46cc-a090-f31b224fe8a9\") " pod="openstack/placement-db-sync-gdk5z" Jan 28 18:55:12 crc kubenswrapper[4721]: I0128 18:55:12.812342 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4ceee9a0-8f8f-46cc-a090-f31b224fe8a9-config-data\") pod \"placement-db-sync-gdk5z\" (UID: \"4ceee9a0-8f8f-46cc-a090-f31b224fe8a9\") " pod="openstack/placement-db-sync-gdk5z" Jan 28 18:55:12 crc kubenswrapper[4721]: I0128 18:55:12.812394 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4ceee9a0-8f8f-46cc-a090-f31b224fe8a9-combined-ca-bundle\") pod \"placement-db-sync-gdk5z\" (UID: \"4ceee9a0-8f8f-46cc-a090-f31b224fe8a9\") " pod="openstack/placement-db-sync-gdk5z" Jan 28 18:55:12 crc kubenswrapper[4721]: I0128 18:55:12.812540 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/4ceee9a0-8f8f-46cc-a090-f31b224fe8a9-scripts\") pod \"placement-db-sync-gdk5z\" (UID: \"4ceee9a0-8f8f-46cc-a090-f31b224fe8a9\") " pod="openstack/placement-db-sync-gdk5z" Jan 28 18:55:12 crc kubenswrapper[4721]: I0128 18:55:12.868011 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 28 18:55:12 crc kubenswrapper[4721]: I0128 18:55:12.899404 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-68dcc9cf6f-5lt72"] Jan 28 18:55:12 crc kubenswrapper[4721]: I0128 18:55:12.962043 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9h95x\" (UniqueName: \"kubernetes.io/projected/4ceee9a0-8f8f-46cc-a090-f31b224fe8a9-kube-api-access-9h95x\") pod \"placement-db-sync-gdk5z\" (UID: \"4ceee9a0-8f8f-46cc-a090-f31b224fe8a9\") " pod="openstack/placement-db-sync-gdk5z" Jan 28 18:55:12 crc kubenswrapper[4721]: I0128 18:55:12.972412 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-68dcc9cf6f-5lt72" Jan 28 18:55:13 crc kubenswrapper[4721]: I0128 18:55:13.048987 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-sync-4rqtv"] Jan 28 18:55:13 crc kubenswrapper[4721]: I0128 18:55:13.051008 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-4rqtv" Jan 28 18:55:13 crc kubenswrapper[4721]: I0128 18:55:13.066627 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-kxtdz" Jan 28 18:55:13 crc kubenswrapper[4721]: I0128 18:55:13.066924 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Jan 28 18:55:13 crc kubenswrapper[4721]: I0128 18:55:13.090030 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cloudkitty-db-sync-qbnjm"] Jan 28 18:55:13 crc kubenswrapper[4721]: I0128 18:55:13.091979 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cloudkitty-db-sync-qbnjm" Jan 28 18:55:13 crc kubenswrapper[4721]: I0128 18:55:13.095276 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-config-data" Jan 28 18:55:13 crc kubenswrapper[4721]: I0128 18:55:13.095726 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-scripts" Jan 28 18:55:13 crc kubenswrapper[4721]: I0128 18:55:13.096020 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-cloudkitty-dockercfg-wcp5f" Jan 28 18:55:13 crc kubenswrapper[4721]: I0128 18:55:13.096584 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cloudkitty-client-internal" Jan 28 18:55:13 crc kubenswrapper[4721]: I0128 18:55:13.107353 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-68dcc9cf6f-5lt72"] Jan 28 18:55:13 crc kubenswrapper[4721]: I0128 18:55:13.148895 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-4rqtv"] Jan 28 18:55:13 crc kubenswrapper[4721]: I0128 18:55:13.160381 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-db-sync-qbnjm"] Jan 28 18:55:13 crc kubenswrapper[4721]: I0128 18:55:13.167598 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5a0d808e-8db2-4d8b-a02e-5f04c991fb44-combined-ca-bundle\") pod \"barbican-db-sync-4rqtv\" (UID: \"5a0d808e-8db2-4d8b-a02e-5f04c991fb44\") " pod="openstack/barbican-db-sync-4rqtv" Jan 28 18:55:13 crc kubenswrapper[4721]: I0128 18:55:13.167676 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/da543bfe-af20-47a0-b1a2-0bd8b36a94ec-ovsdbserver-sb\") pod \"dnsmasq-dns-68dcc9cf6f-5lt72\" (UID: \"da543bfe-af20-47a0-b1a2-0bd8b36a94ec\") " pod="openstack/dnsmasq-dns-68dcc9cf6f-5lt72" Jan 28 18:55:13 crc kubenswrapper[4721]: I0128 18:55:13.167704 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/da543bfe-af20-47a0-b1a2-0bd8b36a94ec-config\") pod \"dnsmasq-dns-68dcc9cf6f-5lt72\" (UID: \"da543bfe-af20-47a0-b1a2-0bd8b36a94ec\") " pod="openstack/dnsmasq-dns-68dcc9cf6f-5lt72" Jan 28 18:55:13 crc kubenswrapper[4721]: I0128 18:55:13.167743 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lfcbb\" (UniqueName: \"kubernetes.io/projected/da543bfe-af20-47a0-b1a2-0bd8b36a94ec-kube-api-access-lfcbb\") pod \"dnsmasq-dns-68dcc9cf6f-5lt72\" (UID: \"da543bfe-af20-47a0-b1a2-0bd8b36a94ec\") " pod="openstack/dnsmasq-dns-68dcc9cf6f-5lt72" Jan 28 18:55:13 crc kubenswrapper[4721]: I0128 18:55:13.167817 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/5a0d808e-8db2-4d8b-a02e-5f04c991fb44-db-sync-config-data\") pod \"barbican-db-sync-4rqtv\" (UID: \"5a0d808e-8db2-4d8b-a02e-5f04c991fb44\") " pod="openstack/barbican-db-sync-4rqtv" Jan 28 18:55:13 crc kubenswrapper[4721]: I0128 18:55:13.167845 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vch8p\" (UniqueName: \"kubernetes.io/projected/6d4d13db-d2ce-4194-841a-c50b85a2887c-kube-api-access-vch8p\") 
pod \"cloudkitty-db-sync-qbnjm\" (UID: \"6d4d13db-d2ce-4194-841a-c50b85a2887c\") " pod="openstack/cloudkitty-db-sync-qbnjm" Jan 28 18:55:13 crc kubenswrapper[4721]: I0128 18:55:13.167880 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/projected/6d4d13db-d2ce-4194-841a-c50b85a2887c-certs\") pod \"cloudkitty-db-sync-qbnjm\" (UID: \"6d4d13db-d2ce-4194-841a-c50b85a2887c\") " pod="openstack/cloudkitty-db-sync-qbnjm" Jan 28 18:55:13 crc kubenswrapper[4721]: I0128 18:55:13.167931 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qlls6\" (UniqueName: \"kubernetes.io/projected/5a0d808e-8db2-4d8b-a02e-5f04c991fb44-kube-api-access-qlls6\") pod \"barbican-db-sync-4rqtv\" (UID: \"5a0d808e-8db2-4d8b-a02e-5f04c991fb44\") " pod="openstack/barbican-db-sync-4rqtv" Jan 28 18:55:13 crc kubenswrapper[4721]: I0128 18:55:13.167971 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6d4d13db-d2ce-4194-841a-c50b85a2887c-config-data\") pod \"cloudkitty-db-sync-qbnjm\" (UID: \"6d4d13db-d2ce-4194-841a-c50b85a2887c\") " pod="openstack/cloudkitty-db-sync-qbnjm" Jan 28 18:55:13 crc kubenswrapper[4721]: I0128 18:55:13.168002 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/da543bfe-af20-47a0-b1a2-0bd8b36a94ec-ovsdbserver-nb\") pod \"dnsmasq-dns-68dcc9cf6f-5lt72\" (UID: \"da543bfe-af20-47a0-b1a2-0bd8b36a94ec\") " pod="openstack/dnsmasq-dns-68dcc9cf6f-5lt72" Jan 28 18:55:13 crc kubenswrapper[4721]: I0128 18:55:13.168041 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6d4d13db-d2ce-4194-841a-c50b85a2887c-scripts\") pod \"cloudkitty-db-sync-qbnjm\" (UID: \"6d4d13db-d2ce-4194-841a-c50b85a2887c\") " pod="openstack/cloudkitty-db-sync-qbnjm" Jan 28 18:55:13 crc kubenswrapper[4721]: I0128 18:55:13.168072 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6d4d13db-d2ce-4194-841a-c50b85a2887c-combined-ca-bundle\") pod \"cloudkitty-db-sync-qbnjm\" (UID: \"6d4d13db-d2ce-4194-841a-c50b85a2887c\") " pod="openstack/cloudkitty-db-sync-qbnjm" Jan 28 18:55:13 crc kubenswrapper[4721]: I0128 18:55:13.168099 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/da543bfe-af20-47a0-b1a2-0bd8b36a94ec-dns-svc\") pod \"dnsmasq-dns-68dcc9cf6f-5lt72\" (UID: \"da543bfe-af20-47a0-b1a2-0bd8b36a94ec\") " pod="openstack/dnsmasq-dns-68dcc9cf6f-5lt72" Jan 28 18:55:13 crc kubenswrapper[4721]: I0128 18:55:13.200919 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-gdk5z" Jan 28 18:55:13 crc kubenswrapper[4721]: I0128 18:55:13.272140 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6d4d13db-d2ce-4194-841a-c50b85a2887c-combined-ca-bundle\") pod \"cloudkitty-db-sync-qbnjm\" (UID: \"6d4d13db-d2ce-4194-841a-c50b85a2887c\") " pod="openstack/cloudkitty-db-sync-qbnjm" Jan 28 18:55:13 crc kubenswrapper[4721]: I0128 18:55:13.272223 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/da543bfe-af20-47a0-b1a2-0bd8b36a94ec-dns-svc\") pod \"dnsmasq-dns-68dcc9cf6f-5lt72\" (UID: \"da543bfe-af20-47a0-b1a2-0bd8b36a94ec\") " pod="openstack/dnsmasq-dns-68dcc9cf6f-5lt72" Jan 28 18:55:13 crc kubenswrapper[4721]: I0128 18:55:13.272267 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5a0d808e-8db2-4d8b-a02e-5f04c991fb44-combined-ca-bundle\") pod \"barbican-db-sync-4rqtv\" (UID: \"5a0d808e-8db2-4d8b-a02e-5f04c991fb44\") " pod="openstack/barbican-db-sync-4rqtv" Jan 28 18:55:13 crc kubenswrapper[4721]: I0128 18:55:13.272323 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/da543bfe-af20-47a0-b1a2-0bd8b36a94ec-ovsdbserver-sb\") pod \"dnsmasq-dns-68dcc9cf6f-5lt72\" (UID: \"da543bfe-af20-47a0-b1a2-0bd8b36a94ec\") " pod="openstack/dnsmasq-dns-68dcc9cf6f-5lt72" Jan 28 18:55:13 crc kubenswrapper[4721]: I0128 18:55:13.273575 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/da543bfe-af20-47a0-b1a2-0bd8b36a94ec-config\") pod \"dnsmasq-dns-68dcc9cf6f-5lt72\" (UID: \"da543bfe-af20-47a0-b1a2-0bd8b36a94ec\") " pod="openstack/dnsmasq-dns-68dcc9cf6f-5lt72" Jan 28 18:55:13 crc kubenswrapper[4721]: I0128 18:55:13.273608 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/da543bfe-af20-47a0-b1a2-0bd8b36a94ec-ovsdbserver-sb\") pod \"dnsmasq-dns-68dcc9cf6f-5lt72\" (UID: \"da543bfe-af20-47a0-b1a2-0bd8b36a94ec\") " pod="openstack/dnsmasq-dns-68dcc9cf6f-5lt72" Jan 28 18:55:13 crc kubenswrapper[4721]: I0128 18:55:13.273649 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lfcbb\" (UniqueName: \"kubernetes.io/projected/da543bfe-af20-47a0-b1a2-0bd8b36a94ec-kube-api-access-lfcbb\") pod \"dnsmasq-dns-68dcc9cf6f-5lt72\" (UID: \"da543bfe-af20-47a0-b1a2-0bd8b36a94ec\") " pod="openstack/dnsmasq-dns-68dcc9cf6f-5lt72" Jan 28 18:55:13 crc kubenswrapper[4721]: I0128 18:55:13.273801 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/5a0d808e-8db2-4d8b-a02e-5f04c991fb44-db-sync-config-data\") pod \"barbican-db-sync-4rqtv\" (UID: \"5a0d808e-8db2-4d8b-a02e-5f04c991fb44\") " pod="openstack/barbican-db-sync-4rqtv" Jan 28 18:55:13 crc kubenswrapper[4721]: I0128 18:55:13.273829 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vch8p\" (UniqueName: \"kubernetes.io/projected/6d4d13db-d2ce-4194-841a-c50b85a2887c-kube-api-access-vch8p\") pod \"cloudkitty-db-sync-qbnjm\" (UID: \"6d4d13db-d2ce-4194-841a-c50b85a2887c\") " pod="openstack/cloudkitty-db-sync-qbnjm" Jan 28 18:55:13 crc 
kubenswrapper[4721]: I0128 18:55:13.273879 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/projected/6d4d13db-d2ce-4194-841a-c50b85a2887c-certs\") pod \"cloudkitty-db-sync-qbnjm\" (UID: \"6d4d13db-d2ce-4194-841a-c50b85a2887c\") " pod="openstack/cloudkitty-db-sync-qbnjm" Jan 28 18:55:13 crc kubenswrapper[4721]: I0128 18:55:13.273965 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qlls6\" (UniqueName: \"kubernetes.io/projected/5a0d808e-8db2-4d8b-a02e-5f04c991fb44-kube-api-access-qlls6\") pod \"barbican-db-sync-4rqtv\" (UID: \"5a0d808e-8db2-4d8b-a02e-5f04c991fb44\") " pod="openstack/barbican-db-sync-4rqtv" Jan 28 18:55:13 crc kubenswrapper[4721]: I0128 18:55:13.274030 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6d4d13db-d2ce-4194-841a-c50b85a2887c-config-data\") pod \"cloudkitty-db-sync-qbnjm\" (UID: \"6d4d13db-d2ce-4194-841a-c50b85a2887c\") " pod="openstack/cloudkitty-db-sync-qbnjm" Jan 28 18:55:13 crc kubenswrapper[4721]: I0128 18:55:13.274074 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/da543bfe-af20-47a0-b1a2-0bd8b36a94ec-ovsdbserver-nb\") pod \"dnsmasq-dns-68dcc9cf6f-5lt72\" (UID: \"da543bfe-af20-47a0-b1a2-0bd8b36a94ec\") " pod="openstack/dnsmasq-dns-68dcc9cf6f-5lt72" Jan 28 18:55:13 crc kubenswrapper[4721]: I0128 18:55:13.274147 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6d4d13db-d2ce-4194-841a-c50b85a2887c-scripts\") pod \"cloudkitty-db-sync-qbnjm\" (UID: \"6d4d13db-d2ce-4194-841a-c50b85a2887c\") " pod="openstack/cloudkitty-db-sync-qbnjm" Jan 28 18:55:13 crc kubenswrapper[4721]: I0128 18:55:13.280809 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/da543bfe-af20-47a0-b1a2-0bd8b36a94ec-config\") pod \"dnsmasq-dns-68dcc9cf6f-5lt72\" (UID: \"da543bfe-af20-47a0-b1a2-0bd8b36a94ec\") " pod="openstack/dnsmasq-dns-68dcc9cf6f-5lt72" Jan 28 18:55:13 crc kubenswrapper[4721]: I0128 18:55:13.281765 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/da543bfe-af20-47a0-b1a2-0bd8b36a94ec-dns-svc\") pod \"dnsmasq-dns-68dcc9cf6f-5lt72\" (UID: \"da543bfe-af20-47a0-b1a2-0bd8b36a94ec\") " pod="openstack/dnsmasq-dns-68dcc9cf6f-5lt72" Jan 28 18:55:13 crc kubenswrapper[4721]: I0128 18:55:13.282432 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/da543bfe-af20-47a0-b1a2-0bd8b36a94ec-ovsdbserver-nb\") pod \"dnsmasq-dns-68dcc9cf6f-5lt72\" (UID: \"da543bfe-af20-47a0-b1a2-0bd8b36a94ec\") " pod="openstack/dnsmasq-dns-68dcc9cf6f-5lt72" Jan 28 18:55:13 crc kubenswrapper[4721]: I0128 18:55:13.316819 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6d4d13db-d2ce-4194-841a-c50b85a2887c-combined-ca-bundle\") pod \"cloudkitty-db-sync-qbnjm\" (UID: \"6d4d13db-d2ce-4194-841a-c50b85a2887c\") " pod="openstack/cloudkitty-db-sync-qbnjm" Jan 28 18:55:13 crc kubenswrapper[4721]: I0128 18:55:13.318481 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/6d4d13db-d2ce-4194-841a-c50b85a2887c-scripts\") pod \"cloudkitty-db-sync-qbnjm\" (UID: \"6d4d13db-d2ce-4194-841a-c50b85a2887c\") " pod="openstack/cloudkitty-db-sync-qbnjm" Jan 28 18:55:13 crc kubenswrapper[4721]: I0128 18:55:13.320555 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/5a0d808e-8db2-4d8b-a02e-5f04c991fb44-db-sync-config-data\") pod \"barbican-db-sync-4rqtv\" (UID: \"5a0d808e-8db2-4d8b-a02e-5f04c991fb44\") " pod="openstack/barbican-db-sync-4rqtv" Jan 28 18:55:13 crc kubenswrapper[4721]: I0128 18:55:13.320774 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5a0d808e-8db2-4d8b-a02e-5f04c991fb44-combined-ca-bundle\") pod \"barbican-db-sync-4rqtv\" (UID: \"5a0d808e-8db2-4d8b-a02e-5f04c991fb44\") " pod="openstack/barbican-db-sync-4rqtv" Jan 28 18:55:13 crc kubenswrapper[4721]: I0128 18:55:13.321311 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lfcbb\" (UniqueName: \"kubernetes.io/projected/da543bfe-af20-47a0-b1a2-0bd8b36a94ec-kube-api-access-lfcbb\") pod \"dnsmasq-dns-68dcc9cf6f-5lt72\" (UID: \"da543bfe-af20-47a0-b1a2-0bd8b36a94ec\") " pod="openstack/dnsmasq-dns-68dcc9cf6f-5lt72" Jan 28 18:55:13 crc kubenswrapper[4721]: I0128 18:55:13.322146 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qlls6\" (UniqueName: \"kubernetes.io/projected/5a0d808e-8db2-4d8b-a02e-5f04c991fb44-kube-api-access-qlls6\") pod \"barbican-db-sync-4rqtv\" (UID: \"5a0d808e-8db2-4d8b-a02e-5f04c991fb44\") " pod="openstack/barbican-db-sync-4rqtv" Jan 28 18:55:13 crc kubenswrapper[4721]: I0128 18:55:13.322393 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6d4d13db-d2ce-4194-841a-c50b85a2887c-config-data\") pod \"cloudkitty-db-sync-qbnjm\" (UID: \"6d4d13db-d2ce-4194-841a-c50b85a2887c\") " pod="openstack/cloudkitty-db-sync-qbnjm" Jan 28 18:55:13 crc kubenswrapper[4721]: I0128 18:55:13.339013 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vch8p\" (UniqueName: \"kubernetes.io/projected/6d4d13db-d2ce-4194-841a-c50b85a2887c-kube-api-access-vch8p\") pod \"cloudkitty-db-sync-qbnjm\" (UID: \"6d4d13db-d2ce-4194-841a-c50b85a2887c\") " pod="openstack/cloudkitty-db-sync-qbnjm" Jan 28 18:55:13 crc kubenswrapper[4721]: I0128 18:55:13.341707 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/projected/6d4d13db-d2ce-4194-841a-c50b85a2887c-certs\") pod \"cloudkitty-db-sync-qbnjm\" (UID: \"6d4d13db-d2ce-4194-841a-c50b85a2887c\") " pod="openstack/cloudkitty-db-sync-qbnjm" Jan 28 18:55:13 crc kubenswrapper[4721]: I0128 18:55:13.366394 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-68dcc9cf6f-5lt72" Jan 28 18:55:13 crc kubenswrapper[4721]: I0128 18:55:13.527248 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-4rqtv" Jan 28 18:55:13 crc kubenswrapper[4721]: I0128 18:55:13.554419 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cloudkitty-db-sync-qbnjm" Jan 28 18:55:13 crc kubenswrapper[4721]: I0128 18:55:13.694333 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-f877ddd87-fc8zl"] Jan 28 18:55:13 crc kubenswrapper[4721]: I0128 18:55:13.920392 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-f877ddd87-fc8zl" event={"ID":"0945ec68-90fe-4d65-9910-372bd3c9e88b","Type":"ContainerStarted","Data":"593b75319bee73c1123a492cedb8e4834aabaf5e65d0bab19a998adb1be75186"} Jan 28 18:55:13 crc kubenswrapper[4721]: I0128 18:55:13.973759 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"aa657a81-842e-4292-a71e-e208b4c0bd69","Type":"ContainerStarted","Data":"69f8da15c5fd7fe031dccf5b73553850539afc489ac97e958e06f01235e02864"} Jan 28 18:55:13 crc kubenswrapper[4721]: I0128 18:55:13.973832 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"aa657a81-842e-4292-a71e-e208b4c0bd69","Type":"ContainerStarted","Data":"e7eeb3420c5fd14e7b7765dc39a373e62fb5b0f72ea092331d83c10abc6cea91"} Jan 28 18:55:14 crc kubenswrapper[4721]: I0128 18:55:14.183990 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-76fdw"] Jan 28 18:55:14 crc kubenswrapper[4721]: I0128 18:55:14.217387 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-spxh4"] Jan 28 18:55:14 crc kubenswrapper[4721]: I0128 18:55:14.228696 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 28 18:55:14 crc kubenswrapper[4721]: W0128 18:55:14.254407 4721 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod58139529_dfb2_4f83_bd8f_ed6645cf0e6d.slice/crio-5ef3734e2c90c47606cce721b6648ef517472c9530a40e159a03fec66e5a05ac WatchSource:0}: Error finding container 5ef3734e2c90c47606cce721b6648ef517472c9530a40e159a03fec66e5a05ac: Status 404 returned error can't find the container with id 5ef3734e2c90c47606cce721b6648ef517472c9530a40e159a03fec66e5a05ac Jan 28 18:55:14 crc kubenswrapper[4721]: I0128 18:55:14.264139 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-g5v9q"] Jan 28 18:55:14 crc kubenswrapper[4721]: I0128 18:55:14.582262 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-gdk5z"] Jan 28 18:55:14 crc kubenswrapper[4721]: W0128 18:55:14.632392 4721 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4ceee9a0_8f8f_46cc_a090_f31b224fe8a9.slice/crio-0a663b493fae0ba462d64aee50e91118db9f01bc55e347f17276632bddf90ef2 WatchSource:0}: Error finding container 0a663b493fae0ba462d64aee50e91118db9f01bc55e347f17276632bddf90ef2: Status 404 returned error can't find the container with id 0a663b493fae0ba462d64aee50e91118db9f01bc55e347f17276632bddf90ef2 Jan 28 18:55:14 crc kubenswrapper[4721]: I0128 18:55:14.639347 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-4rqtv"] Jan 28 18:55:14 crc kubenswrapper[4721]: I0128 18:55:14.655758 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-68dcc9cf6f-5lt72"] Jan 28 18:55:14 crc kubenswrapper[4721]: I0128 18:55:14.672162 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-db-sync-qbnjm"] Jan 28 18:55:14 crc kubenswrapper[4721]: I0128 18:55:14.949369 
4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-68dcc9cf6f-5lt72" event={"ID":"da543bfe-af20-47a0-b1a2-0bd8b36a94ec","Type":"ContainerStarted","Data":"2849e273c7fb7b907d2424a8e420b65823e17bfb993a950cdd083d479988c9db"} Jan 28 18:55:14 crc kubenswrapper[4721]: I0128 18:55:14.951330 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a423fddb-4a71-416a-8138-63d58b0350fb","Type":"ContainerStarted","Data":"d401325b9e0306d71f3e564195f62ee8d4ae93c32d74ef8516ca8ebb722e700f"} Jan 28 18:55:14 crc kubenswrapper[4721]: I0128 18:55:14.952817 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-spxh4" event={"ID":"b4e7a8f9-bf9f-4093-86d5-b7f5f6d925d7","Type":"ContainerStarted","Data":"5e26bd5e240e1a23b8a12be4bb2bf825ede9379c912e49630365455c852645d3"} Jan 28 18:55:15 crc kubenswrapper[4721]: I0128 18:55:15.004377 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"aa657a81-842e-4292-a71e-e208b4c0bd69","Type":"ContainerStarted","Data":"f1db8d201dcdee7da01068241b2c84b61720f5dfc3e9ff15f6b906db4fa422f6"} Jan 28 18:55:15 crc kubenswrapper[4721]: I0128 18:55:15.004436 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"aa657a81-842e-4292-a71e-e208b4c0bd69","Type":"ContainerStarted","Data":"0fc7a19a37b652441e07390bd45855de6532b84f939f1d9feed9b7d7f43c5588"} Jan 28 18:55:15 crc kubenswrapper[4721]: I0128 18:55:15.012571 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-4rqtv" event={"ID":"5a0d808e-8db2-4d8b-a02e-5f04c991fb44","Type":"ContainerStarted","Data":"6fbb0d60559c0c77e834c58ae99800ff9b12243ae23ea2a2761f5bbcd6f6f1a5"} Jan 28 18:55:15 crc kubenswrapper[4721]: I0128 18:55:15.025396 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-db-sync-qbnjm" event={"ID":"6d4d13db-d2ce-4194-841a-c50b85a2887c","Type":"ContainerStarted","Data":"f6a20e30099548112c706cf98bb8abea7f1731b3bd5208ec2ec7cc6691dc20ae"} Jan 28 18:55:15 crc kubenswrapper[4721]: I0128 18:55:15.061297 4721 generic.go:334] "Generic (PLEG): container finished" podID="0945ec68-90fe-4d65-9910-372bd3c9e88b" containerID="2247c224e991f8af8ae9c8db92c7a0174e4723ecf399e515e42cd18e411af53e" exitCode=0 Jan 28 18:55:15 crc kubenswrapper[4721]: I0128 18:55:15.062303 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-f877ddd87-fc8zl" event={"ID":"0945ec68-90fe-4d65-9910-372bd3c9e88b","Type":"ContainerDied","Data":"2247c224e991f8af8ae9c8db92c7a0174e4723ecf399e515e42cd18e411af53e"} Jan 28 18:55:15 crc kubenswrapper[4721]: I0128 18:55:15.096326 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-gdk5z" event={"ID":"4ceee9a0-8f8f-46cc-a090-f31b224fe8a9","Type":"ContainerStarted","Data":"0a663b493fae0ba462d64aee50e91118db9f01bc55e347f17276632bddf90ef2"} Jan 28 18:55:15 crc kubenswrapper[4721]: I0128 18:55:15.193640 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-g5v9q" event={"ID":"d03058f5-d416-467a-b33c-36de7e5b6008","Type":"ContainerStarted","Data":"dea0b4596c32b14aa6c542395de1c7c3b3e8187a0308f5d186e77f72d7edd84b"} Jan 28 18:55:15 crc kubenswrapper[4721]: I0128 18:55:15.193709 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-g5v9q" 
event={"ID":"d03058f5-d416-467a-b33c-36de7e5b6008","Type":"ContainerStarted","Data":"7a7a11f296a5bc93b1de74ee528f9312ef8584a3f818339fca072364280ff421"} Jan 28 18:55:15 crc kubenswrapper[4721]: I0128 18:55:15.219424 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-76fdw" event={"ID":"58139529-dfb2-4f83-bd8f-ed6645cf0e6d","Type":"ContainerStarted","Data":"4bb868c782027b9450929d28db2ba013267b613e7f87e574cbc9a843f19d54ac"} Jan 28 18:55:15 crc kubenswrapper[4721]: I0128 18:55:15.219522 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-76fdw" event={"ID":"58139529-dfb2-4f83-bd8f-ed6645cf0e6d","Type":"ContainerStarted","Data":"5ef3734e2c90c47606cce721b6648ef517472c9530a40e159a03fec66e5a05ac"} Jan 28 18:55:15 crc kubenswrapper[4721]: I0128 18:55:15.232485 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-sync-g5v9q" podStartSLOduration=3.232465292 podStartE2EDuration="3.232465292s" podCreationTimestamp="2026-01-28 18:55:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:55:15.230974795 +0000 UTC m=+1280.956280355" watchObservedRunningTime="2026-01-28 18:55:15.232465292 +0000 UTC m=+1280.957770852" Jan 28 18:55:15 crc kubenswrapper[4721]: I0128 18:55:15.308579 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-76fdw" podStartSLOduration=3.30854065 podStartE2EDuration="3.30854065s" podCreationTimestamp="2026-01-28 18:55:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:55:15.26889951 +0000 UTC m=+1280.994205090" watchObservedRunningTime="2026-01-28 18:55:15.30854065 +0000 UTC m=+1281.033846220" Jan 28 18:55:15 crc kubenswrapper[4721]: I0128 18:55:15.356280 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 28 18:55:15 crc kubenswrapper[4721]: I0128 18:55:15.974991 4721 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-f877ddd87-fc8zl" Jan 28 18:55:16 crc kubenswrapper[4721]: I0128 18:55:16.015940 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0945ec68-90fe-4d65-9910-372bd3c9e88b-ovsdbserver-nb\") pod \"0945ec68-90fe-4d65-9910-372bd3c9e88b\" (UID: \"0945ec68-90fe-4d65-9910-372bd3c9e88b\") " Jan 28 18:55:16 crc kubenswrapper[4721]: I0128 18:55:16.015999 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0945ec68-90fe-4d65-9910-372bd3c9e88b-ovsdbserver-sb\") pod \"0945ec68-90fe-4d65-9910-372bd3c9e88b\" (UID: \"0945ec68-90fe-4d65-9910-372bd3c9e88b\") " Jan 28 18:55:16 crc kubenswrapper[4721]: I0128 18:55:16.016055 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v4rz2\" (UniqueName: \"kubernetes.io/projected/0945ec68-90fe-4d65-9910-372bd3c9e88b-kube-api-access-v4rz2\") pod \"0945ec68-90fe-4d65-9910-372bd3c9e88b\" (UID: \"0945ec68-90fe-4d65-9910-372bd3c9e88b\") " Jan 28 18:55:16 crc kubenswrapper[4721]: I0128 18:55:16.016086 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0945ec68-90fe-4d65-9910-372bd3c9e88b-dns-svc\") pod \"0945ec68-90fe-4d65-9910-372bd3c9e88b\" (UID: \"0945ec68-90fe-4d65-9910-372bd3c9e88b\") " Jan 28 18:55:16 crc kubenswrapper[4721]: I0128 18:55:16.016145 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0945ec68-90fe-4d65-9910-372bd3c9e88b-config\") pod \"0945ec68-90fe-4d65-9910-372bd3c9e88b\" (UID: \"0945ec68-90fe-4d65-9910-372bd3c9e88b\") " Jan 28 18:55:16 crc kubenswrapper[4721]: I0128 18:55:16.022992 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0945ec68-90fe-4d65-9910-372bd3c9e88b-kube-api-access-v4rz2" (OuterVolumeSpecName: "kube-api-access-v4rz2") pod "0945ec68-90fe-4d65-9910-372bd3c9e88b" (UID: "0945ec68-90fe-4d65-9910-372bd3c9e88b"). InnerVolumeSpecName "kube-api-access-v4rz2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:55:16 crc kubenswrapper[4721]: I0128 18:55:16.085699 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0945ec68-90fe-4d65-9910-372bd3c9e88b-config" (OuterVolumeSpecName: "config") pod "0945ec68-90fe-4d65-9910-372bd3c9e88b" (UID: "0945ec68-90fe-4d65-9910-372bd3c9e88b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:55:16 crc kubenswrapper[4721]: I0128 18:55:16.110020 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0945ec68-90fe-4d65-9910-372bd3c9e88b-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "0945ec68-90fe-4d65-9910-372bd3c9e88b" (UID: "0945ec68-90fe-4d65-9910-372bd3c9e88b"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:55:16 crc kubenswrapper[4721]: I0128 18:55:16.127614 4721 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0945ec68-90fe-4d65-9910-372bd3c9e88b-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 28 18:55:16 crc kubenswrapper[4721]: I0128 18:55:16.129321 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v4rz2\" (UniqueName: \"kubernetes.io/projected/0945ec68-90fe-4d65-9910-372bd3c9e88b-kube-api-access-v4rz2\") on node \"crc\" DevicePath \"\"" Jan 28 18:55:16 crc kubenswrapper[4721]: I0128 18:55:16.129423 4721 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0945ec68-90fe-4d65-9910-372bd3c9e88b-config\") on node \"crc\" DevicePath \"\"" Jan 28 18:55:16 crc kubenswrapper[4721]: I0128 18:55:16.150284 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0945ec68-90fe-4d65-9910-372bd3c9e88b-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "0945ec68-90fe-4d65-9910-372bd3c9e88b" (UID: "0945ec68-90fe-4d65-9910-372bd3c9e88b"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:55:16 crc kubenswrapper[4721]: I0128 18:55:16.182372 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0945ec68-90fe-4d65-9910-372bd3c9e88b-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "0945ec68-90fe-4d65-9910-372bd3c9e88b" (UID: "0945ec68-90fe-4d65-9910-372bd3c9e88b"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:55:16 crc kubenswrapper[4721]: I0128 18:55:16.233013 4721 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0945ec68-90fe-4d65-9910-372bd3c9e88b-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 28 18:55:16 crc kubenswrapper[4721]: I0128 18:55:16.233047 4721 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0945ec68-90fe-4d65-9910-372bd3c9e88b-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 28 18:55:16 crc kubenswrapper[4721]: I0128 18:55:16.285183 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"aa657a81-842e-4292-a71e-e208b4c0bd69","Type":"ContainerStarted","Data":"aabed5ed497e4373eed0a3a29fca3c1b4f422125a31aad4a723b60ae43c6acca"} Jan 28 18:55:16 crc kubenswrapper[4721]: I0128 18:55:16.285234 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"aa657a81-842e-4292-a71e-e208b4c0bd69","Type":"ContainerStarted","Data":"f4901bc694e72b2da77b58f65dfac7e8c81e316884c9bc70ee282f126eca21e9"} Jan 28 18:55:16 crc kubenswrapper[4721]: I0128 18:55:16.289450 4721 generic.go:334] "Generic (PLEG): container finished" podID="da543bfe-af20-47a0-b1a2-0bd8b36a94ec" containerID="20209350852a27344ac1486d2db32118ee3a2aa855ac94f2a4973d94261356e0" exitCode=0 Jan 28 18:55:16 crc kubenswrapper[4721]: I0128 18:55:16.289613 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-68dcc9cf6f-5lt72" event={"ID":"da543bfe-af20-47a0-b1a2-0bd8b36a94ec","Type":"ContainerDied","Data":"20209350852a27344ac1486d2db32118ee3a2aa855ac94f2a4973d94261356e0"} Jan 28 18:55:16 crc kubenswrapper[4721]: I0128 18:55:16.297150 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/dnsmasq-dns-f877ddd87-fc8zl" event={"ID":"0945ec68-90fe-4d65-9910-372bd3c9e88b","Type":"ContainerDied","Data":"593b75319bee73c1123a492cedb8e4834aabaf5e65d0bab19a998adb1be75186"} Jan 28 18:55:16 crc kubenswrapper[4721]: I0128 18:55:16.297251 4721 scope.go:117] "RemoveContainer" containerID="2247c224e991f8af8ae9c8db92c7a0174e4723ecf399e515e42cd18e411af53e" Jan 28 18:55:16 crc kubenswrapper[4721]: I0128 18:55:16.297391 4721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-f877ddd87-fc8zl" Jan 28 18:55:16 crc kubenswrapper[4721]: I0128 18:55:16.532688 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-f877ddd87-fc8zl"] Jan 28 18:55:16 crc kubenswrapper[4721]: I0128 18:55:16.586957 4721 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-f877ddd87-fc8zl"] Jan 28 18:55:17 crc kubenswrapper[4721]: I0128 18:55:17.358084 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"aa657a81-842e-4292-a71e-e208b4c0bd69","Type":"ContainerStarted","Data":"9a0c373317deec06ae6b93d6dc49ef002e535280b4fd27ae00277047831820c4"} Jan 28 18:55:17 crc kubenswrapper[4721]: I0128 18:55:17.372578 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-68dcc9cf6f-5lt72" event={"ID":"da543bfe-af20-47a0-b1a2-0bd8b36a94ec","Type":"ContainerStarted","Data":"8912526981c738c5b40159484558a765f41d6b2ce73d8317fd6a460b86035e4f"} Jan 28 18:55:17 crc kubenswrapper[4721]: I0128 18:55:17.373000 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-68dcc9cf6f-5lt72" Jan 28 18:55:17 crc kubenswrapper[4721]: I0128 18:55:17.414293 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-storage-0" podStartSLOduration=73.0798056 podStartE2EDuration="1m19.414265677s" podCreationTimestamp="2026-01-28 18:53:58 +0000 UTC" firstStartedPulling="2026-01-28 18:55:05.580060213 +0000 UTC m=+1271.305365773" lastFinishedPulling="2026-01-28 18:55:11.91452029 +0000 UTC m=+1277.639825850" observedRunningTime="2026-01-28 18:55:17.407023638 +0000 UTC m=+1283.132329198" watchObservedRunningTime="2026-01-28 18:55:17.414265677 +0000 UTC m=+1283.139571237" Jan 28 18:55:17 crc kubenswrapper[4721]: I0128 18:55:17.457694 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-68dcc9cf6f-5lt72" podStartSLOduration=5.457665774 podStartE2EDuration="5.457665774s" podCreationTimestamp="2026-01-28 18:55:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:55:17.438725787 +0000 UTC m=+1283.164031347" watchObservedRunningTime="2026-01-28 18:55:17.457665774 +0000 UTC m=+1283.182971334" Jan 28 18:55:17 crc kubenswrapper[4721]: I0128 18:55:17.560679 4721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0945ec68-90fe-4d65-9910-372bd3c9e88b" path="/var/lib/kubelet/pods/0945ec68-90fe-4d65-9910-372bd3c9e88b/volumes" Jan 28 18:55:18 crc kubenswrapper[4721]: I0128 18:55:18.100023 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-68dcc9cf6f-5lt72"] Jan 28 18:55:18 crc kubenswrapper[4721]: I0128 18:55:18.127917 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-58dd9ff6bc-2zk48"] Jan 28 18:55:18 crc kubenswrapper[4721]: E0128 18:55:18.131897 4721 cpu_manager.go:410] "RemoveStaleState: 
removing container" podUID="0945ec68-90fe-4d65-9910-372bd3c9e88b" containerName="init" Jan 28 18:55:18 crc kubenswrapper[4721]: I0128 18:55:18.131937 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="0945ec68-90fe-4d65-9910-372bd3c9e88b" containerName="init" Jan 28 18:55:18 crc kubenswrapper[4721]: I0128 18:55:18.132304 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="0945ec68-90fe-4d65-9910-372bd3c9e88b" containerName="init" Jan 28 18:55:18 crc kubenswrapper[4721]: I0128 18:55:18.134068 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-58dd9ff6bc-2zk48" Jan 28 18:55:18 crc kubenswrapper[4721]: I0128 18:55:18.139525 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-swift-storage-0" Jan 28 18:55:18 crc kubenswrapper[4721]: I0128 18:55:18.153934 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-58dd9ff6bc-2zk48"] Jan 28 18:55:18 crc kubenswrapper[4721]: I0128 18:55:18.260842 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-79hk7\" (UniqueName: \"kubernetes.io/projected/24b34696-1be6-4ee8-8161-0b3ba8119191-kube-api-access-79hk7\") pod \"dnsmasq-dns-58dd9ff6bc-2zk48\" (UID: \"24b34696-1be6-4ee8-8161-0b3ba8119191\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-2zk48" Jan 28 18:55:18 crc kubenswrapper[4721]: I0128 18:55:18.260908 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/24b34696-1be6-4ee8-8161-0b3ba8119191-config\") pod \"dnsmasq-dns-58dd9ff6bc-2zk48\" (UID: \"24b34696-1be6-4ee8-8161-0b3ba8119191\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-2zk48" Jan 28 18:55:18 crc kubenswrapper[4721]: I0128 18:55:18.260940 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/24b34696-1be6-4ee8-8161-0b3ba8119191-ovsdbserver-sb\") pod \"dnsmasq-dns-58dd9ff6bc-2zk48\" (UID: \"24b34696-1be6-4ee8-8161-0b3ba8119191\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-2zk48" Jan 28 18:55:18 crc kubenswrapper[4721]: I0128 18:55:18.260966 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/24b34696-1be6-4ee8-8161-0b3ba8119191-ovsdbserver-nb\") pod \"dnsmasq-dns-58dd9ff6bc-2zk48\" (UID: \"24b34696-1be6-4ee8-8161-0b3ba8119191\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-2zk48" Jan 28 18:55:18 crc kubenswrapper[4721]: I0128 18:55:18.261010 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/24b34696-1be6-4ee8-8161-0b3ba8119191-dns-swift-storage-0\") pod \"dnsmasq-dns-58dd9ff6bc-2zk48\" (UID: \"24b34696-1be6-4ee8-8161-0b3ba8119191\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-2zk48" Jan 28 18:55:18 crc kubenswrapper[4721]: I0128 18:55:18.261128 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/24b34696-1be6-4ee8-8161-0b3ba8119191-dns-svc\") pod \"dnsmasq-dns-58dd9ff6bc-2zk48\" (UID: \"24b34696-1be6-4ee8-8161-0b3ba8119191\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-2zk48" Jan 28 18:55:18 crc kubenswrapper[4721]: I0128 18:55:18.362853 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/24b34696-1be6-4ee8-8161-0b3ba8119191-dns-svc\") pod \"dnsmasq-dns-58dd9ff6bc-2zk48\" (UID: \"24b34696-1be6-4ee8-8161-0b3ba8119191\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-2zk48" Jan 28 18:55:18 crc kubenswrapper[4721]: I0128 18:55:18.362927 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-79hk7\" (UniqueName: \"kubernetes.io/projected/24b34696-1be6-4ee8-8161-0b3ba8119191-kube-api-access-79hk7\") pod \"dnsmasq-dns-58dd9ff6bc-2zk48\" (UID: \"24b34696-1be6-4ee8-8161-0b3ba8119191\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-2zk48" Jan 28 18:55:18 crc kubenswrapper[4721]: I0128 18:55:18.362951 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/24b34696-1be6-4ee8-8161-0b3ba8119191-config\") pod \"dnsmasq-dns-58dd9ff6bc-2zk48\" (UID: \"24b34696-1be6-4ee8-8161-0b3ba8119191\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-2zk48" Jan 28 18:55:18 crc kubenswrapper[4721]: I0128 18:55:18.362997 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/24b34696-1be6-4ee8-8161-0b3ba8119191-ovsdbserver-sb\") pod \"dnsmasq-dns-58dd9ff6bc-2zk48\" (UID: \"24b34696-1be6-4ee8-8161-0b3ba8119191\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-2zk48" Jan 28 18:55:18 crc kubenswrapper[4721]: I0128 18:55:18.363032 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/24b34696-1be6-4ee8-8161-0b3ba8119191-ovsdbserver-nb\") pod \"dnsmasq-dns-58dd9ff6bc-2zk48\" (UID: \"24b34696-1be6-4ee8-8161-0b3ba8119191\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-2zk48" Jan 28 18:55:18 crc kubenswrapper[4721]: I0128 18:55:18.363097 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/24b34696-1be6-4ee8-8161-0b3ba8119191-dns-swift-storage-0\") pod \"dnsmasq-dns-58dd9ff6bc-2zk48\" (UID: \"24b34696-1be6-4ee8-8161-0b3ba8119191\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-2zk48" Jan 28 18:55:18 crc kubenswrapper[4721]: I0128 18:55:18.363805 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/24b34696-1be6-4ee8-8161-0b3ba8119191-dns-svc\") pod \"dnsmasq-dns-58dd9ff6bc-2zk48\" (UID: \"24b34696-1be6-4ee8-8161-0b3ba8119191\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-2zk48" Jan 28 18:55:18 crc kubenswrapper[4721]: I0128 18:55:18.364295 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/24b34696-1be6-4ee8-8161-0b3ba8119191-ovsdbserver-sb\") pod \"dnsmasq-dns-58dd9ff6bc-2zk48\" (UID: \"24b34696-1be6-4ee8-8161-0b3ba8119191\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-2zk48" Jan 28 18:55:18 crc kubenswrapper[4721]: I0128 18:55:18.367972 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/24b34696-1be6-4ee8-8161-0b3ba8119191-ovsdbserver-nb\") pod \"dnsmasq-dns-58dd9ff6bc-2zk48\" (UID: \"24b34696-1be6-4ee8-8161-0b3ba8119191\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-2zk48" Jan 28 18:55:18 crc kubenswrapper[4721]: I0128 18:55:18.368373 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/24b34696-1be6-4ee8-8161-0b3ba8119191-config\") 
pod \"dnsmasq-dns-58dd9ff6bc-2zk48\" (UID: \"24b34696-1be6-4ee8-8161-0b3ba8119191\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-2zk48" Jan 28 18:55:18 crc kubenswrapper[4721]: I0128 18:55:18.370624 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/24b34696-1be6-4ee8-8161-0b3ba8119191-dns-swift-storage-0\") pod \"dnsmasq-dns-58dd9ff6bc-2zk48\" (UID: \"24b34696-1be6-4ee8-8161-0b3ba8119191\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-2zk48" Jan 28 18:55:18 crc kubenswrapper[4721]: I0128 18:55:18.388955 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-79hk7\" (UniqueName: \"kubernetes.io/projected/24b34696-1be6-4ee8-8161-0b3ba8119191-kube-api-access-79hk7\") pod \"dnsmasq-dns-58dd9ff6bc-2zk48\" (UID: \"24b34696-1be6-4ee8-8161-0b3ba8119191\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-2zk48" Jan 28 18:55:18 crc kubenswrapper[4721]: I0128 18:55:18.447382 4721 generic.go:334] "Generic (PLEG): container finished" podID="8ac81a5a-78b3-43c6-964f-300e126ba4ca" containerID="7bd63c1583bd53bad529b278e0de22f5f62fb284dcbd8a13c6e4d933b1fd074b" exitCode=0 Jan 28 18:55:18 crc kubenswrapper[4721]: I0128 18:55:18.448429 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"8ac81a5a-78b3-43c6-964f-300e126ba4ca","Type":"ContainerDied","Data":"7bd63c1583bd53bad529b278e0de22f5f62fb284dcbd8a13c6e4d933b1fd074b"} Jan 28 18:55:18 crc kubenswrapper[4721]: I0128 18:55:18.536833 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-58dd9ff6bc-2zk48" Jan 28 18:55:19 crc kubenswrapper[4721]: I0128 18:55:19.279838 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-58dd9ff6bc-2zk48"] Jan 28 18:55:19 crc kubenswrapper[4721]: W0128 18:55:19.293519 4721 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod24b34696_1be6_4ee8_8161_0b3ba8119191.slice/crio-30d1bbb05937181306468aec6d00e032c3102725b4d4477bb994f0e3b061e9f4 WatchSource:0}: Error finding container 30d1bbb05937181306468aec6d00e032c3102725b4d4477bb994f0e3b061e9f4: Status 404 returned error can't find the container with id 30d1bbb05937181306468aec6d00e032c3102725b4d4477bb994f0e3b061e9f4 Jan 28 18:55:19 crc kubenswrapper[4721]: I0128 18:55:19.484534 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58dd9ff6bc-2zk48" event={"ID":"24b34696-1be6-4ee8-8161-0b3ba8119191","Type":"ContainerStarted","Data":"30d1bbb05937181306468aec6d00e032c3102725b4d4477bb994f0e3b061e9f4"} Jan 28 18:55:19 crc kubenswrapper[4721]: I0128 18:55:19.493975 4721 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-68dcc9cf6f-5lt72" podUID="da543bfe-af20-47a0-b1a2-0bd8b36a94ec" containerName="dnsmasq-dns" containerID="cri-o://8912526981c738c5b40159484558a765f41d6b2ce73d8317fd6a460b86035e4f" gracePeriod=10 Jan 28 18:55:19 crc kubenswrapper[4721]: I0128 18:55:19.494395 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"8ac81a5a-78b3-43c6-964f-300e126ba4ca","Type":"ContainerStarted","Data":"5572c6e3e70d09c24dd72d1665b6a687edf53aca588eec8e40ca550a228bb420"} Jan 28 18:55:20 crc kubenswrapper[4721]: I0128 18:55:20.506849 4721 generic.go:334] "Generic (PLEG): container finished" podID="24b34696-1be6-4ee8-8161-0b3ba8119191" 
containerID="a4e7167b8cd1523d74b3cc700d8fe1f1e1c1107506c5114057c9e29a9517a909" exitCode=0 Jan 28 18:55:20 crc kubenswrapper[4721]: I0128 18:55:20.506918 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58dd9ff6bc-2zk48" event={"ID":"24b34696-1be6-4ee8-8161-0b3ba8119191","Type":"ContainerDied","Data":"a4e7167b8cd1523d74b3cc700d8fe1f1e1c1107506c5114057c9e29a9517a909"} Jan 28 18:55:20 crc kubenswrapper[4721]: I0128 18:55:20.510014 4721 generic.go:334] "Generic (PLEG): container finished" podID="da543bfe-af20-47a0-b1a2-0bd8b36a94ec" containerID="8912526981c738c5b40159484558a765f41d6b2ce73d8317fd6a460b86035e4f" exitCode=0 Jan 28 18:55:20 crc kubenswrapper[4721]: I0128 18:55:20.510405 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-68dcc9cf6f-5lt72" event={"ID":"da543bfe-af20-47a0-b1a2-0bd8b36a94ec","Type":"ContainerDied","Data":"8912526981c738c5b40159484558a765f41d6b2ce73d8317fd6a460b86035e4f"} Jan 28 18:55:20 crc kubenswrapper[4721]: I0128 18:55:20.512274 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-j284c" event={"ID":"7b2b2524-50e6-4d73-bdb9-8770b642481e","Type":"ContainerStarted","Data":"8ef3f876c4ca4aa8d6bb644b809179eb7dd42addde04ed2b033309027a6a0c2b"} Jan 28 18:55:20 crc kubenswrapper[4721]: I0128 18:55:20.560850 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-j284c" podStartSLOduration=5.00864392 podStartE2EDuration="44.560815528s" podCreationTimestamp="2026-01-28 18:54:36 +0000 UTC" firstStartedPulling="2026-01-28 18:54:39.068622504 +0000 UTC m=+1244.793928064" lastFinishedPulling="2026-01-28 18:55:18.620794112 +0000 UTC m=+1284.346099672" observedRunningTime="2026-01-28 18:55:20.551541005 +0000 UTC m=+1286.276846565" watchObservedRunningTime="2026-01-28 18:55:20.560815528 +0000 UTC m=+1286.286121108" Jan 28 18:55:21 crc kubenswrapper[4721]: I0128 18:55:21.532730 4721 generic.go:334] "Generic (PLEG): container finished" podID="58139529-dfb2-4f83-bd8f-ed6645cf0e6d" containerID="4bb868c782027b9450929d28db2ba013267b613e7f87e574cbc9a843f19d54ac" exitCode=0 Jan 28 18:55:21 crc kubenswrapper[4721]: I0128 18:55:21.547341 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-76fdw" event={"ID":"58139529-dfb2-4f83-bd8f-ed6645cf0e6d","Type":"ContainerDied","Data":"4bb868c782027b9450929d28db2ba013267b613e7f87e574cbc9a843f19d54ac"} Jan 28 18:55:28 crc kubenswrapper[4721]: I0128 18:55:28.368764 4721 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-68dcc9cf6f-5lt72" podUID="da543bfe-af20-47a0-b1a2-0bd8b36a94ec" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.163:5353: i/o timeout" Jan 28 18:55:29 crc kubenswrapper[4721]: I0128 18:55:29.617311 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"8ac81a5a-78b3-43c6-964f-300e126ba4ca","Type":"ContainerStarted","Data":"39d788e7e131eab9316acc6eea323735d5ca9fdafc307980e4de5bdd925432db"} Jan 28 18:55:31 crc kubenswrapper[4721]: I0128 18:55:31.225377 4721 patch_prober.go:28] interesting pod/machine-config-daemon-76rx2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 18:55:31 crc kubenswrapper[4721]: I0128 18:55:31.225735 4721 prober.go:107] "Probe failed" 
probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-76rx2" podUID="6e3427a4-9a03-4a08-bf7f-7a5e96290ad6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 18:55:32 crc kubenswrapper[4721]: I0128 18:55:32.354533 4721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-76fdw" Jan 28 18:55:32 crc kubenswrapper[4721]: I0128 18:55:32.529498 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/58139529-dfb2-4f83-bd8f-ed6645cf0e6d-scripts\") pod \"58139529-dfb2-4f83-bd8f-ed6645cf0e6d\" (UID: \"58139529-dfb2-4f83-bd8f-ed6645cf0e6d\") " Jan 28 18:55:32 crc kubenswrapper[4721]: I0128 18:55:32.530106 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/58139529-dfb2-4f83-bd8f-ed6645cf0e6d-credential-keys\") pod \"58139529-dfb2-4f83-bd8f-ed6645cf0e6d\" (UID: \"58139529-dfb2-4f83-bd8f-ed6645cf0e6d\") " Jan 28 18:55:32 crc kubenswrapper[4721]: I0128 18:55:32.530827 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/58139529-dfb2-4f83-bd8f-ed6645cf0e6d-combined-ca-bundle\") pod \"58139529-dfb2-4f83-bd8f-ed6645cf0e6d\" (UID: \"58139529-dfb2-4f83-bd8f-ed6645cf0e6d\") " Jan 28 18:55:32 crc kubenswrapper[4721]: I0128 18:55:32.530992 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cmxxg\" (UniqueName: \"kubernetes.io/projected/58139529-dfb2-4f83-bd8f-ed6645cf0e6d-kube-api-access-cmxxg\") pod \"58139529-dfb2-4f83-bd8f-ed6645cf0e6d\" (UID: \"58139529-dfb2-4f83-bd8f-ed6645cf0e6d\") " Jan 28 18:55:32 crc kubenswrapper[4721]: I0128 18:55:32.531147 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/58139529-dfb2-4f83-bd8f-ed6645cf0e6d-fernet-keys\") pod \"58139529-dfb2-4f83-bd8f-ed6645cf0e6d\" (UID: \"58139529-dfb2-4f83-bd8f-ed6645cf0e6d\") " Jan 28 18:55:32 crc kubenswrapper[4721]: I0128 18:55:32.531571 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/58139529-dfb2-4f83-bd8f-ed6645cf0e6d-config-data\") pod \"58139529-dfb2-4f83-bd8f-ed6645cf0e6d\" (UID: \"58139529-dfb2-4f83-bd8f-ed6645cf0e6d\") " Jan 28 18:55:32 crc kubenswrapper[4721]: I0128 18:55:32.537425 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/58139529-dfb2-4f83-bd8f-ed6645cf0e6d-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "58139529-dfb2-4f83-bd8f-ed6645cf0e6d" (UID: "58139529-dfb2-4f83-bd8f-ed6645cf0e6d"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:55:32 crc kubenswrapper[4721]: I0128 18:55:32.537441 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/58139529-dfb2-4f83-bd8f-ed6645cf0e6d-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "58139529-dfb2-4f83-bd8f-ed6645cf0e6d" (UID: "58139529-dfb2-4f83-bd8f-ed6645cf0e6d"). InnerVolumeSpecName "credential-keys". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:55:32 crc kubenswrapper[4721]: I0128 18:55:32.538560 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/58139529-dfb2-4f83-bd8f-ed6645cf0e6d-scripts" (OuterVolumeSpecName: "scripts") pod "58139529-dfb2-4f83-bd8f-ed6645cf0e6d" (UID: "58139529-dfb2-4f83-bd8f-ed6645cf0e6d"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:55:32 crc kubenswrapper[4721]: I0128 18:55:32.538977 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/58139529-dfb2-4f83-bd8f-ed6645cf0e6d-kube-api-access-cmxxg" (OuterVolumeSpecName: "kube-api-access-cmxxg") pod "58139529-dfb2-4f83-bd8f-ed6645cf0e6d" (UID: "58139529-dfb2-4f83-bd8f-ed6645cf0e6d"). InnerVolumeSpecName "kube-api-access-cmxxg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:55:32 crc kubenswrapper[4721]: I0128 18:55:32.564056 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/58139529-dfb2-4f83-bd8f-ed6645cf0e6d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "58139529-dfb2-4f83-bd8f-ed6645cf0e6d" (UID: "58139529-dfb2-4f83-bd8f-ed6645cf0e6d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:55:32 crc kubenswrapper[4721]: I0128 18:55:32.568621 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/58139529-dfb2-4f83-bd8f-ed6645cf0e6d-config-data" (OuterVolumeSpecName: "config-data") pod "58139529-dfb2-4f83-bd8f-ed6645cf0e6d" (UID: "58139529-dfb2-4f83-bd8f-ed6645cf0e6d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:55:32 crc kubenswrapper[4721]: I0128 18:55:32.634484 4721 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/58139529-dfb2-4f83-bd8f-ed6645cf0e6d-credential-keys\") on node \"crc\" DevicePath \"\"" Jan 28 18:55:32 crc kubenswrapper[4721]: I0128 18:55:32.634525 4721 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/58139529-dfb2-4f83-bd8f-ed6645cf0e6d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 18:55:32 crc kubenswrapper[4721]: I0128 18:55:32.634535 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cmxxg\" (UniqueName: \"kubernetes.io/projected/58139529-dfb2-4f83-bd8f-ed6645cf0e6d-kube-api-access-cmxxg\") on node \"crc\" DevicePath \"\"" Jan 28 18:55:32 crc kubenswrapper[4721]: I0128 18:55:32.634546 4721 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/58139529-dfb2-4f83-bd8f-ed6645cf0e6d-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 28 18:55:32 crc kubenswrapper[4721]: I0128 18:55:32.634555 4721 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/58139529-dfb2-4f83-bd8f-ed6645cf0e6d-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 18:55:32 crc kubenswrapper[4721]: I0128 18:55:32.634563 4721 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/58139529-dfb2-4f83-bd8f-ed6645cf0e6d-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 18:55:32 crc kubenswrapper[4721]: I0128 18:55:32.646803 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/keystone-bootstrap-76fdw" event={"ID":"58139529-dfb2-4f83-bd8f-ed6645cf0e6d","Type":"ContainerDied","Data":"5ef3734e2c90c47606cce721b6648ef517472c9530a40e159a03fec66e5a05ac"} Jan 28 18:55:32 crc kubenswrapper[4721]: I0128 18:55:32.647039 4721 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5ef3734e2c90c47606cce721b6648ef517472c9530a40e159a03fec66e5a05ac" Jan 28 18:55:32 crc kubenswrapper[4721]: I0128 18:55:32.647105 4721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-76fdw" Jan 28 18:55:32 crc kubenswrapper[4721]: I0128 18:55:32.842951 4721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-68dcc9cf6f-5lt72" Jan 28 18:55:32 crc kubenswrapper[4721]: E0128 18:55:32.912082 4721 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-ceilometer-central:current-podified" Jan 28 18:55:32 crc kubenswrapper[4721]: E0128 18:55:32.912502 4721 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.io/podified-antelope-centos9/openstack-ceilometer-central:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n8bh58bh654h68dh547h5cbh5c9h687h5b6h5c4h57h665h686h689h5hd7h5fchdch78hcdh9bh695h6bh54bh589h5f5hc7h559h556hd8h54dhbbq,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7xmnw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 /var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
ceilometer-0_openstack(a423fddb-4a71-416a-8138-63d58b0350fb): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 28 18:55:32 crc kubenswrapper[4721]: I0128 18:55:32.940055 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/da543bfe-af20-47a0-b1a2-0bd8b36a94ec-ovsdbserver-sb\") pod \"da543bfe-af20-47a0-b1a2-0bd8b36a94ec\" (UID: \"da543bfe-af20-47a0-b1a2-0bd8b36a94ec\") " Jan 28 18:55:32 crc kubenswrapper[4721]: I0128 18:55:32.940207 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/da543bfe-af20-47a0-b1a2-0bd8b36a94ec-config\") pod \"da543bfe-af20-47a0-b1a2-0bd8b36a94ec\" (UID: \"da543bfe-af20-47a0-b1a2-0bd8b36a94ec\") " Jan 28 18:55:32 crc kubenswrapper[4721]: I0128 18:55:32.940409 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/da543bfe-af20-47a0-b1a2-0bd8b36a94ec-ovsdbserver-nb\") pod \"da543bfe-af20-47a0-b1a2-0bd8b36a94ec\" (UID: \"da543bfe-af20-47a0-b1a2-0bd8b36a94ec\") " Jan 28 18:55:32 crc kubenswrapper[4721]: I0128 18:55:32.940600 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lfcbb\" (UniqueName: \"kubernetes.io/projected/da543bfe-af20-47a0-b1a2-0bd8b36a94ec-kube-api-access-lfcbb\") pod \"da543bfe-af20-47a0-b1a2-0bd8b36a94ec\" (UID: \"da543bfe-af20-47a0-b1a2-0bd8b36a94ec\") " Jan 28 18:55:32 crc kubenswrapper[4721]: I0128 18:55:32.940639 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/da543bfe-af20-47a0-b1a2-0bd8b36a94ec-dns-svc\") pod \"da543bfe-af20-47a0-b1a2-0bd8b36a94ec\" (UID: \"da543bfe-af20-47a0-b1a2-0bd8b36a94ec\") " Jan 28 18:55:32 crc kubenswrapper[4721]: I0128 18:55:32.952816 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/da543bfe-af20-47a0-b1a2-0bd8b36a94ec-kube-api-access-lfcbb" (OuterVolumeSpecName: "kube-api-access-lfcbb") pod "da543bfe-af20-47a0-b1a2-0bd8b36a94ec" (UID: "da543bfe-af20-47a0-b1a2-0bd8b36a94ec"). InnerVolumeSpecName "kube-api-access-lfcbb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:55:32 crc kubenswrapper[4721]: I0128 18:55:32.994149 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/da543bfe-af20-47a0-b1a2-0bd8b36a94ec-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "da543bfe-af20-47a0-b1a2-0bd8b36a94ec" (UID: "da543bfe-af20-47a0-b1a2-0bd8b36a94ec"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:55:32 crc kubenswrapper[4721]: I0128 18:55:32.995403 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/da543bfe-af20-47a0-b1a2-0bd8b36a94ec-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "da543bfe-af20-47a0-b1a2-0bd8b36a94ec" (UID: "da543bfe-af20-47a0-b1a2-0bd8b36a94ec"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:55:32 crc kubenswrapper[4721]: I0128 18:55:32.999758 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/da543bfe-af20-47a0-b1a2-0bd8b36a94ec-config" (OuterVolumeSpecName: "config") pod "da543bfe-af20-47a0-b1a2-0bd8b36a94ec" (UID: "da543bfe-af20-47a0-b1a2-0bd8b36a94ec"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:55:33 crc kubenswrapper[4721]: I0128 18:55:33.001089 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/da543bfe-af20-47a0-b1a2-0bd8b36a94ec-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "da543bfe-af20-47a0-b1a2-0bd8b36a94ec" (UID: "da543bfe-af20-47a0-b1a2-0bd8b36a94ec"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:55:33 crc kubenswrapper[4721]: I0128 18:55:33.043408 4721 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/da543bfe-af20-47a0-b1a2-0bd8b36a94ec-config\") on node \"crc\" DevicePath \"\"" Jan 28 18:55:33 crc kubenswrapper[4721]: I0128 18:55:33.043448 4721 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/da543bfe-af20-47a0-b1a2-0bd8b36a94ec-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 28 18:55:33 crc kubenswrapper[4721]: I0128 18:55:33.043469 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lfcbb\" (UniqueName: \"kubernetes.io/projected/da543bfe-af20-47a0-b1a2-0bd8b36a94ec-kube-api-access-lfcbb\") on node \"crc\" DevicePath \"\"" Jan 28 18:55:33 crc kubenswrapper[4721]: I0128 18:55:33.043480 4721 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/da543bfe-af20-47a0-b1a2-0bd8b36a94ec-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 28 18:55:33 crc kubenswrapper[4721]: I0128 18:55:33.043494 4721 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/da543bfe-af20-47a0-b1a2-0bd8b36a94ec-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 28 18:55:33 crc kubenswrapper[4721]: I0128 18:55:33.371012 4721 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-68dcc9cf6f-5lt72" podUID="da543bfe-af20-47a0-b1a2-0bd8b36a94ec" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.163:5353: i/o timeout" Jan 28 18:55:33 crc kubenswrapper[4721]: I0128 18:55:33.448277 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-76fdw"] Jan 28 18:55:33 crc kubenswrapper[4721]: I0128 18:55:33.459532 4721 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-76fdw"] Jan 28 18:55:33 crc kubenswrapper[4721]: I0128 18:55:33.565296 4721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="58139529-dfb2-4f83-bd8f-ed6645cf0e6d" path="/var/lib/kubelet/pods/58139529-dfb2-4f83-bd8f-ed6645cf0e6d/volumes" Jan 28 18:55:33 crc kubenswrapper[4721]: I0128 18:55:33.566188 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-77gjx"] Jan 28 18:55:33 crc kubenswrapper[4721]: E0128 18:55:33.566487 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="da543bfe-af20-47a0-b1a2-0bd8b36a94ec" containerName="dnsmasq-dns" Jan 28 18:55:33 crc kubenswrapper[4721]: I0128 18:55:33.566504 4721 state_mem.go:107] "Deleted 
CPUSet assignment" podUID="da543bfe-af20-47a0-b1a2-0bd8b36a94ec" containerName="dnsmasq-dns" Jan 28 18:55:33 crc kubenswrapper[4721]: E0128 18:55:33.566518 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="58139529-dfb2-4f83-bd8f-ed6645cf0e6d" containerName="keystone-bootstrap" Jan 28 18:55:33 crc kubenswrapper[4721]: I0128 18:55:33.566524 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="58139529-dfb2-4f83-bd8f-ed6645cf0e6d" containerName="keystone-bootstrap" Jan 28 18:55:33 crc kubenswrapper[4721]: E0128 18:55:33.566536 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="da543bfe-af20-47a0-b1a2-0bd8b36a94ec" containerName="init" Jan 28 18:55:33 crc kubenswrapper[4721]: I0128 18:55:33.566542 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="da543bfe-af20-47a0-b1a2-0bd8b36a94ec" containerName="init" Jan 28 18:55:33 crc kubenswrapper[4721]: I0128 18:55:33.566759 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="da543bfe-af20-47a0-b1a2-0bd8b36a94ec" containerName="dnsmasq-dns" Jan 28 18:55:33 crc kubenswrapper[4721]: I0128 18:55:33.566771 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="58139529-dfb2-4f83-bd8f-ed6645cf0e6d" containerName="keystone-bootstrap" Jan 28 18:55:33 crc kubenswrapper[4721]: I0128 18:55:33.567429 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-77gjx"] Jan 28 18:55:33 crc kubenswrapper[4721]: I0128 18:55:33.567545 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-77gjx" Jan 28 18:55:33 crc kubenswrapper[4721]: I0128 18:55:33.570586 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 28 18:55:33 crc kubenswrapper[4721]: I0128 18:55:33.570990 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 28 18:55:33 crc kubenswrapper[4721]: I0128 18:55:33.571712 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-gfv9p" Jan 28 18:55:33 crc kubenswrapper[4721]: I0128 18:55:33.571747 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 28 18:55:33 crc kubenswrapper[4721]: I0128 18:55:33.659292 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-68dcc9cf6f-5lt72" event={"ID":"da543bfe-af20-47a0-b1a2-0bd8b36a94ec","Type":"ContainerDied","Data":"2849e273c7fb7b907d2424a8e420b65823e17bfb993a950cdd083d479988c9db"} Jan 28 18:55:33 crc kubenswrapper[4721]: I0128 18:55:33.659391 4721 scope.go:117] "RemoveContainer" containerID="8912526981c738c5b40159484558a765f41d6b2ce73d8317fd6a460b86035e4f" Jan 28 18:55:33 crc kubenswrapper[4721]: I0128 18:55:33.659458 4721 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-68dcc9cf6f-5lt72" Jan 28 18:55:33 crc kubenswrapper[4721]: I0128 18:55:33.665205 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/19551c06-75df-4db7-805a-b7efc5e72018-config-data\") pod \"keystone-bootstrap-77gjx\" (UID: \"19551c06-75df-4db7-805a-b7efc5e72018\") " pod="openstack/keystone-bootstrap-77gjx" Jan 28 18:55:33 crc kubenswrapper[4721]: I0128 18:55:33.665362 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qqzxb\" (UniqueName: \"kubernetes.io/projected/19551c06-75df-4db7-805a-b7efc5e72018-kube-api-access-qqzxb\") pod \"keystone-bootstrap-77gjx\" (UID: \"19551c06-75df-4db7-805a-b7efc5e72018\") " pod="openstack/keystone-bootstrap-77gjx" Jan 28 18:55:33 crc kubenswrapper[4721]: I0128 18:55:33.665439 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/19551c06-75df-4db7-805a-b7efc5e72018-scripts\") pod \"keystone-bootstrap-77gjx\" (UID: \"19551c06-75df-4db7-805a-b7efc5e72018\") " pod="openstack/keystone-bootstrap-77gjx" Jan 28 18:55:33 crc kubenswrapper[4721]: I0128 18:55:33.665720 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/19551c06-75df-4db7-805a-b7efc5e72018-fernet-keys\") pod \"keystone-bootstrap-77gjx\" (UID: \"19551c06-75df-4db7-805a-b7efc5e72018\") " pod="openstack/keystone-bootstrap-77gjx" Jan 28 18:55:33 crc kubenswrapper[4721]: I0128 18:55:33.665874 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/19551c06-75df-4db7-805a-b7efc5e72018-credential-keys\") pod \"keystone-bootstrap-77gjx\" (UID: \"19551c06-75df-4db7-805a-b7efc5e72018\") " pod="openstack/keystone-bootstrap-77gjx" Jan 28 18:55:33 crc kubenswrapper[4721]: I0128 18:55:33.665945 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/19551c06-75df-4db7-805a-b7efc5e72018-combined-ca-bundle\") pod \"keystone-bootstrap-77gjx\" (UID: \"19551c06-75df-4db7-805a-b7efc5e72018\") " pod="openstack/keystone-bootstrap-77gjx" Jan 28 18:55:33 crc kubenswrapper[4721]: I0128 18:55:33.702014 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-68dcc9cf6f-5lt72"] Jan 28 18:55:33 crc kubenswrapper[4721]: I0128 18:55:33.720951 4721 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-68dcc9cf6f-5lt72"] Jan 28 18:55:33 crc kubenswrapper[4721]: I0128 18:55:33.767897 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/19551c06-75df-4db7-805a-b7efc5e72018-credential-keys\") pod \"keystone-bootstrap-77gjx\" (UID: \"19551c06-75df-4db7-805a-b7efc5e72018\") " pod="openstack/keystone-bootstrap-77gjx" Jan 28 18:55:33 crc kubenswrapper[4721]: I0128 18:55:33.767972 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/19551c06-75df-4db7-805a-b7efc5e72018-combined-ca-bundle\") pod \"keystone-bootstrap-77gjx\" (UID: \"19551c06-75df-4db7-805a-b7efc5e72018\") " pod="openstack/keystone-bootstrap-77gjx" Jan 28 
18:55:33 crc kubenswrapper[4721]: I0128 18:55:33.768025 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/19551c06-75df-4db7-805a-b7efc5e72018-config-data\") pod \"keystone-bootstrap-77gjx\" (UID: \"19551c06-75df-4db7-805a-b7efc5e72018\") " pod="openstack/keystone-bootstrap-77gjx" Jan 28 18:55:33 crc kubenswrapper[4721]: I0128 18:55:33.768101 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qqzxb\" (UniqueName: \"kubernetes.io/projected/19551c06-75df-4db7-805a-b7efc5e72018-kube-api-access-qqzxb\") pod \"keystone-bootstrap-77gjx\" (UID: \"19551c06-75df-4db7-805a-b7efc5e72018\") " pod="openstack/keystone-bootstrap-77gjx" Jan 28 18:55:33 crc kubenswrapper[4721]: I0128 18:55:33.768128 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/19551c06-75df-4db7-805a-b7efc5e72018-scripts\") pod \"keystone-bootstrap-77gjx\" (UID: \"19551c06-75df-4db7-805a-b7efc5e72018\") " pod="openstack/keystone-bootstrap-77gjx" Jan 28 18:55:33 crc kubenswrapper[4721]: I0128 18:55:33.768248 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/19551c06-75df-4db7-805a-b7efc5e72018-fernet-keys\") pod \"keystone-bootstrap-77gjx\" (UID: \"19551c06-75df-4db7-805a-b7efc5e72018\") " pod="openstack/keystone-bootstrap-77gjx" Jan 28 18:55:33 crc kubenswrapper[4721]: I0128 18:55:33.773243 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/19551c06-75df-4db7-805a-b7efc5e72018-fernet-keys\") pod \"keystone-bootstrap-77gjx\" (UID: \"19551c06-75df-4db7-805a-b7efc5e72018\") " pod="openstack/keystone-bootstrap-77gjx" Jan 28 18:55:33 crc kubenswrapper[4721]: I0128 18:55:33.773285 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/19551c06-75df-4db7-805a-b7efc5e72018-credential-keys\") pod \"keystone-bootstrap-77gjx\" (UID: \"19551c06-75df-4db7-805a-b7efc5e72018\") " pod="openstack/keystone-bootstrap-77gjx" Jan 28 18:55:33 crc kubenswrapper[4721]: I0128 18:55:33.773445 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/19551c06-75df-4db7-805a-b7efc5e72018-config-data\") pod \"keystone-bootstrap-77gjx\" (UID: \"19551c06-75df-4db7-805a-b7efc5e72018\") " pod="openstack/keystone-bootstrap-77gjx" Jan 28 18:55:33 crc kubenswrapper[4721]: I0128 18:55:33.773803 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/19551c06-75df-4db7-805a-b7efc5e72018-scripts\") pod \"keystone-bootstrap-77gjx\" (UID: \"19551c06-75df-4db7-805a-b7efc5e72018\") " pod="openstack/keystone-bootstrap-77gjx" Jan 28 18:55:33 crc kubenswrapper[4721]: I0128 18:55:33.788782 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/19551c06-75df-4db7-805a-b7efc5e72018-combined-ca-bundle\") pod \"keystone-bootstrap-77gjx\" (UID: \"19551c06-75df-4db7-805a-b7efc5e72018\") " pod="openstack/keystone-bootstrap-77gjx" Jan 28 18:55:33 crc kubenswrapper[4721]: I0128 18:55:33.789681 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qqzxb\" (UniqueName: 
\"kubernetes.io/projected/19551c06-75df-4db7-805a-b7efc5e72018-kube-api-access-qqzxb\") pod \"keystone-bootstrap-77gjx\" (UID: \"19551c06-75df-4db7-805a-b7efc5e72018\") " pod="openstack/keystone-bootstrap-77gjx" Jan 28 18:55:33 crc kubenswrapper[4721]: I0128 18:55:33.891938 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-77gjx" Jan 28 18:55:35 crc kubenswrapper[4721]: I0128 18:55:35.542571 4721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="da543bfe-af20-47a0-b1a2-0bd8b36a94ec" path="/var/lib/kubelet/pods/da543bfe-af20-47a0-b1a2-0bd8b36a94ec/volumes" Jan 28 18:55:54 crc kubenswrapper[4721]: E0128 18:55:54.190915 4721 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified" Jan 28 18:55:54 crc kubenswrapper[4721]: E0128 18:55:54.191709 4721 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cinder-db-sync,Image:quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_set_configs && /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-machine-id,ReadOnly:true,MountPath:/etc/machine-id,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/merged,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/cinder/cinder.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-d2jrb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-db-sync-spxh4_openstack(b4e7a8f9-bf9f-4093-86d5-b7f5f6d925d7): ErrImagePull: rpc error: code = Canceled desc = 
copying config: context canceled" logger="UnhandledError" Jan 28 18:55:54 crc kubenswrapper[4721]: E0128 18:55:54.192983 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/cinder-db-sync-spxh4" podUID="b4e7a8f9-bf9f-4093-86d5-b7f5f6d925d7" Jan 28 18:55:54 crc kubenswrapper[4721]: E0128 18:55:54.539090 4721 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified" Jan 28 18:55:54 crc kubenswrapper[4721]: E0128 18:55:54.539308 4721 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:barbican-db-sync,Image:quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified,Command:[/bin/bash],Args:[-c barbican-manage db upgrade],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/barbican/barbican.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qlls6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42403,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42403,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod barbican-db-sync-4rqtv_openstack(5a0d808e-8db2-4d8b-a02e-5f04c991fb44): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 28 18:55:54 crc kubenswrapper[4721]: E0128 18:55:54.540529 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/barbican-db-sync-4rqtv" podUID="5a0d808e-8db2-4d8b-a02e-5f04c991fb44" Jan 28 18:55:54 crc kubenswrapper[4721]: E0128 18:55:54.915129 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified\\\"\"" pod="openstack/cinder-db-sync-spxh4" podUID="b4e7a8f9-bf9f-4093-86d5-b7f5f6d925d7" Jan 28 18:55:54 crc kubenswrapper[4721]: E0128 18:55:54.915491 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"barbican-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified\\\"\"" pod="openstack/barbican-db-sync-4rqtv" podUID="5a0d808e-8db2-4d8b-a02e-5f04c991fb44" Jan 28 18:55:56 crc kubenswrapper[4721]: I0128 18:55:56.797227 4721 scope.go:117] "RemoveContainer" containerID="20209350852a27344ac1486d2db32118ee3a2aa855ac94f2a4973d94261356e0" Jan 28 18:55:57 crc kubenswrapper[4721]: I0128 18:55:57.961831 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58dd9ff6bc-2zk48" event={"ID":"24b34696-1be6-4ee8-8161-0b3ba8119191","Type":"ContainerStarted","Data":"ab3ae24f436a6b7f1f92cc7c1ad7abbfb9c2b71a4bc5c792c127d2cdcfa8665f"} Jan 28 18:55:57 crc kubenswrapper[4721]: I0128 18:55:57.962384 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-58dd9ff6bc-2zk48" Jan 28 18:55:57 crc kubenswrapper[4721]: I0128 18:55:57.994259 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-58dd9ff6bc-2zk48" podStartSLOduration=39.994232138 podStartE2EDuration="39.994232138s" podCreationTimestamp="2026-01-28 18:55:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:55:57.987133014 +0000 UTC m=+1323.712438574" watchObservedRunningTime="2026-01-28 18:55:57.994232138 +0000 UTC m=+1323.719537698" Jan 28 18:55:59 crc kubenswrapper[4721]: E0128 18:55:59.904283 4721 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current" Jan 28 18:55:59 crc kubenswrapper[4721]: E0128 18:55:59.904693 4721 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current" Jan 28 18:55:59 crc kubenswrapper[4721]: E0128 18:55:59.904928 4721 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cloudkitty-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current,Command:[/bin/bash],Args:[-c 
Jan 28 18:55:59 crc kubenswrapper[4721]: E0128 18:55:59.907285 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/cloudkitty-db-sync-qbnjm" podUID="6d4d13db-d2ce-4194-841a-c50b85a2887c"
Jan 28 18:55:59 crc kubenswrapper[4721]: E0128 18:55:59.988821 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-qbnjm" podUID="6d4d13db-d2ce-4194-841a-c50b85a2887c"
Jan 28 18:56:00 crc kubenswrapper[4721]: E0128 18:56:00.195424 4721 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-ceilometer-notification:current-podified"
Jan 28 18:56:00 crc kubenswrapper[4721]: E0128 18:56:00.195689 4721 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-notification-agent,Image:quay.io/podified-antelope-centos9/openstack-ceilometer-notification:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n8bh58bh654h68dh547h5cbh5c9h687h5b6h5c4h57h665h686h689h5hd7h5fchdch78hcdh9bh695h6bh54bh589h5f5hc7h559h556hd8h54dhbbq,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-notification-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7xmnw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 /var/lib/openstack/bin/notificationhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(a423fddb-4a71-416a-8138-63d58b0350fb): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Jan 28 18:56:00 crc kubenswrapper[4721]: I0128 18:56:00.206679 4721 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Jan 28 18:56:00 crc kubenswrapper[4721]: I0128 18:56:00.747113 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-77gjx"]
Jan 28 18:56:01 crc kubenswrapper[4721]: I0128 18:56:01.008465 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-gdk5z" event={"ID":"4ceee9a0-8f8f-46cc-a090-f31b224fe8a9","Type":"ContainerStarted","Data":"2422a3e54852f47cb7dc219e614addb9764635f6263a0de9cc11095c91ee3b2d"}
Jan 28 18:56:01 crc kubenswrapper[4721]: I0128 18:56:01.012925 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"8ac81a5a-78b3-43c6-964f-300e126ba4ca","Type":"ContainerStarted","Data":"5044443ffff781a14da25703d174773649053b2f7ad0a92487e650b47e2fa605"}
Jan 28 18:56:01 crc kubenswrapper[4721]: I0128 18:56:01.015646 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-77gjx" event={"ID":"19551c06-75df-4db7-805a-b7efc5e72018","Type":"ContainerStarted","Data":"c84438f78b97b3d8c2a59cf0fc15f9434dc031f74685e98e1de2d780d53e414a"}
Jan 28 18:56:01 crc kubenswrapper[4721]: I0128 18:56:01.036151 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-gdk5z" podStartSLOduration=10.90827799 podStartE2EDuration="49.036119541s" podCreationTimestamp="2026-01-28 18:55:12 +0000 UTC" firstStartedPulling="2026-01-28 18:55:14.65478065 +0000 UTC m=+1280.380086210" lastFinishedPulling="2026-01-28 18:55:52.782622201 +0000 UTC m=+1318.507927761" observedRunningTime="2026-01-28 18:56:01.025316981 +0000 UTC m=+1326.750622561" watchObservedRunningTime="2026-01-28 18:56:01.036119541 +0000 UTC m=+1326.761425101"
Jan 28 18:56:01 crc kubenswrapper[4721]: I0128 18:56:01.059709 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/prometheus-metric-storage-0" podStartSLOduration=56.059682314 podStartE2EDuration="56.059682314s" podCreationTimestamp="2026-01-28 18:55:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:56:01.053078096 +0000 UTC m=+1326.778383656" watchObservedRunningTime="2026-01-28 18:56:01.059682314 +0000 UTC m=+1326.784987874"
Jan 28 18:56:01 crc kubenswrapper[4721]: I0128 18:56:01.060676 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/prometheus-metric-storage-0"
Jan 28 18:56:01 crc kubenswrapper[4721]: I0128 18:56:01.224833 4721 patch_prober.go:28] interesting pod/machine-config-daemon-76rx2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 28 18:56:01 crc kubenswrapper[4721]: I0128 18:56:01.224912 4721 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-76rx2" podUID="6e3427a4-9a03-4a08-bf7f-7a5e96290ad6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 28 18:56:01 crc kubenswrapper[4721]: I0128 18:56:01.224968 4721 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-76rx2"
Jan 28 18:56:01 crc kubenswrapper[4721]: I0128 18:56:01.226364 4721 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"550b2d16893b3820a2b08c43cf1c1d92f4cff5c63dda2753410f76f8e772711f"} pod="openshift-machine-config-operator/machine-config-daemon-76rx2" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 28 18:56:01 crc kubenswrapper[4721]: I0128 18:56:01.226440 4721 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-76rx2" podUID="6e3427a4-9a03-4a08-bf7f-7a5e96290ad6" containerName="machine-config-daemon" containerID="cri-o://550b2d16893b3820a2b08c43cf1c1d92f4cff5c63dda2753410f76f8e772711f" gracePeriod=600
Jan 28 18:56:02 crc kubenswrapper[4721]: I0128 18:56:02.041155 4721 generic.go:334] "Generic (PLEG): container finished" podID="6e3427a4-9a03-4a08-bf7f-7a5e96290ad6" containerID="550b2d16893b3820a2b08c43cf1c1d92f4cff5c63dda2753410f76f8e772711f" exitCode=0
Jan 28 18:56:02 crc kubenswrapper[4721]: I0128 18:56:02.041600 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-76rx2" event={"ID":"6e3427a4-9a03-4a08-bf7f-7a5e96290ad6","Type":"ContainerDied","Data":"550b2d16893b3820a2b08c43cf1c1d92f4cff5c63dda2753410f76f8e772711f"}
Jan 28 18:56:02 crc kubenswrapper[4721]: I0128 18:56:02.041633 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-76rx2" event={"ID":"6e3427a4-9a03-4a08-bf7f-7a5e96290ad6","Type":"ContainerStarted","Data":"2590a7cb210dbff0e84ad585e4d82733cd80880f3d1297edf670e3d6faf23070"}
Jan 28 18:56:02 crc kubenswrapper[4721]: I0128 18:56:02.041650 4721 scope.go:117] "RemoveContainer" containerID="cf577cfdc0b7c29bec411ba83a64318b81b8ea16d7ec474c8974a1dbea166b1d"
Jan 28 18:56:02 crc kubenswrapper[4721]: I0128 18:56:02.055222 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-77gjx" event={"ID":"19551c06-75df-4db7-805a-b7efc5e72018","Type":"ContainerStarted","Data":"bb47fdfff808823d5320c16e0aa4f39ad1c5fe30bac981c900e0e8bce17f5d24"}
Jan 28 18:56:02 crc kubenswrapper[4721]: I0128 18:56:02.088237 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-77gjx" podStartSLOduration=29.088217131 podStartE2EDuration="29.088217131s" podCreationTimestamp="2026-01-28 18:55:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:56:02.080475357 +0000 UTC m=+1327.805780937" watchObservedRunningTime="2026-01-28 18:56:02.088217131 +0000 UTC m=+1327.813522691"
Jan 28 18:56:03 crc kubenswrapper[4721]: I0128 18:56:03.545108 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-58dd9ff6bc-2zk48"
Jan 28 18:56:03 crc kubenswrapper[4721]: I0128 18:56:03.633229 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-698758b865-xxb2g"]
Jan 28 18:56:03 crc kubenswrapper[4721]: I0128 18:56:03.633568 4721 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-698758b865-xxb2g" podUID="69738eb9-4e39-4dae-9c2e-4f0f0e214938" containerName="dnsmasq-dns" containerID="cri-o://bf4db4b5723a0cce5ab54a03821e9849b73271571bbc2b763dbe4c63e29bbb93" gracePeriod=10
Jan 28 18:56:04 crc kubenswrapper[4721]: I0128 18:56:04.091251 4721 generic.go:334] "Generic (PLEG): container finished" podID="7b2b2524-50e6-4d73-bdb9-8770b642481e" containerID="8ef3f876c4ca4aa8d6bb644b809179eb7dd42addde04ed2b033309027a6a0c2b" exitCode=0
Jan 28 18:56:04 crc kubenswrapper[4721]: I0128 18:56:04.091331 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-j284c" event={"ID":"7b2b2524-50e6-4d73-bdb9-8770b642481e","Type":"ContainerDied","Data":"8ef3f876c4ca4aa8d6bb644b809179eb7dd42addde04ed2b033309027a6a0c2b"}
Jan 28 18:56:04 crc kubenswrapper[4721]: I0128 18:56:04.096428 4721 generic.go:334] "Generic (PLEG): container finished" podID="4ceee9a0-8f8f-46cc-a090-f31b224fe8a9" containerID="2422a3e54852f47cb7dc219e614addb9764635f6263a0de9cc11095c91ee3b2d" exitCode=0
Jan 28 18:56:04 crc kubenswrapper[4721]: I0128 18:56:04.096509 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-gdk5z" event={"ID":"4ceee9a0-8f8f-46cc-a090-f31b224fe8a9","Type":"ContainerDied","Data":"2422a3e54852f47cb7dc219e614addb9764635f6263a0de9cc11095c91ee3b2d"}
event={"ID":"4ceee9a0-8f8f-46cc-a090-f31b224fe8a9","Type":"ContainerDied","Data":"2422a3e54852f47cb7dc219e614addb9764635f6263a0de9cc11095c91ee3b2d"} Jan 28 18:56:04 crc kubenswrapper[4721]: I0128 18:56:04.098864 4721 generic.go:334] "Generic (PLEG): container finished" podID="69738eb9-4e39-4dae-9c2e-4f0f0e214938" containerID="bf4db4b5723a0cce5ab54a03821e9849b73271571bbc2b763dbe4c63e29bbb93" exitCode=0 Jan 28 18:56:04 crc kubenswrapper[4721]: I0128 18:56:04.098915 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-xxb2g" event={"ID":"69738eb9-4e39-4dae-9c2e-4f0f0e214938","Type":"ContainerDied","Data":"bf4db4b5723a0cce5ab54a03821e9849b73271571bbc2b763dbe4c63e29bbb93"} Jan 28 18:56:05 crc kubenswrapper[4721]: E0128 18:56:05.421769 4721 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod19551c06_75df_4db7_805a_b7efc5e72018.slice/crio-bb47fdfff808823d5320c16e0aa4f39ad1c5fe30bac981c900e0e8bce17f5d24.scope\": RecentStats: unable to find data in memory cache]" Jan 28 18:56:06 crc kubenswrapper[4721]: I0128 18:56:06.060660 4721 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/prometheus-metric-storage-0" Jan 28 18:56:06 crc kubenswrapper[4721]: I0128 18:56:06.071631 4721 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/prometheus-metric-storage-0" Jan 28 18:56:06 crc kubenswrapper[4721]: I0128 18:56:06.154364 4721 generic.go:334] "Generic (PLEG): container finished" podID="19551c06-75df-4db7-805a-b7efc5e72018" containerID="bb47fdfff808823d5320c16e0aa4f39ad1c5fe30bac981c900e0e8bce17f5d24" exitCode=0 Jan 28 18:56:06 crc kubenswrapper[4721]: I0128 18:56:06.154452 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-77gjx" event={"ID":"19551c06-75df-4db7-805a-b7efc5e72018","Type":"ContainerDied","Data":"bb47fdfff808823d5320c16e0aa4f39ad1c5fe30bac981c900e0e8bce17f5d24"} Jan 28 18:56:06 crc kubenswrapper[4721]: I0128 18:56:06.157637 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-j284c" event={"ID":"7b2b2524-50e6-4d73-bdb9-8770b642481e","Type":"ContainerDied","Data":"e95ccae744258def981dbab17f77366df750160a9e686762c1f8fe4eb373774c"} Jan 28 18:56:06 crc kubenswrapper[4721]: I0128 18:56:06.157700 4721 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e95ccae744258def981dbab17f77366df750160a9e686762c1f8fe4eb373774c" Jan 28 18:56:06 crc kubenswrapper[4721]: I0128 18:56:06.160975 4721 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-gdk5z" Jan 28 18:56:06 crc kubenswrapper[4721]: I0128 18:56:06.166334 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-gdk5z" event={"ID":"4ceee9a0-8f8f-46cc-a090-f31b224fe8a9","Type":"ContainerDied","Data":"0a663b493fae0ba462d64aee50e91118db9f01bc55e347f17276632bddf90ef2"} Jan 28 18:56:06 crc kubenswrapper[4721]: I0128 18:56:06.166384 4721 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0a663b493fae0ba462d64aee50e91118db9f01bc55e347f17276632bddf90ef2" Jan 28 18:56:06 crc kubenswrapper[4721]: I0128 18:56:06.173078 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/prometheus-metric-storage-0" Jan 28 18:56:06 crc kubenswrapper[4721]: I0128 18:56:06.175203 4721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-j284c" Jan 28 18:56:06 crc kubenswrapper[4721]: I0128 18:56:06.323030 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7b2b2524-50e6-4d73-bdb9-8770b642481e-combined-ca-bundle\") pod \"7b2b2524-50e6-4d73-bdb9-8770b642481e\" (UID: \"7b2b2524-50e6-4d73-bdb9-8770b642481e\") " Jan 28 18:56:06 crc kubenswrapper[4721]: I0128 18:56:06.323128 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/7b2b2524-50e6-4d73-bdb9-8770b642481e-db-sync-config-data\") pod \"7b2b2524-50e6-4d73-bdb9-8770b642481e\" (UID: \"7b2b2524-50e6-4d73-bdb9-8770b642481e\") " Jan 28 18:56:06 crc kubenswrapper[4721]: I0128 18:56:06.323221 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4ceee9a0-8f8f-46cc-a090-f31b224fe8a9-scripts\") pod \"4ceee9a0-8f8f-46cc-a090-f31b224fe8a9\" (UID: \"4ceee9a0-8f8f-46cc-a090-f31b224fe8a9\") " Jan 28 18:56:06 crc kubenswrapper[4721]: I0128 18:56:06.323335 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9h95x\" (UniqueName: \"kubernetes.io/projected/4ceee9a0-8f8f-46cc-a090-f31b224fe8a9-kube-api-access-9h95x\") pod \"4ceee9a0-8f8f-46cc-a090-f31b224fe8a9\" (UID: \"4ceee9a0-8f8f-46cc-a090-f31b224fe8a9\") " Jan 28 18:56:06 crc kubenswrapper[4721]: I0128 18:56:06.323375 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4ceee9a0-8f8f-46cc-a090-f31b224fe8a9-config-data\") pod \"4ceee9a0-8f8f-46cc-a090-f31b224fe8a9\" (UID: \"4ceee9a0-8f8f-46cc-a090-f31b224fe8a9\") " Jan 28 18:56:06 crc kubenswrapper[4721]: I0128 18:56:06.323481 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4ceee9a0-8f8f-46cc-a090-f31b224fe8a9-logs\") pod \"4ceee9a0-8f8f-46cc-a090-f31b224fe8a9\" (UID: \"4ceee9a0-8f8f-46cc-a090-f31b224fe8a9\") " Jan 28 18:56:06 crc kubenswrapper[4721]: I0128 18:56:06.323556 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4ceee9a0-8f8f-46cc-a090-f31b224fe8a9-combined-ca-bundle\") pod \"4ceee9a0-8f8f-46cc-a090-f31b224fe8a9\" (UID: \"4ceee9a0-8f8f-46cc-a090-f31b224fe8a9\") " Jan 28 18:56:06 crc kubenswrapper[4721]: I0128 18:56:06.323831 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7b2b2524-50e6-4d73-bdb9-8770b642481e-config-data\") pod \"7b2b2524-50e6-4d73-bdb9-8770b642481e\" (UID: \"7b2b2524-50e6-4d73-bdb9-8770b642481e\") " Jan 28 18:56:06 crc kubenswrapper[4721]: I0128 18:56:06.323906 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4ceee9a0-8f8f-46cc-a090-f31b224fe8a9-logs" (OuterVolumeSpecName: "logs") pod "4ceee9a0-8f8f-46cc-a090-f31b224fe8a9" (UID: "4ceee9a0-8f8f-46cc-a090-f31b224fe8a9"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 18:56:06 crc kubenswrapper[4721]: I0128 18:56:06.323961 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mxl8v\" (UniqueName: \"kubernetes.io/projected/7b2b2524-50e6-4d73-bdb9-8770b642481e-kube-api-access-mxl8v\") pod \"7b2b2524-50e6-4d73-bdb9-8770b642481e\" (UID: \"7b2b2524-50e6-4d73-bdb9-8770b642481e\") " Jan 28 18:56:06 crc kubenswrapper[4721]: I0128 18:56:06.325568 4721 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4ceee9a0-8f8f-46cc-a090-f31b224fe8a9-logs\") on node \"crc\" DevicePath \"\"" Jan 28 18:56:06 crc kubenswrapper[4721]: I0128 18:56:06.330374 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4ceee9a0-8f8f-46cc-a090-f31b224fe8a9-scripts" (OuterVolumeSpecName: "scripts") pod "4ceee9a0-8f8f-46cc-a090-f31b224fe8a9" (UID: "4ceee9a0-8f8f-46cc-a090-f31b224fe8a9"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:56:06 crc kubenswrapper[4721]: I0128 18:56:06.331618 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7b2b2524-50e6-4d73-bdb9-8770b642481e-kube-api-access-mxl8v" (OuterVolumeSpecName: "kube-api-access-mxl8v") pod "7b2b2524-50e6-4d73-bdb9-8770b642481e" (UID: "7b2b2524-50e6-4d73-bdb9-8770b642481e"). InnerVolumeSpecName "kube-api-access-mxl8v". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:56:06 crc kubenswrapper[4721]: I0128 18:56:06.335546 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4ceee9a0-8f8f-46cc-a090-f31b224fe8a9-kube-api-access-9h95x" (OuterVolumeSpecName: "kube-api-access-9h95x") pod "4ceee9a0-8f8f-46cc-a090-f31b224fe8a9" (UID: "4ceee9a0-8f8f-46cc-a090-f31b224fe8a9"). InnerVolumeSpecName "kube-api-access-9h95x". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:56:06 crc kubenswrapper[4721]: I0128 18:56:06.356599 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7b2b2524-50e6-4d73-bdb9-8770b642481e-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "7b2b2524-50e6-4d73-bdb9-8770b642481e" (UID: "7b2b2524-50e6-4d73-bdb9-8770b642481e"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:56:06 crc kubenswrapper[4721]: I0128 18:56:06.370994 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4ceee9a0-8f8f-46cc-a090-f31b224fe8a9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4ceee9a0-8f8f-46cc-a090-f31b224fe8a9" (UID: "4ceee9a0-8f8f-46cc-a090-f31b224fe8a9"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:56:06 crc kubenswrapper[4721]: I0128 18:56:06.389722 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4ceee9a0-8f8f-46cc-a090-f31b224fe8a9-config-data" (OuterVolumeSpecName: "config-data") pod "4ceee9a0-8f8f-46cc-a090-f31b224fe8a9" (UID: "4ceee9a0-8f8f-46cc-a090-f31b224fe8a9"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:56:06 crc kubenswrapper[4721]: I0128 18:56:06.433400 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7b2b2524-50e6-4d73-bdb9-8770b642481e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7b2b2524-50e6-4d73-bdb9-8770b642481e" (UID: "7b2b2524-50e6-4d73-bdb9-8770b642481e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:56:06 crc kubenswrapper[4721]: I0128 18:56:06.436014 4721 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4ceee9a0-8f8f-46cc-a090-f31b224fe8a9-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 18:56:06 crc kubenswrapper[4721]: I0128 18:56:06.436040 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9h95x\" (UniqueName: \"kubernetes.io/projected/4ceee9a0-8f8f-46cc-a090-f31b224fe8a9-kube-api-access-9h95x\") on node \"crc\" DevicePath \"\"" Jan 28 18:56:06 crc kubenswrapper[4721]: I0128 18:56:06.436056 4721 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4ceee9a0-8f8f-46cc-a090-f31b224fe8a9-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 18:56:06 crc kubenswrapper[4721]: I0128 18:56:06.436068 4721 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4ceee9a0-8f8f-46cc-a090-f31b224fe8a9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 18:56:06 crc kubenswrapper[4721]: I0128 18:56:06.436079 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mxl8v\" (UniqueName: \"kubernetes.io/projected/7b2b2524-50e6-4d73-bdb9-8770b642481e-kube-api-access-mxl8v\") on node \"crc\" DevicePath \"\"" Jan 28 18:56:06 crc kubenswrapper[4721]: I0128 18:56:06.436089 4721 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7b2b2524-50e6-4d73-bdb9-8770b642481e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 18:56:06 crc kubenswrapper[4721]: I0128 18:56:06.436099 4721 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/7b2b2524-50e6-4d73-bdb9-8770b642481e-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 18:56:06 crc kubenswrapper[4721]: I0128 18:56:06.456423 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7b2b2524-50e6-4d73-bdb9-8770b642481e-config-data" (OuterVolumeSpecName: "config-data") pod "7b2b2524-50e6-4d73-bdb9-8770b642481e" (UID: "7b2b2524-50e6-4d73-bdb9-8770b642481e"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:56:06 crc kubenswrapper[4721]: I0128 18:56:06.538250 4721 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7b2b2524-50e6-4d73-bdb9-8770b642481e-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 18:56:06 crc kubenswrapper[4721]: I0128 18:56:06.544152 4721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-xxb2g" Jan 28 18:56:06 crc kubenswrapper[4721]: I0128 18:56:06.639381 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs857\" (UniqueName: \"kubernetes.io/projected/69738eb9-4e39-4dae-9c2e-4f0f0e214938-kube-api-access-qs857\") pod \"69738eb9-4e39-4dae-9c2e-4f0f0e214938\" (UID: \"69738eb9-4e39-4dae-9c2e-4f0f0e214938\") " Jan 28 18:56:06 crc kubenswrapper[4721]: I0128 18:56:06.640193 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/69738eb9-4e39-4dae-9c2e-4f0f0e214938-config\") pod \"69738eb9-4e39-4dae-9c2e-4f0f0e214938\" (UID: \"69738eb9-4e39-4dae-9c2e-4f0f0e214938\") " Jan 28 18:56:06 crc kubenswrapper[4721]: I0128 18:56:06.640328 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/69738eb9-4e39-4dae-9c2e-4f0f0e214938-dns-svc\") pod \"69738eb9-4e39-4dae-9c2e-4f0f0e214938\" (UID: \"69738eb9-4e39-4dae-9c2e-4f0f0e214938\") " Jan 28 18:56:06 crc kubenswrapper[4721]: I0128 18:56:06.640509 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/69738eb9-4e39-4dae-9c2e-4f0f0e214938-ovsdbserver-sb\") pod \"69738eb9-4e39-4dae-9c2e-4f0f0e214938\" (UID: \"69738eb9-4e39-4dae-9c2e-4f0f0e214938\") " Jan 28 18:56:06 crc kubenswrapper[4721]: I0128 18:56:06.640626 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/69738eb9-4e39-4dae-9c2e-4f0f0e214938-ovsdbserver-nb\") pod \"69738eb9-4e39-4dae-9c2e-4f0f0e214938\" (UID: \"69738eb9-4e39-4dae-9c2e-4f0f0e214938\") " Jan 28 18:56:06 crc kubenswrapper[4721]: I0128 18:56:06.644532 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/69738eb9-4e39-4dae-9c2e-4f0f0e214938-kube-api-access-qs857" (OuterVolumeSpecName: "kube-api-access-qs857") pod "69738eb9-4e39-4dae-9c2e-4f0f0e214938" (UID: "69738eb9-4e39-4dae-9c2e-4f0f0e214938"). InnerVolumeSpecName "kube-api-access-qs857". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:56:06 crc kubenswrapper[4721]: I0128 18:56:06.646955 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs857\" (UniqueName: \"kubernetes.io/projected/69738eb9-4e39-4dae-9c2e-4f0f0e214938-kube-api-access-qs857\") on node \"crc\" DevicePath \"\"" Jan 28 18:56:06 crc kubenswrapper[4721]: I0128 18:56:06.722295 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/69738eb9-4e39-4dae-9c2e-4f0f0e214938-config" (OuterVolumeSpecName: "config") pod "69738eb9-4e39-4dae-9c2e-4f0f0e214938" (UID: "69738eb9-4e39-4dae-9c2e-4f0f0e214938"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:56:06 crc kubenswrapper[4721]: I0128 18:56:06.732799 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/69738eb9-4e39-4dae-9c2e-4f0f0e214938-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "69738eb9-4e39-4dae-9c2e-4f0f0e214938" (UID: "69738eb9-4e39-4dae-9c2e-4f0f0e214938"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:56:06 crc kubenswrapper[4721]: I0128 18:56:06.735540 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/69738eb9-4e39-4dae-9c2e-4f0f0e214938-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "69738eb9-4e39-4dae-9c2e-4f0f0e214938" (UID: "69738eb9-4e39-4dae-9c2e-4f0f0e214938"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:56:06 crc kubenswrapper[4721]: I0128 18:56:06.748972 4721 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/69738eb9-4e39-4dae-9c2e-4f0f0e214938-config\") on node \"crc\" DevicePath \"\"" Jan 28 18:56:06 crc kubenswrapper[4721]: I0128 18:56:06.749007 4721 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/69738eb9-4e39-4dae-9c2e-4f0f0e214938-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 28 18:56:06 crc kubenswrapper[4721]: I0128 18:56:06.749018 4721 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/69738eb9-4e39-4dae-9c2e-4f0f0e214938-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 28 18:56:06 crc kubenswrapper[4721]: I0128 18:56:06.754609 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/69738eb9-4e39-4dae-9c2e-4f0f0e214938-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "69738eb9-4e39-4dae-9c2e-4f0f0e214938" (UID: "69738eb9-4e39-4dae-9c2e-4f0f0e214938"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:56:06 crc kubenswrapper[4721]: I0128 18:56:06.850898 4721 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/69738eb9-4e39-4dae-9c2e-4f0f0e214938-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 28 18:56:07 crc kubenswrapper[4721]: I0128 18:56:07.175859 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a423fddb-4a71-416a-8138-63d58b0350fb","Type":"ContainerStarted","Data":"e04e9069ff639db76bbbe99569951621e371d8bea9eb9e435b2f18417583544b"} Jan 28 18:56:07 crc kubenswrapper[4721]: I0128 18:56:07.177761 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-xxb2g" event={"ID":"69738eb9-4e39-4dae-9c2e-4f0f0e214938","Type":"ContainerDied","Data":"315de4d90767ff47678ac5734a8c6e4bbd69487ffbda1a8c60efadb1a15ba766"} Jan 28 18:56:07 crc kubenswrapper[4721]: I0128 18:56:07.177804 4721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-j284c" Jan 28 18:56:07 crc kubenswrapper[4721]: I0128 18:56:07.177808 4721 scope.go:117] "RemoveContainer" containerID="bf4db4b5723a0cce5ab54a03821e9849b73271571bbc2b763dbe4c63e29bbb93" Jan 28 18:56:07 crc kubenswrapper[4721]: I0128 18:56:07.177811 4721 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-gdk5z" Jan 28 18:56:07 crc kubenswrapper[4721]: I0128 18:56:07.177774 4721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-xxb2g" Jan 28 18:56:07 crc kubenswrapper[4721]: I0128 18:56:07.231077 4721 scope.go:117] "RemoveContainer" containerID="fbe2882cce713417850b2a070e822a818e0dad47466e5ed8f599f66fa217dacb" Jan 28 18:56:07 crc kubenswrapper[4721]: I0128 18:56:07.266238 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-698758b865-xxb2g"] Jan 28 18:56:07 crc kubenswrapper[4721]: I0128 18:56:07.274794 4721 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-698758b865-xxb2g"] Jan 28 18:56:07 crc kubenswrapper[4721]: I0128 18:56:07.389771 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-649bf84c5b-p55hh"] Jan 28 18:56:07 crc kubenswrapper[4721]: E0128 18:56:07.390420 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4ceee9a0-8f8f-46cc-a090-f31b224fe8a9" containerName="placement-db-sync" Jan 28 18:56:07 crc kubenswrapper[4721]: I0128 18:56:07.390447 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ceee9a0-8f8f-46cc-a090-f31b224fe8a9" containerName="placement-db-sync" Jan 28 18:56:07 crc kubenswrapper[4721]: E0128 18:56:07.390479 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7b2b2524-50e6-4d73-bdb9-8770b642481e" containerName="glance-db-sync" Jan 28 18:56:07 crc kubenswrapper[4721]: I0128 18:56:07.390488 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="7b2b2524-50e6-4d73-bdb9-8770b642481e" containerName="glance-db-sync" Jan 28 18:56:07 crc kubenswrapper[4721]: E0128 18:56:07.390515 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="69738eb9-4e39-4dae-9c2e-4f0f0e214938" containerName="init" Jan 28 18:56:07 crc kubenswrapper[4721]: I0128 18:56:07.390536 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="69738eb9-4e39-4dae-9c2e-4f0f0e214938" containerName="init" Jan 28 18:56:07 crc kubenswrapper[4721]: E0128 18:56:07.390552 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="69738eb9-4e39-4dae-9c2e-4f0f0e214938" containerName="dnsmasq-dns" Jan 28 18:56:07 crc kubenswrapper[4721]: I0128 18:56:07.390560 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="69738eb9-4e39-4dae-9c2e-4f0f0e214938" containerName="dnsmasq-dns" Jan 28 18:56:07 crc kubenswrapper[4721]: I0128 18:56:07.390813 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="69738eb9-4e39-4dae-9c2e-4f0f0e214938" containerName="dnsmasq-dns" Jan 28 18:56:07 crc kubenswrapper[4721]: I0128 18:56:07.390835 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="4ceee9a0-8f8f-46cc-a090-f31b224fe8a9" containerName="placement-db-sync" Jan 28 18:56:07 crc kubenswrapper[4721]: I0128 18:56:07.390847 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="7b2b2524-50e6-4d73-bdb9-8770b642481e" containerName="glance-db-sync" Jan 28 18:56:07 crc kubenswrapper[4721]: I0128 18:56:07.392038 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-649bf84c5b-p55hh" Jan 28 18:56:07 crc kubenswrapper[4721]: I0128 18:56:07.395014 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc" Jan 28 18:56:07 crc kubenswrapper[4721]: I0128 18:56:07.395533 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Jan 28 18:56:07 crc kubenswrapper[4721]: I0128 18:56:07.395892 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Jan 28 18:56:07 crc kubenswrapper[4721]: I0128 18:56:07.395975 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-rv5sq" Jan 28 18:56:07 crc kubenswrapper[4721]: I0128 18:56:07.395899 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-internal-svc" Jan 28 18:56:07 crc kubenswrapper[4721]: I0128 18:56:07.401183 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-649bf84c5b-p55hh"] Jan 28 18:56:07 crc kubenswrapper[4721]: I0128 18:56:07.483491 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/65d3ed26-a43e-491f-8170-7d65eb15bd4f-scripts\") pod \"placement-649bf84c5b-p55hh\" (UID: \"65d3ed26-a43e-491f-8170-7d65eb15bd4f\") " pod="openstack/placement-649bf84c5b-p55hh" Jan 28 18:56:07 crc kubenswrapper[4721]: I0128 18:56:07.483675 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/65d3ed26-a43e-491f-8170-7d65eb15bd4f-config-data\") pod \"placement-649bf84c5b-p55hh\" (UID: \"65d3ed26-a43e-491f-8170-7d65eb15bd4f\") " pod="openstack/placement-649bf84c5b-p55hh" Jan 28 18:56:07 crc kubenswrapper[4721]: I0128 18:56:07.483762 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2tbt6\" (UniqueName: \"kubernetes.io/projected/65d3ed26-a43e-491f-8170-7d65eb15bd4f-kube-api-access-2tbt6\") pod \"placement-649bf84c5b-p55hh\" (UID: \"65d3ed26-a43e-491f-8170-7d65eb15bd4f\") " pod="openstack/placement-649bf84c5b-p55hh" Jan 28 18:56:07 crc kubenswrapper[4721]: I0128 18:56:07.483845 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/65d3ed26-a43e-491f-8170-7d65eb15bd4f-logs\") pod \"placement-649bf84c5b-p55hh\" (UID: \"65d3ed26-a43e-491f-8170-7d65eb15bd4f\") " pod="openstack/placement-649bf84c5b-p55hh" Jan 28 18:56:07 crc kubenswrapper[4721]: I0128 18:56:07.483927 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/65d3ed26-a43e-491f-8170-7d65eb15bd4f-internal-tls-certs\") pod \"placement-649bf84c5b-p55hh\" (UID: \"65d3ed26-a43e-491f-8170-7d65eb15bd4f\") " pod="openstack/placement-649bf84c5b-p55hh" Jan 28 18:56:07 crc kubenswrapper[4721]: I0128 18:56:07.484010 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/65d3ed26-a43e-491f-8170-7d65eb15bd4f-public-tls-certs\") pod \"placement-649bf84c5b-p55hh\" (UID: \"65d3ed26-a43e-491f-8170-7d65eb15bd4f\") " pod="openstack/placement-649bf84c5b-p55hh" Jan 28 18:56:07 crc kubenswrapper[4721]: I0128 18:56:07.484032 4721 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/65d3ed26-a43e-491f-8170-7d65eb15bd4f-combined-ca-bundle\") pod \"placement-649bf84c5b-p55hh\" (UID: \"65d3ed26-a43e-491f-8170-7d65eb15bd4f\") " pod="openstack/placement-649bf84c5b-p55hh" Jan 28 18:56:07 crc kubenswrapper[4721]: I0128 18:56:07.566827 4721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="69738eb9-4e39-4dae-9c2e-4f0f0e214938" path="/var/lib/kubelet/pods/69738eb9-4e39-4dae-9c2e-4f0f0e214938/volumes" Jan 28 18:56:07 crc kubenswrapper[4721]: I0128 18:56:07.586060 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/65d3ed26-a43e-491f-8170-7d65eb15bd4f-config-data\") pod \"placement-649bf84c5b-p55hh\" (UID: \"65d3ed26-a43e-491f-8170-7d65eb15bd4f\") " pod="openstack/placement-649bf84c5b-p55hh" Jan 28 18:56:07 crc kubenswrapper[4721]: I0128 18:56:07.586136 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2tbt6\" (UniqueName: \"kubernetes.io/projected/65d3ed26-a43e-491f-8170-7d65eb15bd4f-kube-api-access-2tbt6\") pod \"placement-649bf84c5b-p55hh\" (UID: \"65d3ed26-a43e-491f-8170-7d65eb15bd4f\") " pod="openstack/placement-649bf84c5b-p55hh" Jan 28 18:56:07 crc kubenswrapper[4721]: I0128 18:56:07.586194 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/65d3ed26-a43e-491f-8170-7d65eb15bd4f-logs\") pod \"placement-649bf84c5b-p55hh\" (UID: \"65d3ed26-a43e-491f-8170-7d65eb15bd4f\") " pod="openstack/placement-649bf84c5b-p55hh" Jan 28 18:56:07 crc kubenswrapper[4721]: I0128 18:56:07.586240 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/65d3ed26-a43e-491f-8170-7d65eb15bd4f-internal-tls-certs\") pod \"placement-649bf84c5b-p55hh\" (UID: \"65d3ed26-a43e-491f-8170-7d65eb15bd4f\") " pod="openstack/placement-649bf84c5b-p55hh" Jan 28 18:56:07 crc kubenswrapper[4721]: I0128 18:56:07.586280 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/65d3ed26-a43e-491f-8170-7d65eb15bd4f-public-tls-certs\") pod \"placement-649bf84c5b-p55hh\" (UID: \"65d3ed26-a43e-491f-8170-7d65eb15bd4f\") " pod="openstack/placement-649bf84c5b-p55hh" Jan 28 18:56:07 crc kubenswrapper[4721]: I0128 18:56:07.586297 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/65d3ed26-a43e-491f-8170-7d65eb15bd4f-combined-ca-bundle\") pod \"placement-649bf84c5b-p55hh\" (UID: \"65d3ed26-a43e-491f-8170-7d65eb15bd4f\") " pod="openstack/placement-649bf84c5b-p55hh" Jan 28 18:56:07 crc kubenswrapper[4721]: I0128 18:56:07.586347 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/65d3ed26-a43e-491f-8170-7d65eb15bd4f-scripts\") pod \"placement-649bf84c5b-p55hh\" (UID: \"65d3ed26-a43e-491f-8170-7d65eb15bd4f\") " pod="openstack/placement-649bf84c5b-p55hh" Jan 28 18:56:07 crc kubenswrapper[4721]: I0128 18:56:07.587397 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/65d3ed26-a43e-491f-8170-7d65eb15bd4f-logs\") pod \"placement-649bf84c5b-p55hh\" (UID: 
\"65d3ed26-a43e-491f-8170-7d65eb15bd4f\") " pod="openstack/placement-649bf84c5b-p55hh" Jan 28 18:56:07 crc kubenswrapper[4721]: I0128 18:56:07.594044 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/65d3ed26-a43e-491f-8170-7d65eb15bd4f-scripts\") pod \"placement-649bf84c5b-p55hh\" (UID: \"65d3ed26-a43e-491f-8170-7d65eb15bd4f\") " pod="openstack/placement-649bf84c5b-p55hh" Jan 28 18:56:07 crc kubenswrapper[4721]: I0128 18:56:07.597560 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/65d3ed26-a43e-491f-8170-7d65eb15bd4f-config-data\") pod \"placement-649bf84c5b-p55hh\" (UID: \"65d3ed26-a43e-491f-8170-7d65eb15bd4f\") " pod="openstack/placement-649bf84c5b-p55hh" Jan 28 18:56:07 crc kubenswrapper[4721]: I0128 18:56:07.597969 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/65d3ed26-a43e-491f-8170-7d65eb15bd4f-public-tls-certs\") pod \"placement-649bf84c5b-p55hh\" (UID: \"65d3ed26-a43e-491f-8170-7d65eb15bd4f\") " pod="openstack/placement-649bf84c5b-p55hh" Jan 28 18:56:07 crc kubenswrapper[4721]: I0128 18:56:07.611760 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/65d3ed26-a43e-491f-8170-7d65eb15bd4f-internal-tls-certs\") pod \"placement-649bf84c5b-p55hh\" (UID: \"65d3ed26-a43e-491f-8170-7d65eb15bd4f\") " pod="openstack/placement-649bf84c5b-p55hh" Jan 28 18:56:07 crc kubenswrapper[4721]: I0128 18:56:07.612795 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/65d3ed26-a43e-491f-8170-7d65eb15bd4f-combined-ca-bundle\") pod \"placement-649bf84c5b-p55hh\" (UID: \"65d3ed26-a43e-491f-8170-7d65eb15bd4f\") " pod="openstack/placement-649bf84c5b-p55hh" Jan 28 18:56:07 crc kubenswrapper[4721]: I0128 18:56:07.620080 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2tbt6\" (UniqueName: \"kubernetes.io/projected/65d3ed26-a43e-491f-8170-7d65eb15bd4f-kube-api-access-2tbt6\") pod \"placement-649bf84c5b-p55hh\" (UID: \"65d3ed26-a43e-491f-8170-7d65eb15bd4f\") " pod="openstack/placement-649bf84c5b-p55hh" Jan 28 18:56:07 crc kubenswrapper[4721]: I0128 18:56:07.711301 4721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-77gjx" Jan 28 18:56:07 crc kubenswrapper[4721]: I0128 18:56:07.717746 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-649bf84c5b-p55hh" Jan 28 18:56:07 crc kubenswrapper[4721]: I0128 18:56:07.784799 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-v4g9c"] Jan 28 18:56:07 crc kubenswrapper[4721]: E0128 18:56:07.785448 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="19551c06-75df-4db7-805a-b7efc5e72018" containerName="keystone-bootstrap" Jan 28 18:56:07 crc kubenswrapper[4721]: I0128 18:56:07.785467 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="19551c06-75df-4db7-805a-b7efc5e72018" containerName="keystone-bootstrap" Jan 28 18:56:07 crc kubenswrapper[4721]: I0128 18:56:07.785738 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="19551c06-75df-4db7-805a-b7efc5e72018" containerName="keystone-bootstrap" Jan 28 18:56:07 crc kubenswrapper[4721]: I0128 18:56:07.787593 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-785d8bcb8c-v4g9c" Jan 28 18:56:07 crc kubenswrapper[4721]: I0128 18:56:07.796694 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-v4g9c"] Jan 28 18:56:07 crc kubenswrapper[4721]: I0128 18:56:07.797465 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qqzxb\" (UniqueName: \"kubernetes.io/projected/19551c06-75df-4db7-805a-b7efc5e72018-kube-api-access-qqzxb\") pod \"19551c06-75df-4db7-805a-b7efc5e72018\" (UID: \"19551c06-75df-4db7-805a-b7efc5e72018\") " Jan 28 18:56:07 crc kubenswrapper[4721]: I0128 18:56:07.797643 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/19551c06-75df-4db7-805a-b7efc5e72018-fernet-keys\") pod \"19551c06-75df-4db7-805a-b7efc5e72018\" (UID: \"19551c06-75df-4db7-805a-b7efc5e72018\") " Jan 28 18:56:07 crc kubenswrapper[4721]: I0128 18:56:07.797715 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/19551c06-75df-4db7-805a-b7efc5e72018-config-data\") pod \"19551c06-75df-4db7-805a-b7efc5e72018\" (UID: \"19551c06-75df-4db7-805a-b7efc5e72018\") " Jan 28 18:56:07 crc kubenswrapper[4721]: I0128 18:56:07.797853 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/19551c06-75df-4db7-805a-b7efc5e72018-scripts\") pod \"19551c06-75df-4db7-805a-b7efc5e72018\" (UID: \"19551c06-75df-4db7-805a-b7efc5e72018\") " Jan 28 18:56:07 crc kubenswrapper[4721]: I0128 18:56:07.797876 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/19551c06-75df-4db7-805a-b7efc5e72018-combined-ca-bundle\") pod \"19551c06-75df-4db7-805a-b7efc5e72018\" (UID: \"19551c06-75df-4db7-805a-b7efc5e72018\") " Jan 28 18:56:07 crc kubenswrapper[4721]: I0128 18:56:07.797911 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/19551c06-75df-4db7-805a-b7efc5e72018-credential-keys\") pod \"19551c06-75df-4db7-805a-b7efc5e72018\" (UID: \"19551c06-75df-4db7-805a-b7efc5e72018\") " Jan 28 18:56:07 crc kubenswrapper[4721]: I0128 18:56:07.805065 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/19551c06-75df-4db7-805a-b7efc5e72018-credential-keys" (OuterVolumeSpecName: "credential-keys") pod 
"19551c06-75df-4db7-805a-b7efc5e72018" (UID: "19551c06-75df-4db7-805a-b7efc5e72018"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:56:07 crc kubenswrapper[4721]: I0128 18:56:07.813078 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/19551c06-75df-4db7-805a-b7efc5e72018-scripts" (OuterVolumeSpecName: "scripts") pod "19551c06-75df-4db7-805a-b7efc5e72018" (UID: "19551c06-75df-4db7-805a-b7efc5e72018"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:56:07 crc kubenswrapper[4721]: I0128 18:56:07.819523 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/19551c06-75df-4db7-805a-b7efc5e72018-kube-api-access-qqzxb" (OuterVolumeSpecName: "kube-api-access-qqzxb") pod "19551c06-75df-4db7-805a-b7efc5e72018" (UID: "19551c06-75df-4db7-805a-b7efc5e72018"). InnerVolumeSpecName "kube-api-access-qqzxb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:56:07 crc kubenswrapper[4721]: I0128 18:56:07.824285 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/19551c06-75df-4db7-805a-b7efc5e72018-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "19551c06-75df-4db7-805a-b7efc5e72018" (UID: "19551c06-75df-4db7-805a-b7efc5e72018"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:56:07 crc kubenswrapper[4721]: I0128 18:56:07.845345 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/19551c06-75df-4db7-805a-b7efc5e72018-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "19551c06-75df-4db7-805a-b7efc5e72018" (UID: "19551c06-75df-4db7-805a-b7efc5e72018"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:56:07 crc kubenswrapper[4721]: I0128 18:56:07.859531 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/19551c06-75df-4db7-805a-b7efc5e72018-config-data" (OuterVolumeSpecName: "config-data") pod "19551c06-75df-4db7-805a-b7efc5e72018" (UID: "19551c06-75df-4db7-805a-b7efc5e72018"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:56:07 crc kubenswrapper[4721]: I0128 18:56:07.903259 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/adfa1a56-6e36-42b6-86e6-1cf51f6e49cb-ovsdbserver-sb\") pod \"dnsmasq-dns-785d8bcb8c-v4g9c\" (UID: \"adfa1a56-6e36-42b6-86e6-1cf51f6e49cb\") " pod="openstack/dnsmasq-dns-785d8bcb8c-v4g9c" Jan 28 18:56:07 crc kubenswrapper[4721]: I0128 18:56:07.903345 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/adfa1a56-6e36-42b6-86e6-1cf51f6e49cb-dns-svc\") pod \"dnsmasq-dns-785d8bcb8c-v4g9c\" (UID: \"adfa1a56-6e36-42b6-86e6-1cf51f6e49cb\") " pod="openstack/dnsmasq-dns-785d8bcb8c-v4g9c" Jan 28 18:56:07 crc kubenswrapper[4721]: I0128 18:56:07.903402 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/adfa1a56-6e36-42b6-86e6-1cf51f6e49cb-dns-swift-storage-0\") pod \"dnsmasq-dns-785d8bcb8c-v4g9c\" (UID: \"adfa1a56-6e36-42b6-86e6-1cf51f6e49cb\") " pod="openstack/dnsmasq-dns-785d8bcb8c-v4g9c" Jan 28 18:56:07 crc kubenswrapper[4721]: I0128 18:56:07.903442 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kwmdj\" (UniqueName: \"kubernetes.io/projected/adfa1a56-6e36-42b6-86e6-1cf51f6e49cb-kube-api-access-kwmdj\") pod \"dnsmasq-dns-785d8bcb8c-v4g9c\" (UID: \"adfa1a56-6e36-42b6-86e6-1cf51f6e49cb\") " pod="openstack/dnsmasq-dns-785d8bcb8c-v4g9c" Jan 28 18:56:07 crc kubenswrapper[4721]: I0128 18:56:07.903506 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/adfa1a56-6e36-42b6-86e6-1cf51f6e49cb-ovsdbserver-nb\") pod \"dnsmasq-dns-785d8bcb8c-v4g9c\" (UID: \"adfa1a56-6e36-42b6-86e6-1cf51f6e49cb\") " pod="openstack/dnsmasq-dns-785d8bcb8c-v4g9c" Jan 28 18:56:07 crc kubenswrapper[4721]: I0128 18:56:07.903545 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/adfa1a56-6e36-42b6-86e6-1cf51f6e49cb-config\") pod \"dnsmasq-dns-785d8bcb8c-v4g9c\" (UID: \"adfa1a56-6e36-42b6-86e6-1cf51f6e49cb\") " pod="openstack/dnsmasq-dns-785d8bcb8c-v4g9c" Jan 28 18:56:07 crc kubenswrapper[4721]: I0128 18:56:07.903644 4721 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/19551c06-75df-4db7-805a-b7efc5e72018-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 28 18:56:07 crc kubenswrapper[4721]: I0128 18:56:07.903661 4721 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/19551c06-75df-4db7-805a-b7efc5e72018-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 18:56:07 crc kubenswrapper[4721]: I0128 18:56:07.903675 4721 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/19551c06-75df-4db7-805a-b7efc5e72018-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 18:56:07 crc kubenswrapper[4721]: I0128 18:56:07.903686 4721 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/19551c06-75df-4db7-805a-b7efc5e72018-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 18:56:07 
crc kubenswrapper[4721]: I0128 18:56:07.903698 4721 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/19551c06-75df-4db7-805a-b7efc5e72018-credential-keys\") on node \"crc\" DevicePath \"\"" Jan 28 18:56:07 crc kubenswrapper[4721]: I0128 18:56:07.903710 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qqzxb\" (UniqueName: \"kubernetes.io/projected/19551c06-75df-4db7-805a-b7efc5e72018-kube-api-access-qqzxb\") on node \"crc\" DevicePath \"\"" Jan 28 18:56:08 crc kubenswrapper[4721]: I0128 18:56:08.007481 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/adfa1a56-6e36-42b6-86e6-1cf51f6e49cb-dns-swift-storage-0\") pod \"dnsmasq-dns-785d8bcb8c-v4g9c\" (UID: \"adfa1a56-6e36-42b6-86e6-1cf51f6e49cb\") " pod="openstack/dnsmasq-dns-785d8bcb8c-v4g9c" Jan 28 18:56:08 crc kubenswrapper[4721]: I0128 18:56:08.007909 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kwmdj\" (UniqueName: \"kubernetes.io/projected/adfa1a56-6e36-42b6-86e6-1cf51f6e49cb-kube-api-access-kwmdj\") pod \"dnsmasq-dns-785d8bcb8c-v4g9c\" (UID: \"adfa1a56-6e36-42b6-86e6-1cf51f6e49cb\") " pod="openstack/dnsmasq-dns-785d8bcb8c-v4g9c" Jan 28 18:56:08 crc kubenswrapper[4721]: I0128 18:56:08.007968 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/adfa1a56-6e36-42b6-86e6-1cf51f6e49cb-ovsdbserver-nb\") pod \"dnsmasq-dns-785d8bcb8c-v4g9c\" (UID: \"adfa1a56-6e36-42b6-86e6-1cf51f6e49cb\") " pod="openstack/dnsmasq-dns-785d8bcb8c-v4g9c" Jan 28 18:56:08 crc kubenswrapper[4721]: I0128 18:56:08.008003 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/adfa1a56-6e36-42b6-86e6-1cf51f6e49cb-config\") pod \"dnsmasq-dns-785d8bcb8c-v4g9c\" (UID: \"adfa1a56-6e36-42b6-86e6-1cf51f6e49cb\") " pod="openstack/dnsmasq-dns-785d8bcb8c-v4g9c" Jan 28 18:56:08 crc kubenswrapper[4721]: I0128 18:56:08.008060 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/adfa1a56-6e36-42b6-86e6-1cf51f6e49cb-ovsdbserver-sb\") pod \"dnsmasq-dns-785d8bcb8c-v4g9c\" (UID: \"adfa1a56-6e36-42b6-86e6-1cf51f6e49cb\") " pod="openstack/dnsmasq-dns-785d8bcb8c-v4g9c" Jan 28 18:56:08 crc kubenswrapper[4721]: I0128 18:56:08.008111 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/adfa1a56-6e36-42b6-86e6-1cf51f6e49cb-dns-svc\") pod \"dnsmasq-dns-785d8bcb8c-v4g9c\" (UID: \"adfa1a56-6e36-42b6-86e6-1cf51f6e49cb\") " pod="openstack/dnsmasq-dns-785d8bcb8c-v4g9c" Jan 28 18:56:08 crc kubenswrapper[4721]: I0128 18:56:08.008662 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/adfa1a56-6e36-42b6-86e6-1cf51f6e49cb-dns-swift-storage-0\") pod \"dnsmasq-dns-785d8bcb8c-v4g9c\" (UID: \"adfa1a56-6e36-42b6-86e6-1cf51f6e49cb\") " pod="openstack/dnsmasq-dns-785d8bcb8c-v4g9c" Jan 28 18:56:08 crc kubenswrapper[4721]: I0128 18:56:08.009001 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/adfa1a56-6e36-42b6-86e6-1cf51f6e49cb-dns-svc\") pod \"dnsmasq-dns-785d8bcb8c-v4g9c\" (UID: 
\"adfa1a56-6e36-42b6-86e6-1cf51f6e49cb\") " pod="openstack/dnsmasq-dns-785d8bcb8c-v4g9c" Jan 28 18:56:08 crc kubenswrapper[4721]: I0128 18:56:08.009202 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/adfa1a56-6e36-42b6-86e6-1cf51f6e49cb-config\") pod \"dnsmasq-dns-785d8bcb8c-v4g9c\" (UID: \"adfa1a56-6e36-42b6-86e6-1cf51f6e49cb\") " pod="openstack/dnsmasq-dns-785d8bcb8c-v4g9c" Jan 28 18:56:08 crc kubenswrapper[4721]: I0128 18:56:08.009412 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/adfa1a56-6e36-42b6-86e6-1cf51f6e49cb-ovsdbserver-nb\") pod \"dnsmasq-dns-785d8bcb8c-v4g9c\" (UID: \"adfa1a56-6e36-42b6-86e6-1cf51f6e49cb\") " pod="openstack/dnsmasq-dns-785d8bcb8c-v4g9c" Jan 28 18:56:08 crc kubenswrapper[4721]: I0128 18:56:08.009554 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/adfa1a56-6e36-42b6-86e6-1cf51f6e49cb-ovsdbserver-sb\") pod \"dnsmasq-dns-785d8bcb8c-v4g9c\" (UID: \"adfa1a56-6e36-42b6-86e6-1cf51f6e49cb\") " pod="openstack/dnsmasq-dns-785d8bcb8c-v4g9c" Jan 28 18:56:08 crc kubenswrapper[4721]: I0128 18:56:08.026630 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kwmdj\" (UniqueName: \"kubernetes.io/projected/adfa1a56-6e36-42b6-86e6-1cf51f6e49cb-kube-api-access-kwmdj\") pod \"dnsmasq-dns-785d8bcb8c-v4g9c\" (UID: \"adfa1a56-6e36-42b6-86e6-1cf51f6e49cb\") " pod="openstack/dnsmasq-dns-785d8bcb8c-v4g9c" Jan 28 18:56:08 crc kubenswrapper[4721]: I0128 18:56:08.136359 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-785d8bcb8c-v4g9c" Jan 28 18:56:08 crc kubenswrapper[4721]: I0128 18:56:08.191317 4721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-77gjx" Jan 28 18:56:08 crc kubenswrapper[4721]: I0128 18:56:08.193061 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-77gjx" event={"ID":"19551c06-75df-4db7-805a-b7efc5e72018","Type":"ContainerDied","Data":"c84438f78b97b3d8c2a59cf0fc15f9434dc031f74685e98e1de2d780d53e414a"} Jan 28 18:56:08 crc kubenswrapper[4721]: I0128 18:56:08.193124 4721 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c84438f78b97b3d8c2a59cf0fc15f9434dc031f74685e98e1de2d780d53e414a" Jan 28 18:56:08 crc kubenswrapper[4721]: I0128 18:56:08.333608 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-649bf84c5b-p55hh"] Jan 28 18:56:08 crc kubenswrapper[4721]: I0128 18:56:08.383041 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-7fccf8d9d-jqxpt"] Jan 28 18:56:08 crc kubenswrapper[4721]: I0128 18:56:08.384639 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-7fccf8d9d-jqxpt" Jan 28 18:56:08 crc kubenswrapper[4721]: I0128 18:56:08.386513 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc" Jan 28 18:56:08 crc kubenswrapper[4721]: I0128 18:56:08.386594 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-gfv9p" Jan 28 18:56:08 crc kubenswrapper[4721]: I0128 18:56:08.392038 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-7fccf8d9d-jqxpt"] Jan 28 18:56:08 crc kubenswrapper[4721]: I0128 18:56:08.392704 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc" Jan 28 18:56:08 crc kubenswrapper[4721]: I0128 18:56:08.393034 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 28 18:56:08 crc kubenswrapper[4721]: I0128 18:56:08.395459 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 28 18:56:08 crc kubenswrapper[4721]: I0128 18:56:08.399983 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 28 18:56:08 crc kubenswrapper[4721]: I0128 18:56:08.521403 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/b596f4de-be4e-4c2a-8524-fca9afc03775-credential-keys\") pod \"keystone-7fccf8d9d-jqxpt\" (UID: \"b596f4de-be4e-4c2a-8524-fca9afc03775\") " pod="openstack/keystone-7fccf8d9d-jqxpt" Jan 28 18:56:08 crc kubenswrapper[4721]: I0128 18:56:08.521466 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b596f4de-be4e-4c2a-8524-fca9afc03775-public-tls-certs\") pod \"keystone-7fccf8d9d-jqxpt\" (UID: \"b596f4de-be4e-4c2a-8524-fca9afc03775\") " pod="openstack/keystone-7fccf8d9d-jqxpt" Jan 28 18:56:08 crc kubenswrapper[4721]: I0128 18:56:08.521503 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b596f4de-be4e-4c2a-8524-fca9afc03775-scripts\") pod \"keystone-7fccf8d9d-jqxpt\" (UID: \"b596f4de-be4e-4c2a-8524-fca9afc03775\") " pod="openstack/keystone-7fccf8d9d-jqxpt" Jan 28 18:56:08 crc kubenswrapper[4721]: I0128 18:56:08.521897 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/b596f4de-be4e-4c2a-8524-fca9afc03775-fernet-keys\") pod \"keystone-7fccf8d9d-jqxpt\" (UID: \"b596f4de-be4e-4c2a-8524-fca9afc03775\") " pod="openstack/keystone-7fccf8d9d-jqxpt" Jan 28 18:56:08 crc kubenswrapper[4721]: I0128 18:56:08.521979 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b596f4de-be4e-4c2a-8524-fca9afc03775-combined-ca-bundle\") pod \"keystone-7fccf8d9d-jqxpt\" (UID: \"b596f4de-be4e-4c2a-8524-fca9afc03775\") " pod="openstack/keystone-7fccf8d9d-jqxpt" Jan 28 18:56:08 crc kubenswrapper[4721]: I0128 18:56:08.522087 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kkp9p\" (UniqueName: \"kubernetes.io/projected/b596f4de-be4e-4c2a-8524-fca9afc03775-kube-api-access-kkp9p\") pod \"keystone-7fccf8d9d-jqxpt\" (UID: 
\"b596f4de-be4e-4c2a-8524-fca9afc03775\") " pod="openstack/keystone-7fccf8d9d-jqxpt" Jan 28 18:56:08 crc kubenswrapper[4721]: I0128 18:56:08.522116 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b596f4de-be4e-4c2a-8524-fca9afc03775-config-data\") pod \"keystone-7fccf8d9d-jqxpt\" (UID: \"b596f4de-be4e-4c2a-8524-fca9afc03775\") " pod="openstack/keystone-7fccf8d9d-jqxpt" Jan 28 18:56:08 crc kubenswrapper[4721]: I0128 18:56:08.522282 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b596f4de-be4e-4c2a-8524-fca9afc03775-internal-tls-certs\") pod \"keystone-7fccf8d9d-jqxpt\" (UID: \"b596f4de-be4e-4c2a-8524-fca9afc03775\") " pod="openstack/keystone-7fccf8d9d-jqxpt" Jan 28 18:56:08 crc kubenswrapper[4721]: I0128 18:56:08.609704 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Jan 28 18:56:08 crc kubenswrapper[4721]: I0128 18:56:08.611898 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 28 18:56:08 crc kubenswrapper[4721]: I0128 18:56:08.618359 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-dfbkx" Jan 28 18:56:08 crc kubenswrapper[4721]: I0128 18:56:08.618661 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Jan 28 18:56:08 crc kubenswrapper[4721]: I0128 18:56:08.618965 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Jan 28 18:56:08 crc kubenswrapper[4721]: I0128 18:56:08.625092 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kkp9p\" (UniqueName: \"kubernetes.io/projected/b596f4de-be4e-4c2a-8524-fca9afc03775-kube-api-access-kkp9p\") pod \"keystone-7fccf8d9d-jqxpt\" (UID: \"b596f4de-be4e-4c2a-8524-fca9afc03775\") " pod="openstack/keystone-7fccf8d9d-jqxpt" Jan 28 18:56:08 crc kubenswrapper[4721]: I0128 18:56:08.627685 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b596f4de-be4e-4c2a-8524-fca9afc03775-config-data\") pod \"keystone-7fccf8d9d-jqxpt\" (UID: \"b596f4de-be4e-4c2a-8524-fca9afc03775\") " pod="openstack/keystone-7fccf8d9d-jqxpt" Jan 28 18:56:08 crc kubenswrapper[4721]: I0128 18:56:08.627981 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b596f4de-be4e-4c2a-8524-fca9afc03775-internal-tls-certs\") pod \"keystone-7fccf8d9d-jqxpt\" (UID: \"b596f4de-be4e-4c2a-8524-fca9afc03775\") " pod="openstack/keystone-7fccf8d9d-jqxpt" Jan 28 18:56:08 crc kubenswrapper[4721]: I0128 18:56:08.628126 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/b596f4de-be4e-4c2a-8524-fca9afc03775-credential-keys\") pod \"keystone-7fccf8d9d-jqxpt\" (UID: \"b596f4de-be4e-4c2a-8524-fca9afc03775\") " pod="openstack/keystone-7fccf8d9d-jqxpt" Jan 28 18:56:08 crc kubenswrapper[4721]: I0128 18:56:08.628276 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b596f4de-be4e-4c2a-8524-fca9afc03775-public-tls-certs\") pod 
\"keystone-7fccf8d9d-jqxpt\" (UID: \"b596f4de-be4e-4c2a-8524-fca9afc03775\") " pod="openstack/keystone-7fccf8d9d-jqxpt" Jan 28 18:56:08 crc kubenswrapper[4721]: I0128 18:56:08.628423 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b596f4de-be4e-4c2a-8524-fca9afc03775-scripts\") pod \"keystone-7fccf8d9d-jqxpt\" (UID: \"b596f4de-be4e-4c2a-8524-fca9afc03775\") " pod="openstack/keystone-7fccf8d9d-jqxpt" Jan 28 18:56:08 crc kubenswrapper[4721]: I0128 18:56:08.628696 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/b596f4de-be4e-4c2a-8524-fca9afc03775-fernet-keys\") pod \"keystone-7fccf8d9d-jqxpt\" (UID: \"b596f4de-be4e-4c2a-8524-fca9afc03775\") " pod="openstack/keystone-7fccf8d9d-jqxpt" Jan 28 18:56:08 crc kubenswrapper[4721]: I0128 18:56:08.628793 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b596f4de-be4e-4c2a-8524-fca9afc03775-combined-ca-bundle\") pod \"keystone-7fccf8d9d-jqxpt\" (UID: \"b596f4de-be4e-4c2a-8524-fca9afc03775\") " pod="openstack/keystone-7fccf8d9d-jqxpt" Jan 28 18:56:08 crc kubenswrapper[4721]: I0128 18:56:08.639690 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b596f4de-be4e-4c2a-8524-fca9afc03775-internal-tls-certs\") pod \"keystone-7fccf8d9d-jqxpt\" (UID: \"b596f4de-be4e-4c2a-8524-fca9afc03775\") " pod="openstack/keystone-7fccf8d9d-jqxpt" Jan 28 18:56:08 crc kubenswrapper[4721]: I0128 18:56:08.639993 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/b596f4de-be4e-4c2a-8524-fca9afc03775-credential-keys\") pod \"keystone-7fccf8d9d-jqxpt\" (UID: \"b596f4de-be4e-4c2a-8524-fca9afc03775\") " pod="openstack/keystone-7fccf8d9d-jqxpt" Jan 28 18:56:08 crc kubenswrapper[4721]: I0128 18:56:08.640320 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b596f4de-be4e-4c2a-8524-fca9afc03775-combined-ca-bundle\") pod \"keystone-7fccf8d9d-jqxpt\" (UID: \"b596f4de-be4e-4c2a-8524-fca9afc03775\") " pod="openstack/keystone-7fccf8d9d-jqxpt" Jan 28 18:56:08 crc kubenswrapper[4721]: I0128 18:56:08.642745 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b596f4de-be4e-4c2a-8524-fca9afc03775-config-data\") pod \"keystone-7fccf8d9d-jqxpt\" (UID: \"b596f4de-be4e-4c2a-8524-fca9afc03775\") " pod="openstack/keystone-7fccf8d9d-jqxpt" Jan 28 18:56:08 crc kubenswrapper[4721]: I0128 18:56:08.644491 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b596f4de-be4e-4c2a-8524-fca9afc03775-scripts\") pod \"keystone-7fccf8d9d-jqxpt\" (UID: \"b596f4de-be4e-4c2a-8524-fca9afc03775\") " pod="openstack/keystone-7fccf8d9d-jqxpt" Jan 28 18:56:08 crc kubenswrapper[4721]: I0128 18:56:08.645129 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/b596f4de-be4e-4c2a-8524-fca9afc03775-fernet-keys\") pod \"keystone-7fccf8d9d-jqxpt\" (UID: \"b596f4de-be4e-4c2a-8524-fca9afc03775\") " pod="openstack/keystone-7fccf8d9d-jqxpt" Jan 28 18:56:08 crc kubenswrapper[4721]: I0128 18:56:08.647743 4721 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b596f4de-be4e-4c2a-8524-fca9afc03775-public-tls-certs\") pod \"keystone-7fccf8d9d-jqxpt\" (UID: \"b596f4de-be4e-4c2a-8524-fca9afc03775\") " pod="openstack/keystone-7fccf8d9d-jqxpt" Jan 28 18:56:08 crc kubenswrapper[4721]: I0128 18:56:08.658758 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 28 18:56:08 crc kubenswrapper[4721]: I0128 18:56:08.669726 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kkp9p\" (UniqueName: \"kubernetes.io/projected/b596f4de-be4e-4c2a-8524-fca9afc03775-kube-api-access-kkp9p\") pod \"keystone-7fccf8d9d-jqxpt\" (UID: \"b596f4de-be4e-4c2a-8524-fca9afc03775\") " pod="openstack/keystone-7fccf8d9d-jqxpt" Jan 28 18:56:08 crc kubenswrapper[4721]: W0128 18:56:08.719510 4721 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podadfa1a56_6e36_42b6_86e6_1cf51f6e49cb.slice/crio-e4ddbbd440c5cac5640c9685847696427139dbd44fca2a6180c32ebaf8bef878 WatchSource:0}: Error finding container e4ddbbd440c5cac5640c9685847696427139dbd44fca2a6180c32ebaf8bef878: Status 404 returned error can't find the container with id e4ddbbd440c5cac5640c9685847696427139dbd44fca2a6180c32ebaf8bef878 Jan 28 18:56:08 crc kubenswrapper[4721]: I0128 18:56:08.720934 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-v4g9c"] Jan 28 18:56:08 crc kubenswrapper[4721]: I0128 18:56:08.730654 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-dada3fff-f4a9-4795-a6e6-d294171ec4bb\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-dada3fff-f4a9-4795-a6e6-d294171ec4bb\") pod \"glance-default-external-api-0\" (UID: \"6931e142-6b1b-43e9-9515-2bdcfd78e69a\") " pod="openstack/glance-default-external-api-0" Jan 28 18:56:08 crc kubenswrapper[4721]: I0128 18:56:08.730958 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/6931e142-6b1b-43e9-9515-2bdcfd78e69a-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"6931e142-6b1b-43e9-9515-2bdcfd78e69a\") " pod="openstack/glance-default-external-api-0" Jan 28 18:56:08 crc kubenswrapper[4721]: I0128 18:56:08.731103 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6931e142-6b1b-43e9-9515-2bdcfd78e69a-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"6931e142-6b1b-43e9-9515-2bdcfd78e69a\") " pod="openstack/glance-default-external-api-0" Jan 28 18:56:08 crc kubenswrapper[4721]: I0128 18:56:08.731306 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6931e142-6b1b-43e9-9515-2bdcfd78e69a-logs\") pod \"glance-default-external-api-0\" (UID: \"6931e142-6b1b-43e9-9515-2bdcfd78e69a\") " pod="openstack/glance-default-external-api-0" Jan 28 18:56:08 crc kubenswrapper[4721]: I0128 18:56:08.731515 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bppn8\" (UniqueName: \"kubernetes.io/projected/6931e142-6b1b-43e9-9515-2bdcfd78e69a-kube-api-access-bppn8\") pod \"glance-default-external-api-0\" (UID: 
\"6931e142-6b1b-43e9-9515-2bdcfd78e69a\") " pod="openstack/glance-default-external-api-0" Jan 28 18:56:08 crc kubenswrapper[4721]: I0128 18:56:08.731612 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6931e142-6b1b-43e9-9515-2bdcfd78e69a-scripts\") pod \"glance-default-external-api-0\" (UID: \"6931e142-6b1b-43e9-9515-2bdcfd78e69a\") " pod="openstack/glance-default-external-api-0" Jan 28 18:56:08 crc kubenswrapper[4721]: I0128 18:56:08.731808 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6931e142-6b1b-43e9-9515-2bdcfd78e69a-config-data\") pod \"glance-default-external-api-0\" (UID: \"6931e142-6b1b-43e9-9515-2bdcfd78e69a\") " pod="openstack/glance-default-external-api-0" Jan 28 18:56:08 crc kubenswrapper[4721]: I0128 18:56:08.789560 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-7fccf8d9d-jqxpt" Jan 28 18:56:08 crc kubenswrapper[4721]: I0128 18:56:08.835470 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-dada3fff-f4a9-4795-a6e6-d294171ec4bb\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-dada3fff-f4a9-4795-a6e6-d294171ec4bb\") pod \"glance-default-external-api-0\" (UID: \"6931e142-6b1b-43e9-9515-2bdcfd78e69a\") " pod="openstack/glance-default-external-api-0" Jan 28 18:56:08 crc kubenswrapper[4721]: I0128 18:56:08.835600 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/6931e142-6b1b-43e9-9515-2bdcfd78e69a-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"6931e142-6b1b-43e9-9515-2bdcfd78e69a\") " pod="openstack/glance-default-external-api-0" Jan 28 18:56:08 crc kubenswrapper[4721]: I0128 18:56:08.835690 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6931e142-6b1b-43e9-9515-2bdcfd78e69a-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"6931e142-6b1b-43e9-9515-2bdcfd78e69a\") " pod="openstack/glance-default-external-api-0" Jan 28 18:56:08 crc kubenswrapper[4721]: I0128 18:56:08.835782 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6931e142-6b1b-43e9-9515-2bdcfd78e69a-logs\") pod \"glance-default-external-api-0\" (UID: \"6931e142-6b1b-43e9-9515-2bdcfd78e69a\") " pod="openstack/glance-default-external-api-0" Jan 28 18:56:08 crc kubenswrapper[4721]: I0128 18:56:08.835837 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bppn8\" (UniqueName: \"kubernetes.io/projected/6931e142-6b1b-43e9-9515-2bdcfd78e69a-kube-api-access-bppn8\") pod \"glance-default-external-api-0\" (UID: \"6931e142-6b1b-43e9-9515-2bdcfd78e69a\") " pod="openstack/glance-default-external-api-0" Jan 28 18:56:08 crc kubenswrapper[4721]: I0128 18:56:08.835886 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6931e142-6b1b-43e9-9515-2bdcfd78e69a-scripts\") pod \"glance-default-external-api-0\" (UID: \"6931e142-6b1b-43e9-9515-2bdcfd78e69a\") " pod="openstack/glance-default-external-api-0" Jan 28 18:56:08 crc kubenswrapper[4721]: I0128 18:56:08.835972 4721 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6931e142-6b1b-43e9-9515-2bdcfd78e69a-config-data\") pod \"glance-default-external-api-0\" (UID: \"6931e142-6b1b-43e9-9515-2bdcfd78e69a\") " pod="openstack/glance-default-external-api-0" Jan 28 18:56:08 crc kubenswrapper[4721]: I0128 18:56:08.836059 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/6931e142-6b1b-43e9-9515-2bdcfd78e69a-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"6931e142-6b1b-43e9-9515-2bdcfd78e69a\") " pod="openstack/glance-default-external-api-0" Jan 28 18:56:08 crc kubenswrapper[4721]: I0128 18:56:08.836667 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6931e142-6b1b-43e9-9515-2bdcfd78e69a-logs\") pod \"glance-default-external-api-0\" (UID: \"6931e142-6b1b-43e9-9515-2bdcfd78e69a\") " pod="openstack/glance-default-external-api-0" Jan 28 18:56:08 crc kubenswrapper[4721]: I0128 18:56:08.839114 4721 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 28 18:56:08 crc kubenswrapper[4721]: I0128 18:56:08.839296 4721 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-dada3fff-f4a9-4795-a6e6-d294171ec4bb\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-dada3fff-f4a9-4795-a6e6-d294171ec4bb\") pod \"glance-default-external-api-0\" (UID: \"6931e142-6b1b-43e9-9515-2bdcfd78e69a\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/cbf5faf63d16a5d12a5e9b11b66b2cf989de626a136bdd39a47c0348964ea03b/globalmount\"" pod="openstack/glance-default-external-api-0" Jan 28 18:56:08 crc kubenswrapper[4721]: I0128 18:56:08.841051 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6931e142-6b1b-43e9-9515-2bdcfd78e69a-scripts\") pod \"glance-default-external-api-0\" (UID: \"6931e142-6b1b-43e9-9515-2bdcfd78e69a\") " pod="openstack/glance-default-external-api-0" Jan 28 18:56:08 crc kubenswrapper[4721]: I0128 18:56:08.841754 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6931e142-6b1b-43e9-9515-2bdcfd78e69a-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"6931e142-6b1b-43e9-9515-2bdcfd78e69a\") " pod="openstack/glance-default-external-api-0" Jan 28 18:56:08 crc kubenswrapper[4721]: I0128 18:56:08.841924 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6931e142-6b1b-43e9-9515-2bdcfd78e69a-config-data\") pod \"glance-default-external-api-0\" (UID: \"6931e142-6b1b-43e9-9515-2bdcfd78e69a\") " pod="openstack/glance-default-external-api-0" Jan 28 18:56:08 crc kubenswrapper[4721]: I0128 18:56:08.855380 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bppn8\" (UniqueName: \"kubernetes.io/projected/6931e142-6b1b-43e9-9515-2bdcfd78e69a-kube-api-access-bppn8\") pod \"glance-default-external-api-0\" (UID: \"6931e142-6b1b-43e9-9515-2bdcfd78e69a\") " pod="openstack/glance-default-external-api-0" Jan 28 18:56:08 crc kubenswrapper[4721]: I0128 18:56:08.877646 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-dada3fff-f4a9-4795-a6e6-d294171ec4bb\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-dada3fff-f4a9-4795-a6e6-d294171ec4bb\") pod \"glance-default-external-api-0\" (UID: \"6931e142-6b1b-43e9-9515-2bdcfd78e69a\") " pod="openstack/glance-default-external-api-0" Jan 28 18:56:08 crc kubenswrapper[4721]: I0128 18:56:08.964772 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 28 18:56:08 crc kubenswrapper[4721]: I0128 18:56:08.973547 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 28 18:56:08 crc kubenswrapper[4721]: I0128 18:56:08.977836 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 28 18:56:08 crc kubenswrapper[4721]: I0128 18:56:08.982995 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Jan 28 18:56:09 crc kubenswrapper[4721]: I0128 18:56:09.043793 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 28 18:56:09 crc kubenswrapper[4721]: I0128 18:56:09.067928 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f645046a-6c22-410b-86a6-9f9aedff30db-logs\") pod \"glance-default-internal-api-0\" (UID: \"f645046a-6c22-410b-86a6-9f9aedff30db\") " pod="openstack/glance-default-internal-api-0" Jan 28 18:56:09 crc kubenswrapper[4721]: I0128 18:56:09.068100 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-37cf9bee-9d53-43de-b70c-dc3f1890f4f2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-37cf9bee-9d53-43de-b70c-dc3f1890f4f2\") pod \"glance-default-internal-api-0\" (UID: \"f645046a-6c22-410b-86a6-9f9aedff30db\") " pod="openstack/glance-default-internal-api-0" Jan 28 18:56:09 crc kubenswrapper[4721]: I0128 18:56:09.068132 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jmtf6\" (UniqueName: \"kubernetes.io/projected/f645046a-6c22-410b-86a6-9f9aedff30db-kube-api-access-jmtf6\") pod \"glance-default-internal-api-0\" (UID: \"f645046a-6c22-410b-86a6-9f9aedff30db\") " pod="openstack/glance-default-internal-api-0" Jan 28 18:56:09 crc kubenswrapper[4721]: I0128 18:56:09.068283 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f645046a-6c22-410b-86a6-9f9aedff30db-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"f645046a-6c22-410b-86a6-9f9aedff30db\") " pod="openstack/glance-default-internal-api-0" Jan 28 18:56:09 crc kubenswrapper[4721]: I0128 18:56:09.068327 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f645046a-6c22-410b-86a6-9f9aedff30db-config-data\") pod \"glance-default-internal-api-0\" (UID: \"f645046a-6c22-410b-86a6-9f9aedff30db\") " pod="openstack/glance-default-internal-api-0" Jan 28 18:56:09 crc kubenswrapper[4721]: I0128 18:56:09.068358 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f645046a-6c22-410b-86a6-9f9aedff30db-scripts\") pod \"glance-default-internal-api-0\" (UID: \"f645046a-6c22-410b-86a6-9f9aedff30db\") " 
pod="openstack/glance-default-internal-api-0" Jan 28 18:56:09 crc kubenswrapper[4721]: I0128 18:56:09.068498 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/f645046a-6c22-410b-86a6-9f9aedff30db-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"f645046a-6c22-410b-86a6-9f9aedff30db\") " pod="openstack/glance-default-internal-api-0" Jan 28 18:56:09 crc kubenswrapper[4721]: I0128 18:56:09.170621 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f645046a-6c22-410b-86a6-9f9aedff30db-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"f645046a-6c22-410b-86a6-9f9aedff30db\") " pod="openstack/glance-default-internal-api-0" Jan 28 18:56:09 crc kubenswrapper[4721]: I0128 18:56:09.170753 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f645046a-6c22-410b-86a6-9f9aedff30db-config-data\") pod \"glance-default-internal-api-0\" (UID: \"f645046a-6c22-410b-86a6-9f9aedff30db\") " pod="openstack/glance-default-internal-api-0" Jan 28 18:56:09 crc kubenswrapper[4721]: I0128 18:56:09.170780 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f645046a-6c22-410b-86a6-9f9aedff30db-scripts\") pod \"glance-default-internal-api-0\" (UID: \"f645046a-6c22-410b-86a6-9f9aedff30db\") " pod="openstack/glance-default-internal-api-0" Jan 28 18:56:09 crc kubenswrapper[4721]: I0128 18:56:09.172758 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/f645046a-6c22-410b-86a6-9f9aedff30db-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"f645046a-6c22-410b-86a6-9f9aedff30db\") " pod="openstack/glance-default-internal-api-0" Jan 28 18:56:09 crc kubenswrapper[4721]: I0128 18:56:09.173234 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/f645046a-6c22-410b-86a6-9f9aedff30db-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"f645046a-6c22-410b-86a6-9f9aedff30db\") " pod="openstack/glance-default-internal-api-0" Jan 28 18:56:09 crc kubenswrapper[4721]: I0128 18:56:09.173440 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f645046a-6c22-410b-86a6-9f9aedff30db-logs\") pod \"glance-default-internal-api-0\" (UID: \"f645046a-6c22-410b-86a6-9f9aedff30db\") " pod="openstack/glance-default-internal-api-0" Jan 28 18:56:09 crc kubenswrapper[4721]: I0128 18:56:09.173766 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-37cf9bee-9d53-43de-b70c-dc3f1890f4f2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-37cf9bee-9d53-43de-b70c-dc3f1890f4f2\") pod \"glance-default-internal-api-0\" (UID: \"f645046a-6c22-410b-86a6-9f9aedff30db\") " pod="openstack/glance-default-internal-api-0" Jan 28 18:56:09 crc kubenswrapper[4721]: I0128 18:56:09.173810 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jmtf6\" (UniqueName: \"kubernetes.io/projected/f645046a-6c22-410b-86a6-9f9aedff30db-kube-api-access-jmtf6\") pod \"glance-default-internal-api-0\" (UID: \"f645046a-6c22-410b-86a6-9f9aedff30db\") " 
pod="openstack/glance-default-internal-api-0" Jan 28 18:56:09 crc kubenswrapper[4721]: I0128 18:56:09.173856 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f645046a-6c22-410b-86a6-9f9aedff30db-logs\") pod \"glance-default-internal-api-0\" (UID: \"f645046a-6c22-410b-86a6-9f9aedff30db\") " pod="openstack/glance-default-internal-api-0" Jan 28 18:56:09 crc kubenswrapper[4721]: I0128 18:56:09.178140 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f645046a-6c22-410b-86a6-9f9aedff30db-config-data\") pod \"glance-default-internal-api-0\" (UID: \"f645046a-6c22-410b-86a6-9f9aedff30db\") " pod="openstack/glance-default-internal-api-0" Jan 28 18:56:09 crc kubenswrapper[4721]: I0128 18:56:09.181197 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f645046a-6c22-410b-86a6-9f9aedff30db-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"f645046a-6c22-410b-86a6-9f9aedff30db\") " pod="openstack/glance-default-internal-api-0" Jan 28 18:56:09 crc kubenswrapper[4721]: I0128 18:56:09.182202 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f645046a-6c22-410b-86a6-9f9aedff30db-scripts\") pod \"glance-default-internal-api-0\" (UID: \"f645046a-6c22-410b-86a6-9f9aedff30db\") " pod="openstack/glance-default-internal-api-0" Jan 28 18:56:09 crc kubenswrapper[4721]: I0128 18:56:09.183888 4721 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 28 18:56:09 crc kubenswrapper[4721]: I0128 18:56:09.183947 4721 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-37cf9bee-9d53-43de-b70c-dc3f1890f4f2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-37cf9bee-9d53-43de-b70c-dc3f1890f4f2\") pod \"glance-default-internal-api-0\" (UID: \"f645046a-6c22-410b-86a6-9f9aedff30db\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/e0fb27f96ed2a0ff9a552b58e2db95cb7dc681ae95f2f3784ea1f011e1d9aaa2/globalmount\"" pod="openstack/glance-default-internal-api-0" Jan 28 18:56:09 crc kubenswrapper[4721]: I0128 18:56:09.201359 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jmtf6\" (UniqueName: \"kubernetes.io/projected/f645046a-6c22-410b-86a6-9f9aedff30db-kube-api-access-jmtf6\") pod \"glance-default-internal-api-0\" (UID: \"f645046a-6c22-410b-86a6-9f9aedff30db\") " pod="openstack/glance-default-internal-api-0" Jan 28 18:56:09 crc kubenswrapper[4721]: I0128 18:56:09.209827 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-785d8bcb8c-v4g9c" event={"ID":"adfa1a56-6e36-42b6-86e6-1cf51f6e49cb","Type":"ContainerStarted","Data":"e4ddbbd440c5cac5640c9685847696427139dbd44fca2a6180c32ebaf8bef878"} Jan 28 18:56:09 crc kubenswrapper[4721]: I0128 18:56:09.224804 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-649bf84c5b-p55hh" event={"ID":"65d3ed26-a43e-491f-8170-7d65eb15bd4f","Type":"ContainerStarted","Data":"de1a9183d43d58f63ac17658d2b3b7ef878e28497594abd669a6ca25ce0afbec"} Jan 28 18:56:09 crc kubenswrapper[4721]: I0128 18:56:09.248719 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-37cf9bee-9d53-43de-b70c-dc3f1890f4f2\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-37cf9bee-9d53-43de-b70c-dc3f1890f4f2\") pod \"glance-default-internal-api-0\" (UID: \"f645046a-6c22-410b-86a6-9f9aedff30db\") " pod="openstack/glance-default-internal-api-0" Jan 28 18:56:09 crc kubenswrapper[4721]: I0128 18:56:09.329596 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-7fccf8d9d-jqxpt"] Jan 28 18:56:09 crc kubenswrapper[4721]: W0128 18:56:09.329976 4721 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb596f4de_be4e_4c2a_8524_fca9afc03775.slice/crio-ba17100983585899cee20f90600fa890534dd6ee2d49857ff0fb623773c5f4e1 WatchSource:0}: Error finding container ba17100983585899cee20f90600fa890534dd6ee2d49857ff0fb623773c5f4e1: Status 404 returned error can't find the container with id ba17100983585899cee20f90600fa890534dd6ee2d49857ff0fb623773c5f4e1 Jan 28 18:56:09 crc kubenswrapper[4721]: I0128 18:56:09.331700 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 28 18:56:09 crc kubenswrapper[4721]: I0128 18:56:09.664456 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 28 18:56:09 crc kubenswrapper[4721]: W0128 18:56:09.669451 4721 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6931e142_6b1b_43e9_9515_2bdcfd78e69a.slice/crio-4040a86a07bda03079057d99670472868a165cc54669e4c9f5c82d1e7e48c075 WatchSource:0}: Error finding container 4040a86a07bda03079057d99670472868a165cc54669e4c9f5c82d1e7e48c075: Status 404 returned error can't find the container with id 4040a86a07bda03079057d99670472868a165cc54669e4c9f5c82d1e7e48c075 Jan 28 18:56:09 crc kubenswrapper[4721]: I0128 18:56:09.993029 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 28 18:56:10 crc kubenswrapper[4721]: W0128 18:56:10.003034 4721 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf645046a_6c22_410b_86a6_9f9aedff30db.slice/crio-1cb1a5750d50006a78c12b05f96b34189a2ba30446fc9df35f81effe37b072d4 WatchSource:0}: Error finding container 1cb1a5750d50006a78c12b05f96b34189a2ba30446fc9df35f81effe37b072d4: Status 404 returned error can't find the container with id 1cb1a5750d50006a78c12b05f96b34189a2ba30446fc9df35f81effe37b072d4 Jan 28 18:56:10 crc kubenswrapper[4721]: I0128 18:56:10.237343 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"6931e142-6b1b-43e9-9515-2bdcfd78e69a","Type":"ContainerStarted","Data":"4040a86a07bda03079057d99670472868a165cc54669e4c9f5c82d1e7e48c075"} Jan 28 18:56:10 crc kubenswrapper[4721]: I0128 18:56:10.239089 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"f645046a-6c22-410b-86a6-9f9aedff30db","Type":"ContainerStarted","Data":"1cb1a5750d50006a78c12b05f96b34189a2ba30446fc9df35f81effe37b072d4"} Jan 28 18:56:10 crc kubenswrapper[4721]: I0128 18:56:10.240424 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-7fccf8d9d-jqxpt" event={"ID":"b596f4de-be4e-4c2a-8524-fca9afc03775","Type":"ContainerStarted","Data":"ba17100983585899cee20f90600fa890534dd6ee2d49857ff0fb623773c5f4e1"} Jan 28 18:56:11 crc kubenswrapper[4721]: I0128 18:56:11.273230 4721 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"f645046a-6c22-410b-86a6-9f9aedff30db","Type":"ContainerStarted","Data":"1c0353017429e23779f940cc59166a7735e58257832558b0df16faf647ce25d9"} Jan 28 18:56:11 crc kubenswrapper[4721]: I0128 18:56:11.276860 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-649bf84c5b-p55hh" event={"ID":"65d3ed26-a43e-491f-8170-7d65eb15bd4f","Type":"ContainerStarted","Data":"7d3647343ea1bb010bb6f756bbe8c043bb3ef2a9dd83b66f0a3cedfcc37239cf"} Jan 28 18:56:11 crc kubenswrapper[4721]: I0128 18:56:11.874708 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 28 18:56:11 crc kubenswrapper[4721]: I0128 18:56:11.991725 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 28 18:56:12 crc kubenswrapper[4721]: I0128 18:56:12.311997 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-7fccf8d9d-jqxpt" event={"ID":"b596f4de-be4e-4c2a-8524-fca9afc03775","Type":"ContainerStarted","Data":"8aa445b2da1cbf4e452f4f2fc34fa3579a1b8560d58a0f8a970115225d162a85"} Jan 28 18:56:12 crc kubenswrapper[4721]: I0128 18:56:12.312576 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-7fccf8d9d-jqxpt" Jan 28 18:56:12 crc kubenswrapper[4721]: I0128 18:56:12.317816 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"6931e142-6b1b-43e9-9515-2bdcfd78e69a","Type":"ContainerStarted","Data":"c6d55078bd9c8d9515116f01e9e183c286741d3e5d8feac36540d45a43dcf9da"} Jan 28 18:56:12 crc kubenswrapper[4721]: I0128 18:56:12.346586 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-7fccf8d9d-jqxpt" podStartSLOduration=4.346558931 podStartE2EDuration="4.346558931s" podCreationTimestamp="2026-01-28 18:56:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:56:12.337545287 +0000 UTC m=+1338.062850867" watchObservedRunningTime="2026-01-28 18:56:12.346558931 +0000 UTC m=+1338.071864491" Jan 28 18:56:12 crc kubenswrapper[4721]: I0128 18:56:12.347576 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-649bf84c5b-p55hh" event={"ID":"65d3ed26-a43e-491f-8170-7d65eb15bd4f","Type":"ContainerStarted","Data":"19fac1308ae337004fdf3cfda1dfe901ebfa56b69b065d1dc73b4ebce61bd354"} Jan 28 18:56:12 crc kubenswrapper[4721]: I0128 18:56:12.348918 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-649bf84c5b-p55hh" Jan 28 18:56:12 crc kubenswrapper[4721]: I0128 18:56:12.348952 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-649bf84c5b-p55hh" Jan 28 18:56:12 crc kubenswrapper[4721]: I0128 18:56:12.352646 4721 generic.go:334] "Generic (PLEG): container finished" podID="adfa1a56-6e36-42b6-86e6-1cf51f6e49cb" containerID="3eddc61c685a404ffae2dcd467a01e5d493fda0ccd3d751df6e1bbcf5e264670" exitCode=0 Jan 28 18:56:12 crc kubenswrapper[4721]: I0128 18:56:12.352689 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-785d8bcb8c-v4g9c" event={"ID":"adfa1a56-6e36-42b6-86e6-1cf51f6e49cb","Type":"ContainerDied","Data":"3eddc61c685a404ffae2dcd467a01e5d493fda0ccd3d751df6e1bbcf5e264670"} Jan 28 18:56:12 crc kubenswrapper[4721]: I0128 18:56:12.454912 
4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-649bf84c5b-p55hh" podStartSLOduration=5.454885804 podStartE2EDuration="5.454885804s" podCreationTimestamp="2026-01-28 18:56:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:56:12.405577431 +0000 UTC m=+1338.130882991" watchObservedRunningTime="2026-01-28 18:56:12.454885804 +0000 UTC m=+1338.180191364" Jan 28 18:56:13 crc kubenswrapper[4721]: I0128 18:56:13.375904 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-db-sync-qbnjm" event={"ID":"6d4d13db-d2ce-4194-841a-c50b85a2887c","Type":"ContainerStarted","Data":"e1d77b470ef972c00ece8bd31dd0f00d8bd0fecc4f5529a21075145a4929820f"} Jan 28 18:56:13 crc kubenswrapper[4721]: I0128 18:56:13.385705 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-785d8bcb8c-v4g9c" event={"ID":"adfa1a56-6e36-42b6-86e6-1cf51f6e49cb","Type":"ContainerStarted","Data":"4df671ac7e52a9f8bce8f04593ba56480faf8889a47763d7399ba878b92a30d7"} Jan 28 18:56:13 crc kubenswrapper[4721]: I0128 18:56:13.390040 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"f645046a-6c22-410b-86a6-9f9aedff30db","Type":"ContainerStarted","Data":"1a06be5cdebbdf72439583839bfe788f477b59df04227398390c0a24a6cc1a38"} Jan 28 18:56:13 crc kubenswrapper[4721]: I0128 18:56:13.390217 4721 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="f645046a-6c22-410b-86a6-9f9aedff30db" containerName="glance-log" containerID="cri-o://1c0353017429e23779f940cc59166a7735e58257832558b0df16faf647ce25d9" gracePeriod=30 Jan 28 18:56:13 crc kubenswrapper[4721]: I0128 18:56:13.390357 4721 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="f645046a-6c22-410b-86a6-9f9aedff30db" containerName="glance-httpd" containerID="cri-o://1a06be5cdebbdf72439583839bfe788f477b59df04227398390c0a24a6cc1a38" gracePeriod=30 Jan 28 18:56:13 crc kubenswrapper[4721]: I0128 18:56:13.398349 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-4rqtv" event={"ID":"5a0d808e-8db2-4d8b-a02e-5f04c991fb44","Type":"ContainerStarted","Data":"56ab69b31d63a6b1c62dd761dae51e64e5951280529007a760301c0b8d5362ef"} Jan 28 18:56:13 crc kubenswrapper[4721]: I0128 18:56:13.419121 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=6.419100735 podStartE2EDuration="6.419100735s" podCreationTimestamp="2026-01-28 18:56:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:56:13.416216104 +0000 UTC m=+1339.141521674" watchObservedRunningTime="2026-01-28 18:56:13.419100735 +0000 UTC m=+1339.144406295" Jan 28 18:56:13 crc kubenswrapper[4721]: I0128 18:56:13.433696 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-sync-4rqtv" podStartSLOduration=4.13407906 podStartE2EDuration="1m1.433665954s" podCreationTimestamp="2026-01-28 18:55:12 +0000 UTC" firstStartedPulling="2026-01-28 18:55:14.744627781 +0000 UTC m=+1280.469933341" lastFinishedPulling="2026-01-28 18:56:12.044214675 +0000 UTC m=+1337.769520235" observedRunningTime="2026-01-28 18:56:13.431635369 
+0000 UTC m=+1339.156940929" watchObservedRunningTime="2026-01-28 18:56:13.433665954 +0000 UTC m=+1339.158971514" Jan 28 18:56:14 crc kubenswrapper[4721]: I0128 18:56:14.412097 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"6931e142-6b1b-43e9-9515-2bdcfd78e69a","Type":"ContainerStarted","Data":"65457e779ed6eaa67433fd020c5b7271021fc16e45b5c5f26d77c7d28fe76424"} Jan 28 18:56:14 crc kubenswrapper[4721]: I0128 18:56:14.412202 4721 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="6931e142-6b1b-43e9-9515-2bdcfd78e69a" containerName="glance-log" containerID="cri-o://c6d55078bd9c8d9515116f01e9e183c286741d3e5d8feac36540d45a43dcf9da" gracePeriod=30 Jan 28 18:56:14 crc kubenswrapper[4721]: I0128 18:56:14.412237 4721 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="6931e142-6b1b-43e9-9515-2bdcfd78e69a" containerName="glance-httpd" containerID="cri-o://65457e779ed6eaa67433fd020c5b7271021fc16e45b5c5f26d77c7d28fe76424" gracePeriod=30 Jan 28 18:56:14 crc kubenswrapper[4721]: I0128 18:56:14.419203 4721 generic.go:334] "Generic (PLEG): container finished" podID="d03058f5-d416-467a-b33c-36de7e5b6008" containerID="dea0b4596c32b14aa6c542395de1c7c3b3e8187a0308f5d186e77f72d7edd84b" exitCode=0 Jan 28 18:56:14 crc kubenswrapper[4721]: I0128 18:56:14.419310 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-g5v9q" event={"ID":"d03058f5-d416-467a-b33c-36de7e5b6008","Type":"ContainerDied","Data":"dea0b4596c32b14aa6c542395de1c7c3b3e8187a0308f5d186e77f72d7edd84b"} Jan 28 18:56:14 crc kubenswrapper[4721]: I0128 18:56:14.425659 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-spxh4" event={"ID":"b4e7a8f9-bf9f-4093-86d5-b7f5f6d925d7","Type":"ContainerStarted","Data":"29ccd2e322952548c13cc7d2af0107fc873f99ee27ce312b7118d16c9632610a"} Jan 28 18:56:14 crc kubenswrapper[4721]: I0128 18:56:14.432077 4721 generic.go:334] "Generic (PLEG): container finished" podID="f645046a-6c22-410b-86a6-9f9aedff30db" containerID="1a06be5cdebbdf72439583839bfe788f477b59df04227398390c0a24a6cc1a38" exitCode=0 Jan 28 18:56:14 crc kubenswrapper[4721]: I0128 18:56:14.432113 4721 generic.go:334] "Generic (PLEG): container finished" podID="f645046a-6c22-410b-86a6-9f9aedff30db" containerID="1c0353017429e23779f940cc59166a7735e58257832558b0df16faf647ce25d9" exitCode=143 Jan 28 18:56:14 crc kubenswrapper[4721]: I0128 18:56:14.432220 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"f645046a-6c22-410b-86a6-9f9aedff30db","Type":"ContainerDied","Data":"1a06be5cdebbdf72439583839bfe788f477b59df04227398390c0a24a6cc1a38"} Jan 28 18:56:14 crc kubenswrapper[4721]: I0128 18:56:14.432301 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"f645046a-6c22-410b-86a6-9f9aedff30db","Type":"ContainerDied","Data":"1c0353017429e23779f940cc59166a7735e58257832558b0df16faf647ce25d9"} Jan 28 18:56:14 crc kubenswrapper[4721]: I0128 18:56:14.433066 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-785d8bcb8c-v4g9c" Jan 28 18:56:14 crc kubenswrapper[4721]: I0128 18:56:14.451631 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=7.451600027 
podStartE2EDuration="7.451600027s" podCreationTimestamp="2026-01-28 18:56:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:56:14.433346352 +0000 UTC m=+1340.158651922" watchObservedRunningTime="2026-01-28 18:56:14.451600027 +0000 UTC m=+1340.176905607" Jan 28 18:56:14 crc kubenswrapper[4721]: I0128 18:56:14.480458 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-sync-spxh4" podStartSLOduration=4.686975901 podStartE2EDuration="1m2.480414435s" podCreationTimestamp="2026-01-28 18:55:12 +0000 UTC" firstStartedPulling="2026-01-28 18:55:14.261565141 +0000 UTC m=+1279.986870701" lastFinishedPulling="2026-01-28 18:56:12.055003675 +0000 UTC m=+1337.780309235" observedRunningTime="2026-01-28 18:56:14.475204461 +0000 UTC m=+1340.200510021" watchObservedRunningTime="2026-01-28 18:56:14.480414435 +0000 UTC m=+1340.205719995" Jan 28 18:56:14 crc kubenswrapper[4721]: I0128 18:56:14.514032 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cloudkitty-db-sync-qbnjm" podStartSLOduration=5.204327402 podStartE2EDuration="1m2.514003243s" podCreationTimestamp="2026-01-28 18:55:12 +0000 UTC" firstStartedPulling="2026-01-28 18:55:14.744236879 +0000 UTC m=+1280.469542439" lastFinishedPulling="2026-01-28 18:56:12.05391271 +0000 UTC m=+1337.779218280" observedRunningTime="2026-01-28 18:56:14.499064382 +0000 UTC m=+1340.224369942" watchObservedRunningTime="2026-01-28 18:56:14.514003243 +0000 UTC m=+1340.239308803" Jan 28 18:56:14 crc kubenswrapper[4721]: I0128 18:56:14.540606 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-785d8bcb8c-v4g9c" podStartSLOduration=7.540448726 podStartE2EDuration="7.540448726s" podCreationTimestamp="2026-01-28 18:56:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:56:14.520058734 +0000 UTC m=+1340.245364314" watchObservedRunningTime="2026-01-28 18:56:14.540448726 +0000 UTC m=+1340.265754316" Jan 28 18:56:15 crc kubenswrapper[4721]: I0128 18:56:15.140306 4721 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 28 18:56:15 crc kubenswrapper[4721]: I0128 18:56:15.287894 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bppn8\" (UniqueName: \"kubernetes.io/projected/6931e142-6b1b-43e9-9515-2bdcfd78e69a-kube-api-access-bppn8\") pod \"6931e142-6b1b-43e9-9515-2bdcfd78e69a\" (UID: \"6931e142-6b1b-43e9-9515-2bdcfd78e69a\") " Jan 28 18:56:15 crc kubenswrapper[4721]: I0128 18:56:15.287984 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6931e142-6b1b-43e9-9515-2bdcfd78e69a-config-data\") pod \"6931e142-6b1b-43e9-9515-2bdcfd78e69a\" (UID: \"6931e142-6b1b-43e9-9515-2bdcfd78e69a\") " Jan 28 18:56:15 crc kubenswrapper[4721]: I0128 18:56:15.288025 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/6931e142-6b1b-43e9-9515-2bdcfd78e69a-httpd-run\") pod \"6931e142-6b1b-43e9-9515-2bdcfd78e69a\" (UID: \"6931e142-6b1b-43e9-9515-2bdcfd78e69a\") " Jan 28 18:56:15 crc kubenswrapper[4721]: I0128 18:56:15.288136 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6931e142-6b1b-43e9-9515-2bdcfd78e69a-logs\") pod \"6931e142-6b1b-43e9-9515-2bdcfd78e69a\" (UID: \"6931e142-6b1b-43e9-9515-2bdcfd78e69a\") " Jan 28 18:56:15 crc kubenswrapper[4721]: I0128 18:56:15.288188 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6931e142-6b1b-43e9-9515-2bdcfd78e69a-combined-ca-bundle\") pod \"6931e142-6b1b-43e9-9515-2bdcfd78e69a\" (UID: \"6931e142-6b1b-43e9-9515-2bdcfd78e69a\") " Jan 28 18:56:15 crc kubenswrapper[4721]: I0128 18:56:15.288224 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6931e142-6b1b-43e9-9515-2bdcfd78e69a-scripts\") pod \"6931e142-6b1b-43e9-9515-2bdcfd78e69a\" (UID: \"6931e142-6b1b-43e9-9515-2bdcfd78e69a\") " Jan 28 18:56:15 crc kubenswrapper[4721]: I0128 18:56:15.288403 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-dada3fff-f4a9-4795-a6e6-d294171ec4bb\") pod \"6931e142-6b1b-43e9-9515-2bdcfd78e69a\" (UID: \"6931e142-6b1b-43e9-9515-2bdcfd78e69a\") " Jan 28 18:56:15 crc kubenswrapper[4721]: I0128 18:56:15.288651 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6931e142-6b1b-43e9-9515-2bdcfd78e69a-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "6931e142-6b1b-43e9-9515-2bdcfd78e69a" (UID: "6931e142-6b1b-43e9-9515-2bdcfd78e69a"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 18:56:15 crc kubenswrapper[4721]: I0128 18:56:15.288680 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6931e142-6b1b-43e9-9515-2bdcfd78e69a-logs" (OuterVolumeSpecName: "logs") pod "6931e142-6b1b-43e9-9515-2bdcfd78e69a" (UID: "6931e142-6b1b-43e9-9515-2bdcfd78e69a"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 18:56:15 crc kubenswrapper[4721]: I0128 18:56:15.289293 4721 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6931e142-6b1b-43e9-9515-2bdcfd78e69a-logs\") on node \"crc\" DevicePath \"\"" Jan 28 18:56:15 crc kubenswrapper[4721]: I0128 18:56:15.289316 4721 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/6931e142-6b1b-43e9-9515-2bdcfd78e69a-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 28 18:56:15 crc kubenswrapper[4721]: I0128 18:56:15.296444 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6931e142-6b1b-43e9-9515-2bdcfd78e69a-scripts" (OuterVolumeSpecName: "scripts") pod "6931e142-6b1b-43e9-9515-2bdcfd78e69a" (UID: "6931e142-6b1b-43e9-9515-2bdcfd78e69a"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:56:15 crc kubenswrapper[4721]: I0128 18:56:15.298755 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6931e142-6b1b-43e9-9515-2bdcfd78e69a-kube-api-access-bppn8" (OuterVolumeSpecName: "kube-api-access-bppn8") pod "6931e142-6b1b-43e9-9515-2bdcfd78e69a" (UID: "6931e142-6b1b-43e9-9515-2bdcfd78e69a"). InnerVolumeSpecName "kube-api-access-bppn8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:56:15 crc kubenswrapper[4721]: I0128 18:56:15.311246 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-dada3fff-f4a9-4795-a6e6-d294171ec4bb" (OuterVolumeSpecName: "glance") pod "6931e142-6b1b-43e9-9515-2bdcfd78e69a" (UID: "6931e142-6b1b-43e9-9515-2bdcfd78e69a"). InnerVolumeSpecName "pvc-dada3fff-f4a9-4795-a6e6-d294171ec4bb". PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 28 18:56:15 crc kubenswrapper[4721]: I0128 18:56:15.321422 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6931e142-6b1b-43e9-9515-2bdcfd78e69a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6931e142-6b1b-43e9-9515-2bdcfd78e69a" (UID: "6931e142-6b1b-43e9-9515-2bdcfd78e69a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:56:15 crc kubenswrapper[4721]: I0128 18:56:15.346215 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6931e142-6b1b-43e9-9515-2bdcfd78e69a-config-data" (OuterVolumeSpecName: "config-data") pod "6931e142-6b1b-43e9-9515-2bdcfd78e69a" (UID: "6931e142-6b1b-43e9-9515-2bdcfd78e69a"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:56:15 crc kubenswrapper[4721]: I0128 18:56:15.391972 4721 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-dada3fff-f4a9-4795-a6e6-d294171ec4bb\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-dada3fff-f4a9-4795-a6e6-d294171ec4bb\") on node \"crc\" " Jan 28 18:56:15 crc kubenswrapper[4721]: I0128 18:56:15.392027 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bppn8\" (UniqueName: \"kubernetes.io/projected/6931e142-6b1b-43e9-9515-2bdcfd78e69a-kube-api-access-bppn8\") on node \"crc\" DevicePath \"\"" Jan 28 18:56:15 crc kubenswrapper[4721]: I0128 18:56:15.392047 4721 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6931e142-6b1b-43e9-9515-2bdcfd78e69a-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 18:56:15 crc kubenswrapper[4721]: I0128 18:56:15.392062 4721 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6931e142-6b1b-43e9-9515-2bdcfd78e69a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 18:56:15 crc kubenswrapper[4721]: I0128 18:56:15.392075 4721 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6931e142-6b1b-43e9-9515-2bdcfd78e69a-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 18:56:15 crc kubenswrapper[4721]: I0128 18:56:15.418938 4721 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Jan 28 18:56:15 crc kubenswrapper[4721]: I0128 18:56:15.419218 4721 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-dada3fff-f4a9-4795-a6e6-d294171ec4bb" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-dada3fff-f4a9-4795-a6e6-d294171ec4bb") on node "crc" Jan 28 18:56:15 crc kubenswrapper[4721]: I0128 18:56:15.447476 4721 generic.go:334] "Generic (PLEG): container finished" podID="6931e142-6b1b-43e9-9515-2bdcfd78e69a" containerID="65457e779ed6eaa67433fd020c5b7271021fc16e45b5c5f26d77c7d28fe76424" exitCode=143 Jan 28 18:56:15 crc kubenswrapper[4721]: I0128 18:56:15.447516 4721 generic.go:334] "Generic (PLEG): container finished" podID="6931e142-6b1b-43e9-9515-2bdcfd78e69a" containerID="c6d55078bd9c8d9515116f01e9e183c286741d3e5d8feac36540d45a43dcf9da" exitCode=143 Jan 28 18:56:15 crc kubenswrapper[4721]: I0128 18:56:15.447579 4721 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 28 18:56:15 crc kubenswrapper[4721]: I0128 18:56:15.447586 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"6931e142-6b1b-43e9-9515-2bdcfd78e69a","Type":"ContainerDied","Data":"65457e779ed6eaa67433fd020c5b7271021fc16e45b5c5f26d77c7d28fe76424"} Jan 28 18:56:15 crc kubenswrapper[4721]: I0128 18:56:15.447688 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"6931e142-6b1b-43e9-9515-2bdcfd78e69a","Type":"ContainerDied","Data":"c6d55078bd9c8d9515116f01e9e183c286741d3e5d8feac36540d45a43dcf9da"} Jan 28 18:56:15 crc kubenswrapper[4721]: I0128 18:56:15.447706 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"6931e142-6b1b-43e9-9515-2bdcfd78e69a","Type":"ContainerDied","Data":"4040a86a07bda03079057d99670472868a165cc54669e4c9f5c82d1e7e48c075"} Jan 28 18:56:15 crc kubenswrapper[4721]: I0128 18:56:15.447731 4721 scope.go:117] "RemoveContainer" containerID="65457e779ed6eaa67433fd020c5b7271021fc16e45b5c5f26d77c7d28fe76424" Jan 28 18:56:15 crc kubenswrapper[4721]: I0128 18:56:15.494093 4721 reconciler_common.go:293] "Volume detached for volume \"pvc-dada3fff-f4a9-4795-a6e6-d294171ec4bb\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-dada3fff-f4a9-4795-a6e6-d294171ec4bb\") on node \"crc\" DevicePath \"\"" Jan 28 18:56:15 crc kubenswrapper[4721]: I0128 18:56:15.521226 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 28 18:56:15 crc kubenswrapper[4721]: I0128 18:56:15.585104 4721 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 28 18:56:15 crc kubenswrapper[4721]: I0128 18:56:15.585145 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Jan 28 18:56:15 crc kubenswrapper[4721]: E0128 18:56:15.585601 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6931e142-6b1b-43e9-9515-2bdcfd78e69a" containerName="glance-httpd" Jan 28 18:56:15 crc kubenswrapper[4721]: I0128 18:56:15.585616 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="6931e142-6b1b-43e9-9515-2bdcfd78e69a" containerName="glance-httpd" Jan 28 18:56:15 crc kubenswrapper[4721]: E0128 18:56:15.585628 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6931e142-6b1b-43e9-9515-2bdcfd78e69a" containerName="glance-log" Jan 28 18:56:15 crc kubenswrapper[4721]: I0128 18:56:15.585656 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="6931e142-6b1b-43e9-9515-2bdcfd78e69a" containerName="glance-log" Jan 28 18:56:15 crc kubenswrapper[4721]: I0128 18:56:15.585853 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="6931e142-6b1b-43e9-9515-2bdcfd78e69a" containerName="glance-httpd" Jan 28 18:56:15 crc kubenswrapper[4721]: I0128 18:56:15.585875 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="6931e142-6b1b-43e9-9515-2bdcfd78e69a" containerName="glance-log" Jan 28 18:56:15 crc kubenswrapper[4721]: I0128 18:56:15.587030 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 28 18:56:15 crc kubenswrapper[4721]: I0128 18:56:15.593068 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Jan 28 18:56:15 crc kubenswrapper[4721]: I0128 18:56:15.593408 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Jan 28 18:56:15 crc kubenswrapper[4721]: I0128 18:56:15.609695 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 28 18:56:15 crc kubenswrapper[4721]: I0128 18:56:15.700712 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa8caeb1-7fd6-4493-afba-149f6ad5cfcd-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"fa8caeb1-7fd6-4493-afba-149f6ad5cfcd\") " pod="openstack/glance-default-external-api-0" Jan 28 18:56:15 crc kubenswrapper[4721]: I0128 18:56:15.700768 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-dada3fff-f4a9-4795-a6e6-d294171ec4bb\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-dada3fff-f4a9-4795-a6e6-d294171ec4bb\") pod \"glance-default-external-api-0\" (UID: \"fa8caeb1-7fd6-4493-afba-149f6ad5cfcd\") " pod="openstack/glance-default-external-api-0" Jan 28 18:56:15 crc kubenswrapper[4721]: I0128 18:56:15.700828 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bctlc\" (UniqueName: \"kubernetes.io/projected/fa8caeb1-7fd6-4493-afba-149f6ad5cfcd-kube-api-access-bctlc\") pod \"glance-default-external-api-0\" (UID: \"fa8caeb1-7fd6-4493-afba-149f6ad5cfcd\") " pod="openstack/glance-default-external-api-0" Jan 28 18:56:15 crc kubenswrapper[4721]: I0128 18:56:15.700859 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/fa8caeb1-7fd6-4493-afba-149f6ad5cfcd-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"fa8caeb1-7fd6-4493-afba-149f6ad5cfcd\") " pod="openstack/glance-default-external-api-0" Jan 28 18:56:15 crc kubenswrapper[4721]: I0128 18:56:15.700945 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fa8caeb1-7fd6-4493-afba-149f6ad5cfcd-scripts\") pod \"glance-default-external-api-0\" (UID: \"fa8caeb1-7fd6-4493-afba-149f6ad5cfcd\") " pod="openstack/glance-default-external-api-0" Jan 28 18:56:15 crc kubenswrapper[4721]: I0128 18:56:15.701028 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fa8caeb1-7fd6-4493-afba-149f6ad5cfcd-logs\") pod \"glance-default-external-api-0\" (UID: \"fa8caeb1-7fd6-4493-afba-149f6ad5cfcd\") " pod="openstack/glance-default-external-api-0" Jan 28 18:56:15 crc kubenswrapper[4721]: I0128 18:56:15.701070 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fa8caeb1-7fd6-4493-afba-149f6ad5cfcd-config-data\") pod \"glance-default-external-api-0\" (UID: \"fa8caeb1-7fd6-4493-afba-149f6ad5cfcd\") " pod="openstack/glance-default-external-api-0" Jan 28 18:56:15 crc kubenswrapper[4721]: I0128 18:56:15.701088 4721 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/fa8caeb1-7fd6-4493-afba-149f6ad5cfcd-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"fa8caeb1-7fd6-4493-afba-149f6ad5cfcd\") " pod="openstack/glance-default-external-api-0" Jan 28 18:56:15 crc kubenswrapper[4721]: I0128 18:56:15.804732 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fa8caeb1-7fd6-4493-afba-149f6ad5cfcd-scripts\") pod \"glance-default-external-api-0\" (UID: \"fa8caeb1-7fd6-4493-afba-149f6ad5cfcd\") " pod="openstack/glance-default-external-api-0" Jan 28 18:56:15 crc kubenswrapper[4721]: I0128 18:56:15.804844 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fa8caeb1-7fd6-4493-afba-149f6ad5cfcd-logs\") pod \"glance-default-external-api-0\" (UID: \"fa8caeb1-7fd6-4493-afba-149f6ad5cfcd\") " pod="openstack/glance-default-external-api-0" Jan 28 18:56:15 crc kubenswrapper[4721]: I0128 18:56:15.804909 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fa8caeb1-7fd6-4493-afba-149f6ad5cfcd-config-data\") pod \"glance-default-external-api-0\" (UID: \"fa8caeb1-7fd6-4493-afba-149f6ad5cfcd\") " pod="openstack/glance-default-external-api-0" Jan 28 18:56:15 crc kubenswrapper[4721]: I0128 18:56:15.804929 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/fa8caeb1-7fd6-4493-afba-149f6ad5cfcd-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"fa8caeb1-7fd6-4493-afba-149f6ad5cfcd\") " pod="openstack/glance-default-external-api-0" Jan 28 18:56:15 crc kubenswrapper[4721]: I0128 18:56:15.805040 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa8caeb1-7fd6-4493-afba-149f6ad5cfcd-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"fa8caeb1-7fd6-4493-afba-149f6ad5cfcd\") " pod="openstack/glance-default-external-api-0" Jan 28 18:56:15 crc kubenswrapper[4721]: I0128 18:56:15.805072 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-dada3fff-f4a9-4795-a6e6-d294171ec4bb\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-dada3fff-f4a9-4795-a6e6-d294171ec4bb\") pod \"glance-default-external-api-0\" (UID: \"fa8caeb1-7fd6-4493-afba-149f6ad5cfcd\") " pod="openstack/glance-default-external-api-0" Jan 28 18:56:15 crc kubenswrapper[4721]: I0128 18:56:15.805120 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bctlc\" (UniqueName: \"kubernetes.io/projected/fa8caeb1-7fd6-4493-afba-149f6ad5cfcd-kube-api-access-bctlc\") pod \"glance-default-external-api-0\" (UID: \"fa8caeb1-7fd6-4493-afba-149f6ad5cfcd\") " pod="openstack/glance-default-external-api-0" Jan 28 18:56:15 crc kubenswrapper[4721]: I0128 18:56:15.805156 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/fa8caeb1-7fd6-4493-afba-149f6ad5cfcd-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"fa8caeb1-7fd6-4493-afba-149f6ad5cfcd\") " pod="openstack/glance-default-external-api-0" Jan 28 18:56:15 crc kubenswrapper[4721]: I0128 18:56:15.805730 4721 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/fa8caeb1-7fd6-4493-afba-149f6ad5cfcd-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"fa8caeb1-7fd6-4493-afba-149f6ad5cfcd\") " pod="openstack/glance-default-external-api-0" Jan 28 18:56:15 crc kubenswrapper[4721]: I0128 18:56:15.806649 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fa8caeb1-7fd6-4493-afba-149f6ad5cfcd-logs\") pod \"glance-default-external-api-0\" (UID: \"fa8caeb1-7fd6-4493-afba-149f6ad5cfcd\") " pod="openstack/glance-default-external-api-0" Jan 28 18:56:15 crc kubenswrapper[4721]: I0128 18:56:15.812270 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/fa8caeb1-7fd6-4493-afba-149f6ad5cfcd-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"fa8caeb1-7fd6-4493-afba-149f6ad5cfcd\") " pod="openstack/glance-default-external-api-0" Jan 28 18:56:15 crc kubenswrapper[4721]: I0128 18:56:15.812287 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa8caeb1-7fd6-4493-afba-149f6ad5cfcd-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"fa8caeb1-7fd6-4493-afba-149f6ad5cfcd\") " pod="openstack/glance-default-external-api-0" Jan 28 18:56:15 crc kubenswrapper[4721]: I0128 18:56:15.814865 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fa8caeb1-7fd6-4493-afba-149f6ad5cfcd-scripts\") pod \"glance-default-external-api-0\" (UID: \"fa8caeb1-7fd6-4493-afba-149f6ad5cfcd\") " pod="openstack/glance-default-external-api-0" Jan 28 18:56:15 crc kubenswrapper[4721]: I0128 18:56:15.815754 4721 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 28 18:56:15 crc kubenswrapper[4721]: I0128 18:56:15.815817 4721 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-dada3fff-f4a9-4795-a6e6-d294171ec4bb\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-dada3fff-f4a9-4795-a6e6-d294171ec4bb\") pod \"glance-default-external-api-0\" (UID: \"fa8caeb1-7fd6-4493-afba-149f6ad5cfcd\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/cbf5faf63d16a5d12a5e9b11b66b2cf989de626a136bdd39a47c0348964ea03b/globalmount\"" pod="openstack/glance-default-external-api-0" Jan 28 18:56:15 crc kubenswrapper[4721]: I0128 18:56:15.833517 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fa8caeb1-7fd6-4493-afba-149f6ad5cfcd-config-data\") pod \"glance-default-external-api-0\" (UID: \"fa8caeb1-7fd6-4493-afba-149f6ad5cfcd\") " pod="openstack/glance-default-external-api-0" Jan 28 18:56:15 crc kubenswrapper[4721]: I0128 18:56:15.841936 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bctlc\" (UniqueName: \"kubernetes.io/projected/fa8caeb1-7fd6-4493-afba-149f6ad5cfcd-kube-api-access-bctlc\") pod \"glance-default-external-api-0\" (UID: \"fa8caeb1-7fd6-4493-afba-149f6ad5cfcd\") " pod="openstack/glance-default-external-api-0" Jan 28 18:56:15 crc kubenswrapper[4721]: I0128 18:56:15.876893 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-dada3fff-f4a9-4795-a6e6-d294171ec4bb\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-dada3fff-f4a9-4795-a6e6-d294171ec4bb\") pod \"glance-default-external-api-0\" (UID: \"fa8caeb1-7fd6-4493-afba-149f6ad5cfcd\") " pod="openstack/glance-default-external-api-0" Jan 28 18:56:15 crc kubenswrapper[4721]: I0128 18:56:15.908559 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 28 18:56:17 crc kubenswrapper[4721]: I0128 18:56:17.474567 4721 generic.go:334] "Generic (PLEG): container finished" podID="5a0d808e-8db2-4d8b-a02e-5f04c991fb44" containerID="56ab69b31d63a6b1c62dd761dae51e64e5951280529007a760301c0b8d5362ef" exitCode=0 Jan 28 18:56:17 crc kubenswrapper[4721]: I0128 18:56:17.474799 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-4rqtv" event={"ID":"5a0d808e-8db2-4d8b-a02e-5f04c991fb44","Type":"ContainerDied","Data":"56ab69b31d63a6b1c62dd761dae51e64e5951280529007a760301c0b8d5362ef"} Jan 28 18:56:17 crc kubenswrapper[4721]: I0128 18:56:17.542723 4721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6931e142-6b1b-43e9-9515-2bdcfd78e69a" path="/var/lib/kubelet/pods/6931e142-6b1b-43e9-9515-2bdcfd78e69a/volumes" Jan 28 18:56:18 crc kubenswrapper[4721]: I0128 18:56:18.138328 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-785d8bcb8c-v4g9c" Jan 28 18:56:18 crc kubenswrapper[4721]: I0128 18:56:18.208414 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-58dd9ff6bc-2zk48"] Jan 28 18:56:18 crc kubenswrapper[4721]: I0128 18:56:18.208798 4721 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-58dd9ff6bc-2zk48" podUID="24b34696-1be6-4ee8-8161-0b3ba8119191" containerName="dnsmasq-dns" containerID="cri-o://ab3ae24f436a6b7f1f92cc7c1ad7abbfb9c2b71a4bc5c792c127d2cdcfa8665f" gracePeriod=10 Jan 28 18:56:18 crc kubenswrapper[4721]: I0128 18:56:18.491916 4721 generic.go:334] "Generic (PLEG): container finished" podID="24b34696-1be6-4ee8-8161-0b3ba8119191" containerID="ab3ae24f436a6b7f1f92cc7c1ad7abbfb9c2b71a4bc5c792c127d2cdcfa8665f" exitCode=0 Jan 28 18:56:18 crc kubenswrapper[4721]: I0128 18:56:18.492016 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58dd9ff6bc-2zk48" event={"ID":"24b34696-1be6-4ee8-8161-0b3ba8119191","Type":"ContainerDied","Data":"ab3ae24f436a6b7f1f92cc7c1ad7abbfb9c2b71a4bc5c792c127d2cdcfa8665f"} Jan 28 18:56:18 crc kubenswrapper[4721]: I0128 18:56:18.538011 4721 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-58dd9ff6bc-2zk48" podUID="24b34696-1be6-4ee8-8161-0b3ba8119191" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.166:5353: connect: connection refused" Jan 28 18:56:19 crc kubenswrapper[4721]: I0128 18:56:19.736917 4721 scope.go:117] "RemoveContainer" containerID="c6d55078bd9c8d9515116f01e9e183c286741d3e5d8feac36540d45a43dcf9da" Jan 28 18:56:19 crc kubenswrapper[4721]: I0128 18:56:19.856262 4721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 28 18:56:19 crc kubenswrapper[4721]: I0128 18:56:19.879351 4721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-g5v9q" Jan 28 18:56:19 crc kubenswrapper[4721]: I0128 18:56:19.892413 4721 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-4rqtv" Jan 28 18:56:20 crc kubenswrapper[4721]: I0128 18:56:20.010827 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jmtf6\" (UniqueName: \"kubernetes.io/projected/f645046a-6c22-410b-86a6-9f9aedff30db-kube-api-access-jmtf6\") pod \"f645046a-6c22-410b-86a6-9f9aedff30db\" (UID: \"f645046a-6c22-410b-86a6-9f9aedff30db\") " Jan 28 18:56:20 crc kubenswrapper[4721]: I0128 18:56:20.010939 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/f645046a-6c22-410b-86a6-9f9aedff30db-httpd-run\") pod \"f645046a-6c22-410b-86a6-9f9aedff30db\" (UID: \"f645046a-6c22-410b-86a6-9f9aedff30db\") " Jan 28 18:56:20 crc kubenswrapper[4721]: I0128 18:56:20.010971 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d03058f5-d416-467a-b33c-36de7e5b6008-combined-ca-bundle\") pod \"d03058f5-d416-467a-b33c-36de7e5b6008\" (UID: \"d03058f5-d416-467a-b33c-36de7e5b6008\") " Jan 28 18:56:20 crc kubenswrapper[4721]: I0128 18:56:20.011004 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xhsct\" (UniqueName: \"kubernetes.io/projected/d03058f5-d416-467a-b33c-36de7e5b6008-kube-api-access-xhsct\") pod \"d03058f5-d416-467a-b33c-36de7e5b6008\" (UID: \"d03058f5-d416-467a-b33c-36de7e5b6008\") " Jan 28 18:56:20 crc kubenswrapper[4721]: I0128 18:56:20.011032 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qlls6\" (UniqueName: \"kubernetes.io/projected/5a0d808e-8db2-4d8b-a02e-5f04c991fb44-kube-api-access-qlls6\") pod \"5a0d808e-8db2-4d8b-a02e-5f04c991fb44\" (UID: \"5a0d808e-8db2-4d8b-a02e-5f04c991fb44\") " Jan 28 18:56:20 crc kubenswrapper[4721]: I0128 18:56:20.011098 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f645046a-6c22-410b-86a6-9f9aedff30db-scripts\") pod \"f645046a-6c22-410b-86a6-9f9aedff30db\" (UID: \"f645046a-6c22-410b-86a6-9f9aedff30db\") " Jan 28 18:56:20 crc kubenswrapper[4721]: I0128 18:56:20.011302 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-37cf9bee-9d53-43de-b70c-dc3f1890f4f2\") pod \"f645046a-6c22-410b-86a6-9f9aedff30db\" (UID: \"f645046a-6c22-410b-86a6-9f9aedff30db\") " Jan 28 18:56:20 crc kubenswrapper[4721]: I0128 18:56:20.011350 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/d03058f5-d416-467a-b33c-36de7e5b6008-config\") pod \"d03058f5-d416-467a-b33c-36de7e5b6008\" (UID: \"d03058f5-d416-467a-b33c-36de7e5b6008\") " Jan 28 18:56:20 crc kubenswrapper[4721]: I0128 18:56:20.011384 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f645046a-6c22-410b-86a6-9f9aedff30db-config-data\") pod \"f645046a-6c22-410b-86a6-9f9aedff30db\" (UID: \"f645046a-6c22-410b-86a6-9f9aedff30db\") " Jan 28 18:56:20 crc kubenswrapper[4721]: I0128 18:56:20.011563 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f645046a-6c22-410b-86a6-9f9aedff30db-logs\") pod \"f645046a-6c22-410b-86a6-9f9aedff30db\" (UID: 
\"f645046a-6c22-410b-86a6-9f9aedff30db\") " Jan 28 18:56:20 crc kubenswrapper[4721]: I0128 18:56:20.011614 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f645046a-6c22-410b-86a6-9f9aedff30db-combined-ca-bundle\") pod \"f645046a-6c22-410b-86a6-9f9aedff30db\" (UID: \"f645046a-6c22-410b-86a6-9f9aedff30db\") " Jan 28 18:56:20 crc kubenswrapper[4721]: I0128 18:56:20.011653 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/5a0d808e-8db2-4d8b-a02e-5f04c991fb44-db-sync-config-data\") pod \"5a0d808e-8db2-4d8b-a02e-5f04c991fb44\" (UID: \"5a0d808e-8db2-4d8b-a02e-5f04c991fb44\") " Jan 28 18:56:20 crc kubenswrapper[4721]: I0128 18:56:20.011688 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5a0d808e-8db2-4d8b-a02e-5f04c991fb44-combined-ca-bundle\") pod \"5a0d808e-8db2-4d8b-a02e-5f04c991fb44\" (UID: \"5a0d808e-8db2-4d8b-a02e-5f04c991fb44\") " Jan 28 18:56:20 crc kubenswrapper[4721]: I0128 18:56:20.012934 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f645046a-6c22-410b-86a6-9f9aedff30db-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "f645046a-6c22-410b-86a6-9f9aedff30db" (UID: "f645046a-6c22-410b-86a6-9f9aedff30db"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 18:56:20 crc kubenswrapper[4721]: I0128 18:56:20.012974 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f645046a-6c22-410b-86a6-9f9aedff30db-logs" (OuterVolumeSpecName: "logs") pod "f645046a-6c22-410b-86a6-9f9aedff30db" (UID: "f645046a-6c22-410b-86a6-9f9aedff30db"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 18:56:20 crc kubenswrapper[4721]: I0128 18:56:20.019557 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5a0d808e-8db2-4d8b-a02e-5f04c991fb44-kube-api-access-qlls6" (OuterVolumeSpecName: "kube-api-access-qlls6") pod "5a0d808e-8db2-4d8b-a02e-5f04c991fb44" (UID: "5a0d808e-8db2-4d8b-a02e-5f04c991fb44"). InnerVolumeSpecName "kube-api-access-qlls6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:56:20 crc kubenswrapper[4721]: I0128 18:56:20.022187 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5a0d808e-8db2-4d8b-a02e-5f04c991fb44-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "5a0d808e-8db2-4d8b-a02e-5f04c991fb44" (UID: "5a0d808e-8db2-4d8b-a02e-5f04c991fb44"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:56:20 crc kubenswrapper[4721]: I0128 18:56:20.024122 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f645046a-6c22-410b-86a6-9f9aedff30db-kube-api-access-jmtf6" (OuterVolumeSpecName: "kube-api-access-jmtf6") pod "f645046a-6c22-410b-86a6-9f9aedff30db" (UID: "f645046a-6c22-410b-86a6-9f9aedff30db"). InnerVolumeSpecName "kube-api-access-jmtf6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:56:20 crc kubenswrapper[4721]: I0128 18:56:20.024715 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d03058f5-d416-467a-b33c-36de7e5b6008-kube-api-access-xhsct" (OuterVolumeSpecName: "kube-api-access-xhsct") pod "d03058f5-d416-467a-b33c-36de7e5b6008" (UID: "d03058f5-d416-467a-b33c-36de7e5b6008"). InnerVolumeSpecName "kube-api-access-xhsct". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:56:20 crc kubenswrapper[4721]: I0128 18:56:20.030499 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f645046a-6c22-410b-86a6-9f9aedff30db-scripts" (OuterVolumeSpecName: "scripts") pod "f645046a-6c22-410b-86a6-9f9aedff30db" (UID: "f645046a-6c22-410b-86a6-9f9aedff30db"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:56:20 crc kubenswrapper[4721]: I0128 18:56:20.044900 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-37cf9bee-9d53-43de-b70c-dc3f1890f4f2" (OuterVolumeSpecName: "glance") pod "f645046a-6c22-410b-86a6-9f9aedff30db" (UID: "f645046a-6c22-410b-86a6-9f9aedff30db"). InnerVolumeSpecName "pvc-37cf9bee-9d53-43de-b70c-dc3f1890f4f2". PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 28 18:56:20 crc kubenswrapper[4721]: I0128 18:56:20.054553 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f645046a-6c22-410b-86a6-9f9aedff30db-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f645046a-6c22-410b-86a6-9f9aedff30db" (UID: "f645046a-6c22-410b-86a6-9f9aedff30db"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:56:20 crc kubenswrapper[4721]: I0128 18:56:20.068858 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5a0d808e-8db2-4d8b-a02e-5f04c991fb44-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5a0d808e-8db2-4d8b-a02e-5f04c991fb44" (UID: "5a0d808e-8db2-4d8b-a02e-5f04c991fb44"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:56:20 crc kubenswrapper[4721]: I0128 18:56:20.077694 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d03058f5-d416-467a-b33c-36de7e5b6008-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d03058f5-d416-467a-b33c-36de7e5b6008" (UID: "d03058f5-d416-467a-b33c-36de7e5b6008"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:56:20 crc kubenswrapper[4721]: I0128 18:56:20.082577 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d03058f5-d416-467a-b33c-36de7e5b6008-config" (OuterVolumeSpecName: "config") pod "d03058f5-d416-467a-b33c-36de7e5b6008" (UID: "d03058f5-d416-467a-b33c-36de7e5b6008"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:56:20 crc kubenswrapper[4721]: I0128 18:56:20.106968 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f645046a-6c22-410b-86a6-9f9aedff30db-config-data" (OuterVolumeSpecName: "config-data") pod "f645046a-6c22-410b-86a6-9f9aedff30db" (UID: "f645046a-6c22-410b-86a6-9f9aedff30db"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:56:20 crc kubenswrapper[4721]: I0128 18:56:20.114905 4721 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f645046a-6c22-410b-86a6-9f9aedff30db-logs\") on node \"crc\" DevicePath \"\"" Jan 28 18:56:20 crc kubenswrapper[4721]: I0128 18:56:20.115761 4721 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f645046a-6c22-410b-86a6-9f9aedff30db-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 18:56:20 crc kubenswrapper[4721]: I0128 18:56:20.115928 4721 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/5a0d808e-8db2-4d8b-a02e-5f04c991fb44-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 18:56:20 crc kubenswrapper[4721]: I0128 18:56:20.116033 4721 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5a0d808e-8db2-4d8b-a02e-5f04c991fb44-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 18:56:20 crc kubenswrapper[4721]: I0128 18:56:20.116156 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jmtf6\" (UniqueName: \"kubernetes.io/projected/f645046a-6c22-410b-86a6-9f9aedff30db-kube-api-access-jmtf6\") on node \"crc\" DevicePath \"\"" Jan 28 18:56:20 crc kubenswrapper[4721]: I0128 18:56:20.116264 4721 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/f645046a-6c22-410b-86a6-9f9aedff30db-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 28 18:56:20 crc kubenswrapper[4721]: I0128 18:56:20.116377 4721 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d03058f5-d416-467a-b33c-36de7e5b6008-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 18:56:20 crc kubenswrapper[4721]: I0128 18:56:20.116445 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xhsct\" (UniqueName: \"kubernetes.io/projected/d03058f5-d416-467a-b33c-36de7e5b6008-kube-api-access-xhsct\") on node \"crc\" DevicePath \"\"" Jan 28 18:56:20 crc kubenswrapper[4721]: I0128 18:56:20.116548 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qlls6\" (UniqueName: \"kubernetes.io/projected/5a0d808e-8db2-4d8b-a02e-5f04c991fb44-kube-api-access-qlls6\") on node \"crc\" DevicePath \"\"" Jan 28 18:56:20 crc kubenswrapper[4721]: I0128 18:56:20.116614 4721 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f645046a-6c22-410b-86a6-9f9aedff30db-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 18:56:20 crc kubenswrapper[4721]: I0128 18:56:20.116709 4721 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-37cf9bee-9d53-43de-b70c-dc3f1890f4f2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-37cf9bee-9d53-43de-b70c-dc3f1890f4f2\") on node \"crc\" " Jan 28 18:56:20 crc kubenswrapper[4721]: I0128 18:56:20.116787 4721 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/d03058f5-d416-467a-b33c-36de7e5b6008-config\") on node \"crc\" DevicePath \"\"" Jan 28 18:56:20 crc kubenswrapper[4721]: I0128 18:56:20.116858 4721 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f645046a-6c22-410b-86a6-9f9aedff30db-config-data\") on node 
\"crc\" DevicePath \"\"" Jan 28 18:56:20 crc kubenswrapper[4721]: I0128 18:56:20.152234 4721 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Jan 28 18:56:20 crc kubenswrapper[4721]: I0128 18:56:20.152483 4721 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-37cf9bee-9d53-43de-b70c-dc3f1890f4f2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-37cf9bee-9d53-43de-b70c-dc3f1890f4f2") on node "crc" Jan 28 18:56:20 crc kubenswrapper[4721]: I0128 18:56:20.218808 4721 reconciler_common.go:293] "Volume detached for volume \"pvc-37cf9bee-9d53-43de-b70c-dc3f1890f4f2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-37cf9bee-9d53-43de-b70c-dc3f1890f4f2\") on node \"crc\" DevicePath \"\"" Jan 28 18:56:20 crc kubenswrapper[4721]: I0128 18:56:20.430857 4721 scope.go:117] "RemoveContainer" containerID="65457e779ed6eaa67433fd020c5b7271021fc16e45b5c5f26d77c7d28fe76424" Jan 28 18:56:20 crc kubenswrapper[4721]: E0128 18:56:20.431543 4721 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"65457e779ed6eaa67433fd020c5b7271021fc16e45b5c5f26d77c7d28fe76424\": container with ID starting with 65457e779ed6eaa67433fd020c5b7271021fc16e45b5c5f26d77c7d28fe76424 not found: ID does not exist" containerID="65457e779ed6eaa67433fd020c5b7271021fc16e45b5c5f26d77c7d28fe76424" Jan 28 18:56:20 crc kubenswrapper[4721]: I0128 18:56:20.431586 4721 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"65457e779ed6eaa67433fd020c5b7271021fc16e45b5c5f26d77c7d28fe76424"} err="failed to get container status \"65457e779ed6eaa67433fd020c5b7271021fc16e45b5c5f26d77c7d28fe76424\": rpc error: code = NotFound desc = could not find container \"65457e779ed6eaa67433fd020c5b7271021fc16e45b5c5f26d77c7d28fe76424\": container with ID starting with 65457e779ed6eaa67433fd020c5b7271021fc16e45b5c5f26d77c7d28fe76424 not found: ID does not exist" Jan 28 18:56:20 crc kubenswrapper[4721]: I0128 18:56:20.431617 4721 scope.go:117] "RemoveContainer" containerID="c6d55078bd9c8d9515116f01e9e183c286741d3e5d8feac36540d45a43dcf9da" Jan 28 18:56:20 crc kubenswrapper[4721]: E0128 18:56:20.432062 4721 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c6d55078bd9c8d9515116f01e9e183c286741d3e5d8feac36540d45a43dcf9da\": container with ID starting with c6d55078bd9c8d9515116f01e9e183c286741d3e5d8feac36540d45a43dcf9da not found: ID does not exist" containerID="c6d55078bd9c8d9515116f01e9e183c286741d3e5d8feac36540d45a43dcf9da" Jan 28 18:56:20 crc kubenswrapper[4721]: I0128 18:56:20.432116 4721 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c6d55078bd9c8d9515116f01e9e183c286741d3e5d8feac36540d45a43dcf9da"} err="failed to get container status \"c6d55078bd9c8d9515116f01e9e183c286741d3e5d8feac36540d45a43dcf9da\": rpc error: code = NotFound desc = could not find container \"c6d55078bd9c8d9515116f01e9e183c286741d3e5d8feac36540d45a43dcf9da\": container with ID starting with c6d55078bd9c8d9515116f01e9e183c286741d3e5d8feac36540d45a43dcf9da not found: ID does not exist" Jan 28 18:56:20 crc kubenswrapper[4721]: I0128 18:56:20.432235 4721 scope.go:117] "RemoveContainer" containerID="65457e779ed6eaa67433fd020c5b7271021fc16e45b5c5f26d77c7d28fe76424" Jan 28 18:56:20 crc kubenswrapper[4721]: I0128 18:56:20.432704 4721 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"65457e779ed6eaa67433fd020c5b7271021fc16e45b5c5f26d77c7d28fe76424"} err="failed to get container status \"65457e779ed6eaa67433fd020c5b7271021fc16e45b5c5f26d77c7d28fe76424\": rpc error: code = NotFound desc = could not find container \"65457e779ed6eaa67433fd020c5b7271021fc16e45b5c5f26d77c7d28fe76424\": container with ID starting with 65457e779ed6eaa67433fd020c5b7271021fc16e45b5c5f26d77c7d28fe76424 not found: ID does not exist" Jan 28 18:56:20 crc kubenswrapper[4721]: I0128 18:56:20.432739 4721 scope.go:117] "RemoveContainer" containerID="c6d55078bd9c8d9515116f01e9e183c286741d3e5d8feac36540d45a43dcf9da" Jan 28 18:56:20 crc kubenswrapper[4721]: I0128 18:56:20.433324 4721 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c6d55078bd9c8d9515116f01e9e183c286741d3e5d8feac36540d45a43dcf9da"} err="failed to get container status \"c6d55078bd9c8d9515116f01e9e183c286741d3e5d8feac36540d45a43dcf9da\": rpc error: code = NotFound desc = could not find container \"c6d55078bd9c8d9515116f01e9e183c286741d3e5d8feac36540d45a43dcf9da\": container with ID starting with c6d55078bd9c8d9515116f01e9e183c286741d3e5d8feac36540d45a43dcf9da not found: ID does not exist" Jan 28 18:56:20 crc kubenswrapper[4721]: I0128 18:56:20.455530 4721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-58dd9ff6bc-2zk48" Jan 28 18:56:20 crc kubenswrapper[4721]: I0128 18:56:20.530910 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/24b34696-1be6-4ee8-8161-0b3ba8119191-ovsdbserver-sb\") pod \"24b34696-1be6-4ee8-8161-0b3ba8119191\" (UID: \"24b34696-1be6-4ee8-8161-0b3ba8119191\") " Jan 28 18:56:20 crc kubenswrapper[4721]: I0128 18:56:20.531401 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/24b34696-1be6-4ee8-8161-0b3ba8119191-dns-swift-storage-0\") pod \"24b34696-1be6-4ee8-8161-0b3ba8119191\" (UID: \"24b34696-1be6-4ee8-8161-0b3ba8119191\") " Jan 28 18:56:20 crc kubenswrapper[4721]: I0128 18:56:20.531545 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/24b34696-1be6-4ee8-8161-0b3ba8119191-dns-svc\") pod \"24b34696-1be6-4ee8-8161-0b3ba8119191\" (UID: \"24b34696-1be6-4ee8-8161-0b3ba8119191\") " Jan 28 18:56:20 crc kubenswrapper[4721]: I0128 18:56:20.531566 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-79hk7\" (UniqueName: \"kubernetes.io/projected/24b34696-1be6-4ee8-8161-0b3ba8119191-kube-api-access-79hk7\") pod \"24b34696-1be6-4ee8-8161-0b3ba8119191\" (UID: \"24b34696-1be6-4ee8-8161-0b3ba8119191\") " Jan 28 18:56:20 crc kubenswrapper[4721]: I0128 18:56:20.531622 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/24b34696-1be6-4ee8-8161-0b3ba8119191-config\") pod \"24b34696-1be6-4ee8-8161-0b3ba8119191\" (UID: \"24b34696-1be6-4ee8-8161-0b3ba8119191\") " Jan 28 18:56:20 crc kubenswrapper[4721]: I0128 18:56:20.531648 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/24b34696-1be6-4ee8-8161-0b3ba8119191-ovsdbserver-nb\") pod \"24b34696-1be6-4ee8-8161-0b3ba8119191\" 
(UID: \"24b34696-1be6-4ee8-8161-0b3ba8119191\") " Jan 28 18:56:20 crc kubenswrapper[4721]: I0128 18:56:20.548853 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/24b34696-1be6-4ee8-8161-0b3ba8119191-kube-api-access-79hk7" (OuterVolumeSpecName: "kube-api-access-79hk7") pod "24b34696-1be6-4ee8-8161-0b3ba8119191" (UID: "24b34696-1be6-4ee8-8161-0b3ba8119191"). InnerVolumeSpecName "kube-api-access-79hk7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:56:20 crc kubenswrapper[4721]: I0128 18:56:20.601737 4721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 28 18:56:20 crc kubenswrapper[4721]: I0128 18:56:20.601685 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/24b34696-1be6-4ee8-8161-0b3ba8119191-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "24b34696-1be6-4ee8-8161-0b3ba8119191" (UID: "24b34696-1be6-4ee8-8161-0b3ba8119191"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:56:20 crc kubenswrapper[4721]: I0128 18:56:20.602194 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"f645046a-6c22-410b-86a6-9f9aedff30db","Type":"ContainerDied","Data":"1cb1a5750d50006a78c12b05f96b34189a2ba30446fc9df35f81effe37b072d4"} Jan 28 18:56:20 crc kubenswrapper[4721]: I0128 18:56:20.602345 4721 scope.go:117] "RemoveContainer" containerID="1a06be5cdebbdf72439583839bfe788f477b59df04227398390c0a24a6cc1a38" Jan 28 18:56:20 crc kubenswrapper[4721]: I0128 18:56:20.618703 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/24b34696-1be6-4ee8-8161-0b3ba8119191-config" (OuterVolumeSpecName: "config") pod "24b34696-1be6-4ee8-8161-0b3ba8119191" (UID: "24b34696-1be6-4ee8-8161-0b3ba8119191"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:56:20 crc kubenswrapper[4721]: I0128 18:56:20.622788 4721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-g5v9q" Jan 28 18:56:20 crc kubenswrapper[4721]: I0128 18:56:20.623532 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-g5v9q" event={"ID":"d03058f5-d416-467a-b33c-36de7e5b6008","Type":"ContainerDied","Data":"7a7a11f296a5bc93b1de74ee528f9312ef8584a3f818339fca072364280ff421"} Jan 28 18:56:20 crc kubenswrapper[4721]: I0128 18:56:20.623580 4721 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7a7a11f296a5bc93b1de74ee528f9312ef8584a3f818339fca072364280ff421" Jan 28 18:56:20 crc kubenswrapper[4721]: I0128 18:56:20.626126 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/24b34696-1be6-4ee8-8161-0b3ba8119191-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "24b34696-1be6-4ee8-8161-0b3ba8119191" (UID: "24b34696-1be6-4ee8-8161-0b3ba8119191"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:56:20 crc kubenswrapper[4721]: I0128 18:56:20.648864 4721 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/24b34696-1be6-4ee8-8161-0b3ba8119191-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 28 18:56:20 crc kubenswrapper[4721]: I0128 18:56:20.648890 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-79hk7\" (UniqueName: \"kubernetes.io/projected/24b34696-1be6-4ee8-8161-0b3ba8119191-kube-api-access-79hk7\") on node \"crc\" DevicePath \"\"" Jan 28 18:56:20 crc kubenswrapper[4721]: I0128 18:56:20.648982 4721 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/24b34696-1be6-4ee8-8161-0b3ba8119191-config\") on node \"crc\" DevicePath \"\"" Jan 28 18:56:20 crc kubenswrapper[4721]: I0128 18:56:20.648996 4721 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/24b34696-1be6-4ee8-8161-0b3ba8119191-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 28 18:56:20 crc kubenswrapper[4721]: I0128 18:56:20.655034 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58dd9ff6bc-2zk48" event={"ID":"24b34696-1be6-4ee8-8161-0b3ba8119191","Type":"ContainerDied","Data":"30d1bbb05937181306468aec6d00e032c3102725b4d4477bb994f0e3b061e9f4"} Jan 28 18:56:20 crc kubenswrapper[4721]: I0128 18:56:20.664507 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-4rqtv" event={"ID":"5a0d808e-8db2-4d8b-a02e-5f04c991fb44","Type":"ContainerDied","Data":"6fbb0d60559c0c77e834c58ae99800ff9b12243ae23ea2a2761f5bbcd6f6f1a5"} Jan 28 18:56:20 crc kubenswrapper[4721]: I0128 18:56:20.664821 4721 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6fbb0d60559c0c77e834c58ae99800ff9b12243ae23ea2a2761f5bbcd6f6f1a5" Jan 28 18:56:20 crc kubenswrapper[4721]: I0128 18:56:20.664768 4721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-58dd9ff6bc-2zk48" Jan 28 18:56:20 crc kubenswrapper[4721]: I0128 18:56:20.664899 4721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-4rqtv" Jan 28 18:56:20 crc kubenswrapper[4721]: I0128 18:56:20.702606 4721 scope.go:117] "RemoveContainer" containerID="1c0353017429e23779f940cc59166a7735e58257832558b0df16faf647ce25d9" Jan 28 18:56:20 crc kubenswrapper[4721]: I0128 18:56:20.729992 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/24b34696-1be6-4ee8-8161-0b3ba8119191-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "24b34696-1be6-4ee8-8161-0b3ba8119191" (UID: "24b34696-1be6-4ee8-8161-0b3ba8119191"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:56:20 crc kubenswrapper[4721]: I0128 18:56:20.750547 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/24b34696-1be6-4ee8-8161-0b3ba8119191-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "24b34696-1be6-4ee8-8161-0b3ba8119191" (UID: "24b34696-1be6-4ee8-8161-0b3ba8119191"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:56:20 crc kubenswrapper[4721]: I0128 18:56:20.796018 4721 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/24b34696-1be6-4ee8-8161-0b3ba8119191-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 28 18:56:20 crc kubenswrapper[4721]: I0128 18:56:20.796055 4721 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/24b34696-1be6-4ee8-8161-0b3ba8119191-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 28 18:56:20 crc kubenswrapper[4721]: I0128 18:56:20.852602 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 28 18:56:20 crc kubenswrapper[4721]: I0128 18:56:20.905940 4721 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 28 18:56:20 crc kubenswrapper[4721]: I0128 18:56:20.980332 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 28 18:56:20 crc kubenswrapper[4721]: E0128 18:56:20.980875 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f645046a-6c22-410b-86a6-9f9aedff30db" containerName="glance-log" Jan 28 18:56:20 crc kubenswrapper[4721]: I0128 18:56:20.980890 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="f645046a-6c22-410b-86a6-9f9aedff30db" containerName="glance-log" Jan 28 18:56:20 crc kubenswrapper[4721]: E0128 18:56:20.980909 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5a0d808e-8db2-4d8b-a02e-5f04c991fb44" containerName="barbican-db-sync" Jan 28 18:56:20 crc kubenswrapper[4721]: I0128 18:56:20.980918 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="5a0d808e-8db2-4d8b-a02e-5f04c991fb44" containerName="barbican-db-sync" Jan 28 18:56:20 crc kubenswrapper[4721]: E0128 18:56:20.980933 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f645046a-6c22-410b-86a6-9f9aedff30db" containerName="glance-httpd" Jan 28 18:56:20 crc kubenswrapper[4721]: I0128 18:56:20.980942 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="f645046a-6c22-410b-86a6-9f9aedff30db" containerName="glance-httpd" Jan 28 18:56:20 crc kubenswrapper[4721]: E0128 18:56:20.980962 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24b34696-1be6-4ee8-8161-0b3ba8119191" containerName="init" Jan 28 18:56:20 crc kubenswrapper[4721]: I0128 18:56:20.980970 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="24b34696-1be6-4ee8-8161-0b3ba8119191" containerName="init" Jan 28 18:56:20 crc kubenswrapper[4721]: E0128 18:56:20.980981 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d03058f5-d416-467a-b33c-36de7e5b6008" containerName="neutron-db-sync" Jan 28 18:56:20 crc kubenswrapper[4721]: I0128 18:56:20.980989 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="d03058f5-d416-467a-b33c-36de7e5b6008" containerName="neutron-db-sync" Jan 28 18:56:20 crc kubenswrapper[4721]: E0128 18:56:20.981005 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24b34696-1be6-4ee8-8161-0b3ba8119191" containerName="dnsmasq-dns" Jan 28 18:56:20 crc kubenswrapper[4721]: I0128 18:56:20.981021 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="24b34696-1be6-4ee8-8161-0b3ba8119191" containerName="dnsmasq-dns" Jan 28 18:56:20 crc kubenswrapper[4721]: I0128 18:56:20.981627 4721 memory_manager.go:354] "RemoveStaleState removing 
state" podUID="5a0d808e-8db2-4d8b-a02e-5f04c991fb44" containerName="barbican-db-sync" Jan 28 18:56:20 crc kubenswrapper[4721]: I0128 18:56:20.981663 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="f645046a-6c22-410b-86a6-9f9aedff30db" containerName="glance-httpd" Jan 28 18:56:20 crc kubenswrapper[4721]: I0128 18:56:20.981680 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="d03058f5-d416-467a-b33c-36de7e5b6008" containerName="neutron-db-sync" Jan 28 18:56:20 crc kubenswrapper[4721]: I0128 18:56:20.981691 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="f645046a-6c22-410b-86a6-9f9aedff30db" containerName="glance-log" Jan 28 18:56:20 crc kubenswrapper[4721]: I0128 18:56:20.981706 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="24b34696-1be6-4ee8-8161-0b3ba8119191" containerName="dnsmasq-dns" Jan 28 18:56:20 crc kubenswrapper[4721]: I0128 18:56:20.997014 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 28 18:56:21 crc kubenswrapper[4721]: I0128 18:56:21.005001 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Jan 28 18:56:21 crc kubenswrapper[4721]: I0128 18:56:21.013694 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Jan 28 18:56:21 crc kubenswrapper[4721]: I0128 18:56:21.060812 4721 scope.go:117] "RemoveContainer" containerID="ab3ae24f436a6b7f1f92cc7c1ad7abbfb9c2b71a4bc5c792c127d2cdcfa8665f" Jan 28 18:56:21 crc kubenswrapper[4721]: I0128 18:56:21.077269 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 28 18:56:21 crc kubenswrapper[4721]: I0128 18:56:21.122327 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/12589ff0-ab4c-4a16-b5bd-7cd433a85c86-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"12589ff0-ab4c-4a16-b5bd-7cd433a85c86\") " pod="openstack/glance-default-internal-api-0" Jan 28 18:56:21 crc kubenswrapper[4721]: I0128 18:56:21.122394 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/12589ff0-ab4c-4a16-b5bd-7cd433a85c86-config-data\") pod \"glance-default-internal-api-0\" (UID: \"12589ff0-ab4c-4a16-b5bd-7cd433a85c86\") " pod="openstack/glance-default-internal-api-0" Jan 28 18:56:21 crc kubenswrapper[4721]: I0128 18:56:21.122485 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/12589ff0-ab4c-4a16-b5bd-7cd433a85c86-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"12589ff0-ab4c-4a16-b5bd-7cd433a85c86\") " pod="openstack/glance-default-internal-api-0" Jan 28 18:56:21 crc kubenswrapper[4721]: I0128 18:56:21.122553 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-37cf9bee-9d53-43de-b70c-dc3f1890f4f2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-37cf9bee-9d53-43de-b70c-dc3f1890f4f2\") pod \"glance-default-internal-api-0\" (UID: \"12589ff0-ab4c-4a16-b5bd-7cd433a85c86\") " pod="openstack/glance-default-internal-api-0" Jan 28 18:56:21 crc kubenswrapper[4721]: I0128 18:56:21.122585 4721 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/12589ff0-ab4c-4a16-b5bd-7cd433a85c86-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"12589ff0-ab4c-4a16-b5bd-7cd433a85c86\") " pod="openstack/glance-default-internal-api-0" Jan 28 18:56:21 crc kubenswrapper[4721]: I0128 18:56:21.122641 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/12589ff0-ab4c-4a16-b5bd-7cd433a85c86-scripts\") pod \"glance-default-internal-api-0\" (UID: \"12589ff0-ab4c-4a16-b5bd-7cd433a85c86\") " pod="openstack/glance-default-internal-api-0" Jan 28 18:56:21 crc kubenswrapper[4721]: I0128 18:56:21.122664 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/12589ff0-ab4c-4a16-b5bd-7cd433a85c86-logs\") pod \"glance-default-internal-api-0\" (UID: \"12589ff0-ab4c-4a16-b5bd-7cd433a85c86\") " pod="openstack/glance-default-internal-api-0" Jan 28 18:56:21 crc kubenswrapper[4721]: I0128 18:56:21.122722 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lvnrp\" (UniqueName: \"kubernetes.io/projected/12589ff0-ab4c-4a16-b5bd-7cd433a85c86-kube-api-access-lvnrp\") pod \"glance-default-internal-api-0\" (UID: \"12589ff0-ab4c-4a16-b5bd-7cd433a85c86\") " pod="openstack/glance-default-internal-api-0" Jan 28 18:56:21 crc kubenswrapper[4721]: I0128 18:56:21.179337 4721 scope.go:117] "RemoveContainer" containerID="a4e7167b8cd1523d74b3cc700d8fe1f1e1c1107506c5114057c9e29a9517a909" Jan 28 18:56:21 crc kubenswrapper[4721]: I0128 18:56:21.186149 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-58dd9ff6bc-2zk48"] Jan 28 18:56:21 crc kubenswrapper[4721]: I0128 18:56:21.206637 4721 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-58dd9ff6bc-2zk48"] Jan 28 18:56:21 crc kubenswrapper[4721]: I0128 18:56:21.227573 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lvnrp\" (UniqueName: \"kubernetes.io/projected/12589ff0-ab4c-4a16-b5bd-7cd433a85c86-kube-api-access-lvnrp\") pod \"glance-default-internal-api-0\" (UID: \"12589ff0-ab4c-4a16-b5bd-7cd433a85c86\") " pod="openstack/glance-default-internal-api-0" Jan 28 18:56:21 crc kubenswrapper[4721]: I0128 18:56:21.228022 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/12589ff0-ab4c-4a16-b5bd-7cd433a85c86-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"12589ff0-ab4c-4a16-b5bd-7cd433a85c86\") " pod="openstack/glance-default-internal-api-0" Jan 28 18:56:21 crc kubenswrapper[4721]: I0128 18:56:21.228048 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/12589ff0-ab4c-4a16-b5bd-7cd433a85c86-config-data\") pod \"glance-default-internal-api-0\" (UID: \"12589ff0-ab4c-4a16-b5bd-7cd433a85c86\") " pod="openstack/glance-default-internal-api-0" Jan 28 18:56:21 crc kubenswrapper[4721]: I0128 18:56:21.228116 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/12589ff0-ab4c-4a16-b5bd-7cd433a85c86-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"12589ff0-ab4c-4a16-b5bd-7cd433a85c86\") " 
pod="openstack/glance-default-internal-api-0" Jan 28 18:56:21 crc kubenswrapper[4721]: I0128 18:56:21.228164 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-37cf9bee-9d53-43de-b70c-dc3f1890f4f2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-37cf9bee-9d53-43de-b70c-dc3f1890f4f2\") pod \"glance-default-internal-api-0\" (UID: \"12589ff0-ab4c-4a16-b5bd-7cd433a85c86\") " pod="openstack/glance-default-internal-api-0" Jan 28 18:56:21 crc kubenswrapper[4721]: I0128 18:56:21.228188 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/12589ff0-ab4c-4a16-b5bd-7cd433a85c86-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"12589ff0-ab4c-4a16-b5bd-7cd433a85c86\") " pod="openstack/glance-default-internal-api-0" Jan 28 18:56:21 crc kubenswrapper[4721]: I0128 18:56:21.228242 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/12589ff0-ab4c-4a16-b5bd-7cd433a85c86-scripts\") pod \"glance-default-internal-api-0\" (UID: \"12589ff0-ab4c-4a16-b5bd-7cd433a85c86\") " pod="openstack/glance-default-internal-api-0" Jan 28 18:56:21 crc kubenswrapper[4721]: I0128 18:56:21.228261 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/12589ff0-ab4c-4a16-b5bd-7cd433a85c86-logs\") pod \"glance-default-internal-api-0\" (UID: \"12589ff0-ab4c-4a16-b5bd-7cd433a85c86\") " pod="openstack/glance-default-internal-api-0" Jan 28 18:56:21 crc kubenswrapper[4721]: I0128 18:56:21.228802 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/12589ff0-ab4c-4a16-b5bd-7cd433a85c86-logs\") pod \"glance-default-internal-api-0\" (UID: \"12589ff0-ab4c-4a16-b5bd-7cd433a85c86\") " pod="openstack/glance-default-internal-api-0" Jan 28 18:56:21 crc kubenswrapper[4721]: I0128 18:56:21.229459 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/12589ff0-ab4c-4a16-b5bd-7cd433a85c86-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"12589ff0-ab4c-4a16-b5bd-7cd433a85c86\") " pod="openstack/glance-default-internal-api-0" Jan 28 18:56:21 crc kubenswrapper[4721]: I0128 18:56:21.242907 4721 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 28 18:56:21 crc kubenswrapper[4721]: I0128 18:56:21.242957 4721 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-37cf9bee-9d53-43de-b70c-dc3f1890f4f2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-37cf9bee-9d53-43de-b70c-dc3f1890f4f2\") pod \"glance-default-internal-api-0\" (UID: \"12589ff0-ab4c-4a16-b5bd-7cd433a85c86\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/e0fb27f96ed2a0ff9a552b58e2db95cb7dc681ae95f2f3784ea1f011e1d9aaa2/globalmount\"" pod="openstack/glance-default-internal-api-0" Jan 28 18:56:21 crc kubenswrapper[4721]: I0128 18:56:21.266556 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/12589ff0-ab4c-4a16-b5bd-7cd433a85c86-config-data\") pod \"glance-default-internal-api-0\" (UID: \"12589ff0-ab4c-4a16-b5bd-7cd433a85c86\") " pod="openstack/glance-default-internal-api-0" Jan 28 18:56:21 crc kubenswrapper[4721]: I0128 18:56:21.269292 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/12589ff0-ab4c-4a16-b5bd-7cd433a85c86-scripts\") pod \"glance-default-internal-api-0\" (UID: \"12589ff0-ab4c-4a16-b5bd-7cd433a85c86\") " pod="openstack/glance-default-internal-api-0" Jan 28 18:56:21 crc kubenswrapper[4721]: I0128 18:56:21.276033 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/12589ff0-ab4c-4a16-b5bd-7cd433a85c86-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"12589ff0-ab4c-4a16-b5bd-7cd433a85c86\") " pod="openstack/glance-default-internal-api-0" Jan 28 18:56:21 crc kubenswrapper[4721]: I0128 18:56:21.281319 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/12589ff0-ab4c-4a16-b5bd-7cd433a85c86-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"12589ff0-ab4c-4a16-b5bd-7cd433a85c86\") " pod="openstack/glance-default-internal-api-0" Jan 28 18:56:21 crc kubenswrapper[4721]: I0128 18:56:21.282048 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lvnrp\" (UniqueName: \"kubernetes.io/projected/12589ff0-ab4c-4a16-b5bd-7cd433a85c86-kube-api-access-lvnrp\") pod \"glance-default-internal-api-0\" (UID: \"12589ff0-ab4c-4a16-b5bd-7cd433a85c86\") " pod="openstack/glance-default-internal-api-0" Jan 28 18:56:21 crc kubenswrapper[4721]: I0128 18:56:21.367412 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-tjbpz"] Jan 28 18:56:21 crc kubenswrapper[4721]: I0128 18:56:21.369548 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-55f844cf75-tjbpz" Jan 28 18:56:21 crc kubenswrapper[4721]: I0128 18:56:21.388437 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-tjbpz"] Jan 28 18:56:21 crc kubenswrapper[4721]: I0128 18:56:21.451339 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 28 18:56:21 crc kubenswrapper[4721]: I0128 18:56:21.556942 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b57b7161-a5ce-4399-86ed-68478cdc6df5-dns-swift-storage-0\") pod \"dnsmasq-dns-55f844cf75-tjbpz\" (UID: \"b57b7161-a5ce-4399-86ed-68478cdc6df5\") " pod="openstack/dnsmasq-dns-55f844cf75-tjbpz" Jan 28 18:56:21 crc kubenswrapper[4721]: I0128 18:56:21.557959 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f466m\" (UniqueName: \"kubernetes.io/projected/b57b7161-a5ce-4399-86ed-68478cdc6df5-kube-api-access-f466m\") pod \"dnsmasq-dns-55f844cf75-tjbpz\" (UID: \"b57b7161-a5ce-4399-86ed-68478cdc6df5\") " pod="openstack/dnsmasq-dns-55f844cf75-tjbpz" Jan 28 18:56:21 crc kubenswrapper[4721]: I0128 18:56:21.558034 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b57b7161-a5ce-4399-86ed-68478cdc6df5-ovsdbserver-nb\") pod \"dnsmasq-dns-55f844cf75-tjbpz\" (UID: \"b57b7161-a5ce-4399-86ed-68478cdc6df5\") " pod="openstack/dnsmasq-dns-55f844cf75-tjbpz" Jan 28 18:56:21 crc kubenswrapper[4721]: I0128 18:56:21.558462 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b57b7161-a5ce-4399-86ed-68478cdc6df5-dns-svc\") pod \"dnsmasq-dns-55f844cf75-tjbpz\" (UID: \"b57b7161-a5ce-4399-86ed-68478cdc6df5\") " pod="openstack/dnsmasq-dns-55f844cf75-tjbpz" Jan 28 18:56:21 crc kubenswrapper[4721]: I0128 18:56:21.558540 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b57b7161-a5ce-4399-86ed-68478cdc6df5-ovsdbserver-sb\") pod \"dnsmasq-dns-55f844cf75-tjbpz\" (UID: \"b57b7161-a5ce-4399-86ed-68478cdc6df5\") " pod="openstack/dnsmasq-dns-55f844cf75-tjbpz" Jan 28 18:56:21 crc kubenswrapper[4721]: I0128 18:56:21.558809 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b57b7161-a5ce-4399-86ed-68478cdc6df5-config\") pod \"dnsmasq-dns-55f844cf75-tjbpz\" (UID: \"b57b7161-a5ce-4399-86ed-68478cdc6df5\") " pod="openstack/dnsmasq-dns-55f844cf75-tjbpz" Jan 28 18:56:21 crc kubenswrapper[4721]: I0128 18:56:21.573061 4721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="24b34696-1be6-4ee8-8161-0b3ba8119191" path="/var/lib/kubelet/pods/24b34696-1be6-4ee8-8161-0b3ba8119191/volumes" Jan 28 18:56:21 crc kubenswrapper[4721]: I0128 18:56:21.573998 4721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f645046a-6c22-410b-86a6-9f9aedff30db" path="/var/lib/kubelet/pods/f645046a-6c22-410b-86a6-9f9aedff30db/volumes" Jan 28 18:56:21 crc kubenswrapper[4721]: I0128 18:56:21.576117 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-5f8b48b786-fcdpx"] Jan 28 18:56:21 crc 
kubenswrapper[4721]: I0128 18:56:21.590042 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-7855694cbf-6fbkc"] Jan 28 18:56:21 crc kubenswrapper[4721]: I0128 18:56:21.591560 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-7855694cbf-6fbkc" Jan 28 18:56:21 crc kubenswrapper[4721]: I0128 18:56:21.592245 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-5f8b48b786-fcdpx" Jan 28 18:56:21 crc kubenswrapper[4721]: I0128 18:56:21.599095 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-37cf9bee-9d53-43de-b70c-dc3f1890f4f2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-37cf9bee-9d53-43de-b70c-dc3f1890f4f2\") pod \"glance-default-internal-api-0\" (UID: \"12589ff0-ab4c-4a16-b5bd-7cd433a85c86\") " pod="openstack/glance-default-internal-api-0" Jan 28 18:56:21 crc kubenswrapper[4721]: I0128 18:56:21.605201 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data" Jan 28 18:56:21 crc kubenswrapper[4721]: I0128 18:56:21.605414 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-worker-config-data" Jan 28 18:56:21 crc kubenswrapper[4721]: I0128 18:56:21.605520 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Jan 28 18:56:21 crc kubenswrapper[4721]: I0128 18:56:21.605752 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-kxtdz" Jan 28 18:56:21 crc kubenswrapper[4721]: I0128 18:56:21.651989 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 28 18:56:21 crc kubenswrapper[4721]: I0128 18:56:21.660508 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f466m\" (UniqueName: \"kubernetes.io/projected/b57b7161-a5ce-4399-86ed-68478cdc6df5-kube-api-access-f466m\") pod \"dnsmasq-dns-55f844cf75-tjbpz\" (UID: \"b57b7161-a5ce-4399-86ed-68478cdc6df5\") " pod="openstack/dnsmasq-dns-55f844cf75-tjbpz" Jan 28 18:56:21 crc kubenswrapper[4721]: I0128 18:56:21.660559 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b57b7161-a5ce-4399-86ed-68478cdc6df5-ovsdbserver-nb\") pod \"dnsmasq-dns-55f844cf75-tjbpz\" (UID: \"b57b7161-a5ce-4399-86ed-68478cdc6df5\") " pod="openstack/dnsmasq-dns-55f844cf75-tjbpz" Jan 28 18:56:21 crc kubenswrapper[4721]: I0128 18:56:21.660628 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b57b7161-a5ce-4399-86ed-68478cdc6df5-dns-svc\") pod \"dnsmasq-dns-55f844cf75-tjbpz\" (UID: \"b57b7161-a5ce-4399-86ed-68478cdc6df5\") " pod="openstack/dnsmasq-dns-55f844cf75-tjbpz" Jan 28 18:56:21 crc kubenswrapper[4721]: I0128 18:56:21.660648 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b57b7161-a5ce-4399-86ed-68478cdc6df5-ovsdbserver-sb\") pod \"dnsmasq-dns-55f844cf75-tjbpz\" (UID: \"b57b7161-a5ce-4399-86ed-68478cdc6df5\") " pod="openstack/dnsmasq-dns-55f844cf75-tjbpz" Jan 28 18:56:21 crc kubenswrapper[4721]: I0128 18:56:21.660708 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/b57b7161-a5ce-4399-86ed-68478cdc6df5-config\") pod \"dnsmasq-dns-55f844cf75-tjbpz\" (UID: \"b57b7161-a5ce-4399-86ed-68478cdc6df5\") " pod="openstack/dnsmasq-dns-55f844cf75-tjbpz" Jan 28 18:56:21 crc kubenswrapper[4721]: I0128 18:56:21.660745 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b57b7161-a5ce-4399-86ed-68478cdc6df5-dns-swift-storage-0\") pod \"dnsmasq-dns-55f844cf75-tjbpz\" (UID: \"b57b7161-a5ce-4399-86ed-68478cdc6df5\") " pod="openstack/dnsmasq-dns-55f844cf75-tjbpz" Jan 28 18:56:21 crc kubenswrapper[4721]: I0128 18:56:21.661659 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b57b7161-a5ce-4399-86ed-68478cdc6df5-dns-swift-storage-0\") pod \"dnsmasq-dns-55f844cf75-tjbpz\" (UID: \"b57b7161-a5ce-4399-86ed-68478cdc6df5\") " pod="openstack/dnsmasq-dns-55f844cf75-tjbpz" Jan 28 18:56:21 crc kubenswrapper[4721]: I0128 18:56:21.661717 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b57b7161-a5ce-4399-86ed-68478cdc6df5-ovsdbserver-sb\") pod \"dnsmasq-dns-55f844cf75-tjbpz\" (UID: \"b57b7161-a5ce-4399-86ed-68478cdc6df5\") " pod="openstack/dnsmasq-dns-55f844cf75-tjbpz" Jan 28 18:56:21 crc kubenswrapper[4721]: I0128 18:56:21.661801 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b57b7161-a5ce-4399-86ed-68478cdc6df5-dns-svc\") pod \"dnsmasq-dns-55f844cf75-tjbpz\" (UID: \"b57b7161-a5ce-4399-86ed-68478cdc6df5\") " pod="openstack/dnsmasq-dns-55f844cf75-tjbpz" Jan 28 18:56:21 crc kubenswrapper[4721]: I0128 18:56:21.662321 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b57b7161-a5ce-4399-86ed-68478cdc6df5-ovsdbserver-nb\") pod \"dnsmasq-dns-55f844cf75-tjbpz\" (UID: \"b57b7161-a5ce-4399-86ed-68478cdc6df5\") " pod="openstack/dnsmasq-dns-55f844cf75-tjbpz" Jan 28 18:56:21 crc kubenswrapper[4721]: I0128 18:56:21.662906 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b57b7161-a5ce-4399-86ed-68478cdc6df5-config\") pod \"dnsmasq-dns-55f844cf75-tjbpz\" (UID: \"b57b7161-a5ce-4399-86ed-68478cdc6df5\") " pod="openstack/dnsmasq-dns-55f844cf75-tjbpz" Jan 28 18:56:21 crc kubenswrapper[4721]: I0128 18:56:21.691804 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-5f8b48b786-fcdpx"] Jan 28 18:56:21 crc kubenswrapper[4721]: I0128 18:56:21.738879 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"fa8caeb1-7fd6-4493-afba-149f6ad5cfcd","Type":"ContainerStarted","Data":"ee83c9c5487709a65c7ba20b834c50dbce1f21f41300852f1c0dd07d7bbca8d3"} Jan 28 18:56:21 crc kubenswrapper[4721]: I0128 18:56:21.764599 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b950ce3b-33ce-40a9-9b76-45470b0917ec-combined-ca-bundle\") pod \"barbican-keystone-listener-5f8b48b786-fcdpx\" (UID: \"b950ce3b-33ce-40a9-9b76-45470b0917ec\") " pod="openstack/barbican-keystone-listener-5f8b48b786-fcdpx" Jan 28 18:56:21 crc kubenswrapper[4721]: I0128 18:56:21.764683 4721 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s2gjg\" (UniqueName: \"kubernetes.io/projected/7ae24f09-1a88-4cd4-8959-76b14602141d-kube-api-access-s2gjg\") pod \"barbican-worker-7855694cbf-6fbkc\" (UID: \"7ae24f09-1a88-4cd4-8959-76b14602141d\") " pod="openstack/barbican-worker-7855694cbf-6fbkc" Jan 28 18:56:21 crc kubenswrapper[4721]: I0128 18:56:21.764730 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7ae24f09-1a88-4cd4-8959-76b14602141d-logs\") pod \"barbican-worker-7855694cbf-6fbkc\" (UID: \"7ae24f09-1a88-4cd4-8959-76b14602141d\") " pod="openstack/barbican-worker-7855694cbf-6fbkc" Jan 28 18:56:21 crc kubenswrapper[4721]: I0128 18:56:21.764753 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b950ce3b-33ce-40a9-9b76-45470b0917ec-config-data-custom\") pod \"barbican-keystone-listener-5f8b48b786-fcdpx\" (UID: \"b950ce3b-33ce-40a9-9b76-45470b0917ec\") " pod="openstack/barbican-keystone-listener-5f8b48b786-fcdpx" Jan 28 18:56:21 crc kubenswrapper[4721]: I0128 18:56:21.764795 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7ae24f09-1a88-4cd4-8959-76b14602141d-config-data-custom\") pod \"barbican-worker-7855694cbf-6fbkc\" (UID: \"7ae24f09-1a88-4cd4-8959-76b14602141d\") " pod="openstack/barbican-worker-7855694cbf-6fbkc" Jan 28 18:56:21 crc kubenswrapper[4721]: I0128 18:56:21.764875 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pvz8r\" (UniqueName: \"kubernetes.io/projected/b950ce3b-33ce-40a9-9b76-45470b0917ec-kube-api-access-pvz8r\") pod \"barbican-keystone-listener-5f8b48b786-fcdpx\" (UID: \"b950ce3b-33ce-40a9-9b76-45470b0917ec\") " pod="openstack/barbican-keystone-listener-5f8b48b786-fcdpx" Jan 28 18:56:21 crc kubenswrapper[4721]: I0128 18:56:21.764895 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7ae24f09-1a88-4cd4-8959-76b14602141d-combined-ca-bundle\") pod \"barbican-worker-7855694cbf-6fbkc\" (UID: \"7ae24f09-1a88-4cd4-8959-76b14602141d\") " pod="openstack/barbican-worker-7855694cbf-6fbkc" Jan 28 18:56:21 crc kubenswrapper[4721]: I0128 18:56:21.764924 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b950ce3b-33ce-40a9-9b76-45470b0917ec-config-data\") pod \"barbican-keystone-listener-5f8b48b786-fcdpx\" (UID: \"b950ce3b-33ce-40a9-9b76-45470b0917ec\") " pod="openstack/barbican-keystone-listener-5f8b48b786-fcdpx" Jan 28 18:56:21 crc kubenswrapper[4721]: I0128 18:56:21.764951 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7ae24f09-1a88-4cd4-8959-76b14602141d-config-data\") pod \"barbican-worker-7855694cbf-6fbkc\" (UID: \"7ae24f09-1a88-4cd4-8959-76b14602141d\") " pod="openstack/barbican-worker-7855694cbf-6fbkc" Jan 28 18:56:21 crc kubenswrapper[4721]: I0128 18:56:21.764982 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/b950ce3b-33ce-40a9-9b76-45470b0917ec-logs\") pod \"barbican-keystone-listener-5f8b48b786-fcdpx\" (UID: \"b950ce3b-33ce-40a9-9b76-45470b0917ec\") " pod="openstack/barbican-keystone-listener-5f8b48b786-fcdpx" Jan 28 18:56:21 crc kubenswrapper[4721]: I0128 18:56:21.768525 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f466m\" (UniqueName: \"kubernetes.io/projected/b57b7161-a5ce-4399-86ed-68478cdc6df5-kube-api-access-f466m\") pod \"dnsmasq-dns-55f844cf75-tjbpz\" (UID: \"b57b7161-a5ce-4399-86ed-68478cdc6df5\") " pod="openstack/dnsmasq-dns-55f844cf75-tjbpz" Jan 28 18:56:21 crc kubenswrapper[4721]: I0128 18:56:21.794230 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-7855694cbf-6fbkc"] Jan 28 18:56:21 crc kubenswrapper[4721]: I0128 18:56:21.867782 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2gjg\" (UniqueName: \"kubernetes.io/projected/7ae24f09-1a88-4cd4-8959-76b14602141d-kube-api-access-s2gjg\") pod \"barbican-worker-7855694cbf-6fbkc\" (UID: \"7ae24f09-1a88-4cd4-8959-76b14602141d\") " pod="openstack/barbican-worker-7855694cbf-6fbkc" Jan 28 18:56:21 crc kubenswrapper[4721]: I0128 18:56:21.873122 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7ae24f09-1a88-4cd4-8959-76b14602141d-logs\") pod \"barbican-worker-7855694cbf-6fbkc\" (UID: \"7ae24f09-1a88-4cd4-8959-76b14602141d\") " pod="openstack/barbican-worker-7855694cbf-6fbkc" Jan 28 18:56:21 crc kubenswrapper[4721]: I0128 18:56:21.873194 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b950ce3b-33ce-40a9-9b76-45470b0917ec-config-data-custom\") pod \"barbican-keystone-listener-5f8b48b786-fcdpx\" (UID: \"b950ce3b-33ce-40a9-9b76-45470b0917ec\") " pod="openstack/barbican-keystone-listener-5f8b48b786-fcdpx" Jan 28 18:56:21 crc kubenswrapper[4721]: I0128 18:56:21.873257 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7ae24f09-1a88-4cd4-8959-76b14602141d-config-data-custom\") pod \"barbican-worker-7855694cbf-6fbkc\" (UID: \"7ae24f09-1a88-4cd4-8959-76b14602141d\") " pod="openstack/barbican-worker-7855694cbf-6fbkc" Jan 28 18:56:21 crc kubenswrapper[4721]: I0128 18:56:21.873517 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pvz8r\" (UniqueName: \"kubernetes.io/projected/b950ce3b-33ce-40a9-9b76-45470b0917ec-kube-api-access-pvz8r\") pod \"barbican-keystone-listener-5f8b48b786-fcdpx\" (UID: \"b950ce3b-33ce-40a9-9b76-45470b0917ec\") " pod="openstack/barbican-keystone-listener-5f8b48b786-fcdpx" Jan 28 18:56:21 crc kubenswrapper[4721]: I0128 18:56:21.873546 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7ae24f09-1a88-4cd4-8959-76b14602141d-combined-ca-bundle\") pod \"barbican-worker-7855694cbf-6fbkc\" (UID: \"7ae24f09-1a88-4cd4-8959-76b14602141d\") " pod="openstack/barbican-worker-7855694cbf-6fbkc" Jan 28 18:56:21 crc kubenswrapper[4721]: I0128 18:56:21.873602 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b950ce3b-33ce-40a9-9b76-45470b0917ec-config-data\") pod \"barbican-keystone-listener-5f8b48b786-fcdpx\" (UID: 
\"b950ce3b-33ce-40a9-9b76-45470b0917ec\") " pod="openstack/barbican-keystone-listener-5f8b48b786-fcdpx" Jan 28 18:56:21 crc kubenswrapper[4721]: I0128 18:56:21.873652 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7ae24f09-1a88-4cd4-8959-76b14602141d-config-data\") pod \"barbican-worker-7855694cbf-6fbkc\" (UID: \"7ae24f09-1a88-4cd4-8959-76b14602141d\") " pod="openstack/barbican-worker-7855694cbf-6fbkc" Jan 28 18:56:21 crc kubenswrapper[4721]: I0128 18:56:21.873704 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b950ce3b-33ce-40a9-9b76-45470b0917ec-logs\") pod \"barbican-keystone-listener-5f8b48b786-fcdpx\" (UID: \"b950ce3b-33ce-40a9-9b76-45470b0917ec\") " pod="openstack/barbican-keystone-listener-5f8b48b786-fcdpx" Jan 28 18:56:21 crc kubenswrapper[4721]: I0128 18:56:21.873814 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b950ce3b-33ce-40a9-9b76-45470b0917ec-combined-ca-bundle\") pod \"barbican-keystone-listener-5f8b48b786-fcdpx\" (UID: \"b950ce3b-33ce-40a9-9b76-45470b0917ec\") " pod="openstack/barbican-keystone-listener-5f8b48b786-fcdpx" Jan 28 18:56:21 crc kubenswrapper[4721]: I0128 18:56:21.874876 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7ae24f09-1a88-4cd4-8959-76b14602141d-logs\") pod \"barbican-worker-7855694cbf-6fbkc\" (UID: \"7ae24f09-1a88-4cd4-8959-76b14602141d\") " pod="openstack/barbican-worker-7855694cbf-6fbkc" Jan 28 18:56:21 crc kubenswrapper[4721]: I0128 18:56:21.878005 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b950ce3b-33ce-40a9-9b76-45470b0917ec-logs\") pod \"barbican-keystone-listener-5f8b48b786-fcdpx\" (UID: \"b950ce3b-33ce-40a9-9b76-45470b0917ec\") " pod="openstack/barbican-keystone-listener-5f8b48b786-fcdpx" Jan 28 18:56:21 crc kubenswrapper[4721]: I0128 18:56:21.889541 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-tjbpz"] Jan 28 18:56:21 crc kubenswrapper[4721]: I0128 18:56:21.890722 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-55f844cf75-tjbpz" Jan 28 18:56:21 crc kubenswrapper[4721]: I0128 18:56:21.901379 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7ae24f09-1a88-4cd4-8959-76b14602141d-config-data-custom\") pod \"barbican-worker-7855694cbf-6fbkc\" (UID: \"7ae24f09-1a88-4cd4-8959-76b14602141d\") " pod="openstack/barbican-worker-7855694cbf-6fbkc" Jan 28 18:56:21 crc kubenswrapper[4721]: I0128 18:56:21.901985 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7ae24f09-1a88-4cd4-8959-76b14602141d-combined-ca-bundle\") pod \"barbican-worker-7855694cbf-6fbkc\" (UID: \"7ae24f09-1a88-4cd4-8959-76b14602141d\") " pod="openstack/barbican-worker-7855694cbf-6fbkc" Jan 28 18:56:21 crc kubenswrapper[4721]: I0128 18:56:21.907425 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7ae24f09-1a88-4cd4-8959-76b14602141d-config-data\") pod \"barbican-worker-7855694cbf-6fbkc\" (UID: \"7ae24f09-1a88-4cd4-8959-76b14602141d\") " pod="openstack/barbican-worker-7855694cbf-6fbkc" Jan 28 18:56:21 crc kubenswrapper[4721]: I0128 18:56:21.912932 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b950ce3b-33ce-40a9-9b76-45470b0917ec-config-data-custom\") pod \"barbican-keystone-listener-5f8b48b786-fcdpx\" (UID: \"b950ce3b-33ce-40a9-9b76-45470b0917ec\") " pod="openstack/barbican-keystone-listener-5f8b48b786-fcdpx" Jan 28 18:56:21 crc kubenswrapper[4721]: I0128 18:56:21.922435 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b950ce3b-33ce-40a9-9b76-45470b0917ec-config-data\") pod \"barbican-keystone-listener-5f8b48b786-fcdpx\" (UID: \"b950ce3b-33ce-40a9-9b76-45470b0917ec\") " pod="openstack/barbican-keystone-listener-5f8b48b786-fcdpx" Jan 28 18:56:21 crc kubenswrapper[4721]: I0128 18:56:21.936070 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b950ce3b-33ce-40a9-9b76-45470b0917ec-combined-ca-bundle\") pod \"barbican-keystone-listener-5f8b48b786-fcdpx\" (UID: \"b950ce3b-33ce-40a9-9b76-45470b0917ec\") " pod="openstack/barbican-keystone-listener-5f8b48b786-fcdpx" Jan 28 18:56:21 crc kubenswrapper[4721]: I0128 18:56:21.936380 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2gjg\" (UniqueName: \"kubernetes.io/projected/7ae24f09-1a88-4cd4-8959-76b14602141d-kube-api-access-s2gjg\") pod \"barbican-worker-7855694cbf-6fbkc\" (UID: \"7ae24f09-1a88-4cd4-8959-76b14602141d\") " pod="openstack/barbican-worker-7855694cbf-6fbkc" Jan 28 18:56:21 crc kubenswrapper[4721]: I0128 18:56:21.936827 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pvz8r\" (UniqueName: \"kubernetes.io/projected/b950ce3b-33ce-40a9-9b76-45470b0917ec-kube-api-access-pvz8r\") pod \"barbican-keystone-listener-5f8b48b786-fcdpx\" (UID: \"b950ce3b-33ce-40a9-9b76-45470b0917ec\") " pod="openstack/barbican-keystone-listener-5f8b48b786-fcdpx" Jan 28 18:56:21 crc kubenswrapper[4721]: I0128 18:56:21.941779 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-7855694cbf-6fbkc" Jan 28 18:56:21 crc kubenswrapper[4721]: I0128 18:56:21.942120 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-qrqc4"] Jan 28 18:56:21 crc kubenswrapper[4721]: I0128 18:56:21.945315 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-85ff748b95-qrqc4" Jan 28 18:56:21 crc kubenswrapper[4721]: I0128 18:56:21.957504 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-qrqc4"] Jan 28 18:56:21 crc kubenswrapper[4721]: I0128 18:56:21.963310 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-84bf7c754-8m5d5"] Jan 28 18:56:21 crc kubenswrapper[4721]: I0128 18:56:21.966091 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-5f8b48b786-fcdpx" Jan 28 18:56:21 crc kubenswrapper[4721]: I0128 18:56:21.983889 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-b6d5f477b-md9n5"] Jan 28 18:56:21 crc kubenswrapper[4721]: I0128 18:56:21.985895 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-84bf7c754-8m5d5" Jan 28 18:56:21 crc kubenswrapper[4721]: I0128 18:56:21.988095 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-api-config-data" Jan 28 18:56:21 crc kubenswrapper[4721]: I0128 18:56:21.993425 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-b6d5f477b-md9n5" Jan 28 18:56:22 crc kubenswrapper[4721]: I0128 18:56:22.004801 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Jan 28 18:56:22 crc kubenswrapper[4721]: I0128 18:56:22.005648 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-qmz2x" Jan 28 18:56:22 crc kubenswrapper[4721]: I0128 18:56:22.006161 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Jan 28 18:56:22 crc kubenswrapper[4721]: I0128 18:56:22.006656 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs" Jan 28 18:56:22 crc kubenswrapper[4721]: E0128 18:56:22.029324 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"ceilometer-notification-agent\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"]" pod="openstack/ceilometer-0" podUID="a423fddb-4a71-416a-8138-63d58b0350fb" Jan 28 18:56:22 crc kubenswrapper[4721]: I0128 18:56:22.081415 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-b6d5f477b-md9n5"] Jan 28 18:56:22 crc kubenswrapper[4721]: I0128 18:56:22.103414 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/50a7b045-31f9-43aa-a484-aa27bdfb5147-dns-swift-storage-0\") pod \"dnsmasq-dns-85ff748b95-qrqc4\" (UID: \"50a7b045-31f9-43aa-a484-aa27bdfb5147\") " pod="openstack/dnsmasq-dns-85ff748b95-qrqc4" Jan 28 18:56:22 crc kubenswrapper[4721]: I0128 18:56:22.103504 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-qmq8q\" (UniqueName: \"kubernetes.io/projected/598a7e6f-da5f-4dc3-be56-0dc9b6b13ad5-kube-api-access-qmq8q\") pod \"neutron-b6d5f477b-md9n5\" (UID: \"598a7e6f-da5f-4dc3-be56-0dc9b6b13ad5\") " pod="openstack/neutron-b6d5f477b-md9n5" Jan 28 18:56:22 crc kubenswrapper[4721]: I0128 18:56:22.103558 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/50a7b045-31f9-43aa-a484-aa27bdfb5147-ovsdbserver-sb\") pod \"dnsmasq-dns-85ff748b95-qrqc4\" (UID: \"50a7b045-31f9-43aa-a484-aa27bdfb5147\") " pod="openstack/dnsmasq-dns-85ff748b95-qrqc4" Jan 28 18:56:22 crc kubenswrapper[4721]: I0128 18:56:22.103587 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/15e79c89-d076-4174-b7d4-87295d74b71d-combined-ca-bundle\") pod \"barbican-api-84bf7c754-8m5d5\" (UID: \"15e79c89-d076-4174-b7d4-87295d74b71d\") " pod="openstack/barbican-api-84bf7c754-8m5d5" Jan 28 18:56:22 crc kubenswrapper[4721]: I0128 18:56:22.103613 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tz7rz\" (UniqueName: \"kubernetes.io/projected/50a7b045-31f9-43aa-a484-aa27bdfb5147-kube-api-access-tz7rz\") pod \"dnsmasq-dns-85ff748b95-qrqc4\" (UID: \"50a7b045-31f9-43aa-a484-aa27bdfb5147\") " pod="openstack/dnsmasq-dns-85ff748b95-qrqc4" Jan 28 18:56:22 crc kubenswrapper[4721]: I0128 18:56:22.103635 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/598a7e6f-da5f-4dc3-be56-0dc9b6b13ad5-combined-ca-bundle\") pod \"neutron-b6d5f477b-md9n5\" (UID: \"598a7e6f-da5f-4dc3-be56-0dc9b6b13ad5\") " pod="openstack/neutron-b6d5f477b-md9n5" Jan 28 18:56:22 crc kubenswrapper[4721]: I0128 18:56:22.103672 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/598a7e6f-da5f-4dc3-be56-0dc9b6b13ad5-httpd-config\") pod \"neutron-b6d5f477b-md9n5\" (UID: \"598a7e6f-da5f-4dc3-be56-0dc9b6b13ad5\") " pod="openstack/neutron-b6d5f477b-md9n5" Jan 28 18:56:22 crc kubenswrapper[4721]: I0128 18:56:22.103745 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/598a7e6f-da5f-4dc3-be56-0dc9b6b13ad5-ovndb-tls-certs\") pod \"neutron-b6d5f477b-md9n5\" (UID: \"598a7e6f-da5f-4dc3-be56-0dc9b6b13ad5\") " pod="openstack/neutron-b6d5f477b-md9n5" Jan 28 18:56:22 crc kubenswrapper[4721]: I0128 18:56:22.103794 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/50a7b045-31f9-43aa-a484-aa27bdfb5147-ovsdbserver-nb\") pod \"dnsmasq-dns-85ff748b95-qrqc4\" (UID: \"50a7b045-31f9-43aa-a484-aa27bdfb5147\") " pod="openstack/dnsmasq-dns-85ff748b95-qrqc4" Jan 28 18:56:22 crc kubenswrapper[4721]: I0128 18:56:22.103854 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/15e79c89-d076-4174-b7d4-87295d74b71d-config-data\") pod \"barbican-api-84bf7c754-8m5d5\" (UID: \"15e79c89-d076-4174-b7d4-87295d74b71d\") " pod="openstack/barbican-api-84bf7c754-8m5d5" Jan 28 18:56:22 crc kubenswrapper[4721]: I0128 18:56:22.103889 
4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/598a7e6f-da5f-4dc3-be56-0dc9b6b13ad5-config\") pod \"neutron-b6d5f477b-md9n5\" (UID: \"598a7e6f-da5f-4dc3-be56-0dc9b6b13ad5\") " pod="openstack/neutron-b6d5f477b-md9n5" Jan 28 18:56:22 crc kubenswrapper[4721]: I0128 18:56:22.103917 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/15e79c89-d076-4174-b7d4-87295d74b71d-config-data-custom\") pod \"barbican-api-84bf7c754-8m5d5\" (UID: \"15e79c89-d076-4174-b7d4-87295d74b71d\") " pod="openstack/barbican-api-84bf7c754-8m5d5" Jan 28 18:56:22 crc kubenswrapper[4721]: I0128 18:56:22.103954 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/15e79c89-d076-4174-b7d4-87295d74b71d-logs\") pod \"barbican-api-84bf7c754-8m5d5\" (UID: \"15e79c89-d076-4174-b7d4-87295d74b71d\") " pod="openstack/barbican-api-84bf7c754-8m5d5" Jan 28 18:56:22 crc kubenswrapper[4721]: I0128 18:56:22.103977 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-js787\" (UniqueName: \"kubernetes.io/projected/15e79c89-d076-4174-b7d4-87295d74b71d-kube-api-access-js787\") pod \"barbican-api-84bf7c754-8m5d5\" (UID: \"15e79c89-d076-4174-b7d4-87295d74b71d\") " pod="openstack/barbican-api-84bf7c754-8m5d5" Jan 28 18:56:22 crc kubenswrapper[4721]: I0128 18:56:22.104019 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/50a7b045-31f9-43aa-a484-aa27bdfb5147-config\") pod \"dnsmasq-dns-85ff748b95-qrqc4\" (UID: \"50a7b045-31f9-43aa-a484-aa27bdfb5147\") " pod="openstack/dnsmasq-dns-85ff748b95-qrqc4" Jan 28 18:56:22 crc kubenswrapper[4721]: I0128 18:56:22.104115 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/50a7b045-31f9-43aa-a484-aa27bdfb5147-dns-svc\") pod \"dnsmasq-dns-85ff748b95-qrqc4\" (UID: \"50a7b045-31f9-43aa-a484-aa27bdfb5147\") " pod="openstack/dnsmasq-dns-85ff748b95-qrqc4" Jan 28 18:56:22 crc kubenswrapper[4721]: I0128 18:56:22.202401 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-84bf7c754-8m5d5"] Jan 28 18:56:22 crc kubenswrapper[4721]: I0128 18:56:22.216116 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/598a7e6f-da5f-4dc3-be56-0dc9b6b13ad5-ovndb-tls-certs\") pod \"neutron-b6d5f477b-md9n5\" (UID: \"598a7e6f-da5f-4dc3-be56-0dc9b6b13ad5\") " pod="openstack/neutron-b6d5f477b-md9n5" Jan 28 18:56:22 crc kubenswrapper[4721]: I0128 18:56:22.216300 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/50a7b045-31f9-43aa-a484-aa27bdfb5147-ovsdbserver-nb\") pod \"dnsmasq-dns-85ff748b95-qrqc4\" (UID: \"50a7b045-31f9-43aa-a484-aa27bdfb5147\") " pod="openstack/dnsmasq-dns-85ff748b95-qrqc4" Jan 28 18:56:22 crc kubenswrapper[4721]: I0128 18:56:22.216386 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/15e79c89-d076-4174-b7d4-87295d74b71d-config-data\") pod \"barbican-api-84bf7c754-8m5d5\" (UID: 
\"15e79c89-d076-4174-b7d4-87295d74b71d\") " pod="openstack/barbican-api-84bf7c754-8m5d5" Jan 28 18:56:22 crc kubenswrapper[4721]: I0128 18:56:22.216434 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/598a7e6f-da5f-4dc3-be56-0dc9b6b13ad5-config\") pod \"neutron-b6d5f477b-md9n5\" (UID: \"598a7e6f-da5f-4dc3-be56-0dc9b6b13ad5\") " pod="openstack/neutron-b6d5f477b-md9n5" Jan 28 18:56:22 crc kubenswrapper[4721]: I0128 18:56:22.216468 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/15e79c89-d076-4174-b7d4-87295d74b71d-config-data-custom\") pod \"barbican-api-84bf7c754-8m5d5\" (UID: \"15e79c89-d076-4174-b7d4-87295d74b71d\") " pod="openstack/barbican-api-84bf7c754-8m5d5" Jan 28 18:56:22 crc kubenswrapper[4721]: I0128 18:56:22.216518 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/15e79c89-d076-4174-b7d4-87295d74b71d-logs\") pod \"barbican-api-84bf7c754-8m5d5\" (UID: \"15e79c89-d076-4174-b7d4-87295d74b71d\") " pod="openstack/barbican-api-84bf7c754-8m5d5" Jan 28 18:56:22 crc kubenswrapper[4721]: I0128 18:56:22.216540 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-js787\" (UniqueName: \"kubernetes.io/projected/15e79c89-d076-4174-b7d4-87295d74b71d-kube-api-access-js787\") pod \"barbican-api-84bf7c754-8m5d5\" (UID: \"15e79c89-d076-4174-b7d4-87295d74b71d\") " pod="openstack/barbican-api-84bf7c754-8m5d5" Jan 28 18:56:22 crc kubenswrapper[4721]: I0128 18:56:22.216589 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/50a7b045-31f9-43aa-a484-aa27bdfb5147-config\") pod \"dnsmasq-dns-85ff748b95-qrqc4\" (UID: \"50a7b045-31f9-43aa-a484-aa27bdfb5147\") " pod="openstack/dnsmasq-dns-85ff748b95-qrqc4" Jan 28 18:56:22 crc kubenswrapper[4721]: I0128 18:56:22.216742 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/50a7b045-31f9-43aa-a484-aa27bdfb5147-dns-svc\") pod \"dnsmasq-dns-85ff748b95-qrqc4\" (UID: \"50a7b045-31f9-43aa-a484-aa27bdfb5147\") " pod="openstack/dnsmasq-dns-85ff748b95-qrqc4" Jan 28 18:56:22 crc kubenswrapper[4721]: I0128 18:56:22.216801 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/50a7b045-31f9-43aa-a484-aa27bdfb5147-dns-swift-storage-0\") pod \"dnsmasq-dns-85ff748b95-qrqc4\" (UID: \"50a7b045-31f9-43aa-a484-aa27bdfb5147\") " pod="openstack/dnsmasq-dns-85ff748b95-qrqc4" Jan 28 18:56:22 crc kubenswrapper[4721]: I0128 18:56:22.216861 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qmq8q\" (UniqueName: \"kubernetes.io/projected/598a7e6f-da5f-4dc3-be56-0dc9b6b13ad5-kube-api-access-qmq8q\") pod \"neutron-b6d5f477b-md9n5\" (UID: \"598a7e6f-da5f-4dc3-be56-0dc9b6b13ad5\") " pod="openstack/neutron-b6d5f477b-md9n5" Jan 28 18:56:22 crc kubenswrapper[4721]: I0128 18:56:22.216919 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/50a7b045-31f9-43aa-a484-aa27bdfb5147-ovsdbserver-sb\") pod \"dnsmasq-dns-85ff748b95-qrqc4\" (UID: \"50a7b045-31f9-43aa-a484-aa27bdfb5147\") " pod="openstack/dnsmasq-dns-85ff748b95-qrqc4" Jan 28 
18:56:22 crc kubenswrapper[4721]: I0128 18:56:22.216944 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/15e79c89-d076-4174-b7d4-87295d74b71d-combined-ca-bundle\") pod \"barbican-api-84bf7c754-8m5d5\" (UID: \"15e79c89-d076-4174-b7d4-87295d74b71d\") " pod="openstack/barbican-api-84bf7c754-8m5d5" Jan 28 18:56:22 crc kubenswrapper[4721]: I0128 18:56:22.216967 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tz7rz\" (UniqueName: \"kubernetes.io/projected/50a7b045-31f9-43aa-a484-aa27bdfb5147-kube-api-access-tz7rz\") pod \"dnsmasq-dns-85ff748b95-qrqc4\" (UID: \"50a7b045-31f9-43aa-a484-aa27bdfb5147\") " pod="openstack/dnsmasq-dns-85ff748b95-qrqc4" Jan 28 18:56:22 crc kubenswrapper[4721]: I0128 18:56:22.216988 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/598a7e6f-da5f-4dc3-be56-0dc9b6b13ad5-combined-ca-bundle\") pod \"neutron-b6d5f477b-md9n5\" (UID: \"598a7e6f-da5f-4dc3-be56-0dc9b6b13ad5\") " pod="openstack/neutron-b6d5f477b-md9n5" Jan 28 18:56:22 crc kubenswrapper[4721]: I0128 18:56:22.217007 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/598a7e6f-da5f-4dc3-be56-0dc9b6b13ad5-httpd-config\") pod \"neutron-b6d5f477b-md9n5\" (UID: \"598a7e6f-da5f-4dc3-be56-0dc9b6b13ad5\") " pod="openstack/neutron-b6d5f477b-md9n5" Jan 28 18:56:22 crc kubenswrapper[4721]: I0128 18:56:22.221744 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/15e79c89-d076-4174-b7d4-87295d74b71d-logs\") pod \"barbican-api-84bf7c754-8m5d5\" (UID: \"15e79c89-d076-4174-b7d4-87295d74b71d\") " pod="openstack/barbican-api-84bf7c754-8m5d5" Jan 28 18:56:22 crc kubenswrapper[4721]: I0128 18:56:22.222732 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/50a7b045-31f9-43aa-a484-aa27bdfb5147-config\") pod \"dnsmasq-dns-85ff748b95-qrqc4\" (UID: \"50a7b045-31f9-43aa-a484-aa27bdfb5147\") " pod="openstack/dnsmasq-dns-85ff748b95-qrqc4" Jan 28 18:56:22 crc kubenswrapper[4721]: I0128 18:56:22.227553 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/598a7e6f-da5f-4dc3-be56-0dc9b6b13ad5-ovndb-tls-certs\") pod \"neutron-b6d5f477b-md9n5\" (UID: \"598a7e6f-da5f-4dc3-be56-0dc9b6b13ad5\") " pod="openstack/neutron-b6d5f477b-md9n5" Jan 28 18:56:22 crc kubenswrapper[4721]: I0128 18:56:22.229508 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/598a7e6f-da5f-4dc3-be56-0dc9b6b13ad5-config\") pod \"neutron-b6d5f477b-md9n5\" (UID: \"598a7e6f-da5f-4dc3-be56-0dc9b6b13ad5\") " pod="openstack/neutron-b6d5f477b-md9n5" Jan 28 18:56:22 crc kubenswrapper[4721]: I0128 18:56:22.237850 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/50a7b045-31f9-43aa-a484-aa27bdfb5147-ovsdbserver-nb\") pod \"dnsmasq-dns-85ff748b95-qrqc4\" (UID: \"50a7b045-31f9-43aa-a484-aa27bdfb5147\") " pod="openstack/dnsmasq-dns-85ff748b95-qrqc4" Jan 28 18:56:22 crc kubenswrapper[4721]: I0128 18:56:22.243418 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/15e79c89-d076-4174-b7d4-87295d74b71d-config-data-custom\") pod \"barbican-api-84bf7c754-8m5d5\" (UID: \"15e79c89-d076-4174-b7d4-87295d74b71d\") " pod="openstack/barbican-api-84bf7c754-8m5d5" Jan 28 18:56:22 crc kubenswrapper[4721]: I0128 18:56:22.252238 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/50a7b045-31f9-43aa-a484-aa27bdfb5147-ovsdbserver-sb\") pod \"dnsmasq-dns-85ff748b95-qrqc4\" (UID: \"50a7b045-31f9-43aa-a484-aa27bdfb5147\") " pod="openstack/dnsmasq-dns-85ff748b95-qrqc4" Jan 28 18:56:22 crc kubenswrapper[4721]: I0128 18:56:22.252508 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/50a7b045-31f9-43aa-a484-aa27bdfb5147-dns-svc\") pod \"dnsmasq-dns-85ff748b95-qrqc4\" (UID: \"50a7b045-31f9-43aa-a484-aa27bdfb5147\") " pod="openstack/dnsmasq-dns-85ff748b95-qrqc4" Jan 28 18:56:22 crc kubenswrapper[4721]: I0128 18:56:22.253021 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/50a7b045-31f9-43aa-a484-aa27bdfb5147-dns-swift-storage-0\") pod \"dnsmasq-dns-85ff748b95-qrqc4\" (UID: \"50a7b045-31f9-43aa-a484-aa27bdfb5147\") " pod="openstack/dnsmasq-dns-85ff748b95-qrqc4" Jan 28 18:56:22 crc kubenswrapper[4721]: I0128 18:56:22.253614 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/15e79c89-d076-4174-b7d4-87295d74b71d-config-data\") pod \"barbican-api-84bf7c754-8m5d5\" (UID: \"15e79c89-d076-4174-b7d4-87295d74b71d\") " pod="openstack/barbican-api-84bf7c754-8m5d5" Jan 28 18:56:22 crc kubenswrapper[4721]: I0128 18:56:22.258870 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/598a7e6f-da5f-4dc3-be56-0dc9b6b13ad5-httpd-config\") pod \"neutron-b6d5f477b-md9n5\" (UID: \"598a7e6f-da5f-4dc3-be56-0dc9b6b13ad5\") " pod="openstack/neutron-b6d5f477b-md9n5" Jan 28 18:56:22 crc kubenswrapper[4721]: I0128 18:56:22.259907 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qmq8q\" (UniqueName: \"kubernetes.io/projected/598a7e6f-da5f-4dc3-be56-0dc9b6b13ad5-kube-api-access-qmq8q\") pod \"neutron-b6d5f477b-md9n5\" (UID: \"598a7e6f-da5f-4dc3-be56-0dc9b6b13ad5\") " pod="openstack/neutron-b6d5f477b-md9n5" Jan 28 18:56:22 crc kubenswrapper[4721]: I0128 18:56:22.262031 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/15e79c89-d076-4174-b7d4-87295d74b71d-combined-ca-bundle\") pod \"barbican-api-84bf7c754-8m5d5\" (UID: \"15e79c89-d076-4174-b7d4-87295d74b71d\") " pod="openstack/barbican-api-84bf7c754-8m5d5" Jan 28 18:56:22 crc kubenswrapper[4721]: I0128 18:56:22.271352 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/598a7e6f-da5f-4dc3-be56-0dc9b6b13ad5-combined-ca-bundle\") pod \"neutron-b6d5f477b-md9n5\" (UID: \"598a7e6f-da5f-4dc3-be56-0dc9b6b13ad5\") " pod="openstack/neutron-b6d5f477b-md9n5" Jan 28 18:56:22 crc kubenswrapper[4721]: I0128 18:56:22.288497 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tz7rz\" (UniqueName: \"kubernetes.io/projected/50a7b045-31f9-43aa-a484-aa27bdfb5147-kube-api-access-tz7rz\") pod \"dnsmasq-dns-85ff748b95-qrqc4\" (UID: 
\"50a7b045-31f9-43aa-a484-aa27bdfb5147\") " pod="openstack/dnsmasq-dns-85ff748b95-qrqc4" Jan 28 18:56:22 crc kubenswrapper[4721]: I0128 18:56:22.296640 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-js787\" (UniqueName: \"kubernetes.io/projected/15e79c89-d076-4174-b7d4-87295d74b71d-kube-api-access-js787\") pod \"barbican-api-84bf7c754-8m5d5\" (UID: \"15e79c89-d076-4174-b7d4-87295d74b71d\") " pod="openstack/barbican-api-84bf7c754-8m5d5" Jan 28 18:56:22 crc kubenswrapper[4721]: I0128 18:56:22.391416 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-85ff748b95-qrqc4" Jan 28 18:56:22 crc kubenswrapper[4721]: I0128 18:56:22.501240 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-84bf7c754-8m5d5" Jan 28 18:56:22 crc kubenswrapper[4721]: I0128 18:56:22.528266 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-b6d5f477b-md9n5" Jan 28 18:56:22 crc kubenswrapper[4721]: I0128 18:56:22.813380 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a423fddb-4a71-416a-8138-63d58b0350fb","Type":"ContainerStarted","Data":"b19e8f5d74d1cea334dacccbbd978f5cc423d5ee131604eba66d474545117965"} Jan 28 18:56:22 crc kubenswrapper[4721]: I0128 18:56:22.813785 4721 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="a423fddb-4a71-416a-8138-63d58b0350fb" containerName="sg-core" containerID="cri-o://e04e9069ff639db76bbbe99569951621e371d8bea9eb9e435b2f18417583544b" gracePeriod=30 Jan 28 18:56:22 crc kubenswrapper[4721]: I0128 18:56:22.814066 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 28 18:56:22 crc kubenswrapper[4721]: I0128 18:56:22.814392 4721 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="a423fddb-4a71-416a-8138-63d58b0350fb" containerName="proxy-httpd" containerID="cri-o://b19e8f5d74d1cea334dacccbbd978f5cc423d5ee131604eba66d474545117965" gracePeriod=30 Jan 28 18:56:22 crc kubenswrapper[4721]: I0128 18:56:22.838558 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 28 18:56:22 crc kubenswrapper[4721]: I0128 18:56:22.863760 4721 generic.go:334] "Generic (PLEG): container finished" podID="6d4d13db-d2ce-4194-841a-c50b85a2887c" containerID="e1d77b470ef972c00ece8bd31dd0f00d8bd0fecc4f5529a21075145a4929820f" exitCode=0 Jan 28 18:56:22 crc kubenswrapper[4721]: I0128 18:56:22.863812 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-db-sync-qbnjm" event={"ID":"6d4d13db-d2ce-4194-841a-c50b85a2887c","Type":"ContainerDied","Data":"e1d77b470ef972c00ece8bd31dd0f00d8bd0fecc4f5529a21075145a4929820f"} Jan 28 18:56:22 crc kubenswrapper[4721]: W0128 18:56:22.952444 4721 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb950ce3b_33ce_40a9_9b76_45470b0917ec.slice/crio-f87603522e031219a967e92c97dd9185762f851e1d5f90028bd86c00b4b05ce8 WatchSource:0}: Error finding container f87603522e031219a967e92c97dd9185762f851e1d5f90028bd86c00b4b05ce8: Status 404 returned error can't find the container with id f87603522e031219a967e92c97dd9185762f851e1d5f90028bd86c00b4b05ce8 Jan 28 18:56:22 crc kubenswrapper[4721]: I0128 18:56:22.963284 4721 kubelet.go:2428] "SyncLoop UPDATE" 
source="api" pods=["openstack/barbican-keystone-listener-5f8b48b786-fcdpx"] Jan 28 18:56:23 crc kubenswrapper[4721]: I0128 18:56:23.302517 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-7855694cbf-6fbkc"] Jan 28 18:56:23 crc kubenswrapper[4721]: I0128 18:56:23.434157 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-tjbpz"] Jan 28 18:56:23 crc kubenswrapper[4721]: I0128 18:56:23.789342 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-84bf7c754-8m5d5"] Jan 28 18:56:23 crc kubenswrapper[4721]: I0128 18:56:23.802126 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-qrqc4"] Jan 28 18:56:23 crc kubenswrapper[4721]: I0128 18:56:23.890094 4721 generic.go:334] "Generic (PLEG): container finished" podID="a423fddb-4a71-416a-8138-63d58b0350fb" containerID="e04e9069ff639db76bbbe99569951621e371d8bea9eb9e435b2f18417583544b" exitCode=2 Jan 28 18:56:23 crc kubenswrapper[4721]: I0128 18:56:23.890525 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a423fddb-4a71-416a-8138-63d58b0350fb","Type":"ContainerDied","Data":"e04e9069ff639db76bbbe99569951621e371d8bea9eb9e435b2f18417583544b"} Jan 28 18:56:23 crc kubenswrapper[4721]: I0128 18:56:23.941643 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"fa8caeb1-7fd6-4493-afba-149f6ad5cfcd","Type":"ContainerStarted","Data":"f132365db32cb07f0b65459435bcf0a76c2b12f00abda76f77d0e25e4c241c69"} Jan 28 18:56:23 crc kubenswrapper[4721]: I0128 18:56:23.944045 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85ff748b95-qrqc4" event={"ID":"50a7b045-31f9-43aa-a484-aa27bdfb5147","Type":"ContainerStarted","Data":"55e00e338e91278fb30b64148a3bc6cb12adb2e94701e67ce56758b48b57aea2"} Jan 28 18:56:23 crc kubenswrapper[4721]: I0128 18:56:23.962498 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f844cf75-tjbpz" event={"ID":"b57b7161-a5ce-4399-86ed-68478cdc6df5","Type":"ContainerStarted","Data":"60fa8b12e439307ee0af8ccd1b1ba9f2ea74ce9aed6f6f5193432fe4f1510d82"} Jan 28 18:56:23 crc kubenswrapper[4721]: I0128 18:56:23.965872 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-7855694cbf-6fbkc" event={"ID":"7ae24f09-1a88-4cd4-8959-76b14602141d","Type":"ContainerStarted","Data":"9e5d4856afe94ef1fb49d12f868dd3a5af66fdce0a6a3fde6860e5d0dcde56b1"} Jan 28 18:56:23 crc kubenswrapper[4721]: I0128 18:56:23.978865 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-5f8b48b786-fcdpx" event={"ID":"b950ce3b-33ce-40a9-9b76-45470b0917ec","Type":"ContainerStarted","Data":"f87603522e031219a967e92c97dd9185762f851e1d5f90028bd86c00b4b05ce8"} Jan 28 18:56:23 crc kubenswrapper[4721]: I0128 18:56:23.991806 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-84bf7c754-8m5d5" event={"ID":"15e79c89-d076-4174-b7d4-87295d74b71d","Type":"ContainerStarted","Data":"52483d38a0e137efe241734b4208c788c740429b6d82f8739f9b34d33dd9ec84"} Jan 28 18:56:24 crc kubenswrapper[4721]: I0128 18:56:23.999304 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"12589ff0-ab4c-4a16-b5bd-7cd433a85c86","Type":"ContainerStarted","Data":"99dbb7626917d81fe3ed180404d6a6b936b170f76b4096a2f20bf341c4279b78"} Jan 28 18:56:24 crc kubenswrapper[4721]: 
I0128 18:56:24.033606 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-b6d5f477b-md9n5"] Jan 28 18:56:24 crc kubenswrapper[4721]: W0128 18:56:24.108368 4721 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod598a7e6f_da5f_4dc3_be56_0dc9b6b13ad5.slice/crio-95110c4401de6c9acf96c83fd6166f89fdb43b1149a732f22b633265c72881b8 WatchSource:0}: Error finding container 95110c4401de6c9acf96c83fd6166f89fdb43b1149a732f22b633265c72881b8: Status 404 returned error can't find the container with id 95110c4401de6c9acf96c83fd6166f89fdb43b1149a732f22b633265c72881b8 Jan 28 18:56:24 crc kubenswrapper[4721]: I0128 18:56:24.370271 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-787c88cc7-8262p"] Jan 28 18:56:24 crc kubenswrapper[4721]: I0128 18:56:24.372836 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-787c88cc7-8262p" Jan 28 18:56:24 crc kubenswrapper[4721]: I0128 18:56:24.377991 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-internal-svc" Jan 28 18:56:24 crc kubenswrapper[4721]: I0128 18:56:24.378279 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-public-svc" Jan 28 18:56:24 crc kubenswrapper[4721]: I0128 18:56:24.428399 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-787c88cc7-8262p"] Jan 28 18:56:24 crc kubenswrapper[4721]: I0128 18:56:24.482304 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/778b4bd0-5ac3-4a89-b5c8-07f3f52e5804-public-tls-certs\") pod \"neutron-787c88cc7-8262p\" (UID: \"778b4bd0-5ac3-4a89-b5c8-07f3f52e5804\") " pod="openstack/neutron-787c88cc7-8262p" Jan 28 18:56:24 crc kubenswrapper[4721]: I0128 18:56:24.482805 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/778b4bd0-5ac3-4a89-b5c8-07f3f52e5804-internal-tls-certs\") pod \"neutron-787c88cc7-8262p\" (UID: \"778b4bd0-5ac3-4a89-b5c8-07f3f52e5804\") " pod="openstack/neutron-787c88cc7-8262p" Jan 28 18:56:24 crc kubenswrapper[4721]: I0128 18:56:24.482940 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/778b4bd0-5ac3-4a89-b5c8-07f3f52e5804-combined-ca-bundle\") pod \"neutron-787c88cc7-8262p\" (UID: \"778b4bd0-5ac3-4a89-b5c8-07f3f52e5804\") " pod="openstack/neutron-787c88cc7-8262p" Jan 28 18:56:24 crc kubenswrapper[4721]: I0128 18:56:24.484770 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/778b4bd0-5ac3-4a89-b5c8-07f3f52e5804-httpd-config\") pod \"neutron-787c88cc7-8262p\" (UID: \"778b4bd0-5ac3-4a89-b5c8-07f3f52e5804\") " pod="openstack/neutron-787c88cc7-8262p" Jan 28 18:56:24 crc kubenswrapper[4721]: I0128 18:56:24.484827 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/778b4bd0-5ac3-4a89-b5c8-07f3f52e5804-ovndb-tls-certs\") pod \"neutron-787c88cc7-8262p\" (UID: \"778b4bd0-5ac3-4a89-b5c8-07f3f52e5804\") " pod="openstack/neutron-787c88cc7-8262p" Jan 28 18:56:24 crc kubenswrapper[4721]: I0128 18:56:24.484966 4721 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/778b4bd0-5ac3-4a89-b5c8-07f3f52e5804-config\") pod \"neutron-787c88cc7-8262p\" (UID: \"778b4bd0-5ac3-4a89-b5c8-07f3f52e5804\") " pod="openstack/neutron-787c88cc7-8262p" Jan 28 18:56:24 crc kubenswrapper[4721]: I0128 18:56:24.485325 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dklj9\" (UniqueName: \"kubernetes.io/projected/778b4bd0-5ac3-4a89-b5c8-07f3f52e5804-kube-api-access-dklj9\") pod \"neutron-787c88cc7-8262p\" (UID: \"778b4bd0-5ac3-4a89-b5c8-07f3f52e5804\") " pod="openstack/neutron-787c88cc7-8262p" Jan 28 18:56:24 crc kubenswrapper[4721]: I0128 18:56:24.601072 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dklj9\" (UniqueName: \"kubernetes.io/projected/778b4bd0-5ac3-4a89-b5c8-07f3f52e5804-kube-api-access-dklj9\") pod \"neutron-787c88cc7-8262p\" (UID: \"778b4bd0-5ac3-4a89-b5c8-07f3f52e5804\") " pod="openstack/neutron-787c88cc7-8262p" Jan 28 18:56:24 crc kubenswrapper[4721]: I0128 18:56:24.601274 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/778b4bd0-5ac3-4a89-b5c8-07f3f52e5804-public-tls-certs\") pod \"neutron-787c88cc7-8262p\" (UID: \"778b4bd0-5ac3-4a89-b5c8-07f3f52e5804\") " pod="openstack/neutron-787c88cc7-8262p" Jan 28 18:56:24 crc kubenswrapper[4721]: I0128 18:56:24.601357 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/778b4bd0-5ac3-4a89-b5c8-07f3f52e5804-internal-tls-certs\") pod \"neutron-787c88cc7-8262p\" (UID: \"778b4bd0-5ac3-4a89-b5c8-07f3f52e5804\") " pod="openstack/neutron-787c88cc7-8262p" Jan 28 18:56:24 crc kubenswrapper[4721]: I0128 18:56:24.601483 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/778b4bd0-5ac3-4a89-b5c8-07f3f52e5804-combined-ca-bundle\") pod \"neutron-787c88cc7-8262p\" (UID: \"778b4bd0-5ac3-4a89-b5c8-07f3f52e5804\") " pod="openstack/neutron-787c88cc7-8262p" Jan 28 18:56:24 crc kubenswrapper[4721]: I0128 18:56:24.601514 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/778b4bd0-5ac3-4a89-b5c8-07f3f52e5804-httpd-config\") pod \"neutron-787c88cc7-8262p\" (UID: \"778b4bd0-5ac3-4a89-b5c8-07f3f52e5804\") " pod="openstack/neutron-787c88cc7-8262p" Jan 28 18:56:24 crc kubenswrapper[4721]: I0128 18:56:24.601540 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/778b4bd0-5ac3-4a89-b5c8-07f3f52e5804-ovndb-tls-certs\") pod \"neutron-787c88cc7-8262p\" (UID: \"778b4bd0-5ac3-4a89-b5c8-07f3f52e5804\") " pod="openstack/neutron-787c88cc7-8262p" Jan 28 18:56:24 crc kubenswrapper[4721]: I0128 18:56:24.601602 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/778b4bd0-5ac3-4a89-b5c8-07f3f52e5804-config\") pod \"neutron-787c88cc7-8262p\" (UID: \"778b4bd0-5ac3-4a89-b5c8-07f3f52e5804\") " pod="openstack/neutron-787c88cc7-8262p" Jan 28 18:56:24 crc kubenswrapper[4721]: I0128 18:56:24.613365 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/778b4bd0-5ac3-4a89-b5c8-07f3f52e5804-ovndb-tls-certs\") pod \"neutron-787c88cc7-8262p\" (UID: \"778b4bd0-5ac3-4a89-b5c8-07f3f52e5804\") " pod="openstack/neutron-787c88cc7-8262p" Jan 28 18:56:24 crc kubenswrapper[4721]: I0128 18:56:24.613753 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/778b4bd0-5ac3-4a89-b5c8-07f3f52e5804-config\") pod \"neutron-787c88cc7-8262p\" (UID: \"778b4bd0-5ac3-4a89-b5c8-07f3f52e5804\") " pod="openstack/neutron-787c88cc7-8262p" Jan 28 18:56:24 crc kubenswrapper[4721]: I0128 18:56:24.617535 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/778b4bd0-5ac3-4a89-b5c8-07f3f52e5804-httpd-config\") pod \"neutron-787c88cc7-8262p\" (UID: \"778b4bd0-5ac3-4a89-b5c8-07f3f52e5804\") " pod="openstack/neutron-787c88cc7-8262p" Jan 28 18:56:24 crc kubenswrapper[4721]: I0128 18:56:24.629056 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/778b4bd0-5ac3-4a89-b5c8-07f3f52e5804-public-tls-certs\") pod \"neutron-787c88cc7-8262p\" (UID: \"778b4bd0-5ac3-4a89-b5c8-07f3f52e5804\") " pod="openstack/neutron-787c88cc7-8262p" Jan 28 18:56:24 crc kubenswrapper[4721]: I0128 18:56:24.636302 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/778b4bd0-5ac3-4a89-b5c8-07f3f52e5804-combined-ca-bundle\") pod \"neutron-787c88cc7-8262p\" (UID: \"778b4bd0-5ac3-4a89-b5c8-07f3f52e5804\") " pod="openstack/neutron-787c88cc7-8262p" Jan 28 18:56:24 crc kubenswrapper[4721]: I0128 18:56:24.650208 4721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-db-sync-qbnjm" Jan 28 18:56:24 crc kubenswrapper[4721]: I0128 18:56:24.667315 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/778b4bd0-5ac3-4a89-b5c8-07f3f52e5804-internal-tls-certs\") pod \"neutron-787c88cc7-8262p\" (UID: \"778b4bd0-5ac3-4a89-b5c8-07f3f52e5804\") " pod="openstack/neutron-787c88cc7-8262p" Jan 28 18:56:24 crc kubenswrapper[4721]: I0128 18:56:24.674958 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dklj9\" (UniqueName: \"kubernetes.io/projected/778b4bd0-5ac3-4a89-b5c8-07f3f52e5804-kube-api-access-dklj9\") pod \"neutron-787c88cc7-8262p\" (UID: \"778b4bd0-5ac3-4a89-b5c8-07f3f52e5804\") " pod="openstack/neutron-787c88cc7-8262p" Jan 28 18:56:24 crc kubenswrapper[4721]: I0128 18:56:24.727199 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-787c88cc7-8262p" Jan 28 18:56:24 crc kubenswrapper[4721]: I0128 18:56:24.814076 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6d4d13db-d2ce-4194-841a-c50b85a2887c-combined-ca-bundle\") pod \"6d4d13db-d2ce-4194-841a-c50b85a2887c\" (UID: \"6d4d13db-d2ce-4194-841a-c50b85a2887c\") " Jan 28 18:56:24 crc kubenswrapper[4721]: I0128 18:56:24.814692 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/projected/6d4d13db-d2ce-4194-841a-c50b85a2887c-certs\") pod \"6d4d13db-d2ce-4194-841a-c50b85a2887c\" (UID: \"6d4d13db-d2ce-4194-841a-c50b85a2887c\") " Jan 28 18:56:24 crc kubenswrapper[4721]: I0128 18:56:24.814734 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6d4d13db-d2ce-4194-841a-c50b85a2887c-config-data\") pod \"6d4d13db-d2ce-4194-841a-c50b85a2887c\" (UID: \"6d4d13db-d2ce-4194-841a-c50b85a2887c\") " Jan 28 18:56:24 crc kubenswrapper[4721]: I0128 18:56:24.814783 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6d4d13db-d2ce-4194-841a-c50b85a2887c-scripts\") pod \"6d4d13db-d2ce-4194-841a-c50b85a2887c\" (UID: \"6d4d13db-d2ce-4194-841a-c50b85a2887c\") " Jan 28 18:56:24 crc kubenswrapper[4721]: I0128 18:56:24.814879 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vch8p\" (UniqueName: \"kubernetes.io/projected/6d4d13db-d2ce-4194-841a-c50b85a2887c-kube-api-access-vch8p\") pod \"6d4d13db-d2ce-4194-841a-c50b85a2887c\" (UID: \"6d4d13db-d2ce-4194-841a-c50b85a2887c\") " Jan 28 18:56:24 crc kubenswrapper[4721]: I0128 18:56:24.829809 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6d4d13db-d2ce-4194-841a-c50b85a2887c-kube-api-access-vch8p" (OuterVolumeSpecName: "kube-api-access-vch8p") pod "6d4d13db-d2ce-4194-841a-c50b85a2887c" (UID: "6d4d13db-d2ce-4194-841a-c50b85a2887c"). InnerVolumeSpecName "kube-api-access-vch8p". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:56:24 crc kubenswrapper[4721]: I0128 18:56:24.830635 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6d4d13db-d2ce-4194-841a-c50b85a2887c-scripts" (OuterVolumeSpecName: "scripts") pod "6d4d13db-d2ce-4194-841a-c50b85a2887c" (UID: "6d4d13db-d2ce-4194-841a-c50b85a2887c"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:56:24 crc kubenswrapper[4721]: I0128 18:56:24.853320 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6d4d13db-d2ce-4194-841a-c50b85a2887c-certs" (OuterVolumeSpecName: "certs") pod "6d4d13db-d2ce-4194-841a-c50b85a2887c" (UID: "6d4d13db-d2ce-4194-841a-c50b85a2887c"). InnerVolumeSpecName "certs". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:56:24 crc kubenswrapper[4721]: I0128 18:56:24.920134 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vch8p\" (UniqueName: \"kubernetes.io/projected/6d4d13db-d2ce-4194-841a-c50b85a2887c-kube-api-access-vch8p\") on node \"crc\" DevicePath \"\"" Jan 28 18:56:24 crc kubenswrapper[4721]: I0128 18:56:24.920173 4721 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/projected/6d4d13db-d2ce-4194-841a-c50b85a2887c-certs\") on node \"crc\" DevicePath \"\"" Jan 28 18:56:24 crc kubenswrapper[4721]: I0128 18:56:24.920201 4721 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6d4d13db-d2ce-4194-841a-c50b85a2887c-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 18:56:25 crc kubenswrapper[4721]: I0128 18:56:25.020668 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6d4d13db-d2ce-4194-841a-c50b85a2887c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6d4d13db-d2ce-4194-841a-c50b85a2887c" (UID: "6d4d13db-d2ce-4194-841a-c50b85a2887c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:56:25 crc kubenswrapper[4721]: I0128 18:56:25.035095 4721 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6d4d13db-d2ce-4194-841a-c50b85a2887c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 18:56:25 crc kubenswrapper[4721]: I0128 18:56:25.123427 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6d4d13db-d2ce-4194-841a-c50b85a2887c-config-data" (OuterVolumeSpecName: "config-data") pod "6d4d13db-d2ce-4194-841a-c50b85a2887c" (UID: "6d4d13db-d2ce-4194-841a-c50b85a2887c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:56:25 crc kubenswrapper[4721]: I0128 18:56:25.137110 4721 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6d4d13db-d2ce-4194-841a-c50b85a2887c-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 18:56:25 crc kubenswrapper[4721]: I0128 18:56:25.141930 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-db-sync-qbnjm" event={"ID":"6d4d13db-d2ce-4194-841a-c50b85a2887c","Type":"ContainerDied","Data":"f6a20e30099548112c706cf98bb8abea7f1731b3bd5208ec2ec7cc6691dc20ae"} Jan 28 18:56:25 crc kubenswrapper[4721]: I0128 18:56:25.141987 4721 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f6a20e30099548112c706cf98bb8abea7f1731b3bd5208ec2ec7cc6691dc20ae" Jan 28 18:56:25 crc kubenswrapper[4721]: I0128 18:56:25.142081 4721 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cloudkitty-db-sync-qbnjm" Jan 28 18:56:25 crc kubenswrapper[4721]: I0128 18:56:25.185839 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"fa8caeb1-7fd6-4493-afba-149f6ad5cfcd","Type":"ContainerStarted","Data":"b0e18ae764a18f24a821f96bb6325b0cfafe8c25620954220b490048d7b70276"} Jan 28 18:56:25 crc kubenswrapper[4721]: I0128 18:56:25.214436 4721 generic.go:334] "Generic (PLEG): container finished" podID="50a7b045-31f9-43aa-a484-aa27bdfb5147" containerID="07b1736b5c17aba41a6c66541ce35df59c9a656cd4bf85f23a4a0a925ba8897e" exitCode=0 Jan 28 18:56:25 crc kubenswrapper[4721]: I0128 18:56:25.214543 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85ff748b95-qrqc4" event={"ID":"50a7b045-31f9-43aa-a484-aa27bdfb5147","Type":"ContainerDied","Data":"07b1736b5c17aba41a6c66541ce35df59c9a656cd4bf85f23a4a0a925ba8897e"} Jan 28 18:56:25 crc kubenswrapper[4721]: I0128 18:56:25.247114 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-b6d5f477b-md9n5" event={"ID":"598a7e6f-da5f-4dc3-be56-0dc9b6b13ad5","Type":"ContainerStarted","Data":"85136f351f7316d90a281940791aedf2fcda0c293454509ecff435d6368579b7"} Jan 28 18:56:25 crc kubenswrapper[4721]: I0128 18:56:25.247202 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-b6d5f477b-md9n5" event={"ID":"598a7e6f-da5f-4dc3-be56-0dc9b6b13ad5","Type":"ContainerStarted","Data":"95110c4401de6c9acf96c83fd6166f89fdb43b1149a732f22b633265c72881b8"} Jan 28 18:56:25 crc kubenswrapper[4721]: I0128 18:56:25.251909 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=10.251882541 podStartE2EDuration="10.251882541s" podCreationTimestamp="2026-01-28 18:56:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:56:25.248741742 +0000 UTC m=+1350.974047302" watchObservedRunningTime="2026-01-28 18:56:25.251882541 +0000 UTC m=+1350.977188111" Jan 28 18:56:25 crc kubenswrapper[4721]: I0128 18:56:25.302372 4721 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 28 18:56:25 crc kubenswrapper[4721]: I0128 18:56:25.314037 4721 generic.go:334] "Generic (PLEG): container finished" podID="b57b7161-a5ce-4399-86ed-68478cdc6df5" containerID="9890d2a364b22102f2e7c47301b597bb9f49855bc9331caf397cad4f126b59d7" exitCode=0 Jan 28 18:56:25 crc kubenswrapper[4721]: I0128 18:56:25.314152 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f844cf75-tjbpz" event={"ID":"b57b7161-a5ce-4399-86ed-68478cdc6df5","Type":"ContainerDied","Data":"9890d2a364b22102f2e7c47301b597bb9f49855bc9331caf397cad4f126b59d7"} Jan 28 18:56:25 crc kubenswrapper[4721]: I0128 18:56:25.394602 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-84bf7c754-8m5d5" event={"ID":"15e79c89-d076-4174-b7d4-87295d74b71d","Type":"ContainerStarted","Data":"bf050b864c7b9e690de7d51e872e6944e833bb5d3288d6960527d23f52c00aff"} Jan 28 18:56:25 crc kubenswrapper[4721]: I0128 18:56:25.450600 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"12589ff0-ab4c-4a16-b5bd-7cd433a85c86","Type":"ContainerStarted","Data":"d0a3b0f5bafa310ef1c26b32ed945bc0cf2e5768f59b7604edbfc419aed0d741"} Jan 28 18:56:25 crc kubenswrapper[4721]: I0128 18:56:25.454396 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a423fddb-4a71-416a-8138-63d58b0350fb-scripts\") pod \"a423fddb-4a71-416a-8138-63d58b0350fb\" (UID: \"a423fddb-4a71-416a-8138-63d58b0350fb\") " Jan 28 18:56:25 crc kubenswrapper[4721]: I0128 18:56:25.454519 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a423fddb-4a71-416a-8138-63d58b0350fb-combined-ca-bundle\") pod \"a423fddb-4a71-416a-8138-63d58b0350fb\" (UID: \"a423fddb-4a71-416a-8138-63d58b0350fb\") " Jan 28 18:56:25 crc kubenswrapper[4721]: I0128 18:56:25.454560 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a423fddb-4a71-416a-8138-63d58b0350fb-log-httpd\") pod \"a423fddb-4a71-416a-8138-63d58b0350fb\" (UID: \"a423fddb-4a71-416a-8138-63d58b0350fb\") " Jan 28 18:56:25 crc kubenswrapper[4721]: I0128 18:56:25.454617 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7xmnw\" (UniqueName: \"kubernetes.io/projected/a423fddb-4a71-416a-8138-63d58b0350fb-kube-api-access-7xmnw\") pod \"a423fddb-4a71-416a-8138-63d58b0350fb\" (UID: \"a423fddb-4a71-416a-8138-63d58b0350fb\") " Jan 28 18:56:25 crc kubenswrapper[4721]: I0128 18:56:25.454642 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a423fddb-4a71-416a-8138-63d58b0350fb-sg-core-conf-yaml\") pod \"a423fddb-4a71-416a-8138-63d58b0350fb\" (UID: \"a423fddb-4a71-416a-8138-63d58b0350fb\") " Jan 28 18:56:25 crc kubenswrapper[4721]: I0128 18:56:25.454668 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a423fddb-4a71-416a-8138-63d58b0350fb-run-httpd\") pod \"a423fddb-4a71-416a-8138-63d58b0350fb\" (UID: \"a423fddb-4a71-416a-8138-63d58b0350fb\") " Jan 28 18:56:25 crc kubenswrapper[4721]: I0128 18:56:25.454841 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/a423fddb-4a71-416a-8138-63d58b0350fb-config-data\") pod \"a423fddb-4a71-416a-8138-63d58b0350fb\" (UID: \"a423fddb-4a71-416a-8138-63d58b0350fb\") " Jan 28 18:56:25 crc kubenswrapper[4721]: I0128 18:56:25.456037 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a423fddb-4a71-416a-8138-63d58b0350fb-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "a423fddb-4a71-416a-8138-63d58b0350fb" (UID: "a423fddb-4a71-416a-8138-63d58b0350fb"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 18:56:25 crc kubenswrapper[4721]: I0128 18:56:25.456324 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a423fddb-4a71-416a-8138-63d58b0350fb-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "a423fddb-4a71-416a-8138-63d58b0350fb" (UID: "a423fddb-4a71-416a-8138-63d58b0350fb"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 18:56:25 crc kubenswrapper[4721]: I0128 18:56:25.456681 4721 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a423fddb-4a71-416a-8138-63d58b0350fb-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 28 18:56:25 crc kubenswrapper[4721]: I0128 18:56:25.456703 4721 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a423fddb-4a71-416a-8138-63d58b0350fb-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 28 18:56:25 crc kubenswrapper[4721]: I0128 18:56:25.488319 4721 generic.go:334] "Generic (PLEG): container finished" podID="a423fddb-4a71-416a-8138-63d58b0350fb" containerID="b19e8f5d74d1cea334dacccbbd978f5cc423d5ee131604eba66d474545117965" exitCode=0 Jan 28 18:56:25 crc kubenswrapper[4721]: I0128 18:56:25.488395 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a423fddb-4a71-416a-8138-63d58b0350fb","Type":"ContainerDied","Data":"b19e8f5d74d1cea334dacccbbd978f5cc423d5ee131604eba66d474545117965"} Jan 28 18:56:25 crc kubenswrapper[4721]: I0128 18:56:25.488562 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a423fddb-4a71-416a-8138-63d58b0350fb","Type":"ContainerDied","Data":"d401325b9e0306d71f3e564195f62ee8d4ae93c32d74ef8516ca8ebb722e700f"} Jan 28 18:56:25 crc kubenswrapper[4721]: I0128 18:56:25.488589 4721 scope.go:117] "RemoveContainer" containerID="b19e8f5d74d1cea334dacccbbd978f5cc423d5ee131604eba66d474545117965" Jan 28 18:56:25 crc kubenswrapper[4721]: I0128 18:56:25.488676 4721 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 28 18:56:25 crc kubenswrapper[4721]: I0128 18:56:25.577772 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cloudkitty-storageinit-xpxnz"] Jan 28 18:56:25 crc kubenswrapper[4721]: E0128 18:56:25.578408 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a423fddb-4a71-416a-8138-63d58b0350fb" containerName="proxy-httpd" Jan 28 18:56:25 crc kubenswrapper[4721]: I0128 18:56:25.578428 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="a423fddb-4a71-416a-8138-63d58b0350fb" containerName="proxy-httpd" Jan 28 18:56:25 crc kubenswrapper[4721]: E0128 18:56:25.578460 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a423fddb-4a71-416a-8138-63d58b0350fb" containerName="sg-core" Jan 28 18:56:25 crc kubenswrapper[4721]: I0128 18:56:25.578469 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="a423fddb-4a71-416a-8138-63d58b0350fb" containerName="sg-core" Jan 28 18:56:25 crc kubenswrapper[4721]: E0128 18:56:25.578496 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6d4d13db-d2ce-4194-841a-c50b85a2887c" containerName="cloudkitty-db-sync" Jan 28 18:56:25 crc kubenswrapper[4721]: I0128 18:56:25.578503 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="6d4d13db-d2ce-4194-841a-c50b85a2887c" containerName="cloudkitty-db-sync" Jan 28 18:56:25 crc kubenswrapper[4721]: I0128 18:56:25.578752 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="a423fddb-4a71-416a-8138-63d58b0350fb" containerName="proxy-httpd" Jan 28 18:56:25 crc kubenswrapper[4721]: I0128 18:56:25.578780 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="6d4d13db-d2ce-4194-841a-c50b85a2887c" containerName="cloudkitty-db-sync" Jan 28 18:56:25 crc kubenswrapper[4721]: I0128 18:56:25.578799 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="a423fddb-4a71-416a-8138-63d58b0350fb" containerName="sg-core" Jan 28 18:56:25 crc kubenswrapper[4721]: I0128 18:56:25.579921 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cloudkitty-storageinit-xpxnz" Jan 28 18:56:25 crc kubenswrapper[4721]: I0128 18:56:25.597553 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cloudkitty-client-internal" Jan 28 18:56:25 crc kubenswrapper[4721]: I0128 18:56:25.597996 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-config-data" Jan 28 18:56:25 crc kubenswrapper[4721]: I0128 18:56:25.598211 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Jan 28 18:56:25 crc kubenswrapper[4721]: I0128 18:56:25.598345 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-scripts" Jan 28 18:56:25 crc kubenswrapper[4721]: I0128 18:56:25.598377 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-cloudkitty-dockercfg-wcp5f" Jan 28 18:56:25 crc kubenswrapper[4721]: I0128 18:56:25.604405 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-storageinit-xpxnz"] Jan 28 18:56:25 crc kubenswrapper[4721]: I0128 18:56:25.629471 4721 scope.go:117] "RemoveContainer" containerID="e04e9069ff639db76bbbe99569951621e371d8bea9eb9e435b2f18417583544b" Jan 28 18:56:25 crc kubenswrapper[4721]: I0128 18:56:25.682737 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a423fddb-4a71-416a-8138-63d58b0350fb-scripts" (OuterVolumeSpecName: "scripts") pod "a423fddb-4a71-416a-8138-63d58b0350fb" (UID: "a423fddb-4a71-416a-8138-63d58b0350fb"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:56:25 crc kubenswrapper[4721]: I0128 18:56:25.689542 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a423fddb-4a71-416a-8138-63d58b0350fb-kube-api-access-7xmnw" (OuterVolumeSpecName: "kube-api-access-7xmnw") pod "a423fddb-4a71-416a-8138-63d58b0350fb" (UID: "a423fddb-4a71-416a-8138-63d58b0350fb"). InnerVolumeSpecName "kube-api-access-7xmnw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:56:25 crc kubenswrapper[4721]: I0128 18:56:25.758548 4721 scope.go:117] "RemoveContainer" containerID="b19e8f5d74d1cea334dacccbbd978f5cc423d5ee131604eba66d474545117965" Jan 28 18:56:25 crc kubenswrapper[4721]: E0128 18:56:25.767872 4721 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b19e8f5d74d1cea334dacccbbd978f5cc423d5ee131604eba66d474545117965\": container with ID starting with b19e8f5d74d1cea334dacccbbd978f5cc423d5ee131604eba66d474545117965 not found: ID does not exist" containerID="b19e8f5d74d1cea334dacccbbd978f5cc423d5ee131604eba66d474545117965" Jan 28 18:56:25 crc kubenswrapper[4721]: I0128 18:56:25.767922 4721 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b19e8f5d74d1cea334dacccbbd978f5cc423d5ee131604eba66d474545117965"} err="failed to get container status \"b19e8f5d74d1cea334dacccbbd978f5cc423d5ee131604eba66d474545117965\": rpc error: code = NotFound desc = could not find container \"b19e8f5d74d1cea334dacccbbd978f5cc423d5ee131604eba66d474545117965\": container with ID starting with b19e8f5d74d1cea334dacccbbd978f5cc423d5ee131604eba66d474545117965 not found: ID does not exist" Jan 28 18:56:25 crc kubenswrapper[4721]: I0128 18:56:25.767956 4721 scope.go:117] "RemoveContainer" containerID="e04e9069ff639db76bbbe99569951621e371d8bea9eb9e435b2f18417583544b" Jan 28 18:56:25 crc kubenswrapper[4721]: E0128 18:56:25.799136 4721 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e04e9069ff639db76bbbe99569951621e371d8bea9eb9e435b2f18417583544b\": container with ID starting with e04e9069ff639db76bbbe99569951621e371d8bea9eb9e435b2f18417583544b not found: ID does not exist" containerID="e04e9069ff639db76bbbe99569951621e371d8bea9eb9e435b2f18417583544b" Jan 28 18:56:25 crc kubenswrapper[4721]: I0128 18:56:25.799204 4721 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e04e9069ff639db76bbbe99569951621e371d8bea9eb9e435b2f18417583544b"} err="failed to get container status \"e04e9069ff639db76bbbe99569951621e371d8bea9eb9e435b2f18417583544b\": rpc error: code = NotFound desc = could not find container \"e04e9069ff639db76bbbe99569951621e371d8bea9eb9e435b2f18417583544b\": container with ID starting with e04e9069ff639db76bbbe99569951621e371d8bea9eb9e435b2f18417583544b not found: ID does not exist" Jan 28 18:56:25 crc kubenswrapper[4721]: I0128 18:56:25.804740 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rhmmw\" (UniqueName: \"kubernetes.io/projected/429d95dc-53bf-4577-bd4a-3bd60e502895-kube-api-access-rhmmw\") pod \"cloudkitty-storageinit-xpxnz\" (UID: \"429d95dc-53bf-4577-bd4a-3bd60e502895\") " pod="openstack/cloudkitty-storageinit-xpxnz" Jan 28 18:56:25 crc kubenswrapper[4721]: I0128 18:56:25.805122 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/429d95dc-53bf-4577-bd4a-3bd60e502895-config-data\") pod \"cloudkitty-storageinit-xpxnz\" (UID: \"429d95dc-53bf-4577-bd4a-3bd60e502895\") " pod="openstack/cloudkitty-storageinit-xpxnz" Jan 28 18:56:25 crc kubenswrapper[4721]: I0128 18:56:25.805276 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/429d95dc-53bf-4577-bd4a-3bd60e502895-scripts\") pod \"cloudkitty-storageinit-xpxnz\" (UID: \"429d95dc-53bf-4577-bd4a-3bd60e502895\") " pod="openstack/cloudkitty-storageinit-xpxnz" Jan 28 18:56:25 crc kubenswrapper[4721]: I0128 18:56:25.805437 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/projected/429d95dc-53bf-4577-bd4a-3bd60e502895-certs\") pod \"cloudkitty-storageinit-xpxnz\" (UID: \"429d95dc-53bf-4577-bd4a-3bd60e502895\") " pod="openstack/cloudkitty-storageinit-xpxnz" Jan 28 18:56:25 crc kubenswrapper[4721]: I0128 18:56:25.805540 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/429d95dc-53bf-4577-bd4a-3bd60e502895-combined-ca-bundle\") pod \"cloudkitty-storageinit-xpxnz\" (UID: \"429d95dc-53bf-4577-bd4a-3bd60e502895\") " pod="openstack/cloudkitty-storageinit-xpxnz" Jan 28 18:56:25 crc kubenswrapper[4721]: I0128 18:56:25.825528 4721 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a423fddb-4a71-416a-8138-63d58b0350fb-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 18:56:25 crc kubenswrapper[4721]: I0128 18:56:25.825579 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7xmnw\" (UniqueName: \"kubernetes.io/projected/a423fddb-4a71-416a-8138-63d58b0350fb-kube-api-access-7xmnw\") on node \"crc\" DevicePath \"\"" Jan 28 18:56:25 crc kubenswrapper[4721]: I0128 18:56:25.864604 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a423fddb-4a71-416a-8138-63d58b0350fb-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "a423fddb-4a71-416a-8138-63d58b0350fb" (UID: "a423fddb-4a71-416a-8138-63d58b0350fb"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:56:25 crc kubenswrapper[4721]: I0128 18:56:25.866574 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a423fddb-4a71-416a-8138-63d58b0350fb-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a423fddb-4a71-416a-8138-63d58b0350fb" (UID: "a423fddb-4a71-416a-8138-63d58b0350fb"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:56:25 crc kubenswrapper[4721]: I0128 18:56:25.910256 4721 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 28 18:56:25 crc kubenswrapper[4721]: I0128 18:56:25.910301 4721 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 28 18:56:25 crc kubenswrapper[4721]: I0128 18:56:25.936928 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/429d95dc-53bf-4577-bd4a-3bd60e502895-combined-ca-bundle\") pod \"cloudkitty-storageinit-xpxnz\" (UID: \"429d95dc-53bf-4577-bd4a-3bd60e502895\") " pod="openstack/cloudkitty-storageinit-xpxnz" Jan 28 18:56:25 crc kubenswrapper[4721]: I0128 18:56:25.937140 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rhmmw\" (UniqueName: \"kubernetes.io/projected/429d95dc-53bf-4577-bd4a-3bd60e502895-kube-api-access-rhmmw\") pod \"cloudkitty-storageinit-xpxnz\" (UID: \"429d95dc-53bf-4577-bd4a-3bd60e502895\") " pod="openstack/cloudkitty-storageinit-xpxnz" Jan 28 18:56:25 crc kubenswrapper[4721]: I0128 18:56:25.937179 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/429d95dc-53bf-4577-bd4a-3bd60e502895-config-data\") pod \"cloudkitty-storageinit-xpxnz\" (UID: \"429d95dc-53bf-4577-bd4a-3bd60e502895\") " pod="openstack/cloudkitty-storageinit-xpxnz" Jan 28 18:56:25 crc kubenswrapper[4721]: I0128 18:56:25.937263 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/429d95dc-53bf-4577-bd4a-3bd60e502895-scripts\") pod \"cloudkitty-storageinit-xpxnz\" (UID: \"429d95dc-53bf-4577-bd4a-3bd60e502895\") " pod="openstack/cloudkitty-storageinit-xpxnz" Jan 28 18:56:25 crc kubenswrapper[4721]: I0128 18:56:25.937341 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/projected/429d95dc-53bf-4577-bd4a-3bd60e502895-certs\") pod \"cloudkitty-storageinit-xpxnz\" (UID: \"429d95dc-53bf-4577-bd4a-3bd60e502895\") " pod="openstack/cloudkitty-storageinit-xpxnz" Jan 28 18:56:25 crc kubenswrapper[4721]: I0128 18:56:25.937431 4721 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a423fddb-4a71-416a-8138-63d58b0350fb-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 18:56:25 crc kubenswrapper[4721]: I0128 18:56:25.937451 4721 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a423fddb-4a71-416a-8138-63d58b0350fb-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 28 18:56:25 crc kubenswrapper[4721]: I0128 18:56:25.960150 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/429d95dc-53bf-4577-bd4a-3bd60e502895-combined-ca-bundle\") pod \"cloudkitty-storageinit-xpxnz\" (UID: \"429d95dc-53bf-4577-bd4a-3bd60e502895\") " pod="openstack/cloudkitty-storageinit-xpxnz" Jan 28 18:56:25 crc kubenswrapper[4721]: I0128 18:56:25.967718 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/429d95dc-53bf-4577-bd4a-3bd60e502895-config-data\") pod \"cloudkitty-storageinit-xpxnz\" 
(UID: \"429d95dc-53bf-4577-bd4a-3bd60e502895\") " pod="openstack/cloudkitty-storageinit-xpxnz" Jan 28 18:56:25 crc kubenswrapper[4721]: I0128 18:56:25.967995 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/projected/429d95dc-53bf-4577-bd4a-3bd60e502895-certs\") pod \"cloudkitty-storageinit-xpxnz\" (UID: \"429d95dc-53bf-4577-bd4a-3bd60e502895\") " pod="openstack/cloudkitty-storageinit-xpxnz" Jan 28 18:56:26 crc kubenswrapper[4721]: I0128 18:56:26.002389 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/429d95dc-53bf-4577-bd4a-3bd60e502895-scripts\") pod \"cloudkitty-storageinit-xpxnz\" (UID: \"429d95dc-53bf-4577-bd4a-3bd60e502895\") " pod="openstack/cloudkitty-storageinit-xpxnz" Jan 28 18:56:26 crc kubenswrapper[4721]: I0128 18:56:26.022464 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a423fddb-4a71-416a-8138-63d58b0350fb-config-data" (OuterVolumeSpecName: "config-data") pod "a423fddb-4a71-416a-8138-63d58b0350fb" (UID: "a423fddb-4a71-416a-8138-63d58b0350fb"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:56:26 crc kubenswrapper[4721]: I0128 18:56:26.041513 4721 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a423fddb-4a71-416a-8138-63d58b0350fb-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 18:56:26 crc kubenswrapper[4721]: I0128 18:56:26.047453 4721 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 28 18:56:26 crc kubenswrapper[4721]: I0128 18:56:26.074696 4721 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 28 18:56:26 crc kubenswrapper[4721]: I0128 18:56:26.083074 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rhmmw\" (UniqueName: \"kubernetes.io/projected/429d95dc-53bf-4577-bd4a-3bd60e502895-kube-api-access-rhmmw\") pod \"cloudkitty-storageinit-xpxnz\" (UID: \"429d95dc-53bf-4577-bd4a-3bd60e502895\") " pod="openstack/cloudkitty-storageinit-xpxnz" Jan 28 18:56:26 crc kubenswrapper[4721]: I0128 18:56:26.231383 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-storageinit-xpxnz" Jan 28 18:56:26 crc kubenswrapper[4721]: I0128 18:56:26.474793 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 28 18:56:26 crc kubenswrapper[4721]: I0128 18:56:26.487046 4721 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-55f844cf75-tjbpz" Jan 28 18:56:26 crc kubenswrapper[4721]: I0128 18:56:26.534391 4721 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 28 18:56:26 crc kubenswrapper[4721]: I0128 18:56:26.560649 4721 generic.go:334] "Generic (PLEG): container finished" podID="b4e7a8f9-bf9f-4093-86d5-b7f5f6d925d7" containerID="29ccd2e322952548c13cc7d2af0107fc873f99ee27ce312b7118d16c9632610a" exitCode=0 Jan 28 18:56:26 crc kubenswrapper[4721]: I0128 18:56:26.560779 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-spxh4" event={"ID":"b4e7a8f9-bf9f-4093-86d5-b7f5f6d925d7","Type":"ContainerDied","Data":"29ccd2e322952548c13cc7d2af0107fc873f99ee27ce312b7118d16c9632610a"} Jan 28 18:56:26 crc kubenswrapper[4721]: I0128 18:56:26.571539 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 28 18:56:26 crc kubenswrapper[4721]: E0128 18:56:26.572173 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b57b7161-a5ce-4399-86ed-68478cdc6df5" containerName="init" Jan 28 18:56:26 crc kubenswrapper[4721]: I0128 18:56:26.572214 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="b57b7161-a5ce-4399-86ed-68478cdc6df5" containerName="init" Jan 28 18:56:26 crc kubenswrapper[4721]: I0128 18:56:26.572538 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="b57b7161-a5ce-4399-86ed-68478cdc6df5" containerName="init" Jan 28 18:56:26 crc kubenswrapper[4721]: I0128 18:56:26.575015 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 28 18:56:26 crc kubenswrapper[4721]: I0128 18:56:26.587270 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 28 18:56:26 crc kubenswrapper[4721]: I0128 18:56:26.588906 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 28 18:56:26 crc kubenswrapper[4721]: I0128 18:56:26.607542 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-b6d5f477b-md9n5" event={"ID":"598a7e6f-da5f-4dc3-be56-0dc9b6b13ad5","Type":"ContainerStarted","Data":"97de22f49da0c15672d79bca2d1dc8c0c67082833d1f3351a7216fbf4b417f7a"} Jan 28 18:56:26 crc kubenswrapper[4721]: I0128 18:56:26.609268 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-b6d5f477b-md9n5" Jan 28 18:56:26 crc kubenswrapper[4721]: I0128 18:56:26.616300 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b57b7161-a5ce-4399-86ed-68478cdc6df5-dns-svc\") pod \"b57b7161-a5ce-4399-86ed-68478cdc6df5\" (UID: \"b57b7161-a5ce-4399-86ed-68478cdc6df5\") " Jan 28 18:56:26 crc kubenswrapper[4721]: I0128 18:56:26.616621 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b57b7161-a5ce-4399-86ed-68478cdc6df5-config\") pod \"b57b7161-a5ce-4399-86ed-68478cdc6df5\" (UID: \"b57b7161-a5ce-4399-86ed-68478cdc6df5\") " Jan 28 18:56:26 crc kubenswrapper[4721]: I0128 18:56:26.616821 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b57b7161-a5ce-4399-86ed-68478cdc6df5-ovsdbserver-sb\") pod \"b57b7161-a5ce-4399-86ed-68478cdc6df5\" (UID: \"b57b7161-a5ce-4399-86ed-68478cdc6df5\") " Jan 28 18:56:26 crc kubenswrapper[4721]: I0128 
18:56:26.617013 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b57b7161-a5ce-4399-86ed-68478cdc6df5-dns-swift-storage-0\") pod \"b57b7161-a5ce-4399-86ed-68478cdc6df5\" (UID: \"b57b7161-a5ce-4399-86ed-68478cdc6df5\") " Jan 28 18:56:26 crc kubenswrapper[4721]: I0128 18:56:26.617958 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b57b7161-a5ce-4399-86ed-68478cdc6df5-ovsdbserver-nb\") pod \"b57b7161-a5ce-4399-86ed-68478cdc6df5\" (UID: \"b57b7161-a5ce-4399-86ed-68478cdc6df5\") " Jan 28 18:56:26 crc kubenswrapper[4721]: I0128 18:56:26.618126 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f466m\" (UniqueName: \"kubernetes.io/projected/b57b7161-a5ce-4399-86ed-68478cdc6df5-kube-api-access-f466m\") pod \"b57b7161-a5ce-4399-86ed-68478cdc6df5\" (UID: \"b57b7161-a5ce-4399-86ed-68478cdc6df5\") " Jan 28 18:56:26 crc kubenswrapper[4721]: I0128 18:56:26.663734 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f844cf75-tjbpz" event={"ID":"b57b7161-a5ce-4399-86ed-68478cdc6df5","Type":"ContainerDied","Data":"60fa8b12e439307ee0af8ccd1b1ba9f2ea74ce9aed6f6f5193432fe4f1510d82"} Jan 28 18:56:26 crc kubenswrapper[4721]: I0128 18:56:26.664096 4721 scope.go:117] "RemoveContainer" containerID="9890d2a364b22102f2e7c47301b597bb9f49855bc9331caf397cad4f126b59d7" Jan 28 18:56:26 crc kubenswrapper[4721]: I0128 18:56:26.664401 4721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-55f844cf75-tjbpz" Jan 28 18:56:26 crc kubenswrapper[4721]: I0128 18:56:26.691306 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b57b7161-a5ce-4399-86ed-68478cdc6df5-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "b57b7161-a5ce-4399-86ed-68478cdc6df5" (UID: "b57b7161-a5ce-4399-86ed-68478cdc6df5"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:56:26 crc kubenswrapper[4721]: I0128 18:56:26.691661 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 28 18:56:26 crc kubenswrapper[4721]: I0128 18:56:26.707538 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b57b7161-a5ce-4399-86ed-68478cdc6df5-kube-api-access-f466m" (OuterVolumeSpecName: "kube-api-access-f466m") pod "b57b7161-a5ce-4399-86ed-68478cdc6df5" (UID: "b57b7161-a5ce-4399-86ed-68478cdc6df5"). InnerVolumeSpecName "kube-api-access-f466m". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:56:26 crc kubenswrapper[4721]: I0128 18:56:26.707654 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-84bf7c754-8m5d5" event={"ID":"15e79c89-d076-4174-b7d4-87295d74b71d","Type":"ContainerStarted","Data":"c405dbf6afef0c49decdf9416983345e651a2e549be33edc83182725295a6f18"} Jan 28 18:56:26 crc kubenswrapper[4721]: I0128 18:56:26.708730 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-84bf7c754-8m5d5" Jan 28 18:56:26 crc kubenswrapper[4721]: I0128 18:56:26.708990 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-84bf7c754-8m5d5" Jan 28 18:56:26 crc kubenswrapper[4721]: I0128 18:56:26.709289 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b57b7161-a5ce-4399-86ed-68478cdc6df5-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "b57b7161-a5ce-4399-86ed-68478cdc6df5" (UID: "b57b7161-a5ce-4399-86ed-68478cdc6df5"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:56:26 crc kubenswrapper[4721]: I0128 18:56:26.738113 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/506fcc96-87e5-4718-82bd-7ae3c4919ff5-log-httpd\") pod \"ceilometer-0\" (UID: \"506fcc96-87e5-4718-82bd-7ae3c4919ff5\") " pod="openstack/ceilometer-0" Jan 28 18:56:26 crc kubenswrapper[4721]: I0128 18:56:26.738228 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/506fcc96-87e5-4718-82bd-7ae3c4919ff5-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"506fcc96-87e5-4718-82bd-7ae3c4919ff5\") " pod="openstack/ceilometer-0" Jan 28 18:56:26 crc kubenswrapper[4721]: I0128 18:56:26.738438 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/506fcc96-87e5-4718-82bd-7ae3c4919ff5-scripts\") pod \"ceilometer-0\" (UID: \"506fcc96-87e5-4718-82bd-7ae3c4919ff5\") " pod="openstack/ceilometer-0" Jan 28 18:56:26 crc kubenswrapper[4721]: I0128 18:56:26.738496 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/506fcc96-87e5-4718-82bd-7ae3c4919ff5-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"506fcc96-87e5-4718-82bd-7ae3c4919ff5\") " pod="openstack/ceilometer-0" Jan 28 18:56:26 crc kubenswrapper[4721]: I0128 18:56:26.738546 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/506fcc96-87e5-4718-82bd-7ae3c4919ff5-config-data\") pod \"ceilometer-0\" (UID: \"506fcc96-87e5-4718-82bd-7ae3c4919ff5\") " pod="openstack/ceilometer-0" Jan 28 18:56:26 crc kubenswrapper[4721]: I0128 18:56:26.738569 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7kv4v\" (UniqueName: \"kubernetes.io/projected/506fcc96-87e5-4718-82bd-7ae3c4919ff5-kube-api-access-7kv4v\") pod \"ceilometer-0\" (UID: \"506fcc96-87e5-4718-82bd-7ae3c4919ff5\") " pod="openstack/ceilometer-0" Jan 28 18:56:26 crc kubenswrapper[4721]: I0128 18:56:26.738604 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/506fcc96-87e5-4718-82bd-7ae3c4919ff5-run-httpd\") pod \"ceilometer-0\" (UID: \"506fcc96-87e5-4718-82bd-7ae3c4919ff5\") " pod="openstack/ceilometer-0" Jan 28 18:56:26 crc kubenswrapper[4721]: I0128 18:56:26.738674 4721 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b57b7161-a5ce-4399-86ed-68478cdc6df5-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 28 18:56:26 crc kubenswrapper[4721]: I0128 18:56:26.738685 4721 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b57b7161-a5ce-4399-86ed-68478cdc6df5-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 28 18:56:26 crc kubenswrapper[4721]: I0128 18:56:26.738695 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f466m\" (UniqueName: \"kubernetes.io/projected/b57b7161-a5ce-4399-86ed-68478cdc6df5-kube-api-access-f466m\") on node \"crc\" DevicePath \"\"" Jan 28 18:56:26 crc kubenswrapper[4721]: I0128 18:56:26.742356 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b57b7161-a5ce-4399-86ed-68478cdc6df5-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "b57b7161-a5ce-4399-86ed-68478cdc6df5" (UID: "b57b7161-a5ce-4399-86ed-68478cdc6df5"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:56:26 crc kubenswrapper[4721]: I0128 18:56:26.748909 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"12589ff0-ab4c-4a16-b5bd-7cd433a85c86","Type":"ContainerStarted","Data":"ebcb2127a9e30e02fcc080615f038598b4bc3f9233adcf353ffe6173ec7b1276"} Jan 28 18:56:26 crc kubenswrapper[4721]: I0128 18:56:26.756506 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b57b7161-a5ce-4399-86ed-68478cdc6df5-config" (OuterVolumeSpecName: "config") pod "b57b7161-a5ce-4399-86ed-68478cdc6df5" (UID: "b57b7161-a5ce-4399-86ed-68478cdc6df5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:56:26 crc kubenswrapper[4721]: I0128 18:56:26.756814 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-787c88cc7-8262p"] Jan 28 18:56:26 crc kubenswrapper[4721]: I0128 18:56:26.796642 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 28 18:56:26 crc kubenswrapper[4721]: I0128 18:56:26.797305 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 28 18:56:26 crc kubenswrapper[4721]: I0128 18:56:26.805310 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b57b7161-a5ce-4399-86ed-68478cdc6df5-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "b57b7161-a5ce-4399-86ed-68478cdc6df5" (UID: "b57b7161-a5ce-4399-86ed-68478cdc6df5"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:56:26 crc kubenswrapper[4721]: I0128 18:56:26.844916 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/506fcc96-87e5-4718-82bd-7ae3c4919ff5-config-data\") pod \"ceilometer-0\" (UID: \"506fcc96-87e5-4718-82bd-7ae3c4919ff5\") " pod="openstack/ceilometer-0" Jan 28 18:56:26 crc kubenswrapper[4721]: I0128 18:56:26.844976 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7kv4v\" (UniqueName: \"kubernetes.io/projected/506fcc96-87e5-4718-82bd-7ae3c4919ff5-kube-api-access-7kv4v\") pod \"ceilometer-0\" (UID: \"506fcc96-87e5-4718-82bd-7ae3c4919ff5\") " pod="openstack/ceilometer-0" Jan 28 18:56:26 crc kubenswrapper[4721]: I0128 18:56:26.845013 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/506fcc96-87e5-4718-82bd-7ae3c4919ff5-run-httpd\") pod \"ceilometer-0\" (UID: \"506fcc96-87e5-4718-82bd-7ae3c4919ff5\") " pod="openstack/ceilometer-0" Jan 28 18:56:26 crc kubenswrapper[4721]: I0128 18:56:26.845089 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/506fcc96-87e5-4718-82bd-7ae3c4919ff5-log-httpd\") pod \"ceilometer-0\" (UID: \"506fcc96-87e5-4718-82bd-7ae3c4919ff5\") " pod="openstack/ceilometer-0" Jan 28 18:56:26 crc kubenswrapper[4721]: I0128 18:56:26.845122 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/506fcc96-87e5-4718-82bd-7ae3c4919ff5-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"506fcc96-87e5-4718-82bd-7ae3c4919ff5\") " pod="openstack/ceilometer-0" Jan 28 18:56:26 crc kubenswrapper[4721]: I0128 18:56:26.845432 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/506fcc96-87e5-4718-82bd-7ae3c4919ff5-scripts\") pod \"ceilometer-0\" (UID: \"506fcc96-87e5-4718-82bd-7ae3c4919ff5\") " pod="openstack/ceilometer-0" Jan 28 18:56:26 crc kubenswrapper[4721]: I0128 18:56:26.845557 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/506fcc96-87e5-4718-82bd-7ae3c4919ff5-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"506fcc96-87e5-4718-82bd-7ae3c4919ff5\") " pod="openstack/ceilometer-0" Jan 28 18:56:26 crc kubenswrapper[4721]: I0128 18:56:26.845670 4721 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b57b7161-a5ce-4399-86ed-68478cdc6df5-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 28 18:56:26 crc kubenswrapper[4721]: I0128 18:56:26.845695 4721 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b57b7161-a5ce-4399-86ed-68478cdc6df5-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 28 18:56:26 crc kubenswrapper[4721]: I0128 18:56:26.845713 4721 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b57b7161-a5ce-4399-86ed-68478cdc6df5-config\") on node \"crc\" DevicePath \"\"" Jan 28 18:56:26 crc kubenswrapper[4721]: I0128 18:56:26.849558 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/506fcc96-87e5-4718-82bd-7ae3c4919ff5-run-httpd\") pod \"ceilometer-0\" (UID: \"506fcc96-87e5-4718-82bd-7ae3c4919ff5\") " pod="openstack/ceilometer-0" Jan 28 18:56:26 crc kubenswrapper[4721]: I0128 18:56:26.855906 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-b6d5f477b-md9n5" podStartSLOduration=5.855881469 podStartE2EDuration="5.855881469s" podCreationTimestamp="2026-01-28 18:56:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:56:26.666844804 +0000 UTC m=+1352.392150374" watchObservedRunningTime="2026-01-28 18:56:26.855881469 +0000 UTC m=+1352.581187029" Jan 28 18:56:26 crc kubenswrapper[4721]: I0128 18:56:26.868294 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/506fcc96-87e5-4718-82bd-7ae3c4919ff5-log-httpd\") pod \"ceilometer-0\" (UID: \"506fcc96-87e5-4718-82bd-7ae3c4919ff5\") " pod="openstack/ceilometer-0" Jan 28 18:56:26 crc kubenswrapper[4721]: I0128 18:56:26.869943 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/506fcc96-87e5-4718-82bd-7ae3c4919ff5-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"506fcc96-87e5-4718-82bd-7ae3c4919ff5\") " pod="openstack/ceilometer-0" Jan 28 18:56:26 crc kubenswrapper[4721]: I0128 18:56:26.874679 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/506fcc96-87e5-4718-82bd-7ae3c4919ff5-config-data\") pod \"ceilometer-0\" (UID: \"506fcc96-87e5-4718-82bd-7ae3c4919ff5\") " pod="openstack/ceilometer-0" Jan 28 18:56:26 crc kubenswrapper[4721]: I0128 18:56:26.886041 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/506fcc96-87e5-4718-82bd-7ae3c4919ff5-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"506fcc96-87e5-4718-82bd-7ae3c4919ff5\") " pod="openstack/ceilometer-0" Jan 28 18:56:26 crc kubenswrapper[4721]: I0128 18:56:26.903948 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/506fcc96-87e5-4718-82bd-7ae3c4919ff5-scripts\") pod \"ceilometer-0\" (UID: \"506fcc96-87e5-4718-82bd-7ae3c4919ff5\") " pod="openstack/ceilometer-0" Jan 28 18:56:26 crc kubenswrapper[4721]: I0128 18:56:26.915729 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7kv4v\" (UniqueName: \"kubernetes.io/projected/506fcc96-87e5-4718-82bd-7ae3c4919ff5-kube-api-access-7kv4v\") pod \"ceilometer-0\" (UID: \"506fcc96-87e5-4718-82bd-7ae3c4919ff5\") " pod="openstack/ceilometer-0" Jan 28 18:56:26 crc kubenswrapper[4721]: I0128 18:56:26.943425 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 28 18:56:26 crc kubenswrapper[4721]: E0128 18:56:26.964653 4721 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda423fddb_4a71_416a_8138_63d58b0350fb.slice/crio-d401325b9e0306d71f3e564195f62ee8d4ae93c32d74ef8516ca8ebb722e700f\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda423fddb_4a71_416a_8138_63d58b0350fb.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb4e7a8f9_bf9f_4093_86d5_b7f5f6d925d7.slice/crio-29ccd2e322952548c13cc7d2af0107fc873f99ee27ce312b7118d16c9632610a.scope\": RecentStats: unable to find data in memory cache]" Jan 28 18:56:27 crc kubenswrapper[4721]: I0128 18:56:27.064559 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-84bf7c754-8m5d5" podStartSLOduration=6.064532394 podStartE2EDuration="6.064532394s" podCreationTimestamp="2026-01-28 18:56:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:56:26.777299833 +0000 UTC m=+1352.502605393" watchObservedRunningTime="2026-01-28 18:56:27.064532394 +0000 UTC m=+1352.789837954" Jan 28 18:56:27 crc kubenswrapper[4721]: I0128 18:56:27.071071 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=7.071049679 podStartE2EDuration="7.071049679s" podCreationTimestamp="2026-01-28 18:56:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:56:26.833130192 +0000 UTC m=+1352.558435742" watchObservedRunningTime="2026-01-28 18:56:27.071049679 +0000 UTC m=+1352.796355239" Jan 28 18:56:27 crc kubenswrapper[4721]: I0128 18:56:27.191550 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-tjbpz"] Jan 28 18:56:27 crc kubenswrapper[4721]: W0128 18:56:27.210217 4721 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod429d95dc_53bf_4577_bd4a_3bd60e502895.slice/crio-b8f1e15b11c93e5aa72e1dfa91de30265178b65fa15e9d4fb1eb38b15cb54519 WatchSource:0}: Error finding container b8f1e15b11c93e5aa72e1dfa91de30265178b65fa15e9d4fb1eb38b15cb54519: Status 404 returned error can't find the container with id b8f1e15b11c93e5aa72e1dfa91de30265178b65fa15e9d4fb1eb38b15cb54519 Jan 28 18:56:27 crc kubenswrapper[4721]: I0128 18:56:27.216269 4721 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-tjbpz"] Jan 28 18:56:27 crc kubenswrapper[4721]: I0128 18:56:27.239432 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-storageinit-xpxnz"] Jan 28 18:56:27 crc kubenswrapper[4721]: I0128 18:56:27.563147 4721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a423fddb-4a71-416a-8138-63d58b0350fb" path="/var/lib/kubelet/pods/a423fddb-4a71-416a-8138-63d58b0350fb/volumes" Jan 28 18:56:27 crc kubenswrapper[4721]: I0128 18:56:27.564615 4721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b57b7161-a5ce-4399-86ed-68478cdc6df5" path="/var/lib/kubelet/pods/b57b7161-a5ce-4399-86ed-68478cdc6df5/volumes" Jan 28 
18:56:27 crc kubenswrapper[4721]: I0128 18:56:27.828330 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 28 18:56:27 crc kubenswrapper[4721]: I0128 18:56:27.832101 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85ff748b95-qrqc4" event={"ID":"50a7b045-31f9-43aa-a484-aa27bdfb5147","Type":"ContainerStarted","Data":"3e89b7cc268763bd98b6fbbae249247b26e70b929fd8497104e0e4053c3b8de5"} Jan 28 18:56:27 crc kubenswrapper[4721]: I0128 18:56:27.833874 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-787c88cc7-8262p" event={"ID":"778b4bd0-5ac3-4a89-b5c8-07f3f52e5804","Type":"ContainerStarted","Data":"eb590013eef6c6af72250180135c9dbbe8f65e6054a7849641f88f910790bd05"} Jan 28 18:56:27 crc kubenswrapper[4721]: I0128 18:56:27.838128 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-storageinit-xpxnz" event={"ID":"429d95dc-53bf-4577-bd4a-3bd60e502895","Type":"ContainerStarted","Data":"b8f1e15b11c93e5aa72e1dfa91de30265178b65fa15e9d4fb1eb38b15cb54519"} Jan 28 18:56:27 crc kubenswrapper[4721]: I0128 18:56:27.855120 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-85ff748b95-qrqc4" podStartSLOduration=6.855096232 podStartE2EDuration="6.855096232s" podCreationTimestamp="2026-01-28 18:56:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:56:27.850772977 +0000 UTC m=+1353.576078557" watchObservedRunningTime="2026-01-28 18:56:27.855096232 +0000 UTC m=+1353.580401792" Jan 28 18:56:28 crc kubenswrapper[4721]: I0128 18:56:28.716754 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-cfc4cd674-j5vfc"] Jan 28 18:56:28 crc kubenswrapper[4721]: I0128 18:56:28.719415 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-cfc4cd674-j5vfc" Jan 28 18:56:28 crc kubenswrapper[4721]: I0128 18:56:28.724846 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-internal-svc" Jan 28 18:56:28 crc kubenswrapper[4721]: I0128 18:56:28.724952 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-public-svc" Jan 28 18:56:28 crc kubenswrapper[4721]: I0128 18:56:28.737875 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-cfc4cd674-j5vfc"] Jan 28 18:56:28 crc kubenswrapper[4721]: I0128 18:56:28.836777 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f8eb94ee-887b-48f2-808c-2b634928d62e-config-data-custom\") pod \"barbican-api-cfc4cd674-j5vfc\" (UID: \"f8eb94ee-887b-48f2-808c-2b634928d62e\") " pod="openstack/barbican-api-cfc4cd674-j5vfc" Jan 28 18:56:28 crc kubenswrapper[4721]: I0128 18:56:28.836914 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gffw7\" (UniqueName: \"kubernetes.io/projected/f8eb94ee-887b-48f2-808c-2b634928d62e-kube-api-access-gffw7\") pod \"barbican-api-cfc4cd674-j5vfc\" (UID: \"f8eb94ee-887b-48f2-808c-2b634928d62e\") " pod="openstack/barbican-api-cfc4cd674-j5vfc" Jan 28 18:56:28 crc kubenswrapper[4721]: I0128 18:56:28.836996 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f8eb94ee-887b-48f2-808c-2b634928d62e-internal-tls-certs\") pod \"barbican-api-cfc4cd674-j5vfc\" (UID: \"f8eb94ee-887b-48f2-808c-2b634928d62e\") " pod="openstack/barbican-api-cfc4cd674-j5vfc" Jan 28 18:56:28 crc kubenswrapper[4721]: I0128 18:56:28.837075 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f8eb94ee-887b-48f2-808c-2b634928d62e-config-data\") pod \"barbican-api-cfc4cd674-j5vfc\" (UID: \"f8eb94ee-887b-48f2-808c-2b634928d62e\") " pod="openstack/barbican-api-cfc4cd674-j5vfc" Jan 28 18:56:28 crc kubenswrapper[4721]: I0128 18:56:28.837117 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f8eb94ee-887b-48f2-808c-2b634928d62e-combined-ca-bundle\") pod \"barbican-api-cfc4cd674-j5vfc\" (UID: \"f8eb94ee-887b-48f2-808c-2b634928d62e\") " pod="openstack/barbican-api-cfc4cd674-j5vfc" Jan 28 18:56:28 crc kubenswrapper[4721]: I0128 18:56:28.837192 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f8eb94ee-887b-48f2-808c-2b634928d62e-logs\") pod \"barbican-api-cfc4cd674-j5vfc\" (UID: \"f8eb94ee-887b-48f2-808c-2b634928d62e\") " pod="openstack/barbican-api-cfc4cd674-j5vfc" Jan 28 18:56:28 crc kubenswrapper[4721]: I0128 18:56:28.837281 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f8eb94ee-887b-48f2-808c-2b634928d62e-public-tls-certs\") pod \"barbican-api-cfc4cd674-j5vfc\" (UID: \"f8eb94ee-887b-48f2-808c-2b634928d62e\") " pod="openstack/barbican-api-cfc4cd674-j5vfc" Jan 28 18:56:28 crc kubenswrapper[4721]: I0128 18:56:28.890879 4721 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-spxh4" Jan 28 18:56:28 crc kubenswrapper[4721]: I0128 18:56:28.891493 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-spxh4" event={"ID":"b4e7a8f9-bf9f-4093-86d5-b7f5f6d925d7","Type":"ContainerDied","Data":"5e26bd5e240e1a23b8a12be4bb2bf825ede9379c912e49630365455c852645d3"} Jan 28 18:56:28 crc kubenswrapper[4721]: I0128 18:56:28.891534 4721 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5e26bd5e240e1a23b8a12be4bb2bf825ede9379c912e49630365455c852645d3" Jan 28 18:56:28 crc kubenswrapper[4721]: I0128 18:56:28.893151 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-787c88cc7-8262p" event={"ID":"778b4bd0-5ac3-4a89-b5c8-07f3f52e5804","Type":"ContainerStarted","Data":"d050651d873c764d867b612d99152b13332d7189d10e01a5b88a1ad655d31cd9"} Jan 28 18:56:28 crc kubenswrapper[4721]: I0128 18:56:28.894735 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"506fcc96-87e5-4718-82bd-7ae3c4919ff5","Type":"ContainerStarted","Data":"f80b8eab457a7be4be2f649eb3f67b823608cfc64faffae2e5020ebfe2e65201"} Jan 28 18:56:28 crc kubenswrapper[4721]: I0128 18:56:28.896039 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-85ff748b95-qrqc4" Jan 28 18:56:28 crc kubenswrapper[4721]: I0128 18:56:28.938844 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gffw7\" (UniqueName: \"kubernetes.io/projected/f8eb94ee-887b-48f2-808c-2b634928d62e-kube-api-access-gffw7\") pod \"barbican-api-cfc4cd674-j5vfc\" (UID: \"f8eb94ee-887b-48f2-808c-2b634928d62e\") " pod="openstack/barbican-api-cfc4cd674-j5vfc" Jan 28 18:56:28 crc kubenswrapper[4721]: I0128 18:56:28.938937 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f8eb94ee-887b-48f2-808c-2b634928d62e-internal-tls-certs\") pod \"barbican-api-cfc4cd674-j5vfc\" (UID: \"f8eb94ee-887b-48f2-808c-2b634928d62e\") " pod="openstack/barbican-api-cfc4cd674-j5vfc" Jan 28 18:56:28 crc kubenswrapper[4721]: I0128 18:56:28.938983 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f8eb94ee-887b-48f2-808c-2b634928d62e-config-data\") pod \"barbican-api-cfc4cd674-j5vfc\" (UID: \"f8eb94ee-887b-48f2-808c-2b634928d62e\") " pod="openstack/barbican-api-cfc4cd674-j5vfc" Jan 28 18:56:28 crc kubenswrapper[4721]: I0128 18:56:28.939005 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f8eb94ee-887b-48f2-808c-2b634928d62e-combined-ca-bundle\") pod \"barbican-api-cfc4cd674-j5vfc\" (UID: \"f8eb94ee-887b-48f2-808c-2b634928d62e\") " pod="openstack/barbican-api-cfc4cd674-j5vfc" Jan 28 18:56:28 crc kubenswrapper[4721]: I0128 18:56:28.939047 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f8eb94ee-887b-48f2-808c-2b634928d62e-logs\") pod \"barbican-api-cfc4cd674-j5vfc\" (UID: \"f8eb94ee-887b-48f2-808c-2b634928d62e\") " pod="openstack/barbican-api-cfc4cd674-j5vfc" Jan 28 18:56:28 crc kubenswrapper[4721]: I0128 18:56:28.939103 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/f8eb94ee-887b-48f2-808c-2b634928d62e-public-tls-certs\") pod \"barbican-api-cfc4cd674-j5vfc\" (UID: \"f8eb94ee-887b-48f2-808c-2b634928d62e\") " pod="openstack/barbican-api-cfc4cd674-j5vfc" Jan 28 18:56:28 crc kubenswrapper[4721]: I0128 18:56:28.939134 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f8eb94ee-887b-48f2-808c-2b634928d62e-config-data-custom\") pod \"barbican-api-cfc4cd674-j5vfc\" (UID: \"f8eb94ee-887b-48f2-808c-2b634928d62e\") " pod="openstack/barbican-api-cfc4cd674-j5vfc" Jan 28 18:56:28 crc kubenswrapper[4721]: I0128 18:56:28.939825 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f8eb94ee-887b-48f2-808c-2b634928d62e-logs\") pod \"barbican-api-cfc4cd674-j5vfc\" (UID: \"f8eb94ee-887b-48f2-808c-2b634928d62e\") " pod="openstack/barbican-api-cfc4cd674-j5vfc" Jan 28 18:56:28 crc kubenswrapper[4721]: I0128 18:56:28.944305 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f8eb94ee-887b-48f2-808c-2b634928d62e-combined-ca-bundle\") pod \"barbican-api-cfc4cd674-j5vfc\" (UID: \"f8eb94ee-887b-48f2-808c-2b634928d62e\") " pod="openstack/barbican-api-cfc4cd674-j5vfc" Jan 28 18:56:28 crc kubenswrapper[4721]: I0128 18:56:28.944506 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f8eb94ee-887b-48f2-808c-2b634928d62e-internal-tls-certs\") pod \"barbican-api-cfc4cd674-j5vfc\" (UID: \"f8eb94ee-887b-48f2-808c-2b634928d62e\") " pod="openstack/barbican-api-cfc4cd674-j5vfc" Jan 28 18:56:28 crc kubenswrapper[4721]: I0128 18:56:28.946333 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f8eb94ee-887b-48f2-808c-2b634928d62e-public-tls-certs\") pod \"barbican-api-cfc4cd674-j5vfc\" (UID: \"f8eb94ee-887b-48f2-808c-2b634928d62e\") " pod="openstack/barbican-api-cfc4cd674-j5vfc" Jan 28 18:56:28 crc kubenswrapper[4721]: I0128 18:56:28.963548 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f8eb94ee-887b-48f2-808c-2b634928d62e-config-data-custom\") pod \"barbican-api-cfc4cd674-j5vfc\" (UID: \"f8eb94ee-887b-48f2-808c-2b634928d62e\") " pod="openstack/barbican-api-cfc4cd674-j5vfc" Jan 28 18:56:28 crc kubenswrapper[4721]: I0128 18:56:28.963840 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f8eb94ee-887b-48f2-808c-2b634928d62e-config-data\") pod \"barbican-api-cfc4cd674-j5vfc\" (UID: \"f8eb94ee-887b-48f2-808c-2b634928d62e\") " pod="openstack/barbican-api-cfc4cd674-j5vfc" Jan 28 18:56:28 crc kubenswrapper[4721]: I0128 18:56:28.968784 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gffw7\" (UniqueName: \"kubernetes.io/projected/f8eb94ee-887b-48f2-808c-2b634928d62e-kube-api-access-gffw7\") pod \"barbican-api-cfc4cd674-j5vfc\" (UID: \"f8eb94ee-887b-48f2-808c-2b634928d62e\") " pod="openstack/barbican-api-cfc4cd674-j5vfc" Jan 28 18:56:29 crc kubenswrapper[4721]: I0128 18:56:29.040743 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-cfc4cd674-j5vfc" Jan 28 18:56:29 crc kubenswrapper[4721]: I0128 18:56:29.055001 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b4e7a8f9-bf9f-4093-86d5-b7f5f6d925d7-scripts\") pod \"b4e7a8f9-bf9f-4093-86d5-b7f5f6d925d7\" (UID: \"b4e7a8f9-bf9f-4093-86d5-b7f5f6d925d7\") " Jan 28 18:56:29 crc kubenswrapper[4721]: I0128 18:56:29.055214 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b4e7a8f9-bf9f-4093-86d5-b7f5f6d925d7-combined-ca-bundle\") pod \"b4e7a8f9-bf9f-4093-86d5-b7f5f6d925d7\" (UID: \"b4e7a8f9-bf9f-4093-86d5-b7f5f6d925d7\") " Jan 28 18:56:29 crc kubenswrapper[4721]: I0128 18:56:29.055402 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/b4e7a8f9-bf9f-4093-86d5-b7f5f6d925d7-etc-machine-id\") pod \"b4e7a8f9-bf9f-4093-86d5-b7f5f6d925d7\" (UID: \"b4e7a8f9-bf9f-4093-86d5-b7f5f6d925d7\") " Jan 28 18:56:29 crc kubenswrapper[4721]: I0128 18:56:29.055463 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b4e7a8f9-bf9f-4093-86d5-b7f5f6d925d7-config-data\") pod \"b4e7a8f9-bf9f-4093-86d5-b7f5f6d925d7\" (UID: \"b4e7a8f9-bf9f-4093-86d5-b7f5f6d925d7\") " Jan 28 18:56:29 crc kubenswrapper[4721]: I0128 18:56:29.055556 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d2jrb\" (UniqueName: \"kubernetes.io/projected/b4e7a8f9-bf9f-4093-86d5-b7f5f6d925d7-kube-api-access-d2jrb\") pod \"b4e7a8f9-bf9f-4093-86d5-b7f5f6d925d7\" (UID: \"b4e7a8f9-bf9f-4093-86d5-b7f5f6d925d7\") " Jan 28 18:56:29 crc kubenswrapper[4721]: I0128 18:56:29.055593 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/b4e7a8f9-bf9f-4093-86d5-b7f5f6d925d7-db-sync-config-data\") pod \"b4e7a8f9-bf9f-4093-86d5-b7f5f6d925d7\" (UID: \"b4e7a8f9-bf9f-4093-86d5-b7f5f6d925d7\") " Jan 28 18:56:29 crc kubenswrapper[4721]: I0128 18:56:29.055782 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b4e7a8f9-bf9f-4093-86d5-b7f5f6d925d7-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "b4e7a8f9-bf9f-4093-86d5-b7f5f6d925d7" (UID: "b4e7a8f9-bf9f-4093-86d5-b7f5f6d925d7"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 18:56:29 crc kubenswrapper[4721]: I0128 18:56:29.056596 4721 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/b4e7a8f9-bf9f-4093-86d5-b7f5f6d925d7-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 28 18:56:29 crc kubenswrapper[4721]: I0128 18:56:29.060404 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b4e7a8f9-bf9f-4093-86d5-b7f5f6d925d7-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "b4e7a8f9-bf9f-4093-86d5-b7f5f6d925d7" (UID: "b4e7a8f9-bf9f-4093-86d5-b7f5f6d925d7"). InnerVolumeSpecName "db-sync-config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:56:29 crc kubenswrapper[4721]: I0128 18:56:29.071471 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b4e7a8f9-bf9f-4093-86d5-b7f5f6d925d7-scripts" (OuterVolumeSpecName: "scripts") pod "b4e7a8f9-bf9f-4093-86d5-b7f5f6d925d7" (UID: "b4e7a8f9-bf9f-4093-86d5-b7f5f6d925d7"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:56:29 crc kubenswrapper[4721]: I0128 18:56:29.088879 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b4e7a8f9-bf9f-4093-86d5-b7f5f6d925d7-kube-api-access-d2jrb" (OuterVolumeSpecName: "kube-api-access-d2jrb") pod "b4e7a8f9-bf9f-4093-86d5-b7f5f6d925d7" (UID: "b4e7a8f9-bf9f-4093-86d5-b7f5f6d925d7"). InnerVolumeSpecName "kube-api-access-d2jrb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:56:29 crc kubenswrapper[4721]: I0128 18:56:29.150355 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b4e7a8f9-bf9f-4093-86d5-b7f5f6d925d7-config-data" (OuterVolumeSpecName: "config-data") pod "b4e7a8f9-bf9f-4093-86d5-b7f5f6d925d7" (UID: "b4e7a8f9-bf9f-4093-86d5-b7f5f6d925d7"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:56:29 crc kubenswrapper[4721]: I0128 18:56:29.152910 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b4e7a8f9-bf9f-4093-86d5-b7f5f6d925d7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b4e7a8f9-bf9f-4093-86d5-b7f5f6d925d7" (UID: "b4e7a8f9-bf9f-4093-86d5-b7f5f6d925d7"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:56:29 crc kubenswrapper[4721]: I0128 18:56:29.160800 4721 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b4e7a8f9-bf9f-4093-86d5-b7f5f6d925d7-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 18:56:29 crc kubenswrapper[4721]: I0128 18:56:29.160873 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d2jrb\" (UniqueName: \"kubernetes.io/projected/b4e7a8f9-bf9f-4093-86d5-b7f5f6d925d7-kube-api-access-d2jrb\") on node \"crc\" DevicePath \"\"" Jan 28 18:56:29 crc kubenswrapper[4721]: I0128 18:56:29.160891 4721 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/b4e7a8f9-bf9f-4093-86d5-b7f5f6d925d7-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 18:56:29 crc kubenswrapper[4721]: I0128 18:56:29.160935 4721 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b4e7a8f9-bf9f-4093-86d5-b7f5f6d925d7-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 18:56:29 crc kubenswrapper[4721]: I0128 18:56:29.160951 4721 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b4e7a8f9-bf9f-4093-86d5-b7f5f6d925d7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 18:56:29 crc kubenswrapper[4721]: I0128 18:56:29.746507 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-cfc4cd674-j5vfc"] Jan 28 18:56:29 crc kubenswrapper[4721]: W0128 18:56:29.758085 4721 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf8eb94ee_887b_48f2_808c_2b634928d62e.slice/crio-b8ba4716721b09ea1ec59f7cb205dd626a2e0d6b9f09878d834e2eecdfb2916a WatchSource:0}: Error finding container b8ba4716721b09ea1ec59f7cb205dd626a2e0d6b9f09878d834e2eecdfb2916a: Status 404 returned error can't find the container with id b8ba4716721b09ea1ec59f7cb205dd626a2e0d6b9f09878d834e2eecdfb2916a Jan 28 18:56:29 crc kubenswrapper[4721]: I0128 18:56:29.914793 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-storageinit-xpxnz" event={"ID":"429d95dc-53bf-4577-bd4a-3bd60e502895","Type":"ContainerStarted","Data":"1d4b415623058842553907d8381640f0930cc4c750bf6fa7037c1a2afc1fcfc0"} Jan 28 18:56:29 crc kubenswrapper[4721]: I0128 18:56:29.918516 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-cfc4cd674-j5vfc" event={"ID":"f8eb94ee-887b-48f2-808c-2b634928d62e","Type":"ContainerStarted","Data":"b8ba4716721b09ea1ec59f7cb205dd626a2e0d6b9f09878d834e2eecdfb2916a"} Jan 28 18:56:29 crc kubenswrapper[4721]: I0128 18:56:29.922969 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-787c88cc7-8262p" event={"ID":"778b4bd0-5ac3-4a89-b5c8-07f3f52e5804","Type":"ContainerStarted","Data":"e439b74e8a86de9a73145bb67c13e88eae934b3c5815870ff29787944896e0c4"} Jan 28 18:56:29 crc kubenswrapper[4721]: I0128 18:56:29.924515 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-787c88cc7-8262p" Jan 28 18:56:29 crc kubenswrapper[4721]: I0128 18:56:29.936885 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"506fcc96-87e5-4718-82bd-7ae3c4919ff5","Type":"ContainerStarted","Data":"ec7392092be0b4d7058afaaf2ccf94295382c3f46b771108433f33cec8eb6808"} Jan 28 18:56:29 crc kubenswrapper[4721]: I0128 18:56:29.948410 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-7855694cbf-6fbkc" event={"ID":"7ae24f09-1a88-4cd4-8959-76b14602141d","Type":"ContainerStarted","Data":"685df93e3e44eb8804228770e99f9c782fbdf952e1ee9d5a5b678eca8b67a544"} Jan 28 18:56:29 crc kubenswrapper[4721]: I0128 18:56:29.948471 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-7855694cbf-6fbkc" event={"ID":"7ae24f09-1a88-4cd4-8959-76b14602141d","Type":"ContainerStarted","Data":"10d0f6ce0e2e41e514bc6eb1a93f9d55bcb9d1f59320a03158b52ca8e8a445f3"} Jan 28 18:56:29 crc kubenswrapper[4721]: I0128 18:56:29.958380 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-5f8b48b786-fcdpx" event={"ID":"b950ce3b-33ce-40a9-9b76-45470b0917ec","Type":"ContainerStarted","Data":"109abbfbe0ee67dd4da90506bf54b84e8c8c46e8e5dab4a4c94a79b2e80ef5ff"} Jan 28 18:56:29 crc kubenswrapper[4721]: I0128 18:56:29.958434 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-5f8b48b786-fcdpx" event={"ID":"b950ce3b-33ce-40a9-9b76-45470b0917ec","Type":"ContainerStarted","Data":"454354e18796a8cda371e1f8ed45e6127dc550cc13c37d4f04598b52baf6411a"} Jan 28 18:56:29 crc kubenswrapper[4721]: I0128 18:56:29.958497 4721 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-spxh4" Jan 28 18:56:29 crc kubenswrapper[4721]: I0128 18:56:29.969717 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cloudkitty-storageinit-xpxnz" podStartSLOduration=4.9696890289999995 podStartE2EDuration="4.969689029s" podCreationTimestamp="2026-01-28 18:56:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:56:29.944611759 +0000 UTC m=+1355.669917319" watchObservedRunningTime="2026-01-28 18:56:29.969689029 +0000 UTC m=+1355.694994589" Jan 28 18:56:29 crc kubenswrapper[4721]: I0128 18:56:29.986294 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-787c88cc7-8262p" podStartSLOduration=5.986266161 podStartE2EDuration="5.986266161s" podCreationTimestamp="2026-01-28 18:56:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:56:29.966639663 +0000 UTC m=+1355.691945223" watchObservedRunningTime="2026-01-28 18:56:29.986266161 +0000 UTC m=+1355.711571721" Jan 28 18:56:30 crc kubenswrapper[4721]: I0128 18:56:30.007826 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-7855694cbf-6fbkc" podStartSLOduration=3.379844236 podStartE2EDuration="9.00780337s" podCreationTimestamp="2026-01-28 18:56:21 +0000 UTC" firstStartedPulling="2026-01-28 18:56:23.341627674 +0000 UTC m=+1349.066933234" lastFinishedPulling="2026-01-28 18:56:28.969586808 +0000 UTC m=+1354.694892368" observedRunningTime="2026-01-28 18:56:30.003714661 +0000 UTC m=+1355.729020231" watchObservedRunningTime="2026-01-28 18:56:30.00780337 +0000 UTC m=+1355.733108930" Jan 28 18:56:30 crc kubenswrapper[4721]: I0128 18:56:30.040904 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-5f8b48b786-fcdpx" podStartSLOduration=3.126524414 podStartE2EDuration="9.040872842s" podCreationTimestamp="2026-01-28 18:56:21 +0000 UTC" firstStartedPulling="2026-01-28 18:56:23.018272745 +0000 UTC m=+1348.743578305" lastFinishedPulling="2026-01-28 18:56:28.932621173 +0000 UTC m=+1354.657926733" observedRunningTime="2026-01-28 18:56:30.028801652 +0000 UTC m=+1355.754107232" watchObservedRunningTime="2026-01-28 18:56:30.040872842 +0000 UTC m=+1355.766178402" Jan 28 18:56:30 crc kubenswrapper[4721]: I0128 18:56:30.264307 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Jan 28 18:56:30 crc kubenswrapper[4721]: E0128 18:56:30.264826 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b4e7a8f9-bf9f-4093-86d5-b7f5f6d925d7" containerName="cinder-db-sync" Jan 28 18:56:30 crc kubenswrapper[4721]: I0128 18:56:30.264841 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="b4e7a8f9-bf9f-4093-86d5-b7f5f6d925d7" containerName="cinder-db-sync" Jan 28 18:56:30 crc kubenswrapper[4721]: I0128 18:56:30.265084 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="b4e7a8f9-bf9f-4093-86d5-b7f5f6d925d7" containerName="cinder-db-sync" Jan 28 18:56:30 crc kubenswrapper[4721]: I0128 18:56:30.266428 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 28 18:56:30 crc kubenswrapper[4721]: I0128 18:56:30.270946 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Jan 28 18:56:30 crc kubenswrapper[4721]: I0128 18:56:30.278844 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-2g228" Jan 28 18:56:30 crc kubenswrapper[4721]: I0128 18:56:30.279203 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Jan 28 18:56:30 crc kubenswrapper[4721]: I0128 18:56:30.279355 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Jan 28 18:56:30 crc kubenswrapper[4721]: I0128 18:56:30.402294 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 28 18:56:30 crc kubenswrapper[4721]: I0128 18:56:30.462499 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/29653552-40e5-4d60-9284-a92f22c88681-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"29653552-40e5-4d60-9284-a92f22c88681\") " pod="openstack/cinder-scheduler-0" Jan 28 18:56:30 crc kubenswrapper[4721]: I0128 18:56:30.462627 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2bgc4\" (UniqueName: \"kubernetes.io/projected/29653552-40e5-4d60-9284-a92f22c88681-kube-api-access-2bgc4\") pod \"cinder-scheduler-0\" (UID: \"29653552-40e5-4d60-9284-a92f22c88681\") " pod="openstack/cinder-scheduler-0" Jan 28 18:56:30 crc kubenswrapper[4721]: I0128 18:56:30.462783 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/29653552-40e5-4d60-9284-a92f22c88681-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"29653552-40e5-4d60-9284-a92f22c88681\") " pod="openstack/cinder-scheduler-0" Jan 28 18:56:30 crc kubenswrapper[4721]: I0128 18:56:30.462819 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/29653552-40e5-4d60-9284-a92f22c88681-scripts\") pod \"cinder-scheduler-0\" (UID: \"29653552-40e5-4d60-9284-a92f22c88681\") " pod="openstack/cinder-scheduler-0" Jan 28 18:56:30 crc kubenswrapper[4721]: I0128 18:56:30.462875 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/29653552-40e5-4d60-9284-a92f22c88681-config-data\") pod \"cinder-scheduler-0\" (UID: \"29653552-40e5-4d60-9284-a92f22c88681\") " pod="openstack/cinder-scheduler-0" Jan 28 18:56:30 crc kubenswrapper[4721]: I0128 18:56:30.463016 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/29653552-40e5-4d60-9284-a92f22c88681-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"29653552-40e5-4d60-9284-a92f22c88681\") " pod="openstack/cinder-scheduler-0" Jan 28 18:56:30 crc kubenswrapper[4721]: I0128 18:56:30.581587 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/29653552-40e5-4d60-9284-a92f22c88681-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"29653552-40e5-4d60-9284-a92f22c88681\") " 
pod="openstack/cinder-scheduler-0" Jan 28 18:56:30 crc kubenswrapper[4721]: I0128 18:56:30.581964 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2bgc4\" (UniqueName: \"kubernetes.io/projected/29653552-40e5-4d60-9284-a92f22c88681-kube-api-access-2bgc4\") pod \"cinder-scheduler-0\" (UID: \"29653552-40e5-4d60-9284-a92f22c88681\") " pod="openstack/cinder-scheduler-0" Jan 28 18:56:30 crc kubenswrapper[4721]: I0128 18:56:30.582029 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/29653552-40e5-4d60-9284-a92f22c88681-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"29653552-40e5-4d60-9284-a92f22c88681\") " pod="openstack/cinder-scheduler-0" Jan 28 18:56:30 crc kubenswrapper[4721]: I0128 18:56:30.582048 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/29653552-40e5-4d60-9284-a92f22c88681-scripts\") pod \"cinder-scheduler-0\" (UID: \"29653552-40e5-4d60-9284-a92f22c88681\") " pod="openstack/cinder-scheduler-0" Jan 28 18:56:30 crc kubenswrapper[4721]: I0128 18:56:30.582077 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/29653552-40e5-4d60-9284-a92f22c88681-config-data\") pod \"cinder-scheduler-0\" (UID: \"29653552-40e5-4d60-9284-a92f22c88681\") " pod="openstack/cinder-scheduler-0" Jan 28 18:56:30 crc kubenswrapper[4721]: I0128 18:56:30.582157 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/29653552-40e5-4d60-9284-a92f22c88681-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"29653552-40e5-4d60-9284-a92f22c88681\") " pod="openstack/cinder-scheduler-0" Jan 28 18:56:30 crc kubenswrapper[4721]: I0128 18:56:30.586430 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/29653552-40e5-4d60-9284-a92f22c88681-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"29653552-40e5-4d60-9284-a92f22c88681\") " pod="openstack/cinder-scheduler-0" Jan 28 18:56:30 crc kubenswrapper[4721]: I0128 18:56:30.599598 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/29653552-40e5-4d60-9284-a92f22c88681-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"29653552-40e5-4d60-9284-a92f22c88681\") " pod="openstack/cinder-scheduler-0" Jan 28 18:56:30 crc kubenswrapper[4721]: I0128 18:56:30.604299 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-qrqc4"] Jan 28 18:56:30 crc kubenswrapper[4721]: I0128 18:56:30.610879 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/29653552-40e5-4d60-9284-a92f22c88681-scripts\") pod \"cinder-scheduler-0\" (UID: \"29653552-40e5-4d60-9284-a92f22c88681\") " pod="openstack/cinder-scheduler-0" Jan 28 18:56:30 crc kubenswrapper[4721]: I0128 18:56:30.612957 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/29653552-40e5-4d60-9284-a92f22c88681-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"29653552-40e5-4d60-9284-a92f22c88681\") " pod="openstack/cinder-scheduler-0" Jan 28 18:56:30 crc kubenswrapper[4721]: I0128 18:56:30.613422 4721 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/29653552-40e5-4d60-9284-a92f22c88681-config-data\") pod \"cinder-scheduler-0\" (UID: \"29653552-40e5-4d60-9284-a92f22c88681\") " pod="openstack/cinder-scheduler-0" Jan 28 18:56:30 crc kubenswrapper[4721]: I0128 18:56:30.633269 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-4wcjf"] Jan 28 18:56:30 crc kubenswrapper[4721]: I0128 18:56:30.635518 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c9776ccc5-4wcjf" Jan 28 18:56:30 crc kubenswrapper[4721]: I0128 18:56:30.639846 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2bgc4\" (UniqueName: \"kubernetes.io/projected/29653552-40e5-4d60-9284-a92f22c88681-kube-api-access-2bgc4\") pod \"cinder-scheduler-0\" (UID: \"29653552-40e5-4d60-9284-a92f22c88681\") " pod="openstack/cinder-scheduler-0" Jan 28 18:56:30 crc kubenswrapper[4721]: I0128 18:56:30.658367 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-4wcjf"] Jan 28 18:56:30 crc kubenswrapper[4721]: I0128 18:56:30.698074 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Jan 28 18:56:30 crc kubenswrapper[4721]: I0128 18:56:30.700621 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 28 18:56:30 crc kubenswrapper[4721]: I0128 18:56:30.723769 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Jan 28 18:56:30 crc kubenswrapper[4721]: I0128 18:56:30.746844 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 28 18:56:30 crc kubenswrapper[4721]: I0128 18:56:30.801897 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/320cec56-f5f5-4a55-8592-b79f7e9c35b0-ovsdbserver-sb\") pod \"dnsmasq-dns-5c9776ccc5-4wcjf\" (UID: \"320cec56-f5f5-4a55-8592-b79f7e9c35b0\") " pod="openstack/dnsmasq-dns-5c9776ccc5-4wcjf" Jan 28 18:56:30 crc kubenswrapper[4721]: I0128 18:56:30.801987 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1e6ec8db-21ec-44ff-8a93-79273b776f47-scripts\") pod \"cinder-api-0\" (UID: \"1e6ec8db-21ec-44ff-8a93-79273b776f47\") " pod="openstack/cinder-api-0" Jan 28 18:56:30 crc kubenswrapper[4721]: I0128 18:56:30.802029 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1e6ec8db-21ec-44ff-8a93-79273b776f47-config-data-custom\") pod \"cinder-api-0\" (UID: \"1e6ec8db-21ec-44ff-8a93-79273b776f47\") " pod="openstack/cinder-api-0" Jan 28 18:56:30 crc kubenswrapper[4721]: I0128 18:56:30.802060 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/320cec56-f5f5-4a55-8592-b79f7e9c35b0-dns-swift-storage-0\") pod \"dnsmasq-dns-5c9776ccc5-4wcjf\" (UID: \"320cec56-f5f5-4a55-8592-b79f7e9c35b0\") " pod="openstack/dnsmasq-dns-5c9776ccc5-4wcjf" Jan 28 18:56:30 crc kubenswrapper[4721]: I0128 18:56:30.802189 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6g92c\" 
(UniqueName: \"kubernetes.io/projected/1e6ec8db-21ec-44ff-8a93-79273b776f47-kube-api-access-6g92c\") pod \"cinder-api-0\" (UID: \"1e6ec8db-21ec-44ff-8a93-79273b776f47\") " pod="openstack/cinder-api-0" Jan 28 18:56:30 crc kubenswrapper[4721]: I0128 18:56:30.802279 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/320cec56-f5f5-4a55-8592-b79f7e9c35b0-dns-svc\") pod \"dnsmasq-dns-5c9776ccc5-4wcjf\" (UID: \"320cec56-f5f5-4a55-8592-b79f7e9c35b0\") " pod="openstack/dnsmasq-dns-5c9776ccc5-4wcjf" Jan 28 18:56:30 crc kubenswrapper[4721]: I0128 18:56:30.802309 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/320cec56-f5f5-4a55-8592-b79f7e9c35b0-config\") pod \"dnsmasq-dns-5c9776ccc5-4wcjf\" (UID: \"320cec56-f5f5-4a55-8592-b79f7e9c35b0\") " pod="openstack/dnsmasq-dns-5c9776ccc5-4wcjf" Jan 28 18:56:30 crc kubenswrapper[4721]: I0128 18:56:30.802336 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1e6ec8db-21ec-44ff-8a93-79273b776f47-logs\") pod \"cinder-api-0\" (UID: \"1e6ec8db-21ec-44ff-8a93-79273b776f47\") " pod="openstack/cinder-api-0" Jan 28 18:56:30 crc kubenswrapper[4721]: I0128 18:56:30.802379 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e6ec8db-21ec-44ff-8a93-79273b776f47-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"1e6ec8db-21ec-44ff-8a93-79273b776f47\") " pod="openstack/cinder-api-0" Jan 28 18:56:30 crc kubenswrapper[4721]: I0128 18:56:30.802401 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mvk6k\" (UniqueName: \"kubernetes.io/projected/320cec56-f5f5-4a55-8592-b79f7e9c35b0-kube-api-access-mvk6k\") pod \"dnsmasq-dns-5c9776ccc5-4wcjf\" (UID: \"320cec56-f5f5-4a55-8592-b79f7e9c35b0\") " pod="openstack/dnsmasq-dns-5c9776ccc5-4wcjf" Jan 28 18:56:30 crc kubenswrapper[4721]: I0128 18:56:30.802429 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/1e6ec8db-21ec-44ff-8a93-79273b776f47-etc-machine-id\") pod \"cinder-api-0\" (UID: \"1e6ec8db-21ec-44ff-8a93-79273b776f47\") " pod="openstack/cinder-api-0" Jan 28 18:56:30 crc kubenswrapper[4721]: I0128 18:56:30.802481 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/320cec56-f5f5-4a55-8592-b79f7e9c35b0-ovsdbserver-nb\") pod \"dnsmasq-dns-5c9776ccc5-4wcjf\" (UID: \"320cec56-f5f5-4a55-8592-b79f7e9c35b0\") " pod="openstack/dnsmasq-dns-5c9776ccc5-4wcjf" Jan 28 18:56:30 crc kubenswrapper[4721]: I0128 18:56:30.802526 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1e6ec8db-21ec-44ff-8a93-79273b776f47-config-data\") pod \"cinder-api-0\" (UID: \"1e6ec8db-21ec-44ff-8a93-79273b776f47\") " pod="openstack/cinder-api-0" Jan 28 18:56:30 crc kubenswrapper[4721]: I0128 18:56:30.802747 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 28 18:56:30 crc kubenswrapper[4721]: I0128 18:56:30.905077 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/320cec56-f5f5-4a55-8592-b79f7e9c35b0-ovsdbserver-sb\") pod \"dnsmasq-dns-5c9776ccc5-4wcjf\" (UID: \"320cec56-f5f5-4a55-8592-b79f7e9c35b0\") " pod="openstack/dnsmasq-dns-5c9776ccc5-4wcjf" Jan 28 18:56:30 crc kubenswrapper[4721]: I0128 18:56:30.905216 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1e6ec8db-21ec-44ff-8a93-79273b776f47-scripts\") pod \"cinder-api-0\" (UID: \"1e6ec8db-21ec-44ff-8a93-79273b776f47\") " pod="openstack/cinder-api-0" Jan 28 18:56:30 crc kubenswrapper[4721]: I0128 18:56:30.905259 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1e6ec8db-21ec-44ff-8a93-79273b776f47-config-data-custom\") pod \"cinder-api-0\" (UID: \"1e6ec8db-21ec-44ff-8a93-79273b776f47\") " pod="openstack/cinder-api-0" Jan 28 18:56:30 crc kubenswrapper[4721]: I0128 18:56:30.905298 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/320cec56-f5f5-4a55-8592-b79f7e9c35b0-dns-swift-storage-0\") pod \"dnsmasq-dns-5c9776ccc5-4wcjf\" (UID: \"320cec56-f5f5-4a55-8592-b79f7e9c35b0\") " pod="openstack/dnsmasq-dns-5c9776ccc5-4wcjf" Jan 28 18:56:30 crc kubenswrapper[4721]: I0128 18:56:30.905403 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6g92c\" (UniqueName: \"kubernetes.io/projected/1e6ec8db-21ec-44ff-8a93-79273b776f47-kube-api-access-6g92c\") pod \"cinder-api-0\" (UID: \"1e6ec8db-21ec-44ff-8a93-79273b776f47\") " pod="openstack/cinder-api-0" Jan 28 18:56:30 crc kubenswrapper[4721]: I0128 18:56:30.905499 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/320cec56-f5f5-4a55-8592-b79f7e9c35b0-dns-svc\") pod \"dnsmasq-dns-5c9776ccc5-4wcjf\" (UID: \"320cec56-f5f5-4a55-8592-b79f7e9c35b0\") " pod="openstack/dnsmasq-dns-5c9776ccc5-4wcjf" Jan 28 18:56:30 crc kubenswrapper[4721]: I0128 18:56:30.905534 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/320cec56-f5f5-4a55-8592-b79f7e9c35b0-config\") pod \"dnsmasq-dns-5c9776ccc5-4wcjf\" (UID: \"320cec56-f5f5-4a55-8592-b79f7e9c35b0\") " pod="openstack/dnsmasq-dns-5c9776ccc5-4wcjf" Jan 28 18:56:30 crc kubenswrapper[4721]: I0128 18:56:30.905557 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1e6ec8db-21ec-44ff-8a93-79273b776f47-logs\") pod \"cinder-api-0\" (UID: \"1e6ec8db-21ec-44ff-8a93-79273b776f47\") " pod="openstack/cinder-api-0" Jan 28 18:56:30 crc kubenswrapper[4721]: I0128 18:56:30.905606 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mvk6k\" (UniqueName: \"kubernetes.io/projected/320cec56-f5f5-4a55-8592-b79f7e9c35b0-kube-api-access-mvk6k\") pod \"dnsmasq-dns-5c9776ccc5-4wcjf\" (UID: \"320cec56-f5f5-4a55-8592-b79f7e9c35b0\") " pod="openstack/dnsmasq-dns-5c9776ccc5-4wcjf" Jan 28 18:56:30 crc kubenswrapper[4721]: I0128 18:56:30.905632 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e6ec8db-21ec-44ff-8a93-79273b776f47-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"1e6ec8db-21ec-44ff-8a93-79273b776f47\") " pod="openstack/cinder-api-0" Jan 28 18:56:30 crc kubenswrapper[4721]: I0128 18:56:30.905704 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/1e6ec8db-21ec-44ff-8a93-79273b776f47-etc-machine-id\") pod \"cinder-api-0\" (UID: \"1e6ec8db-21ec-44ff-8a93-79273b776f47\") " pod="openstack/cinder-api-0" Jan 28 18:56:30 crc kubenswrapper[4721]: I0128 18:56:30.905777 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/320cec56-f5f5-4a55-8592-b79f7e9c35b0-ovsdbserver-nb\") pod \"dnsmasq-dns-5c9776ccc5-4wcjf\" (UID: \"320cec56-f5f5-4a55-8592-b79f7e9c35b0\") " pod="openstack/dnsmasq-dns-5c9776ccc5-4wcjf" Jan 28 18:56:30 crc kubenswrapper[4721]: I0128 18:56:30.905818 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1e6ec8db-21ec-44ff-8a93-79273b776f47-config-data\") pod \"cinder-api-0\" (UID: \"1e6ec8db-21ec-44ff-8a93-79273b776f47\") " pod="openstack/cinder-api-0" Jan 28 18:56:30 crc kubenswrapper[4721]: I0128 18:56:30.910263 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/320cec56-f5f5-4a55-8592-b79f7e9c35b0-dns-svc\") pod \"dnsmasq-dns-5c9776ccc5-4wcjf\" (UID: \"320cec56-f5f5-4a55-8592-b79f7e9c35b0\") " pod="openstack/dnsmasq-dns-5c9776ccc5-4wcjf" Jan 28 18:56:30 crc kubenswrapper[4721]: I0128 18:56:30.910932 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/320cec56-f5f5-4a55-8592-b79f7e9c35b0-ovsdbserver-sb\") pod \"dnsmasq-dns-5c9776ccc5-4wcjf\" (UID: \"320cec56-f5f5-4a55-8592-b79f7e9c35b0\") " pod="openstack/dnsmasq-dns-5c9776ccc5-4wcjf" Jan 28 18:56:30 crc kubenswrapper[4721]: I0128 18:56:30.916200 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/1e6ec8db-21ec-44ff-8a93-79273b776f47-etc-machine-id\") pod \"cinder-api-0\" (UID: \"1e6ec8db-21ec-44ff-8a93-79273b776f47\") " pod="openstack/cinder-api-0" Jan 28 18:56:30 crc kubenswrapper[4721]: I0128 18:56:30.916699 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1e6ec8db-21ec-44ff-8a93-79273b776f47-logs\") pod \"cinder-api-0\" (UID: \"1e6ec8db-21ec-44ff-8a93-79273b776f47\") " pod="openstack/cinder-api-0" Jan 28 18:56:30 crc kubenswrapper[4721]: I0128 18:56:30.917024 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/320cec56-f5f5-4a55-8592-b79f7e9c35b0-config\") pod \"dnsmasq-dns-5c9776ccc5-4wcjf\" (UID: \"320cec56-f5f5-4a55-8592-b79f7e9c35b0\") " pod="openstack/dnsmasq-dns-5c9776ccc5-4wcjf" Jan 28 18:56:30 crc kubenswrapper[4721]: I0128 18:56:30.918996 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/320cec56-f5f5-4a55-8592-b79f7e9c35b0-ovsdbserver-nb\") pod \"dnsmasq-dns-5c9776ccc5-4wcjf\" (UID: \"320cec56-f5f5-4a55-8592-b79f7e9c35b0\") " pod="openstack/dnsmasq-dns-5c9776ccc5-4wcjf" Jan 28 18:56:30 crc kubenswrapper[4721]: I0128 18:56:30.920026 4721 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/320cec56-f5f5-4a55-8592-b79f7e9c35b0-dns-swift-storage-0\") pod \"dnsmasq-dns-5c9776ccc5-4wcjf\" (UID: \"320cec56-f5f5-4a55-8592-b79f7e9c35b0\") " pod="openstack/dnsmasq-dns-5c9776ccc5-4wcjf" Jan 28 18:56:30 crc kubenswrapper[4721]: I0128 18:56:30.933276 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1e6ec8db-21ec-44ff-8a93-79273b776f47-scripts\") pod \"cinder-api-0\" (UID: \"1e6ec8db-21ec-44ff-8a93-79273b776f47\") " pod="openstack/cinder-api-0" Jan 28 18:56:30 crc kubenswrapper[4721]: I0128 18:56:30.934335 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1e6ec8db-21ec-44ff-8a93-79273b776f47-config-data\") pod \"cinder-api-0\" (UID: \"1e6ec8db-21ec-44ff-8a93-79273b776f47\") " pod="openstack/cinder-api-0" Jan 28 18:56:30 crc kubenswrapper[4721]: I0128 18:56:30.936086 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1e6ec8db-21ec-44ff-8a93-79273b776f47-config-data-custom\") pod \"cinder-api-0\" (UID: \"1e6ec8db-21ec-44ff-8a93-79273b776f47\") " pod="openstack/cinder-api-0" Jan 28 18:56:30 crc kubenswrapper[4721]: I0128 18:56:30.965635 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e6ec8db-21ec-44ff-8a93-79273b776f47-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"1e6ec8db-21ec-44ff-8a93-79273b776f47\") " pod="openstack/cinder-api-0" Jan 28 18:56:30 crc kubenswrapper[4721]: I0128 18:56:30.981006 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mvk6k\" (UniqueName: \"kubernetes.io/projected/320cec56-f5f5-4a55-8592-b79f7e9c35b0-kube-api-access-mvk6k\") pod \"dnsmasq-dns-5c9776ccc5-4wcjf\" (UID: \"320cec56-f5f5-4a55-8592-b79f7e9c35b0\") " pod="openstack/dnsmasq-dns-5c9776ccc5-4wcjf" Jan 28 18:56:30 crc kubenswrapper[4721]: I0128 18:56:30.981943 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6g92c\" (UniqueName: \"kubernetes.io/projected/1e6ec8db-21ec-44ff-8a93-79273b776f47-kube-api-access-6g92c\") pod \"cinder-api-0\" (UID: \"1e6ec8db-21ec-44ff-8a93-79273b776f47\") " pod="openstack/cinder-api-0" Jan 28 18:56:30 crc kubenswrapper[4721]: I0128 18:56:30.995512 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c9776ccc5-4wcjf" Jan 28 18:56:31 crc kubenswrapper[4721]: I0128 18:56:31.027468 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-cfc4cd674-j5vfc" event={"ID":"f8eb94ee-887b-48f2-808c-2b634928d62e","Type":"ContainerStarted","Data":"df2a27919e405c9e1f1c2983538ab4ed5d49954ef788feea77cd4bbf1d38cfd4"} Jan 28 18:56:31 crc kubenswrapper[4721]: I0128 18:56:31.031452 4721 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-85ff748b95-qrqc4" podUID="50a7b045-31f9-43aa-a484-aa27bdfb5147" containerName="dnsmasq-dns" containerID="cri-o://3e89b7cc268763bd98b6fbbae249247b26e70b929fd8497104e0e4053c3b8de5" gracePeriod=10 Jan 28 18:56:31 crc kubenswrapper[4721]: I0128 18:56:31.043714 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Jan 28 18:56:31 crc kubenswrapper[4721]: I0128 18:56:31.045193 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-85ff748b95-qrqc4" Jan 28 18:56:31 crc kubenswrapper[4721]: I0128 18:56:31.655133 4721 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 28 18:56:31 crc kubenswrapper[4721]: I0128 18:56:31.655755 4721 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 28 18:56:31 crc kubenswrapper[4721]: I0128 18:56:31.717163 4721 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 28 18:56:31 crc kubenswrapper[4721]: I0128 18:56:31.748680 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 28 18:56:31 crc kubenswrapper[4721]: I0128 18:56:31.764429 4721 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 28 18:56:32 crc kubenswrapper[4721]: I0128 18:56:32.064334 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"506fcc96-87e5-4718-82bd-7ae3c4919ff5","Type":"ContainerStarted","Data":"be9e1314767a57461e84cd26110b3ec5b09af4d8980f592fe0bc9973cf149856"} Jan 28 18:56:32 crc kubenswrapper[4721]: I0128 18:56:32.065396 4721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-85ff748b95-qrqc4" Jan 28 18:56:32 crc kubenswrapper[4721]: I0128 18:56:32.066604 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"29653552-40e5-4d60-9284-a92f22c88681","Type":"ContainerStarted","Data":"a464c2119b4462a9e5311541d6c1f8b31b74c42bbc44ca58fab6a6368aca29f7"} Jan 28 18:56:32 crc kubenswrapper[4721]: I0128 18:56:32.068240 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-cfc4cd674-j5vfc" event={"ID":"f8eb94ee-887b-48f2-808c-2b634928d62e","Type":"ContainerStarted","Data":"0cd3cd749fdf49319476efd56519bf5b27aef4ad220507891efe84797c0c767c"} Jan 28 18:56:32 crc kubenswrapper[4721]: I0128 18:56:32.069262 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-cfc4cd674-j5vfc" Jan 28 18:56:32 crc kubenswrapper[4721]: I0128 18:56:32.069298 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-cfc4cd674-j5vfc" Jan 28 18:56:32 crc kubenswrapper[4721]: I0128 18:56:32.070495 4721 generic.go:334] "Generic (PLEG): container finished" podID="50a7b045-31f9-43aa-a484-aa27bdfb5147" containerID="3e89b7cc268763bd98b6fbbae249247b26e70b929fd8497104e0e4053c3b8de5" exitCode=0 Jan 28 18:56:32 crc kubenswrapper[4721]: I0128 18:56:32.071072 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85ff748b95-qrqc4" event={"ID":"50a7b045-31f9-43aa-a484-aa27bdfb5147","Type":"ContainerDied","Data":"3e89b7cc268763bd98b6fbbae249247b26e70b929fd8497104e0e4053c3b8de5"} Jan 28 18:56:32 crc kubenswrapper[4721]: I0128 18:56:32.071326 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85ff748b95-qrqc4" event={"ID":"50a7b045-31f9-43aa-a484-aa27bdfb5147","Type":"ContainerDied","Data":"55e00e338e91278fb30b64148a3bc6cb12adb2e94701e67ce56758b48b57aea2"} Jan 28 18:56:32 crc kubenswrapper[4721]: I0128 18:56:32.071367 4721 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 28 18:56:32 crc kubenswrapper[4721]: I0128 18:56:32.071396 4721 scope.go:117] "RemoveContainer" containerID="3e89b7cc268763bd98b6fbbae249247b26e70b929fd8497104e0e4053c3b8de5" Jan 28 18:56:32 crc kubenswrapper[4721]: I0128 18:56:32.071758 4721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-85ff748b95-qrqc4" Jan 28 18:56:32 crc kubenswrapper[4721]: I0128 18:56:32.072145 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 28 18:56:32 crc kubenswrapper[4721]: I0128 18:56:32.155244 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-cfc4cd674-j5vfc" podStartSLOduration=4.15521521 podStartE2EDuration="4.15521521s" podCreationTimestamp="2026-01-28 18:56:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:56:32.149332635 +0000 UTC m=+1357.874638205" watchObservedRunningTime="2026-01-28 18:56:32.15521521 +0000 UTC m=+1357.880520770" Jan 28 18:56:32 crc kubenswrapper[4721]: I0128 18:56:32.179735 4721 scope.go:117] "RemoveContainer" containerID="07b1736b5c17aba41a6c66541ce35df59c9a656cd4bf85f23a4a0a925ba8897e" Jan 28 18:56:32 crc kubenswrapper[4721]: I0128 18:56:32.197906 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/50a7b045-31f9-43aa-a484-aa27bdfb5147-dns-swift-storage-0\") pod \"50a7b045-31f9-43aa-a484-aa27bdfb5147\" (UID: \"50a7b045-31f9-43aa-a484-aa27bdfb5147\") " Jan 28 18:56:32 crc kubenswrapper[4721]: I0128 18:56:32.198258 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/50a7b045-31f9-43aa-a484-aa27bdfb5147-dns-svc\") pod \"50a7b045-31f9-43aa-a484-aa27bdfb5147\" (UID: \"50a7b045-31f9-43aa-a484-aa27bdfb5147\") " Jan 28 18:56:32 crc kubenswrapper[4721]: I0128 18:56:32.198292 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/50a7b045-31f9-43aa-a484-aa27bdfb5147-ovsdbserver-sb\") pod \"50a7b045-31f9-43aa-a484-aa27bdfb5147\" (UID: \"50a7b045-31f9-43aa-a484-aa27bdfb5147\") " Jan 28 18:56:32 crc kubenswrapper[4721]: I0128 18:56:32.198326 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/50a7b045-31f9-43aa-a484-aa27bdfb5147-config\") pod \"50a7b045-31f9-43aa-a484-aa27bdfb5147\" (UID: \"50a7b045-31f9-43aa-a484-aa27bdfb5147\") " Jan 28 18:56:32 crc kubenswrapper[4721]: I0128 18:56:32.198351 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tz7rz\" (UniqueName: \"kubernetes.io/projected/50a7b045-31f9-43aa-a484-aa27bdfb5147-kube-api-access-tz7rz\") pod \"50a7b045-31f9-43aa-a484-aa27bdfb5147\" (UID: \"50a7b045-31f9-43aa-a484-aa27bdfb5147\") " Jan 28 18:56:32 crc kubenswrapper[4721]: I0128 18:56:32.198458 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/50a7b045-31f9-43aa-a484-aa27bdfb5147-ovsdbserver-nb\") pod \"50a7b045-31f9-43aa-a484-aa27bdfb5147\" (UID: \"50a7b045-31f9-43aa-a484-aa27bdfb5147\") " Jan 28 18:56:32 crc kubenswrapper[4721]: I0128 
18:56:32.277644 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/50a7b045-31f9-43aa-a484-aa27bdfb5147-kube-api-access-tz7rz" (OuterVolumeSpecName: "kube-api-access-tz7rz") pod "50a7b045-31f9-43aa-a484-aa27bdfb5147" (UID: "50a7b045-31f9-43aa-a484-aa27bdfb5147"). InnerVolumeSpecName "kube-api-access-tz7rz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:56:32 crc kubenswrapper[4721]: I0128 18:56:32.307723 4721 scope.go:117] "RemoveContainer" containerID="3e89b7cc268763bd98b6fbbae249247b26e70b929fd8497104e0e4053c3b8de5" Jan 28 18:56:32 crc kubenswrapper[4721]: I0128 18:56:32.309561 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tz7rz\" (UniqueName: \"kubernetes.io/projected/50a7b045-31f9-43aa-a484-aa27bdfb5147-kube-api-access-tz7rz\") on node \"crc\" DevicePath \"\"" Jan 28 18:56:32 crc kubenswrapper[4721]: E0128 18:56:32.309764 4721 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3e89b7cc268763bd98b6fbbae249247b26e70b929fd8497104e0e4053c3b8de5\": container with ID starting with 3e89b7cc268763bd98b6fbbae249247b26e70b929fd8497104e0e4053c3b8de5 not found: ID does not exist" containerID="3e89b7cc268763bd98b6fbbae249247b26e70b929fd8497104e0e4053c3b8de5" Jan 28 18:56:32 crc kubenswrapper[4721]: I0128 18:56:32.309808 4721 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3e89b7cc268763bd98b6fbbae249247b26e70b929fd8497104e0e4053c3b8de5"} err="failed to get container status \"3e89b7cc268763bd98b6fbbae249247b26e70b929fd8497104e0e4053c3b8de5\": rpc error: code = NotFound desc = could not find container \"3e89b7cc268763bd98b6fbbae249247b26e70b929fd8497104e0e4053c3b8de5\": container with ID starting with 3e89b7cc268763bd98b6fbbae249247b26e70b929fd8497104e0e4053c3b8de5 not found: ID does not exist" Jan 28 18:56:32 crc kubenswrapper[4721]: I0128 18:56:32.309840 4721 scope.go:117] "RemoveContainer" containerID="07b1736b5c17aba41a6c66541ce35df59c9a656cd4bf85f23a4a0a925ba8897e" Jan 28 18:56:32 crc kubenswrapper[4721]: E0128 18:56:32.311135 4721 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"07b1736b5c17aba41a6c66541ce35df59c9a656cd4bf85f23a4a0a925ba8897e\": container with ID starting with 07b1736b5c17aba41a6c66541ce35df59c9a656cd4bf85f23a4a0a925ba8897e not found: ID does not exist" containerID="07b1736b5c17aba41a6c66541ce35df59c9a656cd4bf85f23a4a0a925ba8897e" Jan 28 18:56:32 crc kubenswrapper[4721]: I0128 18:56:32.311195 4721 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"07b1736b5c17aba41a6c66541ce35df59c9a656cd4bf85f23a4a0a925ba8897e"} err="failed to get container status \"07b1736b5c17aba41a6c66541ce35df59c9a656cd4bf85f23a4a0a925ba8897e\": rpc error: code = NotFound desc = could not find container \"07b1736b5c17aba41a6c66541ce35df59c9a656cd4bf85f23a4a0a925ba8897e\": container with ID starting with 07b1736b5c17aba41a6c66541ce35df59c9a656cd4bf85f23a4a0a925ba8897e not found: ID does not exist" Jan 28 18:56:32 crc kubenswrapper[4721]: I0128 18:56:32.528668 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/50a7b045-31f9-43aa-a484-aa27bdfb5147-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "50a7b045-31f9-43aa-a484-aa27bdfb5147" (UID: "50a7b045-31f9-43aa-a484-aa27bdfb5147"). 
InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:56:32 crc kubenswrapper[4721]: I0128 18:56:32.563991 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/50a7b045-31f9-43aa-a484-aa27bdfb5147-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "50a7b045-31f9-43aa-a484-aa27bdfb5147" (UID: "50a7b045-31f9-43aa-a484-aa27bdfb5147"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:56:32 crc kubenswrapper[4721]: I0128 18:56:32.582537 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/50a7b045-31f9-43aa-a484-aa27bdfb5147-config" (OuterVolumeSpecName: "config") pod "50a7b045-31f9-43aa-a484-aa27bdfb5147" (UID: "50a7b045-31f9-43aa-a484-aa27bdfb5147"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:56:32 crc kubenswrapper[4721]: I0128 18:56:32.631378 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/50a7b045-31f9-43aa-a484-aa27bdfb5147-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "50a7b045-31f9-43aa-a484-aa27bdfb5147" (UID: "50a7b045-31f9-43aa-a484-aa27bdfb5147"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:56:32 crc kubenswrapper[4721]: I0128 18:56:32.642927 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 28 18:56:32 crc kubenswrapper[4721]: I0128 18:56:32.662696 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-4wcjf"] Jan 28 18:56:32 crc kubenswrapper[4721]: I0128 18:56:32.674421 4721 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/50a7b045-31f9-43aa-a484-aa27bdfb5147-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 28 18:56:32 crc kubenswrapper[4721]: I0128 18:56:32.674479 4721 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/50a7b045-31f9-43aa-a484-aa27bdfb5147-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 28 18:56:32 crc kubenswrapper[4721]: I0128 18:56:32.674495 4721 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/50a7b045-31f9-43aa-a484-aa27bdfb5147-config\") on node \"crc\" DevicePath \"\"" Jan 28 18:56:32 crc kubenswrapper[4721]: I0128 18:56:32.674513 4721 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/50a7b045-31f9-43aa-a484-aa27bdfb5147-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 28 18:56:32 crc kubenswrapper[4721]: I0128 18:56:32.703763 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/50a7b045-31f9-43aa-a484-aa27bdfb5147-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "50a7b045-31f9-43aa-a484-aa27bdfb5147" (UID: "50a7b045-31f9-43aa-a484-aa27bdfb5147"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:56:32 crc kubenswrapper[4721]: I0128 18:56:32.785229 4721 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/50a7b045-31f9-43aa-a484-aa27bdfb5147-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 28 18:56:33 crc kubenswrapper[4721]: I0128 18:56:33.170487 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"1e6ec8db-21ec-44ff-8a93-79273b776f47","Type":"ContainerStarted","Data":"970c61dcd3ebdb275a566b8580f1b76b67762866821746c7738f127aff1e4ecf"} Jan 28 18:56:33 crc kubenswrapper[4721]: I0128 18:56:33.179807 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-4wcjf" event={"ID":"320cec56-f5f5-4a55-8592-b79f7e9c35b0","Type":"ContainerStarted","Data":"9763f38d2beeeac976c635fa072523dd64bdb06a4c59e7233e2bf819a2b16f30"} Jan 28 18:56:33 crc kubenswrapper[4721]: I0128 18:56:33.235801 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"506fcc96-87e5-4718-82bd-7ae3c4919ff5","Type":"ContainerStarted","Data":"abf850f4ae22c9fec3f4c282ab4bd56934eebcea0df9178d58052ccafdc38801"} Jan 28 18:56:33 crc kubenswrapper[4721]: I0128 18:56:33.363254 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-qrqc4"] Jan 28 18:56:33 crc kubenswrapper[4721]: I0128 18:56:33.374797 4721 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-qrqc4"] Jan 28 18:56:33 crc kubenswrapper[4721]: I0128 18:56:33.558340 4721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="50a7b045-31f9-43aa-a484-aa27bdfb5147" path="/var/lib/kubelet/pods/50a7b045-31f9-43aa-a484-aa27bdfb5147/volumes" Jan 28 18:56:34 crc kubenswrapper[4721]: I0128 18:56:34.265637 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Jan 28 18:56:34 crc kubenswrapper[4721]: I0128 18:56:34.280787 4721 generic.go:334] "Generic (PLEG): container finished" podID="320cec56-f5f5-4a55-8592-b79f7e9c35b0" containerID="a3a2f2047ab7a85067112a07348bcafd7bb095cef436c0c9b80c00864c6ef22f" exitCode=0 Jan 28 18:56:34 crc kubenswrapper[4721]: I0128 18:56:34.280895 4721 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 28 18:56:34 crc kubenswrapper[4721]: I0128 18:56:34.280906 4721 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 28 18:56:34 crc kubenswrapper[4721]: I0128 18:56:34.282254 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-4wcjf" event={"ID":"320cec56-f5f5-4a55-8592-b79f7e9c35b0","Type":"ContainerDied","Data":"a3a2f2047ab7a85067112a07348bcafd7bb095cef436c0c9b80c00864c6ef22f"} Jan 28 18:56:35 crc kubenswrapper[4721]: I0128 18:56:35.308552 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"1e6ec8db-21ec-44ff-8a93-79273b776f47","Type":"ContainerStarted","Data":"e7bcf213c92dfd423efc8b73c1b108a3f1d7db9dd4a2d3cc702d448cbe5ff970"} Jan 28 18:56:35 crc kubenswrapper[4721]: I0128 18:56:35.332226 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"29653552-40e5-4d60-9284-a92f22c88681","Type":"ContainerStarted","Data":"e9c945db0336ce5c630a82ac18e793d400662138c72e7bd7ebdfa73de14bec08"} Jan 28 18:56:35 crc kubenswrapper[4721]: I0128 18:56:35.346853 4721 generic.go:334] "Generic (PLEG): container finished" 
podID="429d95dc-53bf-4577-bd4a-3bd60e502895" containerID="1d4b415623058842553907d8381640f0930cc4c750bf6fa7037c1a2afc1fcfc0" exitCode=0 Jan 28 18:56:35 crc kubenswrapper[4721]: I0128 18:56:35.346945 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-storageinit-xpxnz" event={"ID":"429d95dc-53bf-4577-bd4a-3bd60e502895","Type":"ContainerDied","Data":"1d4b415623058842553907d8381640f0930cc4c750bf6fa7037c1a2afc1fcfc0"} Jan 28 18:56:35 crc kubenswrapper[4721]: I0128 18:56:35.359385 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-4wcjf" event={"ID":"320cec56-f5f5-4a55-8592-b79f7e9c35b0","Type":"ContainerStarted","Data":"27db63adec4a8e9d38421d1e8e87f7f08eadb03f90cf9d91b34eba5faaa80514"} Jan 28 18:56:35 crc kubenswrapper[4721]: I0128 18:56:35.360355 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5c9776ccc5-4wcjf" Jan 28 18:56:35 crc kubenswrapper[4721]: I0128 18:56:35.367950 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 28 18:56:35 crc kubenswrapper[4721]: I0128 18:56:35.433652 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5c9776ccc5-4wcjf" podStartSLOduration=5.43363329 podStartE2EDuration="5.43363329s" podCreationTimestamp="2026-01-28 18:56:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:56:35.427840186 +0000 UTC m=+1361.153145766" watchObservedRunningTime="2026-01-28 18:56:35.43363329 +0000 UTC m=+1361.158938850" Jan 28 18:56:35 crc kubenswrapper[4721]: I0128 18:56:35.484797 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.977981785 podStartE2EDuration="9.48476481s" podCreationTimestamp="2026-01-28 18:56:26 +0000 UTC" firstStartedPulling="2026-01-28 18:56:27.93847993 +0000 UTC m=+1353.663785490" lastFinishedPulling="2026-01-28 18:56:34.445262955 +0000 UTC m=+1360.170568515" observedRunningTime="2026-01-28 18:56:35.458600066 +0000 UTC m=+1361.183905626" watchObservedRunningTime="2026-01-28 18:56:35.48476481 +0000 UTC m=+1361.210070370" Jan 28 18:56:35 crc kubenswrapper[4721]: I0128 18:56:35.924493 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 28 18:56:36 crc kubenswrapper[4721]: I0128 18:56:36.383397 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"1e6ec8db-21ec-44ff-8a93-79273b776f47","Type":"ContainerStarted","Data":"99af7279b222155ef0049c54949a632f1a86434ee71d0769f22951c8b128426e"} Jan 28 18:56:36 crc kubenswrapper[4721]: I0128 18:56:36.383600 4721 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="1e6ec8db-21ec-44ff-8a93-79273b776f47" containerName="cinder-api-log" containerID="cri-o://e7bcf213c92dfd423efc8b73c1b108a3f1d7db9dd4a2d3cc702d448cbe5ff970" gracePeriod=30 Jan 28 18:56:36 crc kubenswrapper[4721]: I0128 18:56:36.384227 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Jan 28 18:56:36 crc kubenswrapper[4721]: I0128 18:56:36.384614 4721 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="1e6ec8db-21ec-44ff-8a93-79273b776f47" containerName="cinder-api" 
containerID="cri-o://99af7279b222155ef0049c54949a632f1a86434ee71d0769f22951c8b128426e" gracePeriod=30 Jan 28 18:56:36 crc kubenswrapper[4721]: I0128 18:56:36.394616 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"29653552-40e5-4d60-9284-a92f22c88681","Type":"ContainerStarted","Data":"629bcc53a14f80007a8e58b3c042d7d177f9eb13b91e41b555b1c7436e637172"} Jan 28 18:56:36 crc kubenswrapper[4721]: I0128 18:56:36.402348 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"506fcc96-87e5-4718-82bd-7ae3c4919ff5","Type":"ContainerStarted","Data":"f79cfc617ae7daaa34901eda520de9fb2b97517065120855411013e0bd9d6d63"} Jan 28 18:56:36 crc kubenswrapper[4721]: I0128 18:56:36.421547 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=6.421519406 podStartE2EDuration="6.421519406s" podCreationTimestamp="2026-01-28 18:56:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:56:36.418664525 +0000 UTC m=+1362.143970085" watchObservedRunningTime="2026-01-28 18:56:36.421519406 +0000 UTC m=+1362.146824966" Jan 28 18:56:36 crc kubenswrapper[4721]: I0128 18:56:36.456505 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=5.321724317 podStartE2EDuration="6.456485671s" podCreationTimestamp="2026-01-28 18:56:30 +0000 UTC" firstStartedPulling="2026-01-28 18:56:31.905661987 +0000 UTC m=+1357.630967547" lastFinishedPulling="2026-01-28 18:56:33.040423331 +0000 UTC m=+1358.765728901" observedRunningTime="2026-01-28 18:56:36.451465171 +0000 UTC m=+1362.176770731" watchObservedRunningTime="2026-01-28 18:56:36.456485671 +0000 UTC m=+1362.181791231" Jan 28 18:56:36 crc kubenswrapper[4721]: I0128 18:56:36.587491 4721 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/barbican-api-84bf7c754-8m5d5" podUID="15e79c89-d076-4174-b7d4-87295d74b71d" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.180:9311/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 28 18:56:36 crc kubenswrapper[4721]: I0128 18:56:36.588108 4721 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/barbican-api-84bf7c754-8m5d5" podUID="15e79c89-d076-4174-b7d4-87295d74b71d" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.180:9311/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 28 18:56:37 crc kubenswrapper[4721]: I0128 18:56:37.281754 4721 util.go:48] "No ready sandbox for pod can be found. 
Jan 28 18:56:37 crc kubenswrapper[4721]: I0128 18:56:37.389490 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/429d95dc-53bf-4577-bd4a-3bd60e502895-combined-ca-bundle\") pod \"429d95dc-53bf-4577-bd4a-3bd60e502895\" (UID: \"429d95dc-53bf-4577-bd4a-3bd60e502895\") "
Jan 28 18:56:37 crc kubenswrapper[4721]: I0128 18:56:37.389828 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/projected/429d95dc-53bf-4577-bd4a-3bd60e502895-certs\") pod \"429d95dc-53bf-4577-bd4a-3bd60e502895\" (UID: \"429d95dc-53bf-4577-bd4a-3bd60e502895\") "
Jan 28 18:56:37 crc kubenswrapper[4721]: I0128 18:56:37.389869 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rhmmw\" (UniqueName: \"kubernetes.io/projected/429d95dc-53bf-4577-bd4a-3bd60e502895-kube-api-access-rhmmw\") pod \"429d95dc-53bf-4577-bd4a-3bd60e502895\" (UID: \"429d95dc-53bf-4577-bd4a-3bd60e502895\") "
Jan 28 18:56:37 crc kubenswrapper[4721]: I0128 18:56:37.389892 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/429d95dc-53bf-4577-bd4a-3bd60e502895-config-data\") pod \"429d95dc-53bf-4577-bd4a-3bd60e502895\" (UID: \"429d95dc-53bf-4577-bd4a-3bd60e502895\") "
Jan 28 18:56:37 crc kubenswrapper[4721]: I0128 18:56:37.389939 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/429d95dc-53bf-4577-bd4a-3bd60e502895-scripts\") pod \"429d95dc-53bf-4577-bd4a-3bd60e502895\" (UID: \"429d95dc-53bf-4577-bd4a-3bd60e502895\") "
Jan 28 18:56:37 crc kubenswrapper[4721]: I0128 18:56:37.393063 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-84bf7c754-8m5d5"
Jan 28 18:56:37 crc kubenswrapper[4721]: I0128 18:56:37.399556 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/429d95dc-53bf-4577-bd4a-3bd60e502895-kube-api-access-rhmmw" (OuterVolumeSpecName: "kube-api-access-rhmmw") pod "429d95dc-53bf-4577-bd4a-3bd60e502895" (UID: "429d95dc-53bf-4577-bd4a-3bd60e502895"). InnerVolumeSpecName "kube-api-access-rhmmw". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 28 18:56:37 crc kubenswrapper[4721]: I0128 18:56:37.419413 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/429d95dc-53bf-4577-bd4a-3bd60e502895-scripts" (OuterVolumeSpecName: "scripts") pod "429d95dc-53bf-4577-bd4a-3bd60e502895" (UID: "429d95dc-53bf-4577-bd4a-3bd60e502895"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 28 18:56:37 crc kubenswrapper[4721]: I0128 18:56:37.461766 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/429d95dc-53bf-4577-bd4a-3bd60e502895-certs" (OuterVolumeSpecName: "certs") pod "429d95dc-53bf-4577-bd4a-3bd60e502895" (UID: "429d95dc-53bf-4577-bd4a-3bd60e502895"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 28 18:56:37 crc kubenswrapper[4721]: I0128 18:56:37.476441 4721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-storageinit-xpxnz"
Jan 28 18:56:37 crc kubenswrapper[4721]: I0128 18:56:37.476708 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-storageinit-xpxnz" event={"ID":"429d95dc-53bf-4577-bd4a-3bd60e502895","Type":"ContainerDied","Data":"b8f1e15b11c93e5aa72e1dfa91de30265178b65fa15e9d4fb1eb38b15cb54519"}
Jan 28 18:56:37 crc kubenswrapper[4721]: I0128 18:56:37.476743 4721 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b8f1e15b11c93e5aa72e1dfa91de30265178b65fa15e9d4fb1eb38b15cb54519"
Jan 28 18:56:37 crc kubenswrapper[4721]: I0128 18:56:37.486233 4721 generic.go:334] "Generic (PLEG): container finished" podID="1e6ec8db-21ec-44ff-8a93-79273b776f47" containerID="e7bcf213c92dfd423efc8b73c1b108a3f1d7db9dd4a2d3cc702d448cbe5ff970" exitCode=143
Jan 28 18:56:37 crc kubenswrapper[4721]: I0128 18:56:37.486300 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"1e6ec8db-21ec-44ff-8a93-79273b776f47","Type":"ContainerDied","Data":"e7bcf213c92dfd423efc8b73c1b108a3f1d7db9dd4a2d3cc702d448cbe5ff970"}
Jan 28 18:56:37 crc kubenswrapper[4721]: I0128 18:56:37.492996 4721 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/projected/429d95dc-53bf-4577-bd4a-3bd60e502895-certs\") on node \"crc\" DevicePath \"\""
Jan 28 18:56:37 crc kubenswrapper[4721]: I0128 18:56:37.493030 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rhmmw\" (UniqueName: \"kubernetes.io/projected/429d95dc-53bf-4577-bd4a-3bd60e502895-kube-api-access-rhmmw\") on node \"crc\" DevicePath \"\""
Jan 28 18:56:37 crc kubenswrapper[4721]: I0128 18:56:37.493045 4721 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/429d95dc-53bf-4577-bd4a-3bd60e502895-scripts\") on node \"crc\" DevicePath \"\""
Jan 28 18:56:37 crc kubenswrapper[4721]: I0128 18:56:37.500542 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/429d95dc-53bf-4577-bd4a-3bd60e502895-config-data" (OuterVolumeSpecName: "config-data") pod "429d95dc-53bf-4577-bd4a-3bd60e502895" (UID: "429d95dc-53bf-4577-bd4a-3bd60e502895"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 28 18:56:37 crc kubenswrapper[4721]: I0128 18:56:37.522867 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/429d95dc-53bf-4577-bd4a-3bd60e502895-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "429d95dc-53bf-4577-bd4a-3bd60e502895" (UID: "429d95dc-53bf-4577-bd4a-3bd60e502895"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:56:37 crc kubenswrapper[4721]: I0128 18:56:37.551709 4721 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-84bf7c754-8m5d5" podUID="15e79c89-d076-4174-b7d4-87295d74b71d" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.180:9311/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 28 18:56:37 crc kubenswrapper[4721]: I0128 18:56:37.596494 4721 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/429d95dc-53bf-4577-bd4a-3bd60e502895-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 18:56:37 crc kubenswrapper[4721]: I0128 18:56:37.596541 4721 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/429d95dc-53bf-4577-bd4a-3bd60e502895-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 18:56:37 crc kubenswrapper[4721]: I0128 18:56:37.747256 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cloudkitty-proc-0"] Jan 28 18:56:37 crc kubenswrapper[4721]: E0128 18:56:37.747921 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="429d95dc-53bf-4577-bd4a-3bd60e502895" containerName="cloudkitty-storageinit" Jan 28 18:56:37 crc kubenswrapper[4721]: I0128 18:56:37.747947 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="429d95dc-53bf-4577-bd4a-3bd60e502895" containerName="cloudkitty-storageinit" Jan 28 18:56:37 crc kubenswrapper[4721]: E0128 18:56:37.747975 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="50a7b045-31f9-43aa-a484-aa27bdfb5147" containerName="dnsmasq-dns" Jan 28 18:56:37 crc kubenswrapper[4721]: I0128 18:56:37.747985 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="50a7b045-31f9-43aa-a484-aa27bdfb5147" containerName="dnsmasq-dns" Jan 28 18:56:37 crc kubenswrapper[4721]: E0128 18:56:37.748000 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="50a7b045-31f9-43aa-a484-aa27bdfb5147" containerName="init" Jan 28 18:56:37 crc kubenswrapper[4721]: I0128 18:56:37.748009 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="50a7b045-31f9-43aa-a484-aa27bdfb5147" containerName="init" Jan 28 18:56:37 crc kubenswrapper[4721]: I0128 18:56:37.748244 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="429d95dc-53bf-4577-bd4a-3bd60e502895" containerName="cloudkitty-storageinit" Jan 28 18:56:37 crc kubenswrapper[4721]: I0128 18:56:37.748263 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="50a7b045-31f9-43aa-a484-aa27bdfb5147" containerName="dnsmasq-dns" Jan 28 18:56:37 crc kubenswrapper[4721]: I0128 18:56:37.749092 4721 util.go:30] "No sandbox for pod can be found. 
Jan 28 18:56:37 crc kubenswrapper[4721]: I0128 18:56:37.755131 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-proc-config-data"
Jan 28 18:56:37 crc kubenswrapper[4721]: I0128 18:56:37.786709 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-proc-0"]
Jan 28 18:56:37 crc kubenswrapper[4721]: I0128 18:56:37.807546 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cb2f69be-cd3d-44ef-80af-f0d4ac766305-combined-ca-bundle\") pod \"cloudkitty-proc-0\" (UID: \"cb2f69be-cd3d-44ef-80af-f0d4ac766305\") " pod="openstack/cloudkitty-proc-0"
Jan 28 18:56:37 crc kubenswrapper[4721]: I0128 18:56:37.816825 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cb2f69be-cd3d-44ef-80af-f0d4ac766305-scripts\") pod \"cloudkitty-proc-0\" (UID: \"cb2f69be-cd3d-44ef-80af-f0d4ac766305\") " pod="openstack/cloudkitty-proc-0"
Jan 28 18:56:37 crc kubenswrapper[4721]: I0128 18:56:37.816981 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/cb2f69be-cd3d-44ef-80af-f0d4ac766305-config-data-custom\") pod \"cloudkitty-proc-0\" (UID: \"cb2f69be-cd3d-44ef-80af-f0d4ac766305\") " pod="openstack/cloudkitty-proc-0"
Jan 28 18:56:37 crc kubenswrapper[4721]: I0128 18:56:37.817011 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/projected/cb2f69be-cd3d-44ef-80af-f0d4ac766305-certs\") pod \"cloudkitty-proc-0\" (UID: \"cb2f69be-cd3d-44ef-80af-f0d4ac766305\") " pod="openstack/cloudkitty-proc-0"
Jan 28 18:56:37 crc kubenswrapper[4721]: I0128 18:56:37.823812 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d2njz\" (UniqueName: \"kubernetes.io/projected/cb2f69be-cd3d-44ef-80af-f0d4ac766305-kube-api-access-d2njz\") pod \"cloudkitty-proc-0\" (UID: \"cb2f69be-cd3d-44ef-80af-f0d4ac766305\") " pod="openstack/cloudkitty-proc-0"
Jan 28 18:56:37 crc kubenswrapper[4721]: I0128 18:56:37.823932 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cb2f69be-cd3d-44ef-80af-f0d4ac766305-config-data\") pod \"cloudkitty-proc-0\" (UID: \"cb2f69be-cd3d-44ef-80af-f0d4ac766305\") " pod="openstack/cloudkitty-proc-0"
Jan 28 18:56:37 crc kubenswrapper[4721]: I0128 18:56:37.861570 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-4wcjf"]
Jan 28 18:56:37 crc kubenswrapper[4721]: I0128 18:56:37.862376 4721 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5c9776ccc5-4wcjf" podUID="320cec56-f5f5-4a55-8592-b79f7e9c35b0" containerName="dnsmasq-dns" containerID="cri-o://27db63adec4a8e9d38421d1e8e87f7f08eadb03f90cf9d91b34eba5faaa80514" gracePeriod=10
Jan 28 18:56:37 crc kubenswrapper[4721]: I0128 18:56:37.932711 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d2njz\" (UniqueName: \"kubernetes.io/projected/cb2f69be-cd3d-44ef-80af-f0d4ac766305-kube-api-access-d2njz\") pod \"cloudkitty-proc-0\" (UID: \"cb2f69be-cd3d-44ef-80af-f0d4ac766305\") " pod="openstack/cloudkitty-proc-0"
pod="openstack/cloudkitty-proc-0" Jan 28 18:56:37 crc kubenswrapper[4721]: I0128 18:56:37.933044 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cb2f69be-cd3d-44ef-80af-f0d4ac766305-config-data\") pod \"cloudkitty-proc-0\" (UID: \"cb2f69be-cd3d-44ef-80af-f0d4ac766305\") " pod="openstack/cloudkitty-proc-0" Jan 28 18:56:37 crc kubenswrapper[4721]: I0128 18:56:37.933088 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cb2f69be-cd3d-44ef-80af-f0d4ac766305-combined-ca-bundle\") pod \"cloudkitty-proc-0\" (UID: \"cb2f69be-cd3d-44ef-80af-f0d4ac766305\") " pod="openstack/cloudkitty-proc-0" Jan 28 18:56:37 crc kubenswrapper[4721]: I0128 18:56:37.933121 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cb2f69be-cd3d-44ef-80af-f0d4ac766305-scripts\") pod \"cloudkitty-proc-0\" (UID: \"cb2f69be-cd3d-44ef-80af-f0d4ac766305\") " pod="openstack/cloudkitty-proc-0" Jan 28 18:56:37 crc kubenswrapper[4721]: I0128 18:56:37.933984 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/cb2f69be-cd3d-44ef-80af-f0d4ac766305-config-data-custom\") pod \"cloudkitty-proc-0\" (UID: \"cb2f69be-cd3d-44ef-80af-f0d4ac766305\") " pod="openstack/cloudkitty-proc-0" Jan 28 18:56:37 crc kubenswrapper[4721]: I0128 18:56:37.934391 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/projected/cb2f69be-cd3d-44ef-80af-f0d4ac766305-certs\") pod \"cloudkitty-proc-0\" (UID: \"cb2f69be-cd3d-44ef-80af-f0d4ac766305\") " pod="openstack/cloudkitty-proc-0" Jan 28 18:56:37 crc kubenswrapper[4721]: I0128 18:56:37.958356 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/cb2f69be-cd3d-44ef-80af-f0d4ac766305-config-data-custom\") pod \"cloudkitty-proc-0\" (UID: \"cb2f69be-cd3d-44ef-80af-f0d4ac766305\") " pod="openstack/cloudkitty-proc-0" Jan 28 18:56:37 crc kubenswrapper[4721]: I0128 18:56:37.958878 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cb2f69be-cd3d-44ef-80af-f0d4ac766305-scripts\") pod \"cloudkitty-proc-0\" (UID: \"cb2f69be-cd3d-44ef-80af-f0d4ac766305\") " pod="openstack/cloudkitty-proc-0" Jan 28 18:56:37 crc kubenswrapper[4721]: I0128 18:56:37.959558 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-67bdc55879-lkzxk"] Jan 28 18:56:37 crc kubenswrapper[4721]: I0128 18:56:37.961225 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d2njz\" (UniqueName: \"kubernetes.io/projected/cb2f69be-cd3d-44ef-80af-f0d4ac766305-kube-api-access-d2njz\") pod \"cloudkitty-proc-0\" (UID: \"cb2f69be-cd3d-44ef-80af-f0d4ac766305\") " pod="openstack/cloudkitty-proc-0" Jan 28 18:56:37 crc kubenswrapper[4721]: I0128 18:56:37.961780 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/projected/cb2f69be-cd3d-44ef-80af-f0d4ac766305-certs\") pod \"cloudkitty-proc-0\" (UID: \"cb2f69be-cd3d-44ef-80af-f0d4ac766305\") " pod="openstack/cloudkitty-proc-0" Jan 28 18:56:37 crc kubenswrapper[4721]: I0128 18:56:37.964209 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/cb2f69be-cd3d-44ef-80af-f0d4ac766305-config-data\") pod \"cloudkitty-proc-0\" (UID: \"cb2f69be-cd3d-44ef-80af-f0d4ac766305\") " pod="openstack/cloudkitty-proc-0" Jan 28 18:56:37 crc kubenswrapper[4721]: I0128 18:56:37.969460 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cb2f69be-cd3d-44ef-80af-f0d4ac766305-combined-ca-bundle\") pod \"cloudkitty-proc-0\" (UID: \"cb2f69be-cd3d-44ef-80af-f0d4ac766305\") " pod="openstack/cloudkitty-proc-0" Jan 28 18:56:37 crc kubenswrapper[4721]: I0128 18:56:37.991993 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-67bdc55879-lkzxk" Jan 28 18:56:38 crc kubenswrapper[4721]: I0128 18:56:38.039259 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9b025d86-6a2c-457b-a88d-b697dabc2d7b-dns-swift-storage-0\") pod \"dnsmasq-dns-67bdc55879-lkzxk\" (UID: \"9b025d86-6a2c-457b-a88d-b697dabc2d7b\") " pod="openstack/dnsmasq-dns-67bdc55879-lkzxk" Jan 28 18:56:38 crc kubenswrapper[4721]: I0128 18:56:38.039344 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9b025d86-6a2c-457b-a88d-b697dabc2d7b-dns-svc\") pod \"dnsmasq-dns-67bdc55879-lkzxk\" (UID: \"9b025d86-6a2c-457b-a88d-b697dabc2d7b\") " pod="openstack/dnsmasq-dns-67bdc55879-lkzxk" Jan 28 18:56:38 crc kubenswrapper[4721]: I0128 18:56:38.039492 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9b025d86-6a2c-457b-a88d-b697dabc2d7b-config\") pod \"dnsmasq-dns-67bdc55879-lkzxk\" (UID: \"9b025d86-6a2c-457b-a88d-b697dabc2d7b\") " pod="openstack/dnsmasq-dns-67bdc55879-lkzxk" Jan 28 18:56:38 crc kubenswrapper[4721]: I0128 18:56:38.039519 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9b025d86-6a2c-457b-a88d-b697dabc2d7b-ovsdbserver-sb\") pod \"dnsmasq-dns-67bdc55879-lkzxk\" (UID: \"9b025d86-6a2c-457b-a88d-b697dabc2d7b\") " pod="openstack/dnsmasq-dns-67bdc55879-lkzxk" Jan 28 18:56:38 crc kubenswrapper[4721]: I0128 18:56:38.039572 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jlr2g\" (UniqueName: \"kubernetes.io/projected/9b025d86-6a2c-457b-a88d-b697dabc2d7b-kube-api-access-jlr2g\") pod \"dnsmasq-dns-67bdc55879-lkzxk\" (UID: \"9b025d86-6a2c-457b-a88d-b697dabc2d7b\") " pod="openstack/dnsmasq-dns-67bdc55879-lkzxk" Jan 28 18:56:38 crc kubenswrapper[4721]: I0128 18:56:38.039597 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9b025d86-6a2c-457b-a88d-b697dabc2d7b-ovsdbserver-nb\") pod \"dnsmasq-dns-67bdc55879-lkzxk\" (UID: \"9b025d86-6a2c-457b-a88d-b697dabc2d7b\") " pod="openstack/dnsmasq-dns-67bdc55879-lkzxk" Jan 28 18:56:38 crc kubenswrapper[4721]: I0128 18:56:38.081698 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-67bdc55879-lkzxk"] Jan 28 18:56:38 crc kubenswrapper[4721]: I0128 18:56:38.113095 4721 util.go:30] "No sandbox for pod can be found. 
Jan 28 18:56:38 crc kubenswrapper[4721]: I0128 18:56:38.141264 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9b025d86-6a2c-457b-a88d-b697dabc2d7b-config\") pod \"dnsmasq-dns-67bdc55879-lkzxk\" (UID: \"9b025d86-6a2c-457b-a88d-b697dabc2d7b\") " pod="openstack/dnsmasq-dns-67bdc55879-lkzxk"
Jan 28 18:56:38 crc kubenswrapper[4721]: I0128 18:56:38.141302 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9b025d86-6a2c-457b-a88d-b697dabc2d7b-ovsdbserver-sb\") pod \"dnsmasq-dns-67bdc55879-lkzxk\" (UID: \"9b025d86-6a2c-457b-a88d-b697dabc2d7b\") " pod="openstack/dnsmasq-dns-67bdc55879-lkzxk"
Jan 28 18:56:38 crc kubenswrapper[4721]: I0128 18:56:38.141351 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jlr2g\" (UniqueName: \"kubernetes.io/projected/9b025d86-6a2c-457b-a88d-b697dabc2d7b-kube-api-access-jlr2g\") pod \"dnsmasq-dns-67bdc55879-lkzxk\" (UID: \"9b025d86-6a2c-457b-a88d-b697dabc2d7b\") " pod="openstack/dnsmasq-dns-67bdc55879-lkzxk"
Jan 28 18:56:38 crc kubenswrapper[4721]: I0128 18:56:38.141369 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9b025d86-6a2c-457b-a88d-b697dabc2d7b-ovsdbserver-nb\") pod \"dnsmasq-dns-67bdc55879-lkzxk\" (UID: \"9b025d86-6a2c-457b-a88d-b697dabc2d7b\") " pod="openstack/dnsmasq-dns-67bdc55879-lkzxk"
Jan 28 18:56:38 crc kubenswrapper[4721]: I0128 18:56:38.141459 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9b025d86-6a2c-457b-a88d-b697dabc2d7b-dns-swift-storage-0\") pod \"dnsmasq-dns-67bdc55879-lkzxk\" (UID: \"9b025d86-6a2c-457b-a88d-b697dabc2d7b\") " pod="openstack/dnsmasq-dns-67bdc55879-lkzxk"
Jan 28 18:56:38 crc kubenswrapper[4721]: I0128 18:56:38.141492 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9b025d86-6a2c-457b-a88d-b697dabc2d7b-dns-svc\") pod \"dnsmasq-dns-67bdc55879-lkzxk\" (UID: \"9b025d86-6a2c-457b-a88d-b697dabc2d7b\") " pod="openstack/dnsmasq-dns-67bdc55879-lkzxk"
Jan 28 18:56:38 crc kubenswrapper[4721]: I0128 18:56:38.144284 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9b025d86-6a2c-457b-a88d-b697dabc2d7b-ovsdbserver-sb\") pod \"dnsmasq-dns-67bdc55879-lkzxk\" (UID: \"9b025d86-6a2c-457b-a88d-b697dabc2d7b\") " pod="openstack/dnsmasq-dns-67bdc55879-lkzxk"
Jan 28 18:56:38 crc kubenswrapper[4721]: I0128 18:56:38.145190 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9b025d86-6a2c-457b-a88d-b697dabc2d7b-config\") pod \"dnsmasq-dns-67bdc55879-lkzxk\" (UID: \"9b025d86-6a2c-457b-a88d-b697dabc2d7b\") " pod="openstack/dnsmasq-dns-67bdc55879-lkzxk"
Jan 28 18:56:38 crc kubenswrapper[4721]: I0128 18:56:38.148491 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9b025d86-6a2c-457b-a88d-b697dabc2d7b-ovsdbserver-nb\") pod \"dnsmasq-dns-67bdc55879-lkzxk\" (UID: \"9b025d86-6a2c-457b-a88d-b697dabc2d7b\") " pod="openstack/dnsmasq-dns-67bdc55879-lkzxk"
Jan 28 18:56:38 crc kubenswrapper[4721]: I0128 18:56:38.154004 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9b025d86-6a2c-457b-a88d-b697dabc2d7b-dns-swift-storage-0\") pod \"dnsmasq-dns-67bdc55879-lkzxk\" (UID: \"9b025d86-6a2c-457b-a88d-b697dabc2d7b\") " pod="openstack/dnsmasq-dns-67bdc55879-lkzxk"
Jan 28 18:56:38 crc kubenswrapper[4721]: I0128 18:56:38.157351 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9b025d86-6a2c-457b-a88d-b697dabc2d7b-dns-svc\") pod \"dnsmasq-dns-67bdc55879-lkzxk\" (UID: \"9b025d86-6a2c-457b-a88d-b697dabc2d7b\") " pod="openstack/dnsmasq-dns-67bdc55879-lkzxk"
Jan 28 18:56:38 crc kubenswrapper[4721]: I0128 18:56:38.161856 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cloudkitty-api-0"]
Jan 28 18:56:38 crc kubenswrapper[4721]: I0128 18:56:38.164310 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-api-0"
Jan 28 18:56:38 crc kubenswrapper[4721]: I0128 18:56:38.173880 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-api-config-data"
Jan 28 18:56:38 crc kubenswrapper[4721]: I0128 18:56:38.200440 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jlr2g\" (UniqueName: \"kubernetes.io/projected/9b025d86-6a2c-457b-a88d-b697dabc2d7b-kube-api-access-jlr2g\") pod \"dnsmasq-dns-67bdc55879-lkzxk\" (UID: \"9b025d86-6a2c-457b-a88d-b697dabc2d7b\") " pod="openstack/dnsmasq-dns-67bdc55879-lkzxk"
Jan 28 18:56:38 crc kubenswrapper[4721]: I0128 18:56:38.211027 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-api-0"]
Jan 28 18:56:38 crc kubenswrapper[4721]: I0128 18:56:38.251505 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cadefc7c-f4f0-49ee-b8a2-d45faadc12c1-logs\") pod \"cloudkitty-api-0\" (UID: \"cadefc7c-f4f0-49ee-b8a2-d45faadc12c1\") " pod="openstack/cloudkitty-api-0"
Jan 28 18:56:38 crc kubenswrapper[4721]: I0128 18:56:38.251592 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/cadefc7c-f4f0-49ee-b8a2-d45faadc12c1-config-data-custom\") pod \"cloudkitty-api-0\" (UID: \"cadefc7c-f4f0-49ee-b8a2-d45faadc12c1\") " pod="openstack/cloudkitty-api-0"
Jan 28 18:56:38 crc kubenswrapper[4721]: I0128 18:56:38.251649 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cadefc7c-f4f0-49ee-b8a2-d45faadc12c1-scripts\") pod \"cloudkitty-api-0\" (UID: \"cadefc7c-f4f0-49ee-b8a2-d45faadc12c1\") " pod="openstack/cloudkitty-api-0"
Jan 28 18:56:38 crc kubenswrapper[4721]: I0128 18:56:38.251739 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cadefc7c-f4f0-49ee-b8a2-d45faadc12c1-config-data\") pod \"cloudkitty-api-0\" (UID: \"cadefc7c-f4f0-49ee-b8a2-d45faadc12c1\") " pod="openstack/cloudkitty-api-0"
Jan 28 18:56:38 crc kubenswrapper[4721]: I0128 18:56:38.251772 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-79458\" (UniqueName: \"kubernetes.io/projected/cadefc7c-f4f0-49ee-b8a2-d45faadc12c1-kube-api-access-79458\") pod \"cloudkitty-api-0\" (UID: \"cadefc7c-f4f0-49ee-b8a2-d45faadc12c1\") " pod="openstack/cloudkitty-api-0"
\"cloudkitty-api-0\" (UID: \"cadefc7c-f4f0-49ee-b8a2-d45faadc12c1\") " pod="openstack/cloudkitty-api-0" Jan 28 18:56:38 crc kubenswrapper[4721]: I0128 18:56:38.251824 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/projected/cadefc7c-f4f0-49ee-b8a2-d45faadc12c1-certs\") pod \"cloudkitty-api-0\" (UID: \"cadefc7c-f4f0-49ee-b8a2-d45faadc12c1\") " pod="openstack/cloudkitty-api-0" Jan 28 18:56:38 crc kubenswrapper[4721]: I0128 18:56:38.251860 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cadefc7c-f4f0-49ee-b8a2-d45faadc12c1-combined-ca-bundle\") pod \"cloudkitty-api-0\" (UID: \"cadefc7c-f4f0-49ee-b8a2-d45faadc12c1\") " pod="openstack/cloudkitty-api-0" Jan 28 18:56:38 crc kubenswrapper[4721]: I0128 18:56:38.344131 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 28 18:56:38 crc kubenswrapper[4721]: I0128 18:56:38.344564 4721 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 28 18:56:38 crc kubenswrapper[4721]: I0128 18:56:38.345327 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 28 18:56:38 crc kubenswrapper[4721]: I0128 18:56:38.354029 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/cadefc7c-f4f0-49ee-b8a2-d45faadc12c1-config-data-custom\") pod \"cloudkitty-api-0\" (UID: \"cadefc7c-f4f0-49ee-b8a2-d45faadc12c1\") " pod="openstack/cloudkitty-api-0" Jan 28 18:56:38 crc kubenswrapper[4721]: I0128 18:56:38.354101 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cadefc7c-f4f0-49ee-b8a2-d45faadc12c1-scripts\") pod \"cloudkitty-api-0\" (UID: \"cadefc7c-f4f0-49ee-b8a2-d45faadc12c1\") " pod="openstack/cloudkitty-api-0" Jan 28 18:56:38 crc kubenswrapper[4721]: I0128 18:56:38.354142 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cadefc7c-f4f0-49ee-b8a2-d45faadc12c1-config-data\") pod \"cloudkitty-api-0\" (UID: \"cadefc7c-f4f0-49ee-b8a2-d45faadc12c1\") " pod="openstack/cloudkitty-api-0" Jan 28 18:56:38 crc kubenswrapper[4721]: I0128 18:56:38.354195 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-79458\" (UniqueName: \"kubernetes.io/projected/cadefc7c-f4f0-49ee-b8a2-d45faadc12c1-kube-api-access-79458\") pod \"cloudkitty-api-0\" (UID: \"cadefc7c-f4f0-49ee-b8a2-d45faadc12c1\") " pod="openstack/cloudkitty-api-0" Jan 28 18:56:38 crc kubenswrapper[4721]: I0128 18:56:38.354233 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/projected/cadefc7c-f4f0-49ee-b8a2-d45faadc12c1-certs\") pod \"cloudkitty-api-0\" (UID: \"cadefc7c-f4f0-49ee-b8a2-d45faadc12c1\") " pod="openstack/cloudkitty-api-0" Jan 28 18:56:38 crc kubenswrapper[4721]: I0128 18:56:38.354264 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cadefc7c-f4f0-49ee-b8a2-d45faadc12c1-combined-ca-bundle\") pod \"cloudkitty-api-0\" (UID: \"cadefc7c-f4f0-49ee-b8a2-d45faadc12c1\") " pod="openstack/cloudkitty-api-0" Jan 28 18:56:38 
Jan 28 18:56:38 crc kubenswrapper[4721]: I0128 18:56:38.354785 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cadefc7c-f4f0-49ee-b8a2-d45faadc12c1-logs\") pod \"cloudkitty-api-0\" (UID: \"cadefc7c-f4f0-49ee-b8a2-d45faadc12c1\") " pod="openstack/cloudkitty-api-0"
Jan 28 18:56:38 crc kubenswrapper[4721]: I0128 18:56:38.364254 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cadefc7c-f4f0-49ee-b8a2-d45faadc12c1-config-data\") pod \"cloudkitty-api-0\" (UID: \"cadefc7c-f4f0-49ee-b8a2-d45faadc12c1\") " pod="openstack/cloudkitty-api-0"
Jan 28 18:56:38 crc kubenswrapper[4721]: I0128 18:56:38.366329 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cadefc7c-f4f0-49ee-b8a2-d45faadc12c1-combined-ca-bundle\") pod \"cloudkitty-api-0\" (UID: \"cadefc7c-f4f0-49ee-b8a2-d45faadc12c1\") " pod="openstack/cloudkitty-api-0"
Jan 28 18:56:38 crc kubenswrapper[4721]: I0128 18:56:38.367227 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/projected/cadefc7c-f4f0-49ee-b8a2-d45faadc12c1-certs\") pod \"cloudkitty-api-0\" (UID: \"cadefc7c-f4f0-49ee-b8a2-d45faadc12c1\") " pod="openstack/cloudkitty-api-0"
Jan 28 18:56:38 crc kubenswrapper[4721]: I0128 18:56:38.370809 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cadefc7c-f4f0-49ee-b8a2-d45faadc12c1-scripts\") pod \"cloudkitty-api-0\" (UID: \"cadefc7c-f4f0-49ee-b8a2-d45faadc12c1\") " pod="openstack/cloudkitty-api-0"
Jan 28 18:56:38 crc kubenswrapper[4721]: I0128 18:56:38.375959 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/cadefc7c-f4f0-49ee-b8a2-d45faadc12c1-config-data-custom\") pod \"cloudkitty-api-0\" (UID: \"cadefc7c-f4f0-49ee-b8a2-d45faadc12c1\") " pod="openstack/cloudkitty-api-0"
Jan 28 18:56:38 crc kubenswrapper[4721]: I0128 18:56:38.379528 4721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0"
Jan 28 18:56:38 crc kubenswrapper[4721]: I0128 18:56:38.427810 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-79458\" (UniqueName: \"kubernetes.io/projected/cadefc7c-f4f0-49ee-b8a2-d45faadc12c1-kube-api-access-79458\") pod \"cloudkitty-api-0\" (UID: \"cadefc7c-f4f0-49ee-b8a2-d45faadc12c1\") " pod="openstack/cloudkitty-api-0"
Jan 28 18:56:38 crc kubenswrapper[4721]: I0128 18:56:38.457423 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g92c\" (UniqueName: \"kubernetes.io/projected/1e6ec8db-21ec-44ff-8a93-79273b776f47-kube-api-access-6g92c\") pod \"1e6ec8db-21ec-44ff-8a93-79273b776f47\" (UID: \"1e6ec8db-21ec-44ff-8a93-79273b776f47\") "
Jan 28 18:56:38 crc kubenswrapper[4721]: I0128 18:56:38.458188 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1e6ec8db-21ec-44ff-8a93-79273b776f47-logs\") pod \"1e6ec8db-21ec-44ff-8a93-79273b776f47\" (UID: \"1e6ec8db-21ec-44ff-8a93-79273b776f47\") "
Jan 28 18:56:38 crc kubenswrapper[4721]: I0128 18:56:38.458448 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1e6ec8db-21ec-44ff-8a93-79273b776f47-logs" (OuterVolumeSpecName: "logs") pod "1e6ec8db-21ec-44ff-8a93-79273b776f47" (UID: "1e6ec8db-21ec-44ff-8a93-79273b776f47"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 28 18:56:38 crc kubenswrapper[4721]: I0128 18:56:38.458507 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1e6ec8db-21ec-44ff-8a93-79273b776f47-scripts\") pod \"1e6ec8db-21ec-44ff-8a93-79273b776f47\" (UID: \"1e6ec8db-21ec-44ff-8a93-79273b776f47\") "
Jan 28 18:56:38 crc kubenswrapper[4721]: I0128 18:56:38.458592 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1e6ec8db-21ec-44ff-8a93-79273b776f47-config-data-custom\") pod \"1e6ec8db-21ec-44ff-8a93-79273b776f47\" (UID: \"1e6ec8db-21ec-44ff-8a93-79273b776f47\") "
Jan 28 18:56:38 crc kubenswrapper[4721]: I0128 18:56:38.458611 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1e6ec8db-21ec-44ff-8a93-79273b776f47-config-data\") pod \"1e6ec8db-21ec-44ff-8a93-79273b776f47\" (UID: \"1e6ec8db-21ec-44ff-8a93-79273b776f47\") "
Jan 28 18:56:38 crc kubenswrapper[4721]: I0128 18:56:38.458754 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/1e6ec8db-21ec-44ff-8a93-79273b776f47-etc-machine-id\") pod \"1e6ec8db-21ec-44ff-8a93-79273b776f47\" (UID: \"1e6ec8db-21ec-44ff-8a93-79273b776f47\") "
Jan 28 18:56:38 crc kubenswrapper[4721]: I0128 18:56:38.458837 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e6ec8db-21ec-44ff-8a93-79273b776f47-combined-ca-bundle\") pod \"1e6ec8db-21ec-44ff-8a93-79273b776f47\" (UID: \"1e6ec8db-21ec-44ff-8a93-79273b776f47\") "
Jan 28 18:56:38 crc kubenswrapper[4721]: I0128 18:56:38.459509 4721 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1e6ec8db-21ec-44ff-8a93-79273b776f47-logs\") on node \"crc\" DevicePath \"\""
Jan 28 18:56:38 crc kubenswrapper[4721]: I0128 18:56:38.468471 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1e6ec8db-21ec-44ff-8a93-79273b776f47-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "1e6ec8db-21ec-44ff-8a93-79273b776f47" (UID: "1e6ec8db-21ec-44ff-8a93-79273b776f47"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 28 18:56:38 crc kubenswrapper[4721]: I0128 18:56:38.471398 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1e6ec8db-21ec-44ff-8a93-79273b776f47-scripts" (OuterVolumeSpecName: "scripts") pod "1e6ec8db-21ec-44ff-8a93-79273b776f47" (UID: "1e6ec8db-21ec-44ff-8a93-79273b776f47"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 28 18:56:38 crc kubenswrapper[4721]: I0128 18:56:38.481814 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-67bdc55879-lkzxk"
Jan 28 18:56:38 crc kubenswrapper[4721]: I0128 18:56:38.499394 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1e6ec8db-21ec-44ff-8a93-79273b776f47-kube-api-access-6g92c" (OuterVolumeSpecName: "kube-api-access-6g92c") pod "1e6ec8db-21ec-44ff-8a93-79273b776f47" (UID: "1e6ec8db-21ec-44ff-8a93-79273b776f47"). InnerVolumeSpecName "kube-api-access-6g92c". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 28 18:56:38 crc kubenswrapper[4721]: I0128 18:56:38.499614 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1e6ec8db-21ec-44ff-8a93-79273b776f47-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "1e6ec8db-21ec-44ff-8a93-79273b776f47" (UID: "1e6ec8db-21ec-44ff-8a93-79273b776f47"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 28 18:56:38 crc kubenswrapper[4721]: I0128 18:56:38.565331 4721 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1e6ec8db-21ec-44ff-8a93-79273b776f47-scripts\") on node \"crc\" DevicePath \"\""
Jan 28 18:56:38 crc kubenswrapper[4721]: I0128 18:56:38.565648 4721 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1e6ec8db-21ec-44ff-8a93-79273b776f47-config-data-custom\") on node \"crc\" DevicePath \"\""
Jan 28 18:56:38 crc kubenswrapper[4721]: I0128 18:56:38.565739 4721 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/1e6ec8db-21ec-44ff-8a93-79273b776f47-etc-machine-id\") on node \"crc\" DevicePath \"\""
Jan 28 18:56:38 crc kubenswrapper[4721]: I0128 18:56:38.565819 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g92c\" (UniqueName: \"kubernetes.io/projected/1e6ec8db-21ec-44ff-8a93-79273b776f47-kube-api-access-6g92c\") on node \"crc\" DevicePath \"\""
Jan 28 18:56:38 crc kubenswrapper[4721]: I0128 18:56:38.579326 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1e6ec8db-21ec-44ff-8a93-79273b776f47-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1e6ec8db-21ec-44ff-8a93-79273b776f47" (UID: "1e6ec8db-21ec-44ff-8a93-79273b776f47"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:56:38 crc kubenswrapper[4721]: I0128 18:56:38.587219 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1e6ec8db-21ec-44ff-8a93-79273b776f47-config-data" (OuterVolumeSpecName: "config-data") pod "1e6ec8db-21ec-44ff-8a93-79273b776f47" (UID: "1e6ec8db-21ec-44ff-8a93-79273b776f47"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:56:38 crc kubenswrapper[4721]: I0128 18:56:38.605867 4721 generic.go:334] "Generic (PLEG): container finished" podID="1e6ec8db-21ec-44ff-8a93-79273b776f47" containerID="99af7279b222155ef0049c54949a632f1a86434ee71d0769f22951c8b128426e" exitCode=0 Jan 28 18:56:38 crc kubenswrapper[4721]: I0128 18:56:38.605999 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"1e6ec8db-21ec-44ff-8a93-79273b776f47","Type":"ContainerDied","Data":"99af7279b222155ef0049c54949a632f1a86434ee71d0769f22951c8b128426e"} Jan 28 18:56:38 crc kubenswrapper[4721]: I0128 18:56:38.606038 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"1e6ec8db-21ec-44ff-8a93-79273b776f47","Type":"ContainerDied","Data":"970c61dcd3ebdb275a566b8580f1b76b67762866821746c7738f127aff1e4ecf"} Jan 28 18:56:38 crc kubenswrapper[4721]: I0128 18:56:38.606059 4721 scope.go:117] "RemoveContainer" containerID="99af7279b222155ef0049c54949a632f1a86434ee71d0769f22951c8b128426e" Jan 28 18:56:38 crc kubenswrapper[4721]: I0128 18:56:38.606279 4721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 28 18:56:38 crc kubenswrapper[4721]: I0128 18:56:38.628745 4721 util.go:30] "No sandbox for pod can be found. 
Jan 28 18:56:38 crc kubenswrapper[4721]: I0128 18:56:38.648900 4721 generic.go:334] "Generic (PLEG): container finished" podID="320cec56-f5f5-4a55-8592-b79f7e9c35b0" containerID="27db63adec4a8e9d38421d1e8e87f7f08eadb03f90cf9d91b34eba5faaa80514" exitCode=0
Jan 28 18:56:38 crc kubenswrapper[4721]: I0128 18:56:38.649239 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-4wcjf" event={"ID":"320cec56-f5f5-4a55-8592-b79f7e9c35b0","Type":"ContainerDied","Data":"27db63adec4a8e9d38421d1e8e87f7f08eadb03f90cf9d91b34eba5faaa80514"}
Jan 28 18:56:38 crc kubenswrapper[4721]: I0128 18:56:38.669700 4721 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e6ec8db-21ec-44ff-8a93-79273b776f47-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 28 18:56:38 crc kubenswrapper[4721]: I0128 18:56:38.669741 4721 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1e6ec8db-21ec-44ff-8a93-79273b776f47-config-data\") on node \"crc\" DevicePath \"\""
Jan 28 18:56:38 crc kubenswrapper[4721]: I0128 18:56:38.864726 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"]
Jan 28 18:56:38 crc kubenswrapper[4721]: I0128 18:56:38.915572 4721 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"]
Jan 28 18:56:38 crc kubenswrapper[4721]: I0128 18:56:38.916978 4721 scope.go:117] "RemoveContainer" containerID="e7bcf213c92dfd423efc8b73c1b108a3f1d7db9dd4a2d3cc702d448cbe5ff970"
Jan 28 18:56:39 crc kubenswrapper[4721]: I0128 18:56:39.017657 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"]
Jan 28 18:56:39 crc kubenswrapper[4721]: E0128 18:56:39.018601 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1e6ec8db-21ec-44ff-8a93-79273b776f47" containerName="cinder-api-log"
Jan 28 18:56:39 crc kubenswrapper[4721]: I0128 18:56:39.018709 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="1e6ec8db-21ec-44ff-8a93-79273b776f47" containerName="cinder-api-log"
Jan 28 18:56:39 crc kubenswrapper[4721]: E0128 18:56:39.018825 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1e6ec8db-21ec-44ff-8a93-79273b776f47" containerName="cinder-api"
Jan 28 18:56:39 crc kubenswrapper[4721]: I0128 18:56:39.018903 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="1e6ec8db-21ec-44ff-8a93-79273b776f47" containerName="cinder-api"
Jan 28 18:56:39 crc kubenswrapper[4721]: I0128 18:56:39.019293 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="1e6ec8db-21ec-44ff-8a93-79273b776f47" containerName="cinder-api-log"
Jan 28 18:56:39 crc kubenswrapper[4721]: I0128 18:56:39.019417 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="1e6ec8db-21ec-44ff-8a93-79273b776f47" containerName="cinder-api"
Jan 28 18:56:39 crc kubenswrapper[4721]: I0128 18:56:39.021015 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0"
Jan 28 18:56:39 crc kubenswrapper[4721]: I0128 18:56:39.045697 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"]
Jan 28 18:56:39 crc kubenswrapper[4721]: I0128 18:56:39.049513 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data"
Jan 28 18:56:39 crc kubenswrapper[4721]: I0128 18:56:39.050019 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc"
Jan 28 18:56:39 crc kubenswrapper[4721]: I0128 18:56:39.049515 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc"
Jan 28 18:56:39 crc kubenswrapper[4721]: I0128 18:56:39.084553 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a5090535-3282-4e69-988d-be91fd8908a2-etc-machine-id\") pod \"cinder-api-0\" (UID: \"a5090535-3282-4e69-988d-be91fd8908a2\") " pod="openstack/cinder-api-0"
Jan 28 18:56:39 crc kubenswrapper[4721]: I0128 18:56:39.084595 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a5090535-3282-4e69-988d-be91fd8908a2-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"a5090535-3282-4e69-988d-be91fd8908a2\") " pod="openstack/cinder-api-0"
Jan 28 18:56:39 crc kubenswrapper[4721]: I0128 18:56:39.084620 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a5090535-3282-4e69-988d-be91fd8908a2-public-tls-certs\") pod \"cinder-api-0\" (UID: \"a5090535-3282-4e69-988d-be91fd8908a2\") " pod="openstack/cinder-api-0"
Jan 28 18:56:39 crc kubenswrapper[4721]: I0128 18:56:39.084649 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a5090535-3282-4e69-988d-be91fd8908a2-config-data-custom\") pod \"cinder-api-0\" (UID: \"a5090535-3282-4e69-988d-be91fd8908a2\") " pod="openstack/cinder-api-0"
Jan 28 18:56:39 crc kubenswrapper[4721]: I0128 18:56:39.084670 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a5090535-3282-4e69-988d-be91fd8908a2-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"a5090535-3282-4e69-988d-be91fd8908a2\") " pod="openstack/cinder-api-0"
Jan 28 18:56:39 crc kubenswrapper[4721]: I0128 18:56:39.084777 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c2fn8\" (UniqueName: \"kubernetes.io/projected/a5090535-3282-4e69-988d-be91fd8908a2-kube-api-access-c2fn8\") pod \"cinder-api-0\" (UID: \"a5090535-3282-4e69-988d-be91fd8908a2\") " pod="openstack/cinder-api-0"
Jan 28 18:56:39 crc kubenswrapper[4721]: I0128 18:56:39.084798 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a5090535-3282-4e69-988d-be91fd8908a2-scripts\") pod \"cinder-api-0\" (UID: \"a5090535-3282-4e69-988d-be91fd8908a2\") " pod="openstack/cinder-api-0"
Jan 28 18:56:39 crc kubenswrapper[4721]: I0128 18:56:39.084831 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a5090535-3282-4e69-988d-be91fd8908a2-logs\") pod \"cinder-api-0\" (UID: \"a5090535-3282-4e69-988d-be91fd8908a2\") " pod="openstack/cinder-api-0"
\"kubernetes.io/empty-dir/a5090535-3282-4e69-988d-be91fd8908a2-logs\") pod \"cinder-api-0\" (UID: \"a5090535-3282-4e69-988d-be91fd8908a2\") " pod="openstack/cinder-api-0" Jan 28 18:56:39 crc kubenswrapper[4721]: I0128 18:56:39.084858 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a5090535-3282-4e69-988d-be91fd8908a2-config-data\") pod \"cinder-api-0\" (UID: \"a5090535-3282-4e69-988d-be91fd8908a2\") " pod="openstack/cinder-api-0" Jan 28 18:56:39 crc kubenswrapper[4721]: I0128 18:56:39.128387 4721 scope.go:117] "RemoveContainer" containerID="99af7279b222155ef0049c54949a632f1a86434ee71d0769f22951c8b128426e" Jan 28 18:56:39 crc kubenswrapper[4721]: E0128 18:56:39.133904 4721 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"99af7279b222155ef0049c54949a632f1a86434ee71d0769f22951c8b128426e\": container with ID starting with 99af7279b222155ef0049c54949a632f1a86434ee71d0769f22951c8b128426e not found: ID does not exist" containerID="99af7279b222155ef0049c54949a632f1a86434ee71d0769f22951c8b128426e" Jan 28 18:56:39 crc kubenswrapper[4721]: I0128 18:56:39.133962 4721 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"99af7279b222155ef0049c54949a632f1a86434ee71d0769f22951c8b128426e"} err="failed to get container status \"99af7279b222155ef0049c54949a632f1a86434ee71d0769f22951c8b128426e\": rpc error: code = NotFound desc = could not find container \"99af7279b222155ef0049c54949a632f1a86434ee71d0769f22951c8b128426e\": container with ID starting with 99af7279b222155ef0049c54949a632f1a86434ee71d0769f22951c8b128426e not found: ID does not exist" Jan 28 18:56:39 crc kubenswrapper[4721]: I0128 18:56:39.133997 4721 scope.go:117] "RemoveContainer" containerID="e7bcf213c92dfd423efc8b73c1b108a3f1d7db9dd4a2d3cc702d448cbe5ff970" Jan 28 18:56:39 crc kubenswrapper[4721]: E0128 18:56:39.141464 4721 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e7bcf213c92dfd423efc8b73c1b108a3f1d7db9dd4a2d3cc702d448cbe5ff970\": container with ID starting with e7bcf213c92dfd423efc8b73c1b108a3f1d7db9dd4a2d3cc702d448cbe5ff970 not found: ID does not exist" containerID="e7bcf213c92dfd423efc8b73c1b108a3f1d7db9dd4a2d3cc702d448cbe5ff970" Jan 28 18:56:39 crc kubenswrapper[4721]: I0128 18:56:39.141790 4721 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e7bcf213c92dfd423efc8b73c1b108a3f1d7db9dd4a2d3cc702d448cbe5ff970"} err="failed to get container status \"e7bcf213c92dfd423efc8b73c1b108a3f1d7db9dd4a2d3cc702d448cbe5ff970\": rpc error: code = NotFound desc = could not find container \"e7bcf213c92dfd423efc8b73c1b108a3f1d7db9dd4a2d3cc702d448cbe5ff970\": container with ID starting with e7bcf213c92dfd423efc8b73c1b108a3f1d7db9dd4a2d3cc702d448cbe5ff970 not found: ID does not exist" Jan 28 18:56:39 crc kubenswrapper[4721]: I0128 18:56:39.186741 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a5090535-3282-4e69-988d-be91fd8908a2-etc-machine-id\") pod \"cinder-api-0\" (UID: \"a5090535-3282-4e69-988d-be91fd8908a2\") " pod="openstack/cinder-api-0" Jan 28 18:56:39 crc kubenswrapper[4721]: I0128 18:56:39.187134 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/a5090535-3282-4e69-988d-be91fd8908a2-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"a5090535-3282-4e69-988d-be91fd8908a2\") " pod="openstack/cinder-api-0" Jan 28 18:56:39 crc kubenswrapper[4721]: I0128 18:56:39.187306 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a5090535-3282-4e69-988d-be91fd8908a2-public-tls-certs\") pod \"cinder-api-0\" (UID: \"a5090535-3282-4e69-988d-be91fd8908a2\") " pod="openstack/cinder-api-0" Jan 28 18:56:39 crc kubenswrapper[4721]: I0128 18:56:39.188271 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a5090535-3282-4e69-988d-be91fd8908a2-config-data-custom\") pod \"cinder-api-0\" (UID: \"a5090535-3282-4e69-988d-be91fd8908a2\") " pod="openstack/cinder-api-0" Jan 28 18:56:39 crc kubenswrapper[4721]: I0128 18:56:39.188422 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a5090535-3282-4e69-988d-be91fd8908a2-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"a5090535-3282-4e69-988d-be91fd8908a2\") " pod="openstack/cinder-api-0" Jan 28 18:56:39 crc kubenswrapper[4721]: I0128 18:56:39.188882 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c2fn8\" (UniqueName: \"kubernetes.io/projected/a5090535-3282-4e69-988d-be91fd8908a2-kube-api-access-c2fn8\") pod \"cinder-api-0\" (UID: \"a5090535-3282-4e69-988d-be91fd8908a2\") " pod="openstack/cinder-api-0" Jan 28 18:56:39 crc kubenswrapper[4721]: I0128 18:56:39.189017 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a5090535-3282-4e69-988d-be91fd8908a2-scripts\") pod \"cinder-api-0\" (UID: \"a5090535-3282-4e69-988d-be91fd8908a2\") " pod="openstack/cinder-api-0" Jan 28 18:56:39 crc kubenswrapper[4721]: I0128 18:56:39.189225 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a5090535-3282-4e69-988d-be91fd8908a2-logs\") pod \"cinder-api-0\" (UID: \"a5090535-3282-4e69-988d-be91fd8908a2\") " pod="openstack/cinder-api-0" Jan 28 18:56:39 crc kubenswrapper[4721]: I0128 18:56:39.189391 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a5090535-3282-4e69-988d-be91fd8908a2-config-data\") pod \"cinder-api-0\" (UID: \"a5090535-3282-4e69-988d-be91fd8908a2\") " pod="openstack/cinder-api-0" Jan 28 18:56:39 crc kubenswrapper[4721]: I0128 18:56:39.192529 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a5090535-3282-4e69-988d-be91fd8908a2-etc-machine-id\") pod \"cinder-api-0\" (UID: \"a5090535-3282-4e69-988d-be91fd8908a2\") " pod="openstack/cinder-api-0" Jan 28 18:56:39 crc kubenswrapper[4721]: I0128 18:56:39.200264 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a5090535-3282-4e69-988d-be91fd8908a2-logs\") pod \"cinder-api-0\" (UID: \"a5090535-3282-4e69-988d-be91fd8908a2\") " pod="openstack/cinder-api-0" Jan 28 18:56:39 crc kubenswrapper[4721]: I0128 18:56:39.204962 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/a5090535-3282-4e69-988d-be91fd8908a2-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"a5090535-3282-4e69-988d-be91fd8908a2\") " pod="openstack/cinder-api-0" Jan 28 18:56:39 crc kubenswrapper[4721]: I0128 18:56:39.206526 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a5090535-3282-4e69-988d-be91fd8908a2-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"a5090535-3282-4e69-988d-be91fd8908a2\") " pod="openstack/cinder-api-0" Jan 28 18:56:39 crc kubenswrapper[4721]: I0128 18:56:39.216633 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a5090535-3282-4e69-988d-be91fd8908a2-scripts\") pod \"cinder-api-0\" (UID: \"a5090535-3282-4e69-988d-be91fd8908a2\") " pod="openstack/cinder-api-0" Jan 28 18:56:39 crc kubenswrapper[4721]: I0128 18:56:39.231417 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a5090535-3282-4e69-988d-be91fd8908a2-config-data\") pod \"cinder-api-0\" (UID: \"a5090535-3282-4e69-988d-be91fd8908a2\") " pod="openstack/cinder-api-0" Jan 28 18:56:39 crc kubenswrapper[4721]: I0128 18:56:39.233733 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a5090535-3282-4e69-988d-be91fd8908a2-public-tls-certs\") pod \"cinder-api-0\" (UID: \"a5090535-3282-4e69-988d-be91fd8908a2\") " pod="openstack/cinder-api-0" Jan 28 18:56:39 crc kubenswrapper[4721]: I0128 18:56:39.235851 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a5090535-3282-4e69-988d-be91fd8908a2-config-data-custom\") pod \"cinder-api-0\" (UID: \"a5090535-3282-4e69-988d-be91fd8908a2\") " pod="openstack/cinder-api-0" Jan 28 18:56:39 crc kubenswrapper[4721]: I0128 18:56:39.241201 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c2fn8\" (UniqueName: \"kubernetes.io/projected/a5090535-3282-4e69-988d-be91fd8908a2-kube-api-access-c2fn8\") pod \"cinder-api-0\" (UID: \"a5090535-3282-4e69-988d-be91fd8908a2\") " pod="openstack/cinder-api-0" Jan 28 18:56:39 crc kubenswrapper[4721]: I0128 18:56:39.245207 4721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c9776ccc5-4wcjf" Jan 28 18:56:39 crc kubenswrapper[4721]: I0128 18:56:39.267183 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Jan 28 18:56:39 crc kubenswrapper[4721]: W0128 18:56:39.387360 4721 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcb2f69be_cd3d_44ef_80af_f0d4ac766305.slice/crio-ad8be0c89b5c5a1b28fabb5b8e46e4e750e1da91be300fa1f2974bbf65437ac1 WatchSource:0}: Error finding container ad8be0c89b5c5a1b28fabb5b8e46e4e750e1da91be300fa1f2974bbf65437ac1: Status 404 returned error can't find the container with id ad8be0c89b5c5a1b28fabb5b8e46e4e750e1da91be300fa1f2974bbf65437ac1 Jan 28 18:56:39 crc kubenswrapper[4721]: I0128 18:56:39.399156 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/320cec56-f5f5-4a55-8592-b79f7e9c35b0-dns-svc\") pod \"320cec56-f5f5-4a55-8592-b79f7e9c35b0\" (UID: \"320cec56-f5f5-4a55-8592-b79f7e9c35b0\") " Jan 28 18:56:39 crc kubenswrapper[4721]: I0128 18:56:39.399244 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/320cec56-f5f5-4a55-8592-b79f7e9c35b0-ovsdbserver-sb\") pod \"320cec56-f5f5-4a55-8592-b79f7e9c35b0\" (UID: \"320cec56-f5f5-4a55-8592-b79f7e9c35b0\") " Jan 28 18:56:39 crc kubenswrapper[4721]: I0128 18:56:39.399266 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mvk6k\" (UniqueName: \"kubernetes.io/projected/320cec56-f5f5-4a55-8592-b79f7e9c35b0-kube-api-access-mvk6k\") pod \"320cec56-f5f5-4a55-8592-b79f7e9c35b0\" (UID: \"320cec56-f5f5-4a55-8592-b79f7e9c35b0\") " Jan 28 18:56:39 crc kubenswrapper[4721]: I0128 18:56:39.399383 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/320cec56-f5f5-4a55-8592-b79f7e9c35b0-ovsdbserver-nb\") pod \"320cec56-f5f5-4a55-8592-b79f7e9c35b0\" (UID: \"320cec56-f5f5-4a55-8592-b79f7e9c35b0\") " Jan 28 18:56:39 crc kubenswrapper[4721]: I0128 18:56:39.399481 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/320cec56-f5f5-4a55-8592-b79f7e9c35b0-config\") pod \"320cec56-f5f5-4a55-8592-b79f7e9c35b0\" (UID: \"320cec56-f5f5-4a55-8592-b79f7e9c35b0\") " Jan 28 18:56:39 crc kubenswrapper[4721]: I0128 18:56:39.399537 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/320cec56-f5f5-4a55-8592-b79f7e9c35b0-dns-swift-storage-0\") pod \"320cec56-f5f5-4a55-8592-b79f7e9c35b0\" (UID: \"320cec56-f5f5-4a55-8592-b79f7e9c35b0\") " Jan 28 18:56:39 crc kubenswrapper[4721]: I0128 18:56:39.440014 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-proc-0"] Jan 28 18:56:39 crc kubenswrapper[4721]: I0128 18:56:39.444492 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/320cec56-f5f5-4a55-8592-b79f7e9c35b0-kube-api-access-mvk6k" (OuterVolumeSpecName: "kube-api-access-mvk6k") pod "320cec56-f5f5-4a55-8592-b79f7e9c35b0" (UID: "320cec56-f5f5-4a55-8592-b79f7e9c35b0"). InnerVolumeSpecName "kube-api-access-mvk6k". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:56:39 crc kubenswrapper[4721]: I0128 18:56:39.521092 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mvk6k\" (UniqueName: \"kubernetes.io/projected/320cec56-f5f5-4a55-8592-b79f7e9c35b0-kube-api-access-mvk6k\") on node \"crc\" DevicePath \"\"" Jan 28 18:56:39 crc kubenswrapper[4721]: I0128 18:56:39.603343 4721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1e6ec8db-21ec-44ff-8a93-79273b776f47" path="/var/lib/kubelet/pods/1e6ec8db-21ec-44ff-8a93-79273b776f47/volumes" Jan 28 18:56:39 crc kubenswrapper[4721]: I0128 18:56:39.669588 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/320cec56-f5f5-4a55-8592-b79f7e9c35b0-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "320cec56-f5f5-4a55-8592-b79f7e9c35b0" (UID: "320cec56-f5f5-4a55-8592-b79f7e9c35b0"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:56:39 crc kubenswrapper[4721]: I0128 18:56:39.735112 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-proc-0" event={"ID":"cb2f69be-cd3d-44ef-80af-f0d4ac766305","Type":"ContainerStarted","Data":"ad8be0c89b5c5a1b28fabb5b8e46e4e750e1da91be300fa1f2974bbf65437ac1"} Jan 28 18:56:39 crc kubenswrapper[4721]: I0128 18:56:39.737302 4721 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/320cec56-f5f5-4a55-8592-b79f7e9c35b0-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 28 18:56:39 crc kubenswrapper[4721]: I0128 18:56:39.739685 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-4wcjf" event={"ID":"320cec56-f5f5-4a55-8592-b79f7e9c35b0","Type":"ContainerDied","Data":"9763f38d2beeeac976c635fa072523dd64bdb06a4c59e7233e2bf819a2b16f30"} Jan 28 18:56:39 crc kubenswrapper[4721]: I0128 18:56:39.739773 4721 scope.go:117] "RemoveContainer" containerID="27db63adec4a8e9d38421d1e8e87f7f08eadb03f90cf9d91b34eba5faaa80514" Jan 28 18:56:39 crc kubenswrapper[4721]: I0128 18:56:39.739892 4721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c9776ccc5-4wcjf" Jan 28 18:56:39 crc kubenswrapper[4721]: I0128 18:56:39.777369 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/320cec56-f5f5-4a55-8592-b79f7e9c35b0-config" (OuterVolumeSpecName: "config") pod "320cec56-f5f5-4a55-8592-b79f7e9c35b0" (UID: "320cec56-f5f5-4a55-8592-b79f7e9c35b0"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:56:39 crc kubenswrapper[4721]: I0128 18:56:39.781833 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/320cec56-f5f5-4a55-8592-b79f7e9c35b0-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "320cec56-f5f5-4a55-8592-b79f7e9c35b0" (UID: "320cec56-f5f5-4a55-8592-b79f7e9c35b0"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:56:39 crc kubenswrapper[4721]: I0128 18:56:39.788960 4721 scope.go:117] "RemoveContainer" containerID="a3a2f2047ab7a85067112a07348bcafd7bb095cef436c0c9b80c00864c6ef22f" Jan 28 18:56:39 crc kubenswrapper[4721]: I0128 18:56:39.810293 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/320cec56-f5f5-4a55-8592-b79f7e9c35b0-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "320cec56-f5f5-4a55-8592-b79f7e9c35b0" (UID: "320cec56-f5f5-4a55-8592-b79f7e9c35b0"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:56:39 crc kubenswrapper[4721]: I0128 18:56:39.839504 4721 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/320cec56-f5f5-4a55-8592-b79f7e9c35b0-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 28 18:56:39 crc kubenswrapper[4721]: I0128 18:56:39.839538 4721 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/320cec56-f5f5-4a55-8592-b79f7e9c35b0-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 28 18:56:39 crc kubenswrapper[4721]: I0128 18:56:39.839551 4721 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/320cec56-f5f5-4a55-8592-b79f7e9c35b0-config\") on node \"crc\" DevicePath \"\"" Jan 28 18:56:39 crc kubenswrapper[4721]: I0128 18:56:39.874807 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/320cec56-f5f5-4a55-8592-b79f7e9c35b0-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "320cec56-f5f5-4a55-8592-b79f7e9c35b0" (UID: "320cec56-f5f5-4a55-8592-b79f7e9c35b0"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:56:39 crc kubenswrapper[4721]: I0128 18:56:39.942487 4721 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/320cec56-f5f5-4a55-8592-b79f7e9c35b0-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 28 18:56:39 crc kubenswrapper[4721]: I0128 18:56:39.994081 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-67bdc55879-lkzxk"] Jan 28 18:56:40 crc kubenswrapper[4721]: I0128 18:56:40.068244 4721 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/barbican-api-cfc4cd674-j5vfc" podUID="f8eb94ee-887b-48f2-808c-2b634928d62e" containerName="barbican-api-log" probeResult="failure" output="Get \"https://10.217.0.184:9311/healthcheck\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 28 18:56:40 crc kubenswrapper[4721]: I0128 18:56:40.166926 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-4wcjf"] Jan 28 18:56:40 crc kubenswrapper[4721]: I0128 18:56:40.227708 4721 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-4wcjf"] Jan 28 18:56:40 crc kubenswrapper[4721]: I0128 18:56:40.260718 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-api-0"] Jan 28 18:56:40 crc kubenswrapper[4721]: W0128 18:56:40.354364 4721 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcadefc7c_f4f0_49ee_b8a2_d45faadc12c1.slice/crio-b16feba738b810b39f42f29687fe3c96fab1c74a6e66d1ed0e20a03f393a3a98 WatchSource:0}: Error finding container b16feba738b810b39f42f29687fe3c96fab1c74a6e66d1ed0e20a03f393a3a98: Status 404 returned error can't find the container with id b16feba738b810b39f42f29687fe3c96fab1c74a6e66d1ed0e20a03f393a3a98 Jan 28 18:56:40 crc kubenswrapper[4721]: I0128 18:56:40.466540 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 28 18:56:40 crc kubenswrapper[4721]: I0128 18:56:40.792091 4721 generic.go:334] "Generic (PLEG): container finished" podID="9b025d86-6a2c-457b-a88d-b697dabc2d7b" containerID="ddeff87eff47ea04259d6549f4c0a699ed0604b277ca4208bc4918f6e9689cfa" exitCode=0 Jan 28 18:56:40 crc kubenswrapper[4721]: I0128 18:56:40.793001 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-67bdc55879-lkzxk" event={"ID":"9b025d86-6a2c-457b-a88d-b697dabc2d7b","Type":"ContainerDied","Data":"ddeff87eff47ea04259d6549f4c0a699ed0604b277ca4208bc4918f6e9689cfa"} Jan 28 18:56:40 crc kubenswrapper[4721]: I0128 18:56:40.793058 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-67bdc55879-lkzxk" event={"ID":"9b025d86-6a2c-457b-a88d-b697dabc2d7b","Type":"ContainerStarted","Data":"098780222b7d6da97b376126b343886d8d5b1d0569ea49c8eef10cadc9407b6c"} Jan 28 18:56:40 crc kubenswrapper[4721]: I0128 18:56:40.803838 4721 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Jan 28 18:56:40 crc kubenswrapper[4721]: I0128 18:56:40.809688 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"a5090535-3282-4e69-988d-be91fd8908a2","Type":"ContainerStarted","Data":"0be369cfb9c05e94e9477abac8a4ae2212806a67ba729830769636f256f86efc"} Jan 28 18:56:40 crc kubenswrapper[4721]: I0128 18:56:40.811260 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/cloudkitty-api-0" event={"ID":"cadefc7c-f4f0-49ee-b8a2-d45faadc12c1","Type":"ContainerStarted","Data":"b16feba738b810b39f42f29687fe3c96fab1c74a6e66d1ed0e20a03f393a3a98"} Jan 28 18:56:40 crc kubenswrapper[4721]: I0128 18:56:40.823462 4721 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/cinder-scheduler-0" podUID="29653552-40e5-4d60-9284-a92f22c88681" containerName="cinder-scheduler" probeResult="failure" output="Get \"http://10.217.0.185:8080/\": dial tcp 10.217.0.185:8080: connect: connection refused" Jan 28 18:56:41 crc kubenswrapper[4721]: I0128 18:56:41.586403 4721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="320cec56-f5f5-4a55-8592-b79f7e9c35b0" path="/var/lib/kubelet/pods/320cec56-f5f5-4a55-8592-b79f7e9c35b0/volumes" Jan 28 18:56:41 crc kubenswrapper[4721]: I0128 18:56:41.669613 4721 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/barbican-api-84bf7c754-8m5d5" podUID="15e79c89-d076-4174-b7d4-87295d74b71d" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.180:9311/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 28 18:56:41 crc kubenswrapper[4721]: I0128 18:56:41.670136 4721 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/barbican-api-84bf7c754-8m5d5" podUID="15e79c89-d076-4174-b7d4-87295d74b71d" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.180:9311/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 28 18:56:41 crc kubenswrapper[4721]: I0128 18:56:41.883220 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-api-0" event={"ID":"cadefc7c-f4f0-49ee-b8a2-d45faadc12c1","Type":"ContainerStarted","Data":"d5242ab7f619185b533121f225f18cd23affda603d15943a957f9a68330b5177"} Jan 28 18:56:41 crc kubenswrapper[4721]: I0128 18:56:41.883617 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-api-0" event={"ID":"cadefc7c-f4f0-49ee-b8a2-d45faadc12c1","Type":"ContainerStarted","Data":"f09c724067361e2f02702fb73eff21122818d9c1d44d4ca6036d594bf04b9dcd"} Jan 28 18:56:41 crc kubenswrapper[4721]: I0128 18:56:41.884275 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cloudkitty-api-0" Jan 28 18:56:41 crc kubenswrapper[4721]: I0128 18:56:41.916805 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-67bdc55879-lkzxk" event={"ID":"9b025d86-6a2c-457b-a88d-b697dabc2d7b","Type":"ContainerStarted","Data":"ed72807967b0f6393d5dc1397302fdcd534baedb2d5e025c40b8b2b1d3ce949f"} Jan 28 18:56:41 crc kubenswrapper[4721]: I0128 18:56:41.918396 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-67bdc55879-lkzxk" Jan 28 18:56:41 crc kubenswrapper[4721]: I0128 18:56:41.939345 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"a5090535-3282-4e69-988d-be91fd8908a2","Type":"ContainerStarted","Data":"2e1cb6b51f82f60ad39fd37f9d77ab2ecb5b19d1b333c952702da056de11c886"} Jan 28 18:56:41 crc kubenswrapper[4721]: I0128 18:56:41.949903 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cloudkitty-api-0" podStartSLOduration=3.949871827 podStartE2EDuration="3.949871827s" podCreationTimestamp="2026-01-28 18:56:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-01-28 18:56:41.916428521 +0000 UTC m=+1367.641734081" watchObservedRunningTime="2026-01-28 18:56:41.949871827 +0000 UTC m=+1367.675177387" Jan 28 18:56:42 crc kubenswrapper[4721]: I0128 18:56:42.006852 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-67bdc55879-lkzxk" podStartSLOduration=5.006821481 podStartE2EDuration="5.006821481s" podCreationTimestamp="2026-01-28 18:56:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:56:41.947884373 +0000 UTC m=+1367.673189933" watchObservedRunningTime="2026-01-28 18:56:42.006821481 +0000 UTC m=+1367.732127041" Jan 28 18:56:42 crc kubenswrapper[4721]: I0128 18:56:42.418108 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-649bf84c5b-p55hh" Jan 28 18:56:42 crc kubenswrapper[4721]: I0128 18:56:42.545705 4721 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-84bf7c754-8m5d5" podUID="15e79c89-d076-4174-b7d4-87295d74b71d" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.180:9311/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 28 18:56:42 crc kubenswrapper[4721]: I0128 18:56:42.593453 4721 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-84bf7c754-8m5d5" podUID="15e79c89-d076-4174-b7d4-87295d74b71d" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.180:9311/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 28 18:56:42 crc kubenswrapper[4721]: I0128 18:56:42.647195 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-649bf84c5b-p55hh" Jan 28 18:56:42 crc kubenswrapper[4721]: I0128 18:56:42.930562 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-84bf7c754-8m5d5" Jan 28 18:56:43 crc kubenswrapper[4721]: I0128 18:56:43.048778 4721 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/barbican-api-cfc4cd674-j5vfc" podUID="f8eb94ee-887b-48f2-808c-2b634928d62e" containerName="barbican-api" probeResult="failure" output="Get \"https://10.217.0.184:9311/healthcheck\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 28 18:56:43 crc kubenswrapper[4721]: I0128 18:56:43.223102 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-7b5b4f6d96-q5gf8"] Jan 28 18:56:43 crc kubenswrapper[4721]: E0128 18:56:43.223658 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="320cec56-f5f5-4a55-8592-b79f7e9c35b0" containerName="init" Jan 28 18:56:43 crc kubenswrapper[4721]: I0128 18:56:43.223676 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="320cec56-f5f5-4a55-8592-b79f7e9c35b0" containerName="init" Jan 28 18:56:43 crc kubenswrapper[4721]: E0128 18:56:43.223709 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="320cec56-f5f5-4a55-8592-b79f7e9c35b0" containerName="dnsmasq-dns" Jan 28 18:56:43 crc kubenswrapper[4721]: I0128 18:56:43.223718 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="320cec56-f5f5-4a55-8592-b79f7e9c35b0" containerName="dnsmasq-dns" Jan 28 18:56:43 crc kubenswrapper[4721]: I0128 18:56:43.223967 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="320cec56-f5f5-4a55-8592-b79f7e9c35b0" 
containerName="dnsmasq-dns" Jan 28 18:56:43 crc kubenswrapper[4721]: I0128 18:56:43.225383 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-7b5b4f6d96-q5gf8" Jan 28 18:56:43 crc kubenswrapper[4721]: I0128 18:56:43.258581 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-7b5b4f6d96-q5gf8"] Jan 28 18:56:43 crc kubenswrapper[4721]: I0128 18:56:43.324734 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7bc6f4fc-8f67-4a04-83f7-551efe61e4fe-scripts\") pod \"placement-7b5b4f6d96-q5gf8\" (UID: \"7bc6f4fc-8f67-4a04-83f7-551efe61e4fe\") " pod="openstack/placement-7b5b4f6d96-q5gf8" Jan 28 18:56:43 crc kubenswrapper[4721]: I0128 18:56:43.324798 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7bc6f4fc-8f67-4a04-83f7-551efe61e4fe-public-tls-certs\") pod \"placement-7b5b4f6d96-q5gf8\" (UID: \"7bc6f4fc-8f67-4a04-83f7-551efe61e4fe\") " pod="openstack/placement-7b5b4f6d96-q5gf8" Jan 28 18:56:43 crc kubenswrapper[4721]: I0128 18:56:43.324881 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7bc6f4fc-8f67-4a04-83f7-551efe61e4fe-config-data\") pod \"placement-7b5b4f6d96-q5gf8\" (UID: \"7bc6f4fc-8f67-4a04-83f7-551efe61e4fe\") " pod="openstack/placement-7b5b4f6d96-q5gf8" Jan 28 18:56:43 crc kubenswrapper[4721]: I0128 18:56:43.324916 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7bc6f4fc-8f67-4a04-83f7-551efe61e4fe-combined-ca-bundle\") pod \"placement-7b5b4f6d96-q5gf8\" (UID: \"7bc6f4fc-8f67-4a04-83f7-551efe61e4fe\") " pod="openstack/placement-7b5b4f6d96-q5gf8" Jan 28 18:56:43 crc kubenswrapper[4721]: I0128 18:56:43.324968 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r475t\" (UniqueName: \"kubernetes.io/projected/7bc6f4fc-8f67-4a04-83f7-551efe61e4fe-kube-api-access-r475t\") pod \"placement-7b5b4f6d96-q5gf8\" (UID: \"7bc6f4fc-8f67-4a04-83f7-551efe61e4fe\") " pod="openstack/placement-7b5b4f6d96-q5gf8" Jan 28 18:56:43 crc kubenswrapper[4721]: I0128 18:56:43.325003 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7bc6f4fc-8f67-4a04-83f7-551efe61e4fe-internal-tls-certs\") pod \"placement-7b5b4f6d96-q5gf8\" (UID: \"7bc6f4fc-8f67-4a04-83f7-551efe61e4fe\") " pod="openstack/placement-7b5b4f6d96-q5gf8" Jan 28 18:56:43 crc kubenswrapper[4721]: I0128 18:56:43.325029 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7bc6f4fc-8f67-4a04-83f7-551efe61e4fe-logs\") pod \"placement-7b5b4f6d96-q5gf8\" (UID: \"7bc6f4fc-8f67-4a04-83f7-551efe61e4fe\") " pod="openstack/placement-7b5b4f6d96-q5gf8" Jan 28 18:56:43 crc kubenswrapper[4721]: I0128 18:56:43.434016 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7bc6f4fc-8f67-4a04-83f7-551efe61e4fe-scripts\") pod \"placement-7b5b4f6d96-q5gf8\" (UID: \"7bc6f4fc-8f67-4a04-83f7-551efe61e4fe\") " pod="openstack/placement-7b5b4f6d96-q5gf8" Jan 28 
18:56:43 crc kubenswrapper[4721]: I0128 18:56:43.434456 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7bc6f4fc-8f67-4a04-83f7-551efe61e4fe-public-tls-certs\") pod \"placement-7b5b4f6d96-q5gf8\" (UID: \"7bc6f4fc-8f67-4a04-83f7-551efe61e4fe\") " pod="openstack/placement-7b5b4f6d96-q5gf8" Jan 28 18:56:43 crc kubenswrapper[4721]: I0128 18:56:43.434567 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7bc6f4fc-8f67-4a04-83f7-551efe61e4fe-config-data\") pod \"placement-7b5b4f6d96-q5gf8\" (UID: \"7bc6f4fc-8f67-4a04-83f7-551efe61e4fe\") " pod="openstack/placement-7b5b4f6d96-q5gf8" Jan 28 18:56:43 crc kubenswrapper[4721]: I0128 18:56:43.434626 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7bc6f4fc-8f67-4a04-83f7-551efe61e4fe-combined-ca-bundle\") pod \"placement-7b5b4f6d96-q5gf8\" (UID: \"7bc6f4fc-8f67-4a04-83f7-551efe61e4fe\") " pod="openstack/placement-7b5b4f6d96-q5gf8" Jan 28 18:56:43 crc kubenswrapper[4721]: I0128 18:56:43.434727 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r475t\" (UniqueName: \"kubernetes.io/projected/7bc6f4fc-8f67-4a04-83f7-551efe61e4fe-kube-api-access-r475t\") pod \"placement-7b5b4f6d96-q5gf8\" (UID: \"7bc6f4fc-8f67-4a04-83f7-551efe61e4fe\") " pod="openstack/placement-7b5b4f6d96-q5gf8" Jan 28 18:56:43 crc kubenswrapper[4721]: I0128 18:56:43.434795 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7bc6f4fc-8f67-4a04-83f7-551efe61e4fe-internal-tls-certs\") pod \"placement-7b5b4f6d96-q5gf8\" (UID: \"7bc6f4fc-8f67-4a04-83f7-551efe61e4fe\") " pod="openstack/placement-7b5b4f6d96-q5gf8" Jan 28 18:56:43 crc kubenswrapper[4721]: I0128 18:56:43.434862 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7bc6f4fc-8f67-4a04-83f7-551efe61e4fe-logs\") pod \"placement-7b5b4f6d96-q5gf8\" (UID: \"7bc6f4fc-8f67-4a04-83f7-551efe61e4fe\") " pod="openstack/placement-7b5b4f6d96-q5gf8" Jan 28 18:56:43 crc kubenswrapper[4721]: I0128 18:56:43.435499 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7bc6f4fc-8f67-4a04-83f7-551efe61e4fe-logs\") pod \"placement-7b5b4f6d96-q5gf8\" (UID: \"7bc6f4fc-8f67-4a04-83f7-551efe61e4fe\") " pod="openstack/placement-7b5b4f6d96-q5gf8" Jan 28 18:56:43 crc kubenswrapper[4721]: I0128 18:56:43.455031 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cloudkitty-api-0"] Jan 28 18:56:43 crc kubenswrapper[4721]: I0128 18:56:43.468933 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7bc6f4fc-8f67-4a04-83f7-551efe61e4fe-config-data\") pod \"placement-7b5b4f6d96-q5gf8\" (UID: \"7bc6f4fc-8f67-4a04-83f7-551efe61e4fe\") " pod="openstack/placement-7b5b4f6d96-q5gf8" Jan 28 18:56:43 crc kubenswrapper[4721]: I0128 18:56:43.477867 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7bc6f4fc-8f67-4a04-83f7-551efe61e4fe-internal-tls-certs\") pod \"placement-7b5b4f6d96-q5gf8\" (UID: \"7bc6f4fc-8f67-4a04-83f7-551efe61e4fe\") " 
pod="openstack/placement-7b5b4f6d96-q5gf8" Jan 28 18:56:43 crc kubenswrapper[4721]: I0128 18:56:43.478055 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7bc6f4fc-8f67-4a04-83f7-551efe61e4fe-public-tls-certs\") pod \"placement-7b5b4f6d96-q5gf8\" (UID: \"7bc6f4fc-8f67-4a04-83f7-551efe61e4fe\") " pod="openstack/placement-7b5b4f6d96-q5gf8" Jan 28 18:56:43 crc kubenswrapper[4721]: I0128 18:56:43.478531 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r475t\" (UniqueName: \"kubernetes.io/projected/7bc6f4fc-8f67-4a04-83f7-551efe61e4fe-kube-api-access-r475t\") pod \"placement-7b5b4f6d96-q5gf8\" (UID: \"7bc6f4fc-8f67-4a04-83f7-551efe61e4fe\") " pod="openstack/placement-7b5b4f6d96-q5gf8" Jan 28 18:56:43 crc kubenswrapper[4721]: I0128 18:56:43.485371 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7bc6f4fc-8f67-4a04-83f7-551efe61e4fe-scripts\") pod \"placement-7b5b4f6d96-q5gf8\" (UID: \"7bc6f4fc-8f67-4a04-83f7-551efe61e4fe\") " pod="openstack/placement-7b5b4f6d96-q5gf8" Jan 28 18:56:43 crc kubenswrapper[4721]: I0128 18:56:43.475116 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7bc6f4fc-8f67-4a04-83f7-551efe61e4fe-combined-ca-bundle\") pod \"placement-7b5b4f6d96-q5gf8\" (UID: \"7bc6f4fc-8f67-4a04-83f7-551efe61e4fe\") " pod="openstack/placement-7b5b4f6d96-q5gf8" Jan 28 18:56:43 crc kubenswrapper[4721]: I0128 18:56:43.568826 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-7b5b4f6d96-q5gf8" Jan 28 18:56:43 crc kubenswrapper[4721]: I0128 18:56:43.959329 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 28 18:56:44 crc kubenswrapper[4721]: I0128 18:56:44.046377 4721 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cloudkitty-api-0" podUID="cadefc7c-f4f0-49ee-b8a2-d45faadc12c1" containerName="cloudkitty-api-log" containerID="cri-o://f09c724067361e2f02702fb73eff21122818d9c1d44d4ca6036d594bf04b9dcd" gracePeriod=30 Jan 28 18:56:44 crc kubenswrapper[4721]: I0128 18:56:44.047538 4721 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cloudkitty-api-0" podUID="cadefc7c-f4f0-49ee-b8a2-d45faadc12c1" containerName="cloudkitty-api" containerID="cri-o://d5242ab7f619185b533121f225f18cd23affda603d15943a957f9a68330b5177" gracePeriod=30 Jan 28 18:56:44 crc kubenswrapper[4721]: I0128 18:56:44.047727 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"a5090535-3282-4e69-988d-be91fd8908a2","Type":"ContainerStarted","Data":"e1318c8f80218c0b52b8bd0f8e30e8bfdcafc3761467fbf4bce04533390a9ed5"} Jan 28 18:56:44 crc kubenswrapper[4721]: I0128 18:56:44.048057 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Jan 28 18:56:44 crc kubenswrapper[4721]: I0128 18:56:44.093369 4721 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-cfc4cd674-j5vfc" podUID="f8eb94ee-887b-48f2-808c-2b634928d62e" containerName="barbican-api-log" probeResult="failure" output="Get \"https://10.217.0.184:9311/healthcheck\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 28 18:56:44 crc kubenswrapper[4721]: I0128 18:56:44.093867 4721 
prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-cfc4cd674-j5vfc" podUID="f8eb94ee-887b-48f2-808c-2b634928d62e" containerName="barbican-api" probeResult="failure" output="Get \"https://10.217.0.184:9311/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 28 18:56:44 crc kubenswrapper[4721]: I0128 18:56:44.113072 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=6.113048212 podStartE2EDuration="6.113048212s" podCreationTimestamp="2026-01-28 18:56:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:56:44.104279992 +0000 UTC m=+1369.829585572" watchObservedRunningTime="2026-01-28 18:56:44.113048212 +0000 UTC m=+1369.838353772" Jan 28 18:56:44 crc kubenswrapper[4721]: I0128 18:56:44.190956 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-7b5b4f6d96-q5gf8"] Jan 28 18:56:45 crc kubenswrapper[4721]: I0128 18:56:45.068465 4721 generic.go:334] "Generic (PLEG): container finished" podID="cadefc7c-f4f0-49ee-b8a2-d45faadc12c1" containerID="d5242ab7f619185b533121f225f18cd23affda603d15943a957f9a68330b5177" exitCode=0 Jan 28 18:56:45 crc kubenswrapper[4721]: I0128 18:56:45.069912 4721 generic.go:334] "Generic (PLEG): container finished" podID="cadefc7c-f4f0-49ee-b8a2-d45faadc12c1" containerID="f09c724067361e2f02702fb73eff21122818d9c1d44d4ca6036d594bf04b9dcd" exitCode=143 Jan 28 18:56:45 crc kubenswrapper[4721]: I0128 18:56:45.070245 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-api-0" event={"ID":"cadefc7c-f4f0-49ee-b8a2-d45faadc12c1","Type":"ContainerDied","Data":"d5242ab7f619185b533121f225f18cd23affda603d15943a957f9a68330b5177"} Jan 28 18:56:45 crc kubenswrapper[4721]: I0128 18:56:45.070355 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-api-0" event={"ID":"cadefc7c-f4f0-49ee-b8a2-d45faadc12c1","Type":"ContainerDied","Data":"f09c724067361e2f02702fb73eff21122818d9c1d44d4ca6036d594bf04b9dcd"} Jan 28 18:56:45 crc kubenswrapper[4721]: I0128 18:56:45.078351 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-7b5b4f6d96-q5gf8" event={"ID":"7bc6f4fc-8f67-4a04-83f7-551efe61e4fe","Type":"ContainerStarted","Data":"6de87d2c5704802c9d10c0dc6a60661be9482f56e4f3e688adba96fb518e8f39"} Jan 28 18:56:46 crc kubenswrapper[4721]: I0128 18:56:46.190882 4721 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Jan 28 18:56:46 crc kubenswrapper[4721]: I0128 18:56:46.304013 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 28 18:56:46 crc kubenswrapper[4721]: I0128 18:56:46.425644 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-7fccf8d9d-jqxpt" Jan 28 18:56:46 crc kubenswrapper[4721]: I0128 18:56:46.478943 4721 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cloudkitty-api-0" Jan 28 18:56:46 crc kubenswrapper[4721]: I0128 18:56:46.572294 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-79458\" (UniqueName: \"kubernetes.io/projected/cadefc7c-f4f0-49ee-b8a2-d45faadc12c1-kube-api-access-79458\") pod \"cadefc7c-f4f0-49ee-b8a2-d45faadc12c1\" (UID: \"cadefc7c-f4f0-49ee-b8a2-d45faadc12c1\") " Jan 28 18:56:46 crc kubenswrapper[4721]: I0128 18:56:46.572425 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cadefc7c-f4f0-49ee-b8a2-d45faadc12c1-combined-ca-bundle\") pod \"cadefc7c-f4f0-49ee-b8a2-d45faadc12c1\" (UID: \"cadefc7c-f4f0-49ee-b8a2-d45faadc12c1\") " Jan 28 18:56:46 crc kubenswrapper[4721]: I0128 18:56:46.572591 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/cadefc7c-f4f0-49ee-b8a2-d45faadc12c1-config-data-custom\") pod \"cadefc7c-f4f0-49ee-b8a2-d45faadc12c1\" (UID: \"cadefc7c-f4f0-49ee-b8a2-d45faadc12c1\") " Jan 28 18:56:46 crc kubenswrapper[4721]: I0128 18:56:46.572685 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/projected/cadefc7c-f4f0-49ee-b8a2-d45faadc12c1-certs\") pod \"cadefc7c-f4f0-49ee-b8a2-d45faadc12c1\" (UID: \"cadefc7c-f4f0-49ee-b8a2-d45faadc12c1\") " Jan 28 18:56:46 crc kubenswrapper[4721]: I0128 18:56:46.572765 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cadefc7c-f4f0-49ee-b8a2-d45faadc12c1-config-data\") pod \"cadefc7c-f4f0-49ee-b8a2-d45faadc12c1\" (UID: \"cadefc7c-f4f0-49ee-b8a2-d45faadc12c1\") " Jan 28 18:56:46 crc kubenswrapper[4721]: I0128 18:56:46.572848 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cadefc7c-f4f0-49ee-b8a2-d45faadc12c1-logs\") pod \"cadefc7c-f4f0-49ee-b8a2-d45faadc12c1\" (UID: \"cadefc7c-f4f0-49ee-b8a2-d45faadc12c1\") " Jan 28 18:56:46 crc kubenswrapper[4721]: I0128 18:56:46.572887 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cadefc7c-f4f0-49ee-b8a2-d45faadc12c1-scripts\") pod \"cadefc7c-f4f0-49ee-b8a2-d45faadc12c1\" (UID: \"cadefc7c-f4f0-49ee-b8a2-d45faadc12c1\") " Jan 28 18:56:46 crc kubenswrapper[4721]: I0128 18:56:46.575512 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cadefc7c-f4f0-49ee-b8a2-d45faadc12c1-logs" (OuterVolumeSpecName: "logs") pod "cadefc7c-f4f0-49ee-b8a2-d45faadc12c1" (UID: "cadefc7c-f4f0-49ee-b8a2-d45faadc12c1"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 18:56:46 crc kubenswrapper[4721]: I0128 18:56:46.582800 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cadefc7c-f4f0-49ee-b8a2-d45faadc12c1-scripts" (OuterVolumeSpecName: "scripts") pod "cadefc7c-f4f0-49ee-b8a2-d45faadc12c1" (UID: "cadefc7c-f4f0-49ee-b8a2-d45faadc12c1"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:56:46 crc kubenswrapper[4721]: I0128 18:56:46.582812 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cadefc7c-f4f0-49ee-b8a2-d45faadc12c1-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "cadefc7c-f4f0-49ee-b8a2-d45faadc12c1" (UID: "cadefc7c-f4f0-49ee-b8a2-d45faadc12c1"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:56:46 crc kubenswrapper[4721]: I0128 18:56:46.584087 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cadefc7c-f4f0-49ee-b8a2-d45faadc12c1-kube-api-access-79458" (OuterVolumeSpecName: "kube-api-access-79458") pod "cadefc7c-f4f0-49ee-b8a2-d45faadc12c1" (UID: "cadefc7c-f4f0-49ee-b8a2-d45faadc12c1"). InnerVolumeSpecName "kube-api-access-79458". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:56:46 crc kubenswrapper[4721]: I0128 18:56:46.589327 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cadefc7c-f4f0-49ee-b8a2-d45faadc12c1-certs" (OuterVolumeSpecName: "certs") pod "cadefc7c-f4f0-49ee-b8a2-d45faadc12c1" (UID: "cadefc7c-f4f0-49ee-b8a2-d45faadc12c1"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:56:46 crc kubenswrapper[4721]: I0128 18:56:46.623009 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cadefc7c-f4f0-49ee-b8a2-d45faadc12c1-config-data" (OuterVolumeSpecName: "config-data") pod "cadefc7c-f4f0-49ee-b8a2-d45faadc12c1" (UID: "cadefc7c-f4f0-49ee-b8a2-d45faadc12c1"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:56:46 crc kubenswrapper[4721]: I0128 18:56:46.625367 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cadefc7c-f4f0-49ee-b8a2-d45faadc12c1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "cadefc7c-f4f0-49ee-b8a2-d45faadc12c1" (UID: "cadefc7c-f4f0-49ee-b8a2-d45faadc12c1"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:56:46 crc kubenswrapper[4721]: I0128 18:56:46.679674 4721 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cadefc7c-f4f0-49ee-b8a2-d45faadc12c1-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 18:56:46 crc kubenswrapper[4721]: I0128 18:56:46.679720 4721 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cadefc7c-f4f0-49ee-b8a2-d45faadc12c1-logs\") on node \"crc\" DevicePath \"\"" Jan 28 18:56:46 crc kubenswrapper[4721]: I0128 18:56:46.679729 4721 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cadefc7c-f4f0-49ee-b8a2-d45faadc12c1-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 18:56:46 crc kubenswrapper[4721]: I0128 18:56:46.679738 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-79458\" (UniqueName: \"kubernetes.io/projected/cadefc7c-f4f0-49ee-b8a2-d45faadc12c1-kube-api-access-79458\") on node \"crc\" DevicePath \"\"" Jan 28 18:56:46 crc kubenswrapper[4721]: I0128 18:56:46.679748 4721 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cadefc7c-f4f0-49ee-b8a2-d45faadc12c1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 18:56:46 crc kubenswrapper[4721]: I0128 18:56:46.679757 4721 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/cadefc7c-f4f0-49ee-b8a2-d45faadc12c1-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 28 18:56:46 crc kubenswrapper[4721]: I0128 18:56:46.679775 4721 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/projected/cadefc7c-f4f0-49ee-b8a2-d45faadc12c1-certs\") on node \"crc\" DevicePath \"\"" Jan 28 18:56:47 crc kubenswrapper[4721]: I0128 18:56:47.113244 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Jan 28 18:56:47 crc kubenswrapper[4721]: E0128 18:56:47.113779 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cadefc7c-f4f0-49ee-b8a2-d45faadc12c1" containerName="cloudkitty-api" Jan 28 18:56:47 crc kubenswrapper[4721]: I0128 18:56:47.113799 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="cadefc7c-f4f0-49ee-b8a2-d45faadc12c1" containerName="cloudkitty-api" Jan 28 18:56:47 crc kubenswrapper[4721]: E0128 18:56:47.113821 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cadefc7c-f4f0-49ee-b8a2-d45faadc12c1" containerName="cloudkitty-api-log" Jan 28 18:56:47 crc kubenswrapper[4721]: I0128 18:56:47.113830 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="cadefc7c-f4f0-49ee-b8a2-d45faadc12c1" containerName="cloudkitty-api-log" Jan 28 18:56:47 crc kubenswrapper[4721]: I0128 18:56:47.114060 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="cadefc7c-f4f0-49ee-b8a2-d45faadc12c1" containerName="cloudkitty-api" Jan 28 18:56:47 crc kubenswrapper[4721]: I0128 18:56:47.114088 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="cadefc7c-f4f0-49ee-b8a2-d45faadc12c1" containerName="cloudkitty-api-log" Jan 28 18:56:47 crc kubenswrapper[4721]: I0128 18:56:47.115041 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Jan 28 18:56:47 crc kubenswrapper[4721]: I0128 18:56:47.120992 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-8pszd" Jan 28 18:56:47 crc kubenswrapper[4721]: I0128 18:56:47.121351 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config" Jan 28 18:56:47 crc kubenswrapper[4721]: I0128 18:56:47.123870 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret" Jan 28 18:56:47 crc kubenswrapper[4721]: I0128 18:56:47.128228 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Jan 28 18:56:47 crc kubenswrapper[4721]: I0128 18:56:47.129263 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-api-0" event={"ID":"cadefc7c-f4f0-49ee-b8a2-d45faadc12c1","Type":"ContainerDied","Data":"b16feba738b810b39f42f29687fe3c96fab1c74a6e66d1ed0e20a03f393a3a98"} Jan 28 18:56:47 crc kubenswrapper[4721]: I0128 18:56:47.129322 4721 scope.go:117] "RemoveContainer" containerID="d5242ab7f619185b533121f225f18cd23affda603d15943a957f9a68330b5177" Jan 28 18:56:47 crc kubenswrapper[4721]: I0128 18:56:47.129459 4721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-api-0" Jan 28 18:56:47 crc kubenswrapper[4721]: I0128 18:56:47.153203 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-proc-0" event={"ID":"cb2f69be-cd3d-44ef-80af-f0d4ac766305","Type":"ContainerStarted","Data":"fc97855be9f3322d44117263da8d55d7294cfb41dabcb8eb60f4df3eb7542228"} Jan 28 18:56:47 crc kubenswrapper[4721]: I0128 18:56:47.167500 4721 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="29653552-40e5-4d60-9284-a92f22c88681" containerName="cinder-scheduler" containerID="cri-o://e9c945db0336ce5c630a82ac18e793d400662138c72e7bd7ebdfa73de14bec08" gracePeriod=30 Jan 28 18:56:47 crc kubenswrapper[4721]: I0128 18:56:47.167622 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-7b5b4f6d96-q5gf8" event={"ID":"7bc6f4fc-8f67-4a04-83f7-551efe61e4fe","Type":"ContainerStarted","Data":"66ce083783aca9e1463207f757cce734b1369011fcb43ba782cc85d58dde06d7"} Jan 28 18:56:47 crc kubenswrapper[4721]: I0128 18:56:47.167691 4721 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="29653552-40e5-4d60-9284-a92f22c88681" containerName="probe" containerID="cri-o://629bcc53a14f80007a8e58b3c042d7d177f9eb13b91e41b555b1c7436e637172" gracePeriod=30 Jan 28 18:56:47 crc kubenswrapper[4721]: I0128 18:56:47.193257 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6nvxq\" (UniqueName: \"kubernetes.io/projected/85f51b69-4069-4da4-895c-0f92ad51506c-kube-api-access-6nvxq\") pod \"openstackclient\" (UID: \"85f51b69-4069-4da4-895c-0f92ad51506c\") " pod="openstack/openstackclient" Jan 28 18:56:47 crc kubenswrapper[4721]: I0128 18:56:47.197566 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/85f51b69-4069-4da4-895c-0f92ad51506c-combined-ca-bundle\") pod \"openstackclient\" (UID: \"85f51b69-4069-4da4-895c-0f92ad51506c\") " pod="openstack/openstackclient" Jan 28 18:56:47 crc kubenswrapper[4721]: I0128 18:56:47.197798 4721 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/85f51b69-4069-4da4-895c-0f92ad51506c-openstack-config\") pod \"openstackclient\" (UID: \"85f51b69-4069-4da4-895c-0f92ad51506c\") " pod="openstack/openstackclient" Jan 28 18:56:47 crc kubenswrapper[4721]: I0128 18:56:47.197989 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/85f51b69-4069-4da4-895c-0f92ad51506c-openstack-config-secret\") pod \"openstackclient\" (UID: \"85f51b69-4069-4da4-895c-0f92ad51506c\") " pod="openstack/openstackclient" Jan 28 18:56:47 crc kubenswrapper[4721]: I0128 18:56:47.218404 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cloudkitty-proc-0" podStartSLOduration=3.633836112 podStartE2EDuration="10.218381146s" podCreationTimestamp="2026-01-28 18:56:37 +0000 UTC" firstStartedPulling="2026-01-28 18:56:39.417590137 +0000 UTC m=+1365.142895697" lastFinishedPulling="2026-01-28 18:56:46.002135171 +0000 UTC m=+1371.727440731" observedRunningTime="2026-01-28 18:56:47.198544513 +0000 UTC m=+1372.923850073" watchObservedRunningTime="2026-01-28 18:56:47.218381146 +0000 UTC m=+1372.943686706" Jan 28 18:56:47 crc kubenswrapper[4721]: I0128 18:56:47.226841 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cloudkitty-proc-0"] Jan 28 18:56:47 crc kubenswrapper[4721]: I0128 18:56:47.237256 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cloudkitty-api-0"] Jan 28 18:56:47 crc kubenswrapper[4721]: I0128 18:56:47.237789 4721 scope.go:117] "RemoveContainer" containerID="f09c724067361e2f02702fb73eff21122818d9c1d44d4ca6036d594bf04b9dcd" Jan 28 18:56:47 crc kubenswrapper[4721]: I0128 18:56:47.245128 4721 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cloudkitty-api-0"] Jan 28 18:56:47 crc kubenswrapper[4721]: I0128 18:56:47.266108 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cloudkitty-api-0"] Jan 28 18:56:47 crc kubenswrapper[4721]: I0128 18:56:47.272274 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cloudkitty-api-0" Jan 28 18:56:47 crc kubenswrapper[4721]: I0128 18:56:47.276053 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-api-config-data" Jan 28 18:56:47 crc kubenswrapper[4721]: I0128 18:56:47.276398 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cloudkitty-public-svc" Jan 28 18:56:47 crc kubenswrapper[4721]: I0128 18:56:47.276551 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cloudkitty-internal-svc" Jan 28 18:56:47 crc kubenswrapper[4721]: I0128 18:56:47.291446 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-api-0"] Jan 28 18:56:47 crc kubenswrapper[4721]: I0128 18:56:47.300476 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/85f51b69-4069-4da4-895c-0f92ad51506c-openstack-config-secret\") pod \"openstackclient\" (UID: \"85f51b69-4069-4da4-895c-0f92ad51506c\") " pod="openstack/openstackclient" Jan 28 18:56:47 crc kubenswrapper[4721]: I0128 18:56:47.300669 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6nvxq\" (UniqueName: \"kubernetes.io/projected/85f51b69-4069-4da4-895c-0f92ad51506c-kube-api-access-6nvxq\") pod \"openstackclient\" (UID: \"85f51b69-4069-4da4-895c-0f92ad51506c\") " pod="openstack/openstackclient" Jan 28 18:56:47 crc kubenswrapper[4721]: I0128 18:56:47.300717 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/85f51b69-4069-4da4-895c-0f92ad51506c-combined-ca-bundle\") pod \"openstackclient\" (UID: \"85f51b69-4069-4da4-895c-0f92ad51506c\") " pod="openstack/openstackclient" Jan 28 18:56:47 crc kubenswrapper[4721]: I0128 18:56:47.300813 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/85f51b69-4069-4da4-895c-0f92ad51506c-openstack-config\") pod \"openstackclient\" (UID: \"85f51b69-4069-4da4-895c-0f92ad51506c\") " pod="openstack/openstackclient" Jan 28 18:56:47 crc kubenswrapper[4721]: I0128 18:56:47.301716 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/85f51b69-4069-4da4-895c-0f92ad51506c-openstack-config\") pod \"openstackclient\" (UID: \"85f51b69-4069-4da4-895c-0f92ad51506c\") " pod="openstack/openstackclient" Jan 28 18:56:47 crc kubenswrapper[4721]: I0128 18:56:47.351209 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/85f51b69-4069-4da4-895c-0f92ad51506c-openstack-config-secret\") pod \"openstackclient\" (UID: \"85f51b69-4069-4da4-895c-0f92ad51506c\") " pod="openstack/openstackclient" Jan 28 18:56:47 crc kubenswrapper[4721]: I0128 18:56:47.351757 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/85f51b69-4069-4da4-895c-0f92ad51506c-combined-ca-bundle\") pod \"openstackclient\" (UID: \"85f51b69-4069-4da4-895c-0f92ad51506c\") " pod="openstack/openstackclient" Jan 28 18:56:47 crc kubenswrapper[4721]: I0128 18:56:47.384492 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6nvxq\" (UniqueName: 
\"kubernetes.io/projected/85f51b69-4069-4da4-895c-0f92ad51506c-kube-api-access-6nvxq\") pod \"openstackclient\" (UID: \"85f51b69-4069-4da4-895c-0f92ad51506c\") " pod="openstack/openstackclient" Jan 28 18:56:47 crc kubenswrapper[4721]: I0128 18:56:47.417457 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/73d83a88-618a-4208-aaa8-e209c0d34b1d-config-data-custom\") pod \"cloudkitty-api-0\" (UID: \"73d83a88-618a-4208-aaa8-e209c0d34b1d\") " pod="openstack/cloudkitty-api-0" Jan 28 18:56:47 crc kubenswrapper[4721]: I0128 18:56:47.417706 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/73d83a88-618a-4208-aaa8-e209c0d34b1d-logs\") pod \"cloudkitty-api-0\" (UID: \"73d83a88-618a-4208-aaa8-e209c0d34b1d\") " pod="openstack/cloudkitty-api-0" Jan 28 18:56:47 crc kubenswrapper[4721]: I0128 18:56:47.417781 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/73d83a88-618a-4208-aaa8-e209c0d34b1d-public-tls-certs\") pod \"cloudkitty-api-0\" (UID: \"73d83a88-618a-4208-aaa8-e209c0d34b1d\") " pod="openstack/cloudkitty-api-0" Jan 28 18:56:47 crc kubenswrapper[4721]: I0128 18:56:47.417802 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/projected/73d83a88-618a-4208-aaa8-e209c0d34b1d-certs\") pod \"cloudkitty-api-0\" (UID: \"73d83a88-618a-4208-aaa8-e209c0d34b1d\") " pod="openstack/cloudkitty-api-0" Jan 28 18:56:47 crc kubenswrapper[4721]: I0128 18:56:47.417826 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/73d83a88-618a-4208-aaa8-e209c0d34b1d-config-data\") pod \"cloudkitty-api-0\" (UID: \"73d83a88-618a-4208-aaa8-e209c0d34b1d\") " pod="openstack/cloudkitty-api-0" Jan 28 18:56:47 crc kubenswrapper[4721]: I0128 18:56:47.417880 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/73d83a88-618a-4208-aaa8-e209c0d34b1d-internal-tls-certs\") pod \"cloudkitty-api-0\" (UID: \"73d83a88-618a-4208-aaa8-e209c0d34b1d\") " pod="openstack/cloudkitty-api-0" Jan 28 18:56:47 crc kubenswrapper[4721]: I0128 18:56:47.417947 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zbwv2\" (UniqueName: \"kubernetes.io/projected/73d83a88-618a-4208-aaa8-e209c0d34b1d-kube-api-access-zbwv2\") pod \"cloudkitty-api-0\" (UID: \"73d83a88-618a-4208-aaa8-e209c0d34b1d\") " pod="openstack/cloudkitty-api-0" Jan 28 18:56:47 crc kubenswrapper[4721]: I0128 18:56:47.417973 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/73d83a88-618a-4208-aaa8-e209c0d34b1d-scripts\") pod \"cloudkitty-api-0\" (UID: \"73d83a88-618a-4208-aaa8-e209c0d34b1d\") " pod="openstack/cloudkitty-api-0" Jan 28 18:56:47 crc kubenswrapper[4721]: I0128 18:56:47.418047 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/73d83a88-618a-4208-aaa8-e209c0d34b1d-combined-ca-bundle\") pod \"cloudkitty-api-0\" (UID: 
\"73d83a88-618a-4208-aaa8-e209c0d34b1d\") " pod="openstack/cloudkitty-api-0" Jan 28 18:56:47 crc kubenswrapper[4721]: I0128 18:56:47.451809 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Jan 28 18:56:47 crc kubenswrapper[4721]: I0128 18:56:47.522682 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/73d83a88-618a-4208-aaa8-e209c0d34b1d-logs\") pod \"cloudkitty-api-0\" (UID: \"73d83a88-618a-4208-aaa8-e209c0d34b1d\") " pod="openstack/cloudkitty-api-0" Jan 28 18:56:47 crc kubenswrapper[4721]: I0128 18:56:47.522753 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/73d83a88-618a-4208-aaa8-e209c0d34b1d-public-tls-certs\") pod \"cloudkitty-api-0\" (UID: \"73d83a88-618a-4208-aaa8-e209c0d34b1d\") " pod="openstack/cloudkitty-api-0" Jan 28 18:56:47 crc kubenswrapper[4721]: I0128 18:56:47.522774 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/projected/73d83a88-618a-4208-aaa8-e209c0d34b1d-certs\") pod \"cloudkitty-api-0\" (UID: \"73d83a88-618a-4208-aaa8-e209c0d34b1d\") " pod="openstack/cloudkitty-api-0" Jan 28 18:56:47 crc kubenswrapper[4721]: I0128 18:56:47.522793 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/73d83a88-618a-4208-aaa8-e209c0d34b1d-config-data\") pod \"cloudkitty-api-0\" (UID: \"73d83a88-618a-4208-aaa8-e209c0d34b1d\") " pod="openstack/cloudkitty-api-0" Jan 28 18:56:47 crc kubenswrapper[4721]: I0128 18:56:47.522828 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/73d83a88-618a-4208-aaa8-e209c0d34b1d-internal-tls-certs\") pod \"cloudkitty-api-0\" (UID: \"73d83a88-618a-4208-aaa8-e209c0d34b1d\") " pod="openstack/cloudkitty-api-0" Jan 28 18:56:47 crc kubenswrapper[4721]: I0128 18:56:47.522854 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zbwv2\" (UniqueName: \"kubernetes.io/projected/73d83a88-618a-4208-aaa8-e209c0d34b1d-kube-api-access-zbwv2\") pod \"cloudkitty-api-0\" (UID: \"73d83a88-618a-4208-aaa8-e209c0d34b1d\") " pod="openstack/cloudkitty-api-0" Jan 28 18:56:47 crc kubenswrapper[4721]: I0128 18:56:47.522872 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/73d83a88-618a-4208-aaa8-e209c0d34b1d-scripts\") pod \"cloudkitty-api-0\" (UID: \"73d83a88-618a-4208-aaa8-e209c0d34b1d\") " pod="openstack/cloudkitty-api-0" Jan 28 18:56:47 crc kubenswrapper[4721]: I0128 18:56:47.522911 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/73d83a88-618a-4208-aaa8-e209c0d34b1d-combined-ca-bundle\") pod \"cloudkitty-api-0\" (UID: \"73d83a88-618a-4208-aaa8-e209c0d34b1d\") " pod="openstack/cloudkitty-api-0" Jan 28 18:56:47 crc kubenswrapper[4721]: I0128 18:56:47.522985 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/73d83a88-618a-4208-aaa8-e209c0d34b1d-config-data-custom\") pod \"cloudkitty-api-0\" (UID: \"73d83a88-618a-4208-aaa8-e209c0d34b1d\") " pod="openstack/cloudkitty-api-0" Jan 28 18:56:47 crc kubenswrapper[4721]: I0128 
18:56:47.525436 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/73d83a88-618a-4208-aaa8-e209c0d34b1d-logs\") pod \"cloudkitty-api-0\" (UID: \"73d83a88-618a-4208-aaa8-e209c0d34b1d\") " pod="openstack/cloudkitty-api-0" Jan 28 18:56:47 crc kubenswrapper[4721]: I0128 18:56:47.531996 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/73d83a88-618a-4208-aaa8-e209c0d34b1d-public-tls-certs\") pod \"cloudkitty-api-0\" (UID: \"73d83a88-618a-4208-aaa8-e209c0d34b1d\") " pod="openstack/cloudkitty-api-0" Jan 28 18:56:47 crc kubenswrapper[4721]: I0128 18:56:47.532827 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/projected/73d83a88-618a-4208-aaa8-e209c0d34b1d-certs\") pod \"cloudkitty-api-0\" (UID: \"73d83a88-618a-4208-aaa8-e209c0d34b1d\") " pod="openstack/cloudkitty-api-0" Jan 28 18:56:47 crc kubenswrapper[4721]: I0128 18:56:47.533225 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/73d83a88-618a-4208-aaa8-e209c0d34b1d-config-data\") pod \"cloudkitty-api-0\" (UID: \"73d83a88-618a-4208-aaa8-e209c0d34b1d\") " pod="openstack/cloudkitty-api-0" Jan 28 18:56:47 crc kubenswrapper[4721]: I0128 18:56:47.535466 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/73d83a88-618a-4208-aaa8-e209c0d34b1d-scripts\") pod \"cloudkitty-api-0\" (UID: \"73d83a88-618a-4208-aaa8-e209c0d34b1d\") " pod="openstack/cloudkitty-api-0" Jan 28 18:56:47 crc kubenswrapper[4721]: I0128 18:56:47.538535 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/73d83a88-618a-4208-aaa8-e209c0d34b1d-config-data-custom\") pod \"cloudkitty-api-0\" (UID: \"73d83a88-618a-4208-aaa8-e209c0d34b1d\") " pod="openstack/cloudkitty-api-0" Jan 28 18:56:47 crc kubenswrapper[4721]: I0128 18:56:47.540899 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/73d83a88-618a-4208-aaa8-e209c0d34b1d-internal-tls-certs\") pod \"cloudkitty-api-0\" (UID: \"73d83a88-618a-4208-aaa8-e209c0d34b1d\") " pod="openstack/cloudkitty-api-0" Jan 28 18:56:47 crc kubenswrapper[4721]: I0128 18:56:47.549502 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/73d83a88-618a-4208-aaa8-e209c0d34b1d-combined-ca-bundle\") pod \"cloudkitty-api-0\" (UID: \"73d83a88-618a-4208-aaa8-e209c0d34b1d\") " pod="openstack/cloudkitty-api-0" Jan 28 18:56:47 crc kubenswrapper[4721]: I0128 18:56:47.549791 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zbwv2\" (UniqueName: \"kubernetes.io/projected/73d83a88-618a-4208-aaa8-e209c0d34b1d-kube-api-access-zbwv2\") pod \"cloudkitty-api-0\" (UID: \"73d83a88-618a-4208-aaa8-e209c0d34b1d\") " pod="openstack/cloudkitty-api-0" Jan 28 18:56:47 crc kubenswrapper[4721]: I0128 18:56:47.556865 4721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cadefc7c-f4f0-49ee-b8a2-d45faadc12c1" path="/var/lib/kubelet/pods/cadefc7c-f4f0-49ee-b8a2-d45faadc12c1/volumes" Jan 28 18:56:47 crc kubenswrapper[4721]: I0128 18:56:47.628039 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack/barbican-api-cfc4cd674-j5vfc" Jan 28 18:56:47 crc kubenswrapper[4721]: I0128 18:56:47.652877 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-api-0" Jan 28 18:56:47 crc kubenswrapper[4721]: I0128 18:56:47.955505 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-cfc4cd674-j5vfc" Jan 28 18:56:48 crc kubenswrapper[4721]: I0128 18:56:48.044562 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-84bf7c754-8m5d5"] Jan 28 18:56:48 crc kubenswrapper[4721]: I0128 18:56:48.044880 4721 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-84bf7c754-8m5d5" podUID="15e79c89-d076-4174-b7d4-87295d74b71d" containerName="barbican-api-log" containerID="cri-o://bf050b864c7b9e690de7d51e872e6944e833bb5d3288d6960527d23f52c00aff" gracePeriod=30 Jan 28 18:56:48 crc kubenswrapper[4721]: I0128 18:56:48.045545 4721 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-84bf7c754-8m5d5" podUID="15e79c89-d076-4174-b7d4-87295d74b71d" containerName="barbican-api" containerID="cri-o://c405dbf6afef0c49decdf9416983345e651a2e549be33edc83182725295a6f18" gracePeriod=30 Jan 28 18:56:48 crc kubenswrapper[4721]: I0128 18:56:48.202917 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Jan 28 18:56:48 crc kubenswrapper[4721]: I0128 18:56:48.222546 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-7b5b4f6d96-q5gf8" event={"ID":"7bc6f4fc-8f67-4a04-83f7-551efe61e4fe","Type":"ContainerStarted","Data":"022b47868ece61324fdfbf5e52eeeb89de8e4ec7ca3756e76933cf2eef656489"} Jan 28 18:56:48 crc kubenswrapper[4721]: I0128 18:56:48.223609 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-7b5b4f6d96-q5gf8" Jan 28 18:56:48 crc kubenswrapper[4721]: I0128 18:56:48.223647 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-7b5b4f6d96-q5gf8" Jan 28 18:56:48 crc kubenswrapper[4721]: I0128 18:56:48.236070 4721 generic.go:334] "Generic (PLEG): container finished" podID="15e79c89-d076-4174-b7d4-87295d74b71d" containerID="bf050b864c7b9e690de7d51e872e6944e833bb5d3288d6960527d23f52c00aff" exitCode=143 Jan 28 18:56:48 crc kubenswrapper[4721]: I0128 18:56:48.236236 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-84bf7c754-8m5d5" event={"ID":"15e79c89-d076-4174-b7d4-87295d74b71d","Type":"ContainerDied","Data":"bf050b864c7b9e690de7d51e872e6944e833bb5d3288d6960527d23f52c00aff"} Jan 28 18:56:48 crc kubenswrapper[4721]: I0128 18:56:48.250603 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-7b5b4f6d96-q5gf8" podStartSLOduration=5.250572453 podStartE2EDuration="5.250572453s" podCreationTimestamp="2026-01-28 18:56:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:56:48.248862059 +0000 UTC m=+1373.974167639" watchObservedRunningTime="2026-01-28 18:56:48.250572453 +0000 UTC m=+1373.975878013" Jan 28 18:56:48 crc kubenswrapper[4721]: I0128 18:56:48.253494 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"85f51b69-4069-4da4-895c-0f92ad51506c","Type":"ContainerStarted","Data":"775343abd64d659e8cde3eae8370d19a33af488f07a487d18c5b2370dd12c613"} Jan 28 
18:56:48 crc kubenswrapper[4721]: I0128 18:56:48.432286 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-api-0"] Jan 28 18:56:48 crc kubenswrapper[4721]: W0128 18:56:48.462545 4721 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod73d83a88_618a_4208_aaa8_e209c0d34b1d.slice/crio-9f01753ea4d68bfdd9a83b588e32dd7197acf5914150aedd713831c20855cfbc WatchSource:0}: Error finding container 9f01753ea4d68bfdd9a83b588e32dd7197acf5914150aedd713831c20855cfbc: Status 404 returned error can't find the container with id 9f01753ea4d68bfdd9a83b588e32dd7197acf5914150aedd713831c20855cfbc Jan 28 18:56:48 crc kubenswrapper[4721]: I0128 18:56:48.487354 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-67bdc55879-lkzxk" Jan 28 18:56:48 crc kubenswrapper[4721]: I0128 18:56:48.597541 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-v4g9c"] Jan 28 18:56:48 crc kubenswrapper[4721]: I0128 18:56:48.610280 4721 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-785d8bcb8c-v4g9c" podUID="adfa1a56-6e36-42b6-86e6-1cf51f6e49cb" containerName="dnsmasq-dns" containerID="cri-o://4df671ac7e52a9f8bce8f04593ba56480faf8889a47763d7399ba878b92a30d7" gracePeriod=10 Jan 28 18:56:49 crc kubenswrapper[4721]: I0128 18:56:49.376699 4721 generic.go:334] "Generic (PLEG): container finished" podID="29653552-40e5-4d60-9284-a92f22c88681" containerID="629bcc53a14f80007a8e58b3c042d7d177f9eb13b91e41b555b1c7436e637172" exitCode=0 Jan 28 18:56:49 crc kubenswrapper[4721]: I0128 18:56:49.377102 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"29653552-40e5-4d60-9284-a92f22c88681","Type":"ContainerDied","Data":"629bcc53a14f80007a8e58b3c042d7d177f9eb13b91e41b555b1c7436e637172"} Jan 28 18:56:49 crc kubenswrapper[4721]: I0128 18:56:49.410489 4721 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-785d8bcb8c-v4g9c" Jan 28 18:56:49 crc kubenswrapper[4721]: I0128 18:56:49.418400 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-api-0" event={"ID":"73d83a88-618a-4208-aaa8-e209c0d34b1d","Type":"ContainerStarted","Data":"2740c8bb07a2969fd701089a09fd1fe230cd0d121f6859930d10da8f91fe65c5"} Jan 28 18:56:49 crc kubenswrapper[4721]: I0128 18:56:49.418445 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-api-0" event={"ID":"73d83a88-618a-4208-aaa8-e209c0d34b1d","Type":"ContainerStarted","Data":"df25b2d57cc0161105ae6bcc96fc2e8c0455ecf5c000f5b78c47a2ffc805591e"} Jan 28 18:56:49 crc kubenswrapper[4721]: I0128 18:56:49.418457 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-api-0" event={"ID":"73d83a88-618a-4208-aaa8-e209c0d34b1d","Type":"ContainerStarted","Data":"9f01753ea4d68bfdd9a83b588e32dd7197acf5914150aedd713831c20855cfbc"} Jan 28 18:56:49 crc kubenswrapper[4721]: I0128 18:56:49.419692 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cloudkitty-api-0" Jan 28 18:56:49 crc kubenswrapper[4721]: I0128 18:56:49.440529 4721 generic.go:334] "Generic (PLEG): container finished" podID="adfa1a56-6e36-42b6-86e6-1cf51f6e49cb" containerID="4df671ac7e52a9f8bce8f04593ba56480faf8889a47763d7399ba878b92a30d7" exitCode=0 Jan 28 18:56:49 crc kubenswrapper[4721]: I0128 18:56:49.441707 4721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-785d8bcb8c-v4g9c" Jan 28 18:56:49 crc kubenswrapper[4721]: I0128 18:56:49.441920 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-785d8bcb8c-v4g9c" event={"ID":"adfa1a56-6e36-42b6-86e6-1cf51f6e49cb","Type":"ContainerDied","Data":"4df671ac7e52a9f8bce8f04593ba56480faf8889a47763d7399ba878b92a30d7"} Jan 28 18:56:49 crc kubenswrapper[4721]: I0128 18:56:49.441957 4721 scope.go:117] "RemoveContainer" containerID="4df671ac7e52a9f8bce8f04593ba56480faf8889a47763d7399ba878b92a30d7" Jan 28 18:56:49 crc kubenswrapper[4721]: I0128 18:56:49.442320 4721 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cloudkitty-proc-0" podUID="cb2f69be-cd3d-44ef-80af-f0d4ac766305" containerName="cloudkitty-proc" containerID="cri-o://fc97855be9f3322d44117263da8d55d7294cfb41dabcb8eb60f4df3eb7542228" gracePeriod=30 Jan 28 18:56:49 crc kubenswrapper[4721]: I0128 18:56:49.472297 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cloudkitty-api-0" podStartSLOduration=2.472268611 podStartE2EDuration="2.472268611s" podCreationTimestamp="2026-01-28 18:56:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:56:49.462055845 +0000 UTC m=+1375.187361405" watchObservedRunningTime="2026-01-28 18:56:49.472268611 +0000 UTC m=+1375.197574171" Jan 28 18:56:49 crc kubenswrapper[4721]: I0128 18:56:49.518284 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/adfa1a56-6e36-42b6-86e6-1cf51f6e49cb-dns-swift-storage-0\") pod \"adfa1a56-6e36-42b6-86e6-1cf51f6e49cb\" (UID: \"adfa1a56-6e36-42b6-86e6-1cf51f6e49cb\") " Jan 28 18:56:49 crc kubenswrapper[4721]: I0128 18:56:49.518736 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kwmdj\" (UniqueName: 
\"kubernetes.io/projected/adfa1a56-6e36-42b6-86e6-1cf51f6e49cb-kube-api-access-kwmdj\") pod \"adfa1a56-6e36-42b6-86e6-1cf51f6e49cb\" (UID: \"adfa1a56-6e36-42b6-86e6-1cf51f6e49cb\") " Jan 28 18:56:49 crc kubenswrapper[4721]: I0128 18:56:49.518928 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/adfa1a56-6e36-42b6-86e6-1cf51f6e49cb-ovsdbserver-sb\") pod \"adfa1a56-6e36-42b6-86e6-1cf51f6e49cb\" (UID: \"adfa1a56-6e36-42b6-86e6-1cf51f6e49cb\") " Jan 28 18:56:49 crc kubenswrapper[4721]: I0128 18:56:49.518967 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/adfa1a56-6e36-42b6-86e6-1cf51f6e49cb-ovsdbserver-nb\") pod \"adfa1a56-6e36-42b6-86e6-1cf51f6e49cb\" (UID: \"adfa1a56-6e36-42b6-86e6-1cf51f6e49cb\") " Jan 28 18:56:49 crc kubenswrapper[4721]: I0128 18:56:49.519020 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/adfa1a56-6e36-42b6-86e6-1cf51f6e49cb-dns-svc\") pod \"adfa1a56-6e36-42b6-86e6-1cf51f6e49cb\" (UID: \"adfa1a56-6e36-42b6-86e6-1cf51f6e49cb\") " Jan 28 18:56:49 crc kubenswrapper[4721]: I0128 18:56:49.519067 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/adfa1a56-6e36-42b6-86e6-1cf51f6e49cb-config\") pod \"adfa1a56-6e36-42b6-86e6-1cf51f6e49cb\" (UID: \"adfa1a56-6e36-42b6-86e6-1cf51f6e49cb\") " Jan 28 18:56:49 crc kubenswrapper[4721]: I0128 18:56:49.638187 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/adfa1a56-6e36-42b6-86e6-1cf51f6e49cb-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "adfa1a56-6e36-42b6-86e6-1cf51f6e49cb" (UID: "adfa1a56-6e36-42b6-86e6-1cf51f6e49cb"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:56:49 crc kubenswrapper[4721]: I0128 18:56:49.638380 4721 scope.go:117] "RemoveContainer" containerID="3eddc61c685a404ffae2dcd467a01e5d493fda0ccd3d751df6e1bbcf5e264670" Jan 28 18:56:49 crc kubenswrapper[4721]: I0128 18:56:49.645445 4721 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/adfa1a56-6e36-42b6-86e6-1cf51f6e49cb-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 28 18:56:49 crc kubenswrapper[4721]: I0128 18:56:49.664708 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/adfa1a56-6e36-42b6-86e6-1cf51f6e49cb-kube-api-access-kwmdj" (OuterVolumeSpecName: "kube-api-access-kwmdj") pod "adfa1a56-6e36-42b6-86e6-1cf51f6e49cb" (UID: "adfa1a56-6e36-42b6-86e6-1cf51f6e49cb"). InnerVolumeSpecName "kube-api-access-kwmdj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:56:49 crc kubenswrapper[4721]: I0128 18:56:49.667710 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/adfa1a56-6e36-42b6-86e6-1cf51f6e49cb-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "adfa1a56-6e36-42b6-86e6-1cf51f6e49cb" (UID: "adfa1a56-6e36-42b6-86e6-1cf51f6e49cb"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:56:49 crc kubenswrapper[4721]: I0128 18:56:49.731938 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/adfa1a56-6e36-42b6-86e6-1cf51f6e49cb-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "adfa1a56-6e36-42b6-86e6-1cf51f6e49cb" (UID: "adfa1a56-6e36-42b6-86e6-1cf51f6e49cb"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:56:49 crc kubenswrapper[4721]: I0128 18:56:49.747243 4721 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/adfa1a56-6e36-42b6-86e6-1cf51f6e49cb-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 28 18:56:49 crc kubenswrapper[4721]: I0128 18:56:49.747275 4721 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/adfa1a56-6e36-42b6-86e6-1cf51f6e49cb-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 28 18:56:49 crc kubenswrapper[4721]: I0128 18:56:49.747287 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kwmdj\" (UniqueName: \"kubernetes.io/projected/adfa1a56-6e36-42b6-86e6-1cf51f6e49cb-kube-api-access-kwmdj\") on node \"crc\" DevicePath \"\"" Jan 28 18:56:49 crc kubenswrapper[4721]: I0128 18:56:49.764417 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/adfa1a56-6e36-42b6-86e6-1cf51f6e49cb-config" (OuterVolumeSpecName: "config") pod "adfa1a56-6e36-42b6-86e6-1cf51f6e49cb" (UID: "adfa1a56-6e36-42b6-86e6-1cf51f6e49cb"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:56:49 crc kubenswrapper[4721]: I0128 18:56:49.828555 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/adfa1a56-6e36-42b6-86e6-1cf51f6e49cb-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "adfa1a56-6e36-42b6-86e6-1cf51f6e49cb" (UID: "adfa1a56-6e36-42b6-86e6-1cf51f6e49cb"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:56:49 crc kubenswrapper[4721]: I0128 18:56:49.849904 4721 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/adfa1a56-6e36-42b6-86e6-1cf51f6e49cb-config\") on node \"crc\" DevicePath \"\"" Jan 28 18:56:49 crc kubenswrapper[4721]: I0128 18:56:49.849946 4721 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/adfa1a56-6e36-42b6-86e6-1cf51f6e49cb-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 28 18:56:50 crc kubenswrapper[4721]: I0128 18:56:50.117268 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-v4g9c"] Jan 28 18:56:50 crc kubenswrapper[4721]: I0128 18:56:50.142199 4721 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-v4g9c"] Jan 28 18:56:51 crc kubenswrapper[4721]: I0128 18:56:51.546990 4721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="adfa1a56-6e36-42b6-86e6-1cf51f6e49cb" path="/var/lib/kubelet/pods/adfa1a56-6e36-42b6-86e6-1cf51f6e49cb/volumes" Jan 28 18:56:52 crc kubenswrapper[4721]: I0128 18:56:52.429511 4721 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-84bf7c754-8m5d5" Jan 28 18:56:52 crc kubenswrapper[4721]: I0128 18:56:52.527951 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-js787\" (UniqueName: \"kubernetes.io/projected/15e79c89-d076-4174-b7d4-87295d74b71d-kube-api-access-js787\") pod \"15e79c89-d076-4174-b7d4-87295d74b71d\" (UID: \"15e79c89-d076-4174-b7d4-87295d74b71d\") " Jan 28 18:56:52 crc kubenswrapper[4721]: I0128 18:56:52.528034 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/15e79c89-d076-4174-b7d4-87295d74b71d-config-data\") pod \"15e79c89-d076-4174-b7d4-87295d74b71d\" (UID: \"15e79c89-d076-4174-b7d4-87295d74b71d\") " Jan 28 18:56:52 crc kubenswrapper[4721]: I0128 18:56:52.528088 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/15e79c89-d076-4174-b7d4-87295d74b71d-logs\") pod \"15e79c89-d076-4174-b7d4-87295d74b71d\" (UID: \"15e79c89-d076-4174-b7d4-87295d74b71d\") " Jan 28 18:56:52 crc kubenswrapper[4721]: I0128 18:56:52.528242 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/15e79c89-d076-4174-b7d4-87295d74b71d-combined-ca-bundle\") pod \"15e79c89-d076-4174-b7d4-87295d74b71d\" (UID: \"15e79c89-d076-4174-b7d4-87295d74b71d\") " Jan 28 18:56:52 crc kubenswrapper[4721]: I0128 18:56:52.528422 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/15e79c89-d076-4174-b7d4-87295d74b71d-config-data-custom\") pod \"15e79c89-d076-4174-b7d4-87295d74b71d\" (UID: \"15e79c89-d076-4174-b7d4-87295d74b71d\") " Jan 28 18:56:52 crc kubenswrapper[4721]: I0128 18:56:52.530989 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/15e79c89-d076-4174-b7d4-87295d74b71d-logs" (OuterVolumeSpecName: "logs") pod "15e79c89-d076-4174-b7d4-87295d74b71d" (UID: "15e79c89-d076-4174-b7d4-87295d74b71d"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 18:56:52 crc kubenswrapper[4721]: I0128 18:56:52.540896 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/15e79c89-d076-4174-b7d4-87295d74b71d-kube-api-access-js787" (OuterVolumeSpecName: "kube-api-access-js787") pod "15e79c89-d076-4174-b7d4-87295d74b71d" (UID: "15e79c89-d076-4174-b7d4-87295d74b71d"). InnerVolumeSpecName "kube-api-access-js787". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:56:52 crc kubenswrapper[4721]: I0128 18:56:52.541014 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/15e79c89-d076-4174-b7d4-87295d74b71d-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "15e79c89-d076-4174-b7d4-87295d74b71d" (UID: "15e79c89-d076-4174-b7d4-87295d74b71d"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:56:52 crc kubenswrapper[4721]: I0128 18:56:52.542577 4721 generic.go:334] "Generic (PLEG): container finished" podID="29653552-40e5-4d60-9284-a92f22c88681" containerID="e9c945db0336ce5c630a82ac18e793d400662138c72e7bd7ebdfa73de14bec08" exitCode=0 Jan 28 18:56:52 crc kubenswrapper[4721]: I0128 18:56:52.542707 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"29653552-40e5-4d60-9284-a92f22c88681","Type":"ContainerDied","Data":"e9c945db0336ce5c630a82ac18e793d400662138c72e7bd7ebdfa73de14bec08"} Jan 28 18:56:52 crc kubenswrapper[4721]: I0128 18:56:52.557438 4721 generic.go:334] "Generic (PLEG): container finished" podID="15e79c89-d076-4174-b7d4-87295d74b71d" containerID="c405dbf6afef0c49decdf9416983345e651a2e549be33edc83182725295a6f18" exitCode=0 Jan 28 18:56:52 crc kubenswrapper[4721]: I0128 18:56:52.557518 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-84bf7c754-8m5d5" event={"ID":"15e79c89-d076-4174-b7d4-87295d74b71d","Type":"ContainerDied","Data":"c405dbf6afef0c49decdf9416983345e651a2e549be33edc83182725295a6f18"} Jan 28 18:56:52 crc kubenswrapper[4721]: I0128 18:56:52.557556 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-84bf7c754-8m5d5" event={"ID":"15e79c89-d076-4174-b7d4-87295d74b71d","Type":"ContainerDied","Data":"52483d38a0e137efe241734b4208c788c740429b6d82f8739f9b34d33dd9ec84"} Jan 28 18:56:52 crc kubenswrapper[4721]: I0128 18:56:52.557603 4721 scope.go:117] "RemoveContainer" containerID="c405dbf6afef0c49decdf9416983345e651a2e549be33edc83182725295a6f18" Jan 28 18:56:52 crc kubenswrapper[4721]: I0128 18:56:52.557800 4721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-84bf7c754-8m5d5" Jan 28 18:56:52 crc kubenswrapper[4721]: I0128 18:56:52.564108 4721 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/neutron-b6d5f477b-md9n5" podUID="598a7e6f-da5f-4dc3-be56-0dc9b6b13ad5" containerName="neutron-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Jan 28 18:56:52 crc kubenswrapper[4721]: I0128 18:56:52.567350 4721 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/neutron-b6d5f477b-md9n5" podUID="598a7e6f-da5f-4dc3-be56-0dc9b6b13ad5" containerName="neutron-api" probeResult="failure" output="HTTP probe failed with statuscode: 503" Jan 28 18:56:52 crc kubenswrapper[4721]: I0128 18:56:52.567393 4721 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/neutron-b6d5f477b-md9n5" podUID="598a7e6f-da5f-4dc3-be56-0dc9b6b13ad5" containerName="neutron-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Jan 28 18:56:52 crc kubenswrapper[4721]: I0128 18:56:52.599428 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/15e79c89-d076-4174-b7d4-87295d74b71d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "15e79c89-d076-4174-b7d4-87295d74b71d" (UID: "15e79c89-d076-4174-b7d4-87295d74b71d"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:56:52 crc kubenswrapper[4721]: I0128 18:56:52.633077 4721 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/15e79c89-d076-4174-b7d4-87295d74b71d-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 28 18:56:52 crc kubenswrapper[4721]: I0128 18:56:52.633115 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-js787\" (UniqueName: \"kubernetes.io/projected/15e79c89-d076-4174-b7d4-87295d74b71d-kube-api-access-js787\") on node \"crc\" DevicePath \"\"" Jan 28 18:56:52 crc kubenswrapper[4721]: I0128 18:56:52.633128 4721 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/15e79c89-d076-4174-b7d4-87295d74b71d-logs\") on node \"crc\" DevicePath \"\"" Jan 28 18:56:52 crc kubenswrapper[4721]: I0128 18:56:52.633156 4721 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/15e79c89-d076-4174-b7d4-87295d74b71d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 18:56:52 crc kubenswrapper[4721]: I0128 18:56:52.660343 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/15e79c89-d076-4174-b7d4-87295d74b71d-config-data" (OuterVolumeSpecName: "config-data") pod "15e79c89-d076-4174-b7d4-87295d74b71d" (UID: "15e79c89-d076-4174-b7d4-87295d74b71d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:56:52 crc kubenswrapper[4721]: I0128 18:56:52.746056 4721 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/15e79c89-d076-4174-b7d4-87295d74b71d-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 18:56:52 crc kubenswrapper[4721]: I0128 18:56:52.778894 4721 scope.go:117] "RemoveContainer" containerID="bf050b864c7b9e690de7d51e872e6944e833bb5d3288d6960527d23f52c00aff" Jan 28 18:56:52 crc kubenswrapper[4721]: I0128 18:56:52.831351 4721 scope.go:117] "RemoveContainer" containerID="c405dbf6afef0c49decdf9416983345e651a2e549be33edc83182725295a6f18" Jan 28 18:56:52 crc kubenswrapper[4721]: E0128 18:56:52.846749 4721 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c405dbf6afef0c49decdf9416983345e651a2e549be33edc83182725295a6f18\": container with ID starting with c405dbf6afef0c49decdf9416983345e651a2e549be33edc83182725295a6f18 not found: ID does not exist" containerID="c405dbf6afef0c49decdf9416983345e651a2e549be33edc83182725295a6f18" Jan 28 18:56:52 crc kubenswrapper[4721]: I0128 18:56:52.846806 4721 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c405dbf6afef0c49decdf9416983345e651a2e549be33edc83182725295a6f18"} err="failed to get container status \"c405dbf6afef0c49decdf9416983345e651a2e549be33edc83182725295a6f18\": rpc error: code = NotFound desc = could not find container \"c405dbf6afef0c49decdf9416983345e651a2e549be33edc83182725295a6f18\": container with ID starting with c405dbf6afef0c49decdf9416983345e651a2e549be33edc83182725295a6f18 not found: ID does not exist" Jan 28 18:56:52 crc kubenswrapper[4721]: I0128 18:56:52.846833 4721 scope.go:117] "RemoveContainer" containerID="bf050b864c7b9e690de7d51e872e6944e833bb5d3288d6960527d23f52c00aff" Jan 28 18:56:52 crc kubenswrapper[4721]: E0128 18:56:52.858361 4721 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: 
code = NotFound desc = could not find container \"bf050b864c7b9e690de7d51e872e6944e833bb5d3288d6960527d23f52c00aff\": container with ID starting with bf050b864c7b9e690de7d51e872e6944e833bb5d3288d6960527d23f52c00aff not found: ID does not exist" containerID="bf050b864c7b9e690de7d51e872e6944e833bb5d3288d6960527d23f52c00aff" Jan 28 18:56:52 crc kubenswrapper[4721]: I0128 18:56:52.858667 4721 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bf050b864c7b9e690de7d51e872e6944e833bb5d3288d6960527d23f52c00aff"} err="failed to get container status \"bf050b864c7b9e690de7d51e872e6944e833bb5d3288d6960527d23f52c00aff\": rpc error: code = NotFound desc = could not find container \"bf050b864c7b9e690de7d51e872e6944e833bb5d3288d6960527d23f52c00aff\": container with ID starting with bf050b864c7b9e690de7d51e872e6944e833bb5d3288d6960527d23f52c00aff not found: ID does not exist" Jan 28 18:56:53 crc kubenswrapper[4721]: I0128 18:56:53.003654 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-84bf7c754-8m5d5"] Jan 28 18:56:53 crc kubenswrapper[4721]: I0128 18:56:53.020977 4721 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-84bf7c754-8m5d5"] Jan 28 18:56:53 crc kubenswrapper[4721]: I0128 18:56:53.155090 4721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 28 18:56:53 crc kubenswrapper[4721]: I0128 18:56:53.273476 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/29653552-40e5-4d60-9284-a92f22c88681-config-data-custom\") pod \"29653552-40e5-4d60-9284-a92f22c88681\" (UID: \"29653552-40e5-4d60-9284-a92f22c88681\") " Jan 28 18:56:53 crc kubenswrapper[4721]: I0128 18:56:53.273597 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/29653552-40e5-4d60-9284-a92f22c88681-etc-machine-id\") pod \"29653552-40e5-4d60-9284-a92f22c88681\" (UID: \"29653552-40e5-4d60-9284-a92f22c88681\") " Jan 28 18:56:53 crc kubenswrapper[4721]: I0128 18:56:53.273666 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2bgc4\" (UniqueName: \"kubernetes.io/projected/29653552-40e5-4d60-9284-a92f22c88681-kube-api-access-2bgc4\") pod \"29653552-40e5-4d60-9284-a92f22c88681\" (UID: \"29653552-40e5-4d60-9284-a92f22c88681\") " Jan 28 18:56:53 crc kubenswrapper[4721]: I0128 18:56:53.273696 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/29653552-40e5-4d60-9284-a92f22c88681-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "29653552-40e5-4d60-9284-a92f22c88681" (UID: "29653552-40e5-4d60-9284-a92f22c88681"). InnerVolumeSpecName "etc-machine-id". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 18:56:53 crc kubenswrapper[4721]: I0128 18:56:53.273713 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/29653552-40e5-4d60-9284-a92f22c88681-config-data\") pod \"29653552-40e5-4d60-9284-a92f22c88681\" (UID: \"29653552-40e5-4d60-9284-a92f22c88681\") " Jan 28 18:56:53 crc kubenswrapper[4721]: I0128 18:56:53.273752 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/29653552-40e5-4d60-9284-a92f22c88681-combined-ca-bundle\") pod \"29653552-40e5-4d60-9284-a92f22c88681\" (UID: \"29653552-40e5-4d60-9284-a92f22c88681\") " Jan 28 18:56:53 crc kubenswrapper[4721]: I0128 18:56:53.273862 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/29653552-40e5-4d60-9284-a92f22c88681-scripts\") pod \"29653552-40e5-4d60-9284-a92f22c88681\" (UID: \"29653552-40e5-4d60-9284-a92f22c88681\") " Jan 28 18:56:53 crc kubenswrapper[4721]: I0128 18:56:53.274697 4721 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/29653552-40e5-4d60-9284-a92f22c88681-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 28 18:56:53 crc kubenswrapper[4721]: I0128 18:56:53.283363 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/29653552-40e5-4d60-9284-a92f22c88681-scripts" (OuterVolumeSpecName: "scripts") pod "29653552-40e5-4d60-9284-a92f22c88681" (UID: "29653552-40e5-4d60-9284-a92f22c88681"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:56:53 crc kubenswrapper[4721]: I0128 18:56:53.288330 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/29653552-40e5-4d60-9284-a92f22c88681-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "29653552-40e5-4d60-9284-a92f22c88681" (UID: "29653552-40e5-4d60-9284-a92f22c88681"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:56:53 crc kubenswrapper[4721]: I0128 18:56:53.297008 4721 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/cinder-api-0" podUID="a5090535-3282-4e69-988d-be91fd8908a2" containerName="cinder-api" probeResult="failure" output="Get \"https://10.217.0.191:8776/healthcheck\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 28 18:56:53 crc kubenswrapper[4721]: I0128 18:56:53.319608 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/29653552-40e5-4d60-9284-a92f22c88681-kube-api-access-2bgc4" (OuterVolumeSpecName: "kube-api-access-2bgc4") pod "29653552-40e5-4d60-9284-a92f22c88681" (UID: "29653552-40e5-4d60-9284-a92f22c88681"). InnerVolumeSpecName "kube-api-access-2bgc4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:56:53 crc kubenswrapper[4721]: I0128 18:56:53.377513 4721 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/29653552-40e5-4d60-9284-a92f22c88681-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 28 18:56:53 crc kubenswrapper[4721]: I0128 18:56:53.377553 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2bgc4\" (UniqueName: \"kubernetes.io/projected/29653552-40e5-4d60-9284-a92f22c88681-kube-api-access-2bgc4\") on node \"crc\" DevicePath \"\"" Jan 28 18:56:53 crc kubenswrapper[4721]: I0128 18:56:53.377571 4721 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/29653552-40e5-4d60-9284-a92f22c88681-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 18:56:53 crc kubenswrapper[4721]: I0128 18:56:53.383887 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/29653552-40e5-4d60-9284-a92f22c88681-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "29653552-40e5-4d60-9284-a92f22c88681" (UID: "29653552-40e5-4d60-9284-a92f22c88681"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:56:53 crc kubenswrapper[4721]: I0128 18:56:53.460822 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/29653552-40e5-4d60-9284-a92f22c88681-config-data" (OuterVolumeSpecName: "config-data") pod "29653552-40e5-4d60-9284-a92f22c88681" (UID: "29653552-40e5-4d60-9284-a92f22c88681"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:56:53 crc kubenswrapper[4721]: I0128 18:56:53.479246 4721 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/29653552-40e5-4d60-9284-a92f22c88681-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 18:56:53 crc kubenswrapper[4721]: I0128 18:56:53.479286 4721 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/29653552-40e5-4d60-9284-a92f22c88681-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 18:56:53 crc kubenswrapper[4721]: I0128 18:56:53.550588 4721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="15e79c89-d076-4174-b7d4-87295d74b71d" path="/var/lib/kubelet/pods/15e79c89-d076-4174-b7d4-87295d74b71d/volumes" Jan 28 18:56:53 crc kubenswrapper[4721]: I0128 18:56:53.584409 4721 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 28 18:56:53 crc kubenswrapper[4721]: I0128 18:56:53.584439 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"29653552-40e5-4d60-9284-a92f22c88681","Type":"ContainerDied","Data":"a464c2119b4462a9e5311541d6c1f8b31b74c42bbc44ca58fab6a6368aca29f7"} Jan 28 18:56:53 crc kubenswrapper[4721]: I0128 18:56:53.584523 4721 scope.go:117] "RemoveContainer" containerID="629bcc53a14f80007a8e58b3c042d7d177f9eb13b91e41b555b1c7436e637172" Jan 28 18:56:53 crc kubenswrapper[4721]: I0128 18:56:53.618614 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 28 18:56:53 crc kubenswrapper[4721]: I0128 18:56:53.641050 4721 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 28 18:56:53 crc kubenswrapper[4721]: I0128 18:56:53.647385 4721 scope.go:117] "RemoveContainer" containerID="e9c945db0336ce5c630a82ac18e793d400662138c72e7bd7ebdfa73de14bec08" Jan 28 18:56:53 crc kubenswrapper[4721]: I0128 18:56:53.658910 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Jan 28 18:56:53 crc kubenswrapper[4721]: E0128 18:56:53.659488 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="adfa1a56-6e36-42b6-86e6-1cf51f6e49cb" containerName="dnsmasq-dns" Jan 28 18:56:53 crc kubenswrapper[4721]: I0128 18:56:53.659833 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="adfa1a56-6e36-42b6-86e6-1cf51f6e49cb" containerName="dnsmasq-dns" Jan 28 18:56:53 crc kubenswrapper[4721]: E0128 18:56:53.659857 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="15e79c89-d076-4174-b7d4-87295d74b71d" containerName="barbican-api" Jan 28 18:56:53 crc kubenswrapper[4721]: I0128 18:56:53.659867 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="15e79c89-d076-4174-b7d4-87295d74b71d" containerName="barbican-api" Jan 28 18:56:53 crc kubenswrapper[4721]: E0128 18:56:53.659881 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="15e79c89-d076-4174-b7d4-87295d74b71d" containerName="barbican-api-log" Jan 28 18:56:53 crc kubenswrapper[4721]: I0128 18:56:53.659890 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="15e79c89-d076-4174-b7d4-87295d74b71d" containerName="barbican-api-log" Jan 28 18:56:53 crc kubenswrapper[4721]: E0128 18:56:53.659925 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="adfa1a56-6e36-42b6-86e6-1cf51f6e49cb" containerName="init" Jan 28 18:56:53 crc kubenswrapper[4721]: I0128 18:56:53.659933 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="adfa1a56-6e36-42b6-86e6-1cf51f6e49cb" containerName="init" Jan 28 18:56:53 crc kubenswrapper[4721]: E0128 18:56:53.659954 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="29653552-40e5-4d60-9284-a92f22c88681" containerName="cinder-scheduler" Jan 28 18:56:53 crc kubenswrapper[4721]: I0128 18:56:53.659962 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="29653552-40e5-4d60-9284-a92f22c88681" containerName="cinder-scheduler" Jan 28 18:56:53 crc kubenswrapper[4721]: E0128 18:56:53.659975 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="29653552-40e5-4d60-9284-a92f22c88681" containerName="probe" Jan 28 18:56:53 crc kubenswrapper[4721]: I0128 18:56:53.659985 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="29653552-40e5-4d60-9284-a92f22c88681" containerName="probe" Jan 28 18:56:53 crc kubenswrapper[4721]: 
I0128 18:56:53.660266 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="29653552-40e5-4d60-9284-a92f22c88681" containerName="cinder-scheduler" Jan 28 18:56:53 crc kubenswrapper[4721]: I0128 18:56:53.660303 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="15e79c89-d076-4174-b7d4-87295d74b71d" containerName="barbican-api" Jan 28 18:56:53 crc kubenswrapper[4721]: I0128 18:56:53.660317 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="adfa1a56-6e36-42b6-86e6-1cf51f6e49cb" containerName="dnsmasq-dns" Jan 28 18:56:53 crc kubenswrapper[4721]: I0128 18:56:53.660336 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="29653552-40e5-4d60-9284-a92f22c88681" containerName="probe" Jan 28 18:56:53 crc kubenswrapper[4721]: I0128 18:56:53.660352 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="15e79c89-d076-4174-b7d4-87295d74b71d" containerName="barbican-api-log" Jan 28 18:56:53 crc kubenswrapper[4721]: I0128 18:56:53.673692 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 28 18:56:53 crc kubenswrapper[4721]: I0128 18:56:53.685336 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 28 18:56:53 crc kubenswrapper[4721]: I0128 18:56:53.689594 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Jan 28 18:56:53 crc kubenswrapper[4721]: I0128 18:56:53.794408 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a3d49781-0039-466d-b00e-1d7f28598b88-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"a3d49781-0039-466d-b00e-1d7f28598b88\") " pod="openstack/cinder-scheduler-0" Jan 28 18:56:53 crc kubenswrapper[4721]: I0128 18:56:53.794516 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a3d49781-0039-466d-b00e-1d7f28598b88-config-data\") pod \"cinder-scheduler-0\" (UID: \"a3d49781-0039-466d-b00e-1d7f28598b88\") " pod="openstack/cinder-scheduler-0" Jan 28 18:56:53 crc kubenswrapper[4721]: I0128 18:56:53.794734 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wht8b\" (UniqueName: \"kubernetes.io/projected/a3d49781-0039-466d-b00e-1d7f28598b88-kube-api-access-wht8b\") pod \"cinder-scheduler-0\" (UID: \"a3d49781-0039-466d-b00e-1d7f28598b88\") " pod="openstack/cinder-scheduler-0" Jan 28 18:56:53 crc kubenswrapper[4721]: I0128 18:56:53.794841 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a3d49781-0039-466d-b00e-1d7f28598b88-scripts\") pod \"cinder-scheduler-0\" (UID: \"a3d49781-0039-466d-b00e-1d7f28598b88\") " pod="openstack/cinder-scheduler-0" Jan 28 18:56:53 crc kubenswrapper[4721]: I0128 18:56:53.794889 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a3d49781-0039-466d-b00e-1d7f28598b88-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"a3d49781-0039-466d-b00e-1d7f28598b88\") " pod="openstack/cinder-scheduler-0" Jan 28 18:56:53 crc kubenswrapper[4721]: I0128 18:56:53.794949 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a3d49781-0039-466d-b00e-1d7f28598b88-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"a3d49781-0039-466d-b00e-1d7f28598b88\") " pod="openstack/cinder-scheduler-0" Jan 28 18:56:53 crc kubenswrapper[4721]: I0128 18:56:53.897776 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a3d49781-0039-466d-b00e-1d7f28598b88-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"a3d49781-0039-466d-b00e-1d7f28598b88\") " pod="openstack/cinder-scheduler-0" Jan 28 18:56:53 crc kubenswrapper[4721]: I0128 18:56:53.898228 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a3d49781-0039-466d-b00e-1d7f28598b88-config-data\") pod \"cinder-scheduler-0\" (UID: \"a3d49781-0039-466d-b00e-1d7f28598b88\") " pod="openstack/cinder-scheduler-0" Jan 28 18:56:53 crc kubenswrapper[4721]: I0128 18:56:53.898507 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wht8b\" (UniqueName: \"kubernetes.io/projected/a3d49781-0039-466d-b00e-1d7f28598b88-kube-api-access-wht8b\") pod \"cinder-scheduler-0\" (UID: \"a3d49781-0039-466d-b00e-1d7f28598b88\") " pod="openstack/cinder-scheduler-0" Jan 28 18:56:53 crc kubenswrapper[4721]: I0128 18:56:53.898715 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a3d49781-0039-466d-b00e-1d7f28598b88-scripts\") pod \"cinder-scheduler-0\" (UID: \"a3d49781-0039-466d-b00e-1d7f28598b88\") " pod="openstack/cinder-scheduler-0" Jan 28 18:56:53 crc kubenswrapper[4721]: I0128 18:56:53.898819 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a3d49781-0039-466d-b00e-1d7f28598b88-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"a3d49781-0039-466d-b00e-1d7f28598b88\") " pod="openstack/cinder-scheduler-0" Jan 28 18:56:53 crc kubenswrapper[4721]: I0128 18:56:53.898939 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a3d49781-0039-466d-b00e-1d7f28598b88-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"a3d49781-0039-466d-b00e-1d7f28598b88\") " pod="openstack/cinder-scheduler-0" Jan 28 18:56:53 crc kubenswrapper[4721]: I0128 18:56:53.899211 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a3d49781-0039-466d-b00e-1d7f28598b88-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"a3d49781-0039-466d-b00e-1d7f28598b88\") " pod="openstack/cinder-scheduler-0" Jan 28 18:56:53 crc kubenswrapper[4721]: I0128 18:56:53.910731 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a3d49781-0039-466d-b00e-1d7f28598b88-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"a3d49781-0039-466d-b00e-1d7f28598b88\") " pod="openstack/cinder-scheduler-0" Jan 28 18:56:53 crc kubenswrapper[4721]: I0128 18:56:53.915404 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a3d49781-0039-466d-b00e-1d7f28598b88-config-data\") pod \"cinder-scheduler-0\" (UID: \"a3d49781-0039-466d-b00e-1d7f28598b88\") " pod="openstack/cinder-scheduler-0" Jan 28 18:56:53 crc 
kubenswrapper[4721]: I0128 18:56:53.917751 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a3d49781-0039-466d-b00e-1d7f28598b88-scripts\") pod \"cinder-scheduler-0\" (UID: \"a3d49781-0039-466d-b00e-1d7f28598b88\") " pod="openstack/cinder-scheduler-0" Jan 28 18:56:53 crc kubenswrapper[4721]: I0128 18:56:53.921897 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a3d49781-0039-466d-b00e-1d7f28598b88-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"a3d49781-0039-466d-b00e-1d7f28598b88\") " pod="openstack/cinder-scheduler-0" Jan 28 18:56:53 crc kubenswrapper[4721]: I0128 18:56:53.926952 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wht8b\" (UniqueName: \"kubernetes.io/projected/a3d49781-0039-466d-b00e-1d7f28598b88-kube-api-access-wht8b\") pod \"cinder-scheduler-0\" (UID: \"a3d49781-0039-466d-b00e-1d7f28598b88\") " pod="openstack/cinder-scheduler-0" Jan 28 18:56:54 crc kubenswrapper[4721]: I0128 18:56:54.011586 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 28 18:56:54 crc kubenswrapper[4721]: I0128 18:56:54.276457 4721 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cinder-api-0" podUID="a5090535-3282-4e69-988d-be91fd8908a2" containerName="cinder-api" probeResult="failure" output="Get \"https://10.217.0.191:8776/healthcheck\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 28 18:56:54 crc kubenswrapper[4721]: I0128 18:56:54.631388 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 28 18:56:54 crc kubenswrapper[4721]: W0128 18:56:54.636235 4721 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda3d49781_0039_466d_b00e_1d7f28598b88.slice/crio-691f874c6771143ed5c3920cf717365f977e6bbf67b242385cf558e2223f0a5b WatchSource:0}: Error finding container 691f874c6771143ed5c3920cf717365f977e6bbf67b242385cf558e2223f0a5b: Status 404 returned error can't find the container with id 691f874c6771143ed5c3920cf717365f977e6bbf67b242385cf558e2223f0a5b Jan 28 18:56:54 crc kubenswrapper[4721]: I0128 18:56:54.804590 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-787c88cc7-8262p" Jan 28 18:56:54 crc kubenswrapper[4721]: I0128 18:56:54.984589 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-b6d5f477b-md9n5"] Jan 28 18:56:54 crc kubenswrapper[4721]: I0128 18:56:54.984874 4721 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-b6d5f477b-md9n5" podUID="598a7e6f-da5f-4dc3-be56-0dc9b6b13ad5" containerName="neutron-api" containerID="cri-o://85136f351f7316d90a281940791aedf2fcda0c293454509ecff435d6368579b7" gracePeriod=30 Jan 28 18:56:54 crc kubenswrapper[4721]: I0128 18:56:54.985768 4721 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-b6d5f477b-md9n5" podUID="598a7e6f-da5f-4dc3-be56-0dc9b6b13ad5" containerName="neutron-httpd" containerID="cri-o://97de22f49da0c15672d79bca2d1dc8c0c67082833d1f3351a7216fbf4b417f7a" gracePeriod=30 Jan 28 18:56:55 crc kubenswrapper[4721]: I0128 18:56:55.204499 4721 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/neutron-b6d5f477b-md9n5" 
podUID="598a7e6f-da5f-4dc3-be56-0dc9b6b13ad5" containerName="neutron-httpd" probeResult="failure" output="Get \"http://10.217.0.179:9696/\": EOF" Jan 28 18:56:55 crc kubenswrapper[4721]: I0128 18:56:55.399497 4721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-proc-0" Jan 28 18:56:55 crc kubenswrapper[4721]: I0128 18:56:55.504029 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cb2f69be-cd3d-44ef-80af-f0d4ac766305-combined-ca-bundle\") pod \"cb2f69be-cd3d-44ef-80af-f0d4ac766305\" (UID: \"cb2f69be-cd3d-44ef-80af-f0d4ac766305\") " Jan 28 18:56:55 crc kubenswrapper[4721]: I0128 18:56:55.504134 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cb2f69be-cd3d-44ef-80af-f0d4ac766305-scripts\") pod \"cb2f69be-cd3d-44ef-80af-f0d4ac766305\" (UID: \"cb2f69be-cd3d-44ef-80af-f0d4ac766305\") " Jan 28 18:56:55 crc kubenswrapper[4721]: I0128 18:56:55.504293 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cb2f69be-cd3d-44ef-80af-f0d4ac766305-config-data\") pod \"cb2f69be-cd3d-44ef-80af-f0d4ac766305\" (UID: \"cb2f69be-cd3d-44ef-80af-f0d4ac766305\") " Jan 28 18:56:55 crc kubenswrapper[4721]: I0128 18:56:55.504327 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/cb2f69be-cd3d-44ef-80af-f0d4ac766305-config-data-custom\") pod \"cb2f69be-cd3d-44ef-80af-f0d4ac766305\" (UID: \"cb2f69be-cd3d-44ef-80af-f0d4ac766305\") " Jan 28 18:56:55 crc kubenswrapper[4721]: I0128 18:56:55.504380 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d2njz\" (UniqueName: \"kubernetes.io/projected/cb2f69be-cd3d-44ef-80af-f0d4ac766305-kube-api-access-d2njz\") pod \"cb2f69be-cd3d-44ef-80af-f0d4ac766305\" (UID: \"cb2f69be-cd3d-44ef-80af-f0d4ac766305\") " Jan 28 18:56:55 crc kubenswrapper[4721]: I0128 18:56:55.504455 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/projected/cb2f69be-cd3d-44ef-80af-f0d4ac766305-certs\") pod \"cb2f69be-cd3d-44ef-80af-f0d4ac766305\" (UID: \"cb2f69be-cd3d-44ef-80af-f0d4ac766305\") " Jan 28 18:56:55 crc kubenswrapper[4721]: I0128 18:56:55.538978 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cb2f69be-cd3d-44ef-80af-f0d4ac766305-kube-api-access-d2njz" (OuterVolumeSpecName: "kube-api-access-d2njz") pod "cb2f69be-cd3d-44ef-80af-f0d4ac766305" (UID: "cb2f69be-cd3d-44ef-80af-f0d4ac766305"). InnerVolumeSpecName "kube-api-access-d2njz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:56:55 crc kubenswrapper[4721]: I0128 18:56:55.539459 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cb2f69be-cd3d-44ef-80af-f0d4ac766305-certs" (OuterVolumeSpecName: "certs") pod "cb2f69be-cd3d-44ef-80af-f0d4ac766305" (UID: "cb2f69be-cd3d-44ef-80af-f0d4ac766305"). InnerVolumeSpecName "certs". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:56:55 crc kubenswrapper[4721]: I0128 18:56:55.549633 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cb2f69be-cd3d-44ef-80af-f0d4ac766305-scripts" (OuterVolumeSpecName: "scripts") pod "cb2f69be-cd3d-44ef-80af-f0d4ac766305" (UID: "cb2f69be-cd3d-44ef-80af-f0d4ac766305"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:56:55 crc kubenswrapper[4721]: I0128 18:56:55.555916 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cb2f69be-cd3d-44ef-80af-f0d4ac766305-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "cb2f69be-cd3d-44ef-80af-f0d4ac766305" (UID: "cb2f69be-cd3d-44ef-80af-f0d4ac766305"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:56:55 crc kubenswrapper[4721]: I0128 18:56:55.587743 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cb2f69be-cd3d-44ef-80af-f0d4ac766305-config-data" (OuterVolumeSpecName: "config-data") pod "cb2f69be-cd3d-44ef-80af-f0d4ac766305" (UID: "cb2f69be-cd3d-44ef-80af-f0d4ac766305"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:56:55 crc kubenswrapper[4721]: I0128 18:56:55.627415 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cb2f69be-cd3d-44ef-80af-f0d4ac766305-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "cb2f69be-cd3d-44ef-80af-f0d4ac766305" (UID: "cb2f69be-cd3d-44ef-80af-f0d4ac766305"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:56:55 crc kubenswrapper[4721]: I0128 18:56:55.650192 4721 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cb2f69be-cd3d-44ef-80af-f0d4ac766305-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 18:56:55 crc kubenswrapper[4721]: I0128 18:56:55.650235 4721 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/cb2f69be-cd3d-44ef-80af-f0d4ac766305-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 28 18:56:55 crc kubenswrapper[4721]: I0128 18:56:55.650252 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d2njz\" (UniqueName: \"kubernetes.io/projected/cb2f69be-cd3d-44ef-80af-f0d4ac766305-kube-api-access-d2njz\") on node \"crc\" DevicePath \"\"" Jan 28 18:56:55 crc kubenswrapper[4721]: I0128 18:56:55.650271 4721 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/projected/cb2f69be-cd3d-44ef-80af-f0d4ac766305-certs\") on node \"crc\" DevicePath \"\"" Jan 28 18:56:55 crc kubenswrapper[4721]: I0128 18:56:55.650284 4721 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cb2f69be-cd3d-44ef-80af-f0d4ac766305-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 18:56:55 crc kubenswrapper[4721]: I0128 18:56:55.650294 4721 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cb2f69be-cd3d-44ef-80af-f0d4ac766305-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 18:56:55 crc kubenswrapper[4721]: I0128 18:56:55.653007 4721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="29653552-40e5-4d60-9284-a92f22c88681" path="/var/lib/kubelet/pods/29653552-40e5-4d60-9284-a92f22c88681/volumes" Jan 28 18:56:55 crc kubenswrapper[4721]: I0128 18:56:55.737000 4721 generic.go:334] "Generic (PLEG): container finished" podID="cb2f69be-cd3d-44ef-80af-f0d4ac766305" containerID="fc97855be9f3322d44117263da8d55d7294cfb41dabcb8eb60f4df3eb7542228" exitCode=0 Jan 28 18:56:55 crc kubenswrapper[4721]: I0128 18:56:55.737215 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-proc-0" event={"ID":"cb2f69be-cd3d-44ef-80af-f0d4ac766305","Type":"ContainerDied","Data":"fc97855be9f3322d44117263da8d55d7294cfb41dabcb8eb60f4df3eb7542228"} Jan 28 18:56:55 crc kubenswrapper[4721]: I0128 18:56:55.737292 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-proc-0" event={"ID":"cb2f69be-cd3d-44ef-80af-f0d4ac766305","Type":"ContainerDied","Data":"ad8be0c89b5c5a1b28fabb5b8e46e4e750e1da91be300fa1f2974bbf65437ac1"} Jan 28 18:56:55 crc kubenswrapper[4721]: I0128 18:56:55.737349 4721 scope.go:117] "RemoveContainer" containerID="fc97855be9f3322d44117263da8d55d7294cfb41dabcb8eb60f4df3eb7542228" Jan 28 18:56:55 crc kubenswrapper[4721]: I0128 18:56:55.737722 4721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-proc-0" Jan 28 18:56:55 crc kubenswrapper[4721]: I0128 18:56:55.744437 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"a3d49781-0039-466d-b00e-1d7f28598b88","Type":"ContainerStarted","Data":"691f874c6771143ed5c3920cf717365f977e6bbf67b242385cf558e2223f0a5b"} Jan 28 18:56:55 crc kubenswrapper[4721]: I0128 18:56:55.838249 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cloudkitty-proc-0"] Jan 28 18:56:55 crc kubenswrapper[4721]: I0128 18:56:55.868346 4721 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cloudkitty-proc-0"] Jan 28 18:56:55 crc kubenswrapper[4721]: I0128 18:56:55.894122 4721 scope.go:117] "RemoveContainer" containerID="fc97855be9f3322d44117263da8d55d7294cfb41dabcb8eb60f4df3eb7542228" Jan 28 18:56:55 crc kubenswrapper[4721]: E0128 18:56:55.897847 4721 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fc97855be9f3322d44117263da8d55d7294cfb41dabcb8eb60f4df3eb7542228\": container with ID starting with fc97855be9f3322d44117263da8d55d7294cfb41dabcb8eb60f4df3eb7542228 not found: ID does not exist" containerID="fc97855be9f3322d44117263da8d55d7294cfb41dabcb8eb60f4df3eb7542228" Jan 28 18:56:55 crc kubenswrapper[4721]: I0128 18:56:55.897904 4721 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fc97855be9f3322d44117263da8d55d7294cfb41dabcb8eb60f4df3eb7542228"} err="failed to get container status \"fc97855be9f3322d44117263da8d55d7294cfb41dabcb8eb60f4df3eb7542228\": rpc error: code = NotFound desc = could not find container \"fc97855be9f3322d44117263da8d55d7294cfb41dabcb8eb60f4df3eb7542228\": container with ID starting with fc97855be9f3322d44117263da8d55d7294cfb41dabcb8eb60f4df3eb7542228 not found: ID does not exist" Jan 28 18:56:55 crc kubenswrapper[4721]: I0128 18:56:55.904310 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cloudkitty-proc-0"] Jan 28 18:56:55 crc kubenswrapper[4721]: E0128 18:56:55.904987 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cb2f69be-cd3d-44ef-80af-f0d4ac766305" containerName="cloudkitty-proc" Jan 28 18:56:55 
crc kubenswrapper[4721]: I0128 18:56:55.905010 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="cb2f69be-cd3d-44ef-80af-f0d4ac766305" containerName="cloudkitty-proc" Jan 28 18:56:55 crc kubenswrapper[4721]: I0128 18:56:55.905251 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="cb2f69be-cd3d-44ef-80af-f0d4ac766305" containerName="cloudkitty-proc" Jan 28 18:56:55 crc kubenswrapper[4721]: I0128 18:56:55.906299 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-proc-0" Jan 28 18:56:55 crc kubenswrapper[4721]: I0128 18:56:55.909226 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-proc-config-data" Jan 28 18:56:55 crc kubenswrapper[4721]: I0128 18:56:55.941942 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-proc-0"] Jan 28 18:56:55 crc kubenswrapper[4721]: I0128 18:56:55.973520 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8f4cfc8a-e4d7-4579-b2cd-303abce60b03-config-data-custom\") pod \"cloudkitty-proc-0\" (UID: \"8f4cfc8a-e4d7-4579-b2cd-303abce60b03\") " pod="openstack/cloudkitty-proc-0" Jan 28 18:56:55 crc kubenswrapper[4721]: I0128 18:56:55.973636 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/projected/8f4cfc8a-e4d7-4579-b2cd-303abce60b03-certs\") pod \"cloudkitty-proc-0\" (UID: \"8f4cfc8a-e4d7-4579-b2cd-303abce60b03\") " pod="openstack/cloudkitty-proc-0" Jan 28 18:56:55 crc kubenswrapper[4721]: I0128 18:56:55.973711 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8f4cfc8a-e4d7-4579-b2cd-303abce60b03-config-data\") pod \"cloudkitty-proc-0\" (UID: \"8f4cfc8a-e4d7-4579-b2cd-303abce60b03\") " pod="openstack/cloudkitty-proc-0" Jan 28 18:56:55 crc kubenswrapper[4721]: I0128 18:56:55.974880 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f4cfc8a-e4d7-4579-b2cd-303abce60b03-combined-ca-bundle\") pod \"cloudkitty-proc-0\" (UID: \"8f4cfc8a-e4d7-4579-b2cd-303abce60b03\") " pod="openstack/cloudkitty-proc-0" Jan 28 18:56:55 crc kubenswrapper[4721]: I0128 18:56:55.975026 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8f4cfc8a-e4d7-4579-b2cd-303abce60b03-scripts\") pod \"cloudkitty-proc-0\" (UID: \"8f4cfc8a-e4d7-4579-b2cd-303abce60b03\") " pod="openstack/cloudkitty-proc-0" Jan 28 18:56:55 crc kubenswrapper[4721]: I0128 18:56:55.975076 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2k55n\" (UniqueName: \"kubernetes.io/projected/8f4cfc8a-e4d7-4579-b2cd-303abce60b03-kube-api-access-2k55n\") pod \"cloudkitty-proc-0\" (UID: \"8f4cfc8a-e4d7-4579-b2cd-303abce60b03\") " pod="openstack/cloudkitty-proc-0" Jan 28 18:56:56 crc kubenswrapper[4721]: I0128 18:56:56.078272 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f4cfc8a-e4d7-4579-b2cd-303abce60b03-combined-ca-bundle\") pod \"cloudkitty-proc-0\" (UID: \"8f4cfc8a-e4d7-4579-b2cd-303abce60b03\") " pod="openstack/cloudkitty-proc-0" Jan 28 
18:56:56 crc kubenswrapper[4721]: I0128 18:56:56.078360 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8f4cfc8a-e4d7-4579-b2cd-303abce60b03-scripts\") pod \"cloudkitty-proc-0\" (UID: \"8f4cfc8a-e4d7-4579-b2cd-303abce60b03\") " pod="openstack/cloudkitty-proc-0" Jan 28 18:56:56 crc kubenswrapper[4721]: I0128 18:56:56.078404 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2k55n\" (UniqueName: \"kubernetes.io/projected/8f4cfc8a-e4d7-4579-b2cd-303abce60b03-kube-api-access-2k55n\") pod \"cloudkitty-proc-0\" (UID: \"8f4cfc8a-e4d7-4579-b2cd-303abce60b03\") " pod="openstack/cloudkitty-proc-0" Jan 28 18:56:56 crc kubenswrapper[4721]: I0128 18:56:56.078505 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8f4cfc8a-e4d7-4579-b2cd-303abce60b03-config-data-custom\") pod \"cloudkitty-proc-0\" (UID: \"8f4cfc8a-e4d7-4579-b2cd-303abce60b03\") " pod="openstack/cloudkitty-proc-0" Jan 28 18:56:56 crc kubenswrapper[4721]: I0128 18:56:56.078538 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/projected/8f4cfc8a-e4d7-4579-b2cd-303abce60b03-certs\") pod \"cloudkitty-proc-0\" (UID: \"8f4cfc8a-e4d7-4579-b2cd-303abce60b03\") " pod="openstack/cloudkitty-proc-0" Jan 28 18:56:56 crc kubenswrapper[4721]: I0128 18:56:56.078591 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8f4cfc8a-e4d7-4579-b2cd-303abce60b03-config-data\") pod \"cloudkitty-proc-0\" (UID: \"8f4cfc8a-e4d7-4579-b2cd-303abce60b03\") " pod="openstack/cloudkitty-proc-0" Jan 28 18:56:56 crc kubenswrapper[4721]: I0128 18:56:56.085440 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/projected/8f4cfc8a-e4d7-4579-b2cd-303abce60b03-certs\") pod \"cloudkitty-proc-0\" (UID: \"8f4cfc8a-e4d7-4579-b2cd-303abce60b03\") " pod="openstack/cloudkitty-proc-0" Jan 28 18:56:56 crc kubenswrapper[4721]: I0128 18:56:56.087470 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8f4cfc8a-e4d7-4579-b2cd-303abce60b03-config-data\") pod \"cloudkitty-proc-0\" (UID: \"8f4cfc8a-e4d7-4579-b2cd-303abce60b03\") " pod="openstack/cloudkitty-proc-0" Jan 28 18:56:56 crc kubenswrapper[4721]: I0128 18:56:56.087647 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8f4cfc8a-e4d7-4579-b2cd-303abce60b03-config-data-custom\") pod \"cloudkitty-proc-0\" (UID: \"8f4cfc8a-e4d7-4579-b2cd-303abce60b03\") " pod="openstack/cloudkitty-proc-0" Jan 28 18:56:56 crc kubenswrapper[4721]: I0128 18:56:56.098846 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f4cfc8a-e4d7-4579-b2cd-303abce60b03-combined-ca-bundle\") pod \"cloudkitty-proc-0\" (UID: \"8f4cfc8a-e4d7-4579-b2cd-303abce60b03\") " pod="openstack/cloudkitty-proc-0" Jan 28 18:56:56 crc kubenswrapper[4721]: I0128 18:56:56.099847 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8f4cfc8a-e4d7-4579-b2cd-303abce60b03-scripts\") pod \"cloudkitty-proc-0\" (UID: \"8f4cfc8a-e4d7-4579-b2cd-303abce60b03\") " 
pod="openstack/cloudkitty-proc-0" Jan 28 18:56:56 crc kubenswrapper[4721]: I0128 18:56:56.115529 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2k55n\" (UniqueName: \"kubernetes.io/projected/8f4cfc8a-e4d7-4579-b2cd-303abce60b03-kube-api-access-2k55n\") pod \"cloudkitty-proc-0\" (UID: \"8f4cfc8a-e4d7-4579-b2cd-303abce60b03\") " pod="openstack/cloudkitty-proc-0" Jan 28 18:56:56 crc kubenswrapper[4721]: I0128 18:56:56.268116 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-proc-0" Jan 28 18:56:56 crc kubenswrapper[4721]: I0128 18:56:56.810953 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"a3d49781-0039-466d-b00e-1d7f28598b88","Type":"ContainerStarted","Data":"349bcb4a6e3fa7aa9d23b294b80d5b03f6dabf4216dd89dbe72d147a90c05c22"} Jan 28 18:56:56 crc kubenswrapper[4721]: I0128 18:56:56.818420 4721 generic.go:334] "Generic (PLEG): container finished" podID="598a7e6f-da5f-4dc3-be56-0dc9b6b13ad5" containerID="97de22f49da0c15672d79bca2d1dc8c0c67082833d1f3351a7216fbf4b417f7a" exitCode=0 Jan 28 18:56:56 crc kubenswrapper[4721]: I0128 18:56:56.818491 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-b6d5f477b-md9n5" event={"ID":"598a7e6f-da5f-4dc3-be56-0dc9b6b13ad5","Type":"ContainerDied","Data":"97de22f49da0c15672d79bca2d1dc8c0c67082833d1f3351a7216fbf4b417f7a"} Jan 28 18:56:56 crc kubenswrapper[4721]: I0128 18:56:56.943765 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-proc-0"] Jan 28 18:56:56 crc kubenswrapper[4721]: I0128 18:56:56.961845 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Jan 28 18:56:57 crc kubenswrapper[4721]: I0128 18:56:57.546546 4721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cb2f69be-cd3d-44ef-80af-f0d4ac766305" path="/var/lib/kubelet/pods/cb2f69be-cd3d-44ef-80af-f0d4ac766305/volumes" Jan 28 18:56:57 crc kubenswrapper[4721]: I0128 18:56:57.866875 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-proc-0" event={"ID":"8f4cfc8a-e4d7-4579-b2cd-303abce60b03","Type":"ContainerStarted","Data":"1f663c893d71ca662dd01efe362379a0bc78bb39e83a2b263110dc5c2ac0cbc9"} Jan 28 18:56:57 crc kubenswrapper[4721]: I0128 18:56:57.867230 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-proc-0" event={"ID":"8f4cfc8a-e4d7-4579-b2cd-303abce60b03","Type":"ContainerStarted","Data":"c36d8c2f060c52f18bdcda029d1cd79660047a05f936b784d45a9b942be53c1d"} Jan 28 18:56:57 crc kubenswrapper[4721]: I0128 18:56:57.886741 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"a3d49781-0039-466d-b00e-1d7f28598b88","Type":"ContainerStarted","Data":"5c0b1a0bedd97cfef097e480177faca273fce711f9c0cb7b8713240d97595160"} Jan 28 18:56:57 crc kubenswrapper[4721]: I0128 18:56:57.898717 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cloudkitty-proc-0" podStartSLOduration=2.8986917290000003 podStartE2EDuration="2.898691729s" podCreationTimestamp="2026-01-28 18:56:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:56:57.888253606 +0000 UTC m=+1383.613559176" watchObservedRunningTime="2026-01-28 18:56:57.898691729 +0000 UTC m=+1383.623997289" Jan 28 18:56:57 crc kubenswrapper[4721]: I0128 
Jan 28 18:56:57 crc kubenswrapper[4721]: I0128 18:56:57.902389 4721 generic.go:334] "Generic (PLEG): container finished" podID="598a7e6f-da5f-4dc3-be56-0dc9b6b13ad5" containerID="85136f351f7316d90a281940791aedf2fcda0c293454509ecff435d6368579b7" exitCode=0
Jan 28 18:56:57 crc kubenswrapper[4721]: I0128 18:56:57.902434 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-b6d5f477b-md9n5" event={"ID":"598a7e6f-da5f-4dc3-be56-0dc9b6b13ad5","Type":"ContainerDied","Data":"85136f351f7316d90a281940791aedf2fcda0c293454509ecff435d6368579b7"}
Jan 28 18:56:57 crc kubenswrapper[4721]: I0128 18:56:57.930433 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=4.930408199 podStartE2EDuration="4.930408199s" podCreationTimestamp="2026-01-28 18:56:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:56:57.915103322 +0000 UTC m=+1383.640408882" watchObservedRunningTime="2026-01-28 18:56:57.930408199 +0000 UTC m=+1383.655713759"
Jan 28 18:56:57 crc kubenswrapper[4721]: I0128 18:56:57.991941 4721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-b6d5f477b-md9n5"
Jan 28 18:56:58 crc kubenswrapper[4721]: I0128 18:56:58.072571 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/598a7e6f-da5f-4dc3-be56-0dc9b6b13ad5-httpd-config\") pod \"598a7e6f-da5f-4dc3-be56-0dc9b6b13ad5\" (UID: \"598a7e6f-da5f-4dc3-be56-0dc9b6b13ad5\") "
Jan 28 18:56:58 crc kubenswrapper[4721]: I0128 18:56:58.072800 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/598a7e6f-da5f-4dc3-be56-0dc9b6b13ad5-config\") pod \"598a7e6f-da5f-4dc3-be56-0dc9b6b13ad5\" (UID: \"598a7e6f-da5f-4dc3-be56-0dc9b6b13ad5\") "
Jan 28 18:56:58 crc kubenswrapper[4721]: I0128 18:56:58.072850 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/598a7e6f-da5f-4dc3-be56-0dc9b6b13ad5-ovndb-tls-certs\") pod \"598a7e6f-da5f-4dc3-be56-0dc9b6b13ad5\" (UID: \"598a7e6f-da5f-4dc3-be56-0dc9b6b13ad5\") "
Jan 28 18:56:58 crc kubenswrapper[4721]: I0128 18:56:58.072974 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/598a7e6f-da5f-4dc3-be56-0dc9b6b13ad5-combined-ca-bundle\") pod \"598a7e6f-da5f-4dc3-be56-0dc9b6b13ad5\" (UID: \"598a7e6f-da5f-4dc3-be56-0dc9b6b13ad5\") "
Jan 28 18:56:58 crc kubenswrapper[4721]: I0128 18:56:58.073059 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qmq8q\" (UniqueName: \"kubernetes.io/projected/598a7e6f-da5f-4dc3-be56-0dc9b6b13ad5-kube-api-access-qmq8q\") pod \"598a7e6f-da5f-4dc3-be56-0dc9b6b13ad5\" (UID: \"598a7e6f-da5f-4dc3-be56-0dc9b6b13ad5\") "
Jan 28 18:56:58 crc kubenswrapper[4721]: I0128 18:56:58.115425 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/598a7e6f-da5f-4dc3-be56-0dc9b6b13ad5-kube-api-access-qmq8q" (OuterVolumeSpecName: "kube-api-access-qmq8q") pod "598a7e6f-da5f-4dc3-be56-0dc9b6b13ad5" (UID: "598a7e6f-da5f-4dc3-be56-0dc9b6b13ad5"). InnerVolumeSpecName "kube-api-access-qmq8q". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 28 18:56:58 crc kubenswrapper[4721]: I0128 18:56:58.115542 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/598a7e6f-da5f-4dc3-be56-0dc9b6b13ad5-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "598a7e6f-da5f-4dc3-be56-0dc9b6b13ad5" (UID: "598a7e6f-da5f-4dc3-be56-0dc9b6b13ad5"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 28 18:56:58 crc kubenswrapper[4721]: I0128 18:56:58.176418 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qmq8q\" (UniqueName: \"kubernetes.io/projected/598a7e6f-da5f-4dc3-be56-0dc9b6b13ad5-kube-api-access-qmq8q\") on node \"crc\" DevicePath \"\""
Jan 28 18:56:58 crc kubenswrapper[4721]: I0128 18:56:58.176455 4721 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/598a7e6f-da5f-4dc3-be56-0dc9b6b13ad5-httpd-config\") on node \"crc\" DevicePath \"\""
Jan 28 18:56:58 crc kubenswrapper[4721]: I0128 18:56:58.315654 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/598a7e6f-da5f-4dc3-be56-0dc9b6b13ad5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "598a7e6f-da5f-4dc3-be56-0dc9b6b13ad5" (UID: "598a7e6f-da5f-4dc3-be56-0dc9b6b13ad5"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 28 18:56:58 crc kubenswrapper[4721]: I0128 18:56:58.343705 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/598a7e6f-da5f-4dc3-be56-0dc9b6b13ad5-config" (OuterVolumeSpecName: "config") pod "598a7e6f-da5f-4dc3-be56-0dc9b6b13ad5" (UID: "598a7e6f-da5f-4dc3-be56-0dc9b6b13ad5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 28 18:56:58 crc kubenswrapper[4721]: I0128 18:56:58.360557 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/598a7e6f-da5f-4dc3-be56-0dc9b6b13ad5-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "598a7e6f-da5f-4dc3-be56-0dc9b6b13ad5" (UID: "598a7e6f-da5f-4dc3-be56-0dc9b6b13ad5"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 28 18:56:58 crc kubenswrapper[4721]: I0128 18:56:58.392200 4721 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/598a7e6f-da5f-4dc3-be56-0dc9b6b13ad5-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 28 18:56:58 crc kubenswrapper[4721]: I0128 18:56:58.392266 4721 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/598a7e6f-da5f-4dc3-be56-0dc9b6b13ad5-config\") on node \"crc\" DevicePath \"\""
Jan 28 18:56:58 crc kubenswrapper[4721]: I0128 18:56:58.392309 4721 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/598a7e6f-da5f-4dc3-be56-0dc9b6b13ad5-ovndb-tls-certs\") on node \"crc\" DevicePath \"\""
Jan 28 18:56:58 crc kubenswrapper[4721]: E0128 18:56:58.535972 4721 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcb2f69be_cd3d_44ef_80af_f0d4ac766305.slice\": RecentStats: unable to find data in memory cache]"
Jan 28 18:56:58 crc kubenswrapper[4721]: I0128 18:56:58.750284 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0"
Jan 28 18:56:58 crc kubenswrapper[4721]: I0128 18:56:58.925341 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-b6d5f477b-md9n5" event={"ID":"598a7e6f-da5f-4dc3-be56-0dc9b6b13ad5","Type":"ContainerDied","Data":"95110c4401de6c9acf96c83fd6166f89fdb43b1149a732f22b633265c72881b8"}
Jan 28 18:56:58 crc kubenswrapper[4721]: I0128 18:56:58.925411 4721 scope.go:117] "RemoveContainer" containerID="97de22f49da0c15672d79bca2d1dc8c0c67082833d1f3351a7216fbf4b417f7a"
Jan 28 18:56:58 crc kubenswrapper[4721]: I0128 18:56:58.925657 4721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-b6d5f477b-md9n5"
Jan 28 18:56:59 crc kubenswrapper[4721]: I0128 18:56:59.000392 4721 scope.go:117] "RemoveContainer" containerID="85136f351f7316d90a281940791aedf2fcda0c293454509ecff435d6368579b7"
Jan 28 18:56:59 crc kubenswrapper[4721]: I0128 18:56:59.013944 4721 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0"
Jan 28 18:56:59 crc kubenswrapper[4721]: I0128 18:56:59.018321 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-b6d5f477b-md9n5"]
Jan 28 18:56:59 crc kubenswrapper[4721]: I0128 18:56:59.028788 4721 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-b6d5f477b-md9n5"]
Jan 28 18:56:59 crc kubenswrapper[4721]: I0128 18:56:59.366270 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-proxy-6895f7fb8c-vmmw7"]
Jan 28 18:56:59 crc kubenswrapper[4721]: E0128 18:56:59.367197 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="598a7e6f-da5f-4dc3-be56-0dc9b6b13ad5" containerName="neutron-httpd"
Jan 28 18:56:59 crc kubenswrapper[4721]: I0128 18:56:59.367223 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="598a7e6f-da5f-4dc3-be56-0dc9b6b13ad5" containerName="neutron-httpd"
Jan 28 18:56:59 crc kubenswrapper[4721]: E0128 18:56:59.367268 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="598a7e6f-da5f-4dc3-be56-0dc9b6b13ad5" containerName="neutron-api"
Jan 28 18:56:59 crc kubenswrapper[4721]: I0128 18:56:59.367278 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="598a7e6f-da5f-4dc3-be56-0dc9b6b13ad5" containerName="neutron-api"
Jan 28 18:56:59 crc kubenswrapper[4721]: I0128 18:56:59.367530 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="598a7e6f-da5f-4dc3-be56-0dc9b6b13ad5" containerName="neutron-httpd"
Jan 28 18:56:59 crc kubenswrapper[4721]: I0128 18:56:59.367561 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="598a7e6f-da5f-4dc3-be56-0dc9b6b13ad5" containerName="neutron-api"
Jan 28 18:56:59 crc kubenswrapper[4721]: I0128 18:56:59.370055 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-6895f7fb8c-vmmw7"
Jan 28 18:56:59 crc kubenswrapper[4721]: I0128 18:56:59.373326 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-internal-svc"
Jan 28 18:56:59 crc kubenswrapper[4721]: I0128 18:56:59.373631 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data"
Jan 28 18:56:59 crc kubenswrapper[4721]: I0128 18:56:59.375715 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-public-svc"
Jan 28 18:56:59 crc kubenswrapper[4721]: I0128 18:56:59.388004 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-6895f7fb8c-vmmw7"]
Jan 28 18:56:59 crc kubenswrapper[4721]: I0128 18:56:59.420423 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/078d9149-2986-4e6e-a8f4-c7535613a91d-public-tls-certs\") pod \"swift-proxy-6895f7fb8c-vmmw7\" (UID: \"078d9149-2986-4e6e-a8f4-c7535613a91d\") " pod="openstack/swift-proxy-6895f7fb8c-vmmw7"
Jan 28 18:56:59 crc kubenswrapper[4721]: I0128 18:56:59.420474 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/078d9149-2986-4e6e-a8f4-c7535613a91d-run-httpd\") pod \"swift-proxy-6895f7fb8c-vmmw7\" (UID: \"078d9149-2986-4e6e-a8f4-c7535613a91d\") " pod="openstack/swift-proxy-6895f7fb8c-vmmw7"
Jan 28 18:56:59 crc kubenswrapper[4721]: I0128 18:56:59.420523 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wvsxc\" (UniqueName: \"kubernetes.io/projected/078d9149-2986-4e6e-a8f4-c7535613a91d-kube-api-access-wvsxc\") pod \"swift-proxy-6895f7fb8c-vmmw7\" (UID: \"078d9149-2986-4e6e-a8f4-c7535613a91d\") " pod="openstack/swift-proxy-6895f7fb8c-vmmw7"
Jan 28 18:56:59 crc kubenswrapper[4721]: I0128 18:56:59.420551 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/078d9149-2986-4e6e-a8f4-c7535613a91d-combined-ca-bundle\") pod \"swift-proxy-6895f7fb8c-vmmw7\" (UID: \"078d9149-2986-4e6e-a8f4-c7535613a91d\") " pod="openstack/swift-proxy-6895f7fb8c-vmmw7"
Jan 28 18:56:59 crc kubenswrapper[4721]: I0128 18:56:59.420583 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/078d9149-2986-4e6e-a8f4-c7535613a91d-config-data\") pod \"swift-proxy-6895f7fb8c-vmmw7\" (UID: \"078d9149-2986-4e6e-a8f4-c7535613a91d\") " pod="openstack/swift-proxy-6895f7fb8c-vmmw7"
Jan 28 18:56:59 crc kubenswrapper[4721]: I0128 18:56:59.420620 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/078d9149-2986-4e6e-a8f4-c7535613a91d-internal-tls-certs\") pod \"swift-proxy-6895f7fb8c-vmmw7\" (UID: \"078d9149-2986-4e6e-a8f4-c7535613a91d\") " pod="openstack/swift-proxy-6895f7fb8c-vmmw7"
pod="openstack/swift-proxy-6895f7fb8c-vmmw7" Jan 28 18:56:59 crc kubenswrapper[4721]: I0128 18:56:59.420748 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/078d9149-2986-4e6e-a8f4-c7535613a91d-log-httpd\") pod \"swift-proxy-6895f7fb8c-vmmw7\" (UID: \"078d9149-2986-4e6e-a8f4-c7535613a91d\") " pod="openstack/swift-proxy-6895f7fb8c-vmmw7" Jan 28 18:56:59 crc kubenswrapper[4721]: I0128 18:56:59.522755 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/078d9149-2986-4e6e-a8f4-c7535613a91d-public-tls-certs\") pod \"swift-proxy-6895f7fb8c-vmmw7\" (UID: \"078d9149-2986-4e6e-a8f4-c7535613a91d\") " pod="openstack/swift-proxy-6895f7fb8c-vmmw7" Jan 28 18:56:59 crc kubenswrapper[4721]: I0128 18:56:59.522813 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/078d9149-2986-4e6e-a8f4-c7535613a91d-run-httpd\") pod \"swift-proxy-6895f7fb8c-vmmw7\" (UID: \"078d9149-2986-4e6e-a8f4-c7535613a91d\") " pod="openstack/swift-proxy-6895f7fb8c-vmmw7" Jan 28 18:56:59 crc kubenswrapper[4721]: I0128 18:56:59.522854 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wvsxc\" (UniqueName: \"kubernetes.io/projected/078d9149-2986-4e6e-a8f4-c7535613a91d-kube-api-access-wvsxc\") pod \"swift-proxy-6895f7fb8c-vmmw7\" (UID: \"078d9149-2986-4e6e-a8f4-c7535613a91d\") " pod="openstack/swift-proxy-6895f7fb8c-vmmw7" Jan 28 18:56:59 crc kubenswrapper[4721]: I0128 18:56:59.522886 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/078d9149-2986-4e6e-a8f4-c7535613a91d-combined-ca-bundle\") pod \"swift-proxy-6895f7fb8c-vmmw7\" (UID: \"078d9149-2986-4e6e-a8f4-c7535613a91d\") " pod="openstack/swift-proxy-6895f7fb8c-vmmw7" Jan 28 18:56:59 crc kubenswrapper[4721]: I0128 18:56:59.522918 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/078d9149-2986-4e6e-a8f4-c7535613a91d-config-data\") pod \"swift-proxy-6895f7fb8c-vmmw7\" (UID: \"078d9149-2986-4e6e-a8f4-c7535613a91d\") " pod="openstack/swift-proxy-6895f7fb8c-vmmw7" Jan 28 18:56:59 crc kubenswrapper[4721]: I0128 18:56:59.522955 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/078d9149-2986-4e6e-a8f4-c7535613a91d-internal-tls-certs\") pod \"swift-proxy-6895f7fb8c-vmmw7\" (UID: \"078d9149-2986-4e6e-a8f4-c7535613a91d\") " pod="openstack/swift-proxy-6895f7fb8c-vmmw7" Jan 28 18:56:59 crc kubenswrapper[4721]: I0128 18:56:59.523014 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/078d9149-2986-4e6e-a8f4-c7535613a91d-etc-swift\") pod \"swift-proxy-6895f7fb8c-vmmw7\" (UID: \"078d9149-2986-4e6e-a8f4-c7535613a91d\") " pod="openstack/swift-proxy-6895f7fb8c-vmmw7" Jan 28 18:56:59 crc kubenswrapper[4721]: I0128 18:56:59.523098 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/078d9149-2986-4e6e-a8f4-c7535613a91d-log-httpd\") pod \"swift-proxy-6895f7fb8c-vmmw7\" (UID: \"078d9149-2986-4e6e-a8f4-c7535613a91d\") " pod="openstack/swift-proxy-6895f7fb8c-vmmw7" Jan 28 
18:56:59 crc kubenswrapper[4721]: I0128 18:56:59.523878 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/078d9149-2986-4e6e-a8f4-c7535613a91d-run-httpd\") pod \"swift-proxy-6895f7fb8c-vmmw7\" (UID: \"078d9149-2986-4e6e-a8f4-c7535613a91d\") " pod="openstack/swift-proxy-6895f7fb8c-vmmw7" Jan 28 18:56:59 crc kubenswrapper[4721]: I0128 18:56:59.528623 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/078d9149-2986-4e6e-a8f4-c7535613a91d-log-httpd\") pod \"swift-proxy-6895f7fb8c-vmmw7\" (UID: \"078d9149-2986-4e6e-a8f4-c7535613a91d\") " pod="openstack/swift-proxy-6895f7fb8c-vmmw7" Jan 28 18:56:59 crc kubenswrapper[4721]: I0128 18:56:59.541325 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/078d9149-2986-4e6e-a8f4-c7535613a91d-config-data\") pod \"swift-proxy-6895f7fb8c-vmmw7\" (UID: \"078d9149-2986-4e6e-a8f4-c7535613a91d\") " pod="openstack/swift-proxy-6895f7fb8c-vmmw7" Jan 28 18:56:59 crc kubenswrapper[4721]: I0128 18:56:59.545112 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/078d9149-2986-4e6e-a8f4-c7535613a91d-public-tls-certs\") pod \"swift-proxy-6895f7fb8c-vmmw7\" (UID: \"078d9149-2986-4e6e-a8f4-c7535613a91d\") " pod="openstack/swift-proxy-6895f7fb8c-vmmw7" Jan 28 18:56:59 crc kubenswrapper[4721]: I0128 18:56:59.548122 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/078d9149-2986-4e6e-a8f4-c7535613a91d-internal-tls-certs\") pod \"swift-proxy-6895f7fb8c-vmmw7\" (UID: \"078d9149-2986-4e6e-a8f4-c7535613a91d\") " pod="openstack/swift-proxy-6895f7fb8c-vmmw7" Jan 28 18:56:59 crc kubenswrapper[4721]: I0128 18:56:59.549361 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/078d9149-2986-4e6e-a8f4-c7535613a91d-etc-swift\") pod \"swift-proxy-6895f7fb8c-vmmw7\" (UID: \"078d9149-2986-4e6e-a8f4-c7535613a91d\") " pod="openstack/swift-proxy-6895f7fb8c-vmmw7" Jan 28 18:56:59 crc kubenswrapper[4721]: I0128 18:56:59.558074 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/078d9149-2986-4e6e-a8f4-c7535613a91d-combined-ca-bundle\") pod \"swift-proxy-6895f7fb8c-vmmw7\" (UID: \"078d9149-2986-4e6e-a8f4-c7535613a91d\") " pod="openstack/swift-proxy-6895f7fb8c-vmmw7" Jan 28 18:56:59 crc kubenswrapper[4721]: I0128 18:56:59.578127 4721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="598a7e6f-da5f-4dc3-be56-0dc9b6b13ad5" path="/var/lib/kubelet/pods/598a7e6f-da5f-4dc3-be56-0dc9b6b13ad5/volumes" Jan 28 18:56:59 crc kubenswrapper[4721]: I0128 18:56:59.590072 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wvsxc\" (UniqueName: \"kubernetes.io/projected/078d9149-2986-4e6e-a8f4-c7535613a91d-kube-api-access-wvsxc\") pod \"swift-proxy-6895f7fb8c-vmmw7\" (UID: \"078d9149-2986-4e6e-a8f4-c7535613a91d\") " pod="openstack/swift-proxy-6895f7fb8c-vmmw7" Jan 28 18:56:59 crc kubenswrapper[4721]: I0128 18:56:59.691655 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-proxy-6895f7fb8c-vmmw7" Jan 28 18:57:00 crc kubenswrapper[4721]: I0128 18:57:00.762397 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-6895f7fb8c-vmmw7"] Jan 28 18:57:00 crc kubenswrapper[4721]: W0128 18:57:00.793465 4721 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod078d9149_2986_4e6e_a8f4_c7535613a91d.slice/crio-ee4ffd3cb8cadd6f48c0cbf233a4bf65d48c11b681609fc915dc625a389f2042 WatchSource:0}: Error finding container ee4ffd3cb8cadd6f48c0cbf233a4bf65d48c11b681609fc915dc625a389f2042: Status 404 returned error can't find the container with id ee4ffd3cb8cadd6f48c0cbf233a4bf65d48c11b681609fc915dc625a389f2042 Jan 28 18:57:00 crc kubenswrapper[4721]: I0128 18:57:00.983804 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-6895f7fb8c-vmmw7" event={"ID":"078d9149-2986-4e6e-a8f4-c7535613a91d","Type":"ContainerStarted","Data":"ee4ffd3cb8cadd6f48c0cbf233a4bf65d48c11b681609fc915dc625a389f2042"} Jan 28 18:57:01 crc kubenswrapper[4721]: I0128 18:57:01.711383 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 28 18:57:01 crc kubenswrapper[4721]: I0128 18:57:01.718836 4721 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="506fcc96-87e5-4718-82bd-7ae3c4919ff5" containerName="ceilometer-central-agent" containerID="cri-o://ec7392092be0b4d7058afaaf2ccf94295382c3f46b771108433f33cec8eb6808" gracePeriod=30 Jan 28 18:57:01 crc kubenswrapper[4721]: I0128 18:57:01.721915 4721 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="506fcc96-87e5-4718-82bd-7ae3c4919ff5" containerName="proxy-httpd" containerID="cri-o://f79cfc617ae7daaa34901eda520de9fb2b97517065120855411013e0bd9d6d63" gracePeriod=30 Jan 28 18:57:01 crc kubenswrapper[4721]: I0128 18:57:01.722127 4721 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="506fcc96-87e5-4718-82bd-7ae3c4919ff5" containerName="ceilometer-notification-agent" containerID="cri-o://be9e1314767a57461e84cd26110b3ec5b09af4d8980f592fe0bc9973cf149856" gracePeriod=30 Jan 28 18:57:01 crc kubenswrapper[4721]: I0128 18:57:01.722213 4721 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="506fcc96-87e5-4718-82bd-7ae3c4919ff5" containerName="sg-core" containerID="cri-o://abf850f4ae22c9fec3f4c282ab4bd56934eebcea0df9178d58052ccafdc38801" gracePeriod=30 Jan 28 18:57:02 crc kubenswrapper[4721]: I0128 18:57:02.006542 4721 generic.go:334] "Generic (PLEG): container finished" podID="506fcc96-87e5-4718-82bd-7ae3c4919ff5" containerID="f79cfc617ae7daaa34901eda520de9fb2b97517065120855411013e0bd9d6d63" exitCode=0 Jan 28 18:57:02 crc kubenswrapper[4721]: I0128 18:57:02.006874 4721 generic.go:334] "Generic (PLEG): container finished" podID="506fcc96-87e5-4718-82bd-7ae3c4919ff5" containerID="abf850f4ae22c9fec3f4c282ab4bd56934eebcea0df9178d58052ccafdc38801" exitCode=2 Jan 28 18:57:02 crc kubenswrapper[4721]: I0128 18:57:02.006608 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"506fcc96-87e5-4718-82bd-7ae3c4919ff5","Type":"ContainerDied","Data":"f79cfc617ae7daaa34901eda520de9fb2b97517065120855411013e0bd9d6d63"} Jan 28 18:57:02 crc kubenswrapper[4721]: I0128 18:57:02.006973 4721 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openstack/ceilometer-0" event={"ID":"506fcc96-87e5-4718-82bd-7ae3c4919ff5","Type":"ContainerDied","Data":"abf850f4ae22c9fec3f4c282ab4bd56934eebcea0df9178d58052ccafdc38801"} Jan 28 18:57:02 crc kubenswrapper[4721]: I0128 18:57:02.017365 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-6895f7fb8c-vmmw7" event={"ID":"078d9149-2986-4e6e-a8f4-c7535613a91d","Type":"ContainerStarted","Data":"84e73e783154135df813dd3ae831244cd444e6aadaf27bf9c4863c5a910afe50"} Jan 28 18:57:02 crc kubenswrapper[4721]: I0128 18:57:02.017440 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-6895f7fb8c-vmmw7" event={"ID":"078d9149-2986-4e6e-a8f4-c7535613a91d","Type":"ContainerStarted","Data":"d90fb8d55b2de755f83f5ac3416e4d75af84ff8ea166c7a39284b243470105bc"} Jan 28 18:57:02 crc kubenswrapper[4721]: I0128 18:57:02.017757 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-6895f7fb8c-vmmw7" Jan 28 18:57:02 crc kubenswrapper[4721]: I0128 18:57:02.017827 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-6895f7fb8c-vmmw7" Jan 28 18:57:02 crc kubenswrapper[4721]: I0128 18:57:02.041901 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-proxy-6895f7fb8c-vmmw7" podStartSLOduration=3.04187929 podStartE2EDuration="3.04187929s" podCreationTimestamp="2026-01-28 18:56:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:57:02.0399757 +0000 UTC m=+1387.765281260" watchObservedRunningTime="2026-01-28 18:57:02.04187929 +0000 UTC m=+1387.767184850" Jan 28 18:57:03 crc kubenswrapper[4721]: I0128 18:57:03.032781 4721 generic.go:334] "Generic (PLEG): container finished" podID="506fcc96-87e5-4718-82bd-7ae3c4919ff5" containerID="ec7392092be0b4d7058afaaf2ccf94295382c3f46b771108433f33cec8eb6808" exitCode=0 Jan 28 18:57:03 crc kubenswrapper[4721]: I0128 18:57:03.032928 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"506fcc96-87e5-4718-82bd-7ae3c4919ff5","Type":"ContainerDied","Data":"ec7392092be0b4d7058afaaf2ccf94295382c3f46b771108433f33cec8eb6808"} Jan 28 18:57:03 crc kubenswrapper[4721]: I0128 18:57:03.657185 4721 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 28 18:57:03 crc kubenswrapper[4721]: I0128 18:57:03.782734 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/506fcc96-87e5-4718-82bd-7ae3c4919ff5-sg-core-conf-yaml\") pod \"506fcc96-87e5-4718-82bd-7ae3c4919ff5\" (UID: \"506fcc96-87e5-4718-82bd-7ae3c4919ff5\") " Jan 28 18:57:03 crc kubenswrapper[4721]: I0128 18:57:03.783054 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/506fcc96-87e5-4718-82bd-7ae3c4919ff5-config-data\") pod \"506fcc96-87e5-4718-82bd-7ae3c4919ff5\" (UID: \"506fcc96-87e5-4718-82bd-7ae3c4919ff5\") " Jan 28 18:57:03 crc kubenswrapper[4721]: I0128 18:57:03.783193 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/506fcc96-87e5-4718-82bd-7ae3c4919ff5-combined-ca-bundle\") pod \"506fcc96-87e5-4718-82bd-7ae3c4919ff5\" (UID: \"506fcc96-87e5-4718-82bd-7ae3c4919ff5\") " Jan 28 18:57:03 crc kubenswrapper[4721]: I0128 18:57:03.783256 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/506fcc96-87e5-4718-82bd-7ae3c4919ff5-run-httpd\") pod \"506fcc96-87e5-4718-82bd-7ae3c4919ff5\" (UID: \"506fcc96-87e5-4718-82bd-7ae3c4919ff5\") " Jan 28 18:57:03 crc kubenswrapper[4721]: I0128 18:57:03.783277 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/506fcc96-87e5-4718-82bd-7ae3c4919ff5-scripts\") pod \"506fcc96-87e5-4718-82bd-7ae3c4919ff5\" (UID: \"506fcc96-87e5-4718-82bd-7ae3c4919ff5\") " Jan 28 18:57:03 crc kubenswrapper[4721]: I0128 18:57:03.783309 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/506fcc96-87e5-4718-82bd-7ae3c4919ff5-log-httpd\") pod \"506fcc96-87e5-4718-82bd-7ae3c4919ff5\" (UID: \"506fcc96-87e5-4718-82bd-7ae3c4919ff5\") " Jan 28 18:57:03 crc kubenswrapper[4721]: I0128 18:57:03.783529 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7kv4v\" (UniqueName: \"kubernetes.io/projected/506fcc96-87e5-4718-82bd-7ae3c4919ff5-kube-api-access-7kv4v\") pod \"506fcc96-87e5-4718-82bd-7ae3c4919ff5\" (UID: \"506fcc96-87e5-4718-82bd-7ae3c4919ff5\") " Jan 28 18:57:03 crc kubenswrapper[4721]: I0128 18:57:03.793297 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/506fcc96-87e5-4718-82bd-7ae3c4919ff5-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "506fcc96-87e5-4718-82bd-7ae3c4919ff5" (UID: "506fcc96-87e5-4718-82bd-7ae3c4919ff5"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 18:57:03 crc kubenswrapper[4721]: I0128 18:57:03.793450 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/506fcc96-87e5-4718-82bd-7ae3c4919ff5-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "506fcc96-87e5-4718-82bd-7ae3c4919ff5" (UID: "506fcc96-87e5-4718-82bd-7ae3c4919ff5"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 18:57:03 crc kubenswrapper[4721]: I0128 18:57:03.803401 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/506fcc96-87e5-4718-82bd-7ae3c4919ff5-kube-api-access-7kv4v" (OuterVolumeSpecName: "kube-api-access-7kv4v") pod "506fcc96-87e5-4718-82bd-7ae3c4919ff5" (UID: "506fcc96-87e5-4718-82bd-7ae3c4919ff5"). InnerVolumeSpecName "kube-api-access-7kv4v". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:57:03 crc kubenswrapper[4721]: I0128 18:57:03.820846 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/506fcc96-87e5-4718-82bd-7ae3c4919ff5-scripts" (OuterVolumeSpecName: "scripts") pod "506fcc96-87e5-4718-82bd-7ae3c4919ff5" (UID: "506fcc96-87e5-4718-82bd-7ae3c4919ff5"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:57:03 crc kubenswrapper[4721]: I0128 18:57:03.865486 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/506fcc96-87e5-4718-82bd-7ae3c4919ff5-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "506fcc96-87e5-4718-82bd-7ae3c4919ff5" (UID: "506fcc96-87e5-4718-82bd-7ae3c4919ff5"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:57:03 crc kubenswrapper[4721]: I0128 18:57:03.891036 4721 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/506fcc96-87e5-4718-82bd-7ae3c4919ff5-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 28 18:57:03 crc kubenswrapper[4721]: I0128 18:57:03.891088 4721 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/506fcc96-87e5-4718-82bd-7ae3c4919ff5-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 18:57:03 crc kubenswrapper[4721]: I0128 18:57:03.891101 4721 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/506fcc96-87e5-4718-82bd-7ae3c4919ff5-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 28 18:57:03 crc kubenswrapper[4721]: I0128 18:57:03.891113 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7kv4v\" (UniqueName: \"kubernetes.io/projected/506fcc96-87e5-4718-82bd-7ae3c4919ff5-kube-api-access-7kv4v\") on node \"crc\" DevicePath \"\"" Jan 28 18:57:03 crc kubenswrapper[4721]: I0128 18:57:03.891125 4721 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/506fcc96-87e5-4718-82bd-7ae3c4919ff5-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 28 18:57:03 crc kubenswrapper[4721]: I0128 18:57:03.929138 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/506fcc96-87e5-4718-82bd-7ae3c4919ff5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "506fcc96-87e5-4718-82bd-7ae3c4919ff5" (UID: "506fcc96-87e5-4718-82bd-7ae3c4919ff5"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:57:03 crc kubenswrapper[4721]: I0128 18:57:03.981996 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/506fcc96-87e5-4718-82bd-7ae3c4919ff5-config-data" (OuterVolumeSpecName: "config-data") pod "506fcc96-87e5-4718-82bd-7ae3c4919ff5" (UID: "506fcc96-87e5-4718-82bd-7ae3c4919ff5"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:57:03 crc kubenswrapper[4721]: I0128 18:57:03.993264 4721 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/506fcc96-87e5-4718-82bd-7ae3c4919ff5-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 18:57:03 crc kubenswrapper[4721]: I0128 18:57:03.993291 4721 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/506fcc96-87e5-4718-82bd-7ae3c4919ff5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 18:57:04 crc kubenswrapper[4721]: I0128 18:57:04.052602 4721 generic.go:334] "Generic (PLEG): container finished" podID="506fcc96-87e5-4718-82bd-7ae3c4919ff5" containerID="be9e1314767a57461e84cd26110b3ec5b09af4d8980f592fe0bc9973cf149856" exitCode=0 Jan 28 18:57:04 crc kubenswrapper[4721]: I0128 18:57:04.052661 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"506fcc96-87e5-4718-82bd-7ae3c4919ff5","Type":"ContainerDied","Data":"be9e1314767a57461e84cd26110b3ec5b09af4d8980f592fe0bc9973cf149856"} Jan 28 18:57:04 crc kubenswrapper[4721]: I0128 18:57:04.052701 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"506fcc96-87e5-4718-82bd-7ae3c4919ff5","Type":"ContainerDied","Data":"f80b8eab457a7be4be2f649eb3f67b823608cfc64faffae2e5020ebfe2e65201"} Jan 28 18:57:04 crc kubenswrapper[4721]: I0128 18:57:04.052726 4721 scope.go:117] "RemoveContainer" containerID="f79cfc617ae7daaa34901eda520de9fb2b97517065120855411013e0bd9d6d63" Jan 28 18:57:04 crc kubenswrapper[4721]: I0128 18:57:04.052835 4721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 28 18:57:04 crc kubenswrapper[4721]: I0128 18:57:04.149250 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 28 18:57:04 crc kubenswrapper[4721]: I0128 18:57:04.170843 4721 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 28 18:57:04 crc kubenswrapper[4721]: I0128 18:57:04.177875 4721 scope.go:117] "RemoveContainer" containerID="abf850f4ae22c9fec3f4c282ab4bd56934eebcea0df9178d58052ccafdc38801" Jan 28 18:57:04 crc kubenswrapper[4721]: I0128 18:57:04.205389 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 28 18:57:04 crc kubenswrapper[4721]: E0128 18:57:04.205892 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="506fcc96-87e5-4718-82bd-7ae3c4919ff5" containerName="ceilometer-central-agent" Jan 28 18:57:04 crc kubenswrapper[4721]: I0128 18:57:04.205910 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="506fcc96-87e5-4718-82bd-7ae3c4919ff5" containerName="ceilometer-central-agent" Jan 28 18:57:04 crc kubenswrapper[4721]: E0128 18:57:04.205921 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="506fcc96-87e5-4718-82bd-7ae3c4919ff5" containerName="proxy-httpd" Jan 28 18:57:04 crc kubenswrapper[4721]: I0128 18:57:04.205928 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="506fcc96-87e5-4718-82bd-7ae3c4919ff5" containerName="proxy-httpd" Jan 28 18:57:04 crc kubenswrapper[4721]: E0128 18:57:04.205953 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="506fcc96-87e5-4718-82bd-7ae3c4919ff5" containerName="ceilometer-notification-agent" Jan 28 18:57:04 crc kubenswrapper[4721]: I0128 18:57:04.205960 4721 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="506fcc96-87e5-4718-82bd-7ae3c4919ff5" containerName="ceilometer-notification-agent" Jan 28 18:57:04 crc kubenswrapper[4721]: E0128 18:57:04.205982 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="506fcc96-87e5-4718-82bd-7ae3c4919ff5" containerName="sg-core" Jan 28 18:57:04 crc kubenswrapper[4721]: I0128 18:57:04.205988 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="506fcc96-87e5-4718-82bd-7ae3c4919ff5" containerName="sg-core" Jan 28 18:57:04 crc kubenswrapper[4721]: I0128 18:57:04.206252 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="506fcc96-87e5-4718-82bd-7ae3c4919ff5" containerName="sg-core" Jan 28 18:57:04 crc kubenswrapper[4721]: I0128 18:57:04.206272 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="506fcc96-87e5-4718-82bd-7ae3c4919ff5" containerName="ceilometer-central-agent" Jan 28 18:57:04 crc kubenswrapper[4721]: I0128 18:57:04.206289 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="506fcc96-87e5-4718-82bd-7ae3c4919ff5" containerName="ceilometer-notification-agent" Jan 28 18:57:04 crc kubenswrapper[4721]: I0128 18:57:04.206300 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="506fcc96-87e5-4718-82bd-7ae3c4919ff5" containerName="proxy-httpd" Jan 28 18:57:04 crc kubenswrapper[4721]: I0128 18:57:04.210995 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 28 18:57:04 crc kubenswrapper[4721]: I0128 18:57:04.232663 4721 scope.go:117] "RemoveContainer" containerID="be9e1314767a57461e84cd26110b3ec5b09af4d8980f592fe0bc9973cf149856" Jan 28 18:57:04 crc kubenswrapper[4721]: I0128 18:57:04.259971 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 28 18:57:04 crc kubenswrapper[4721]: I0128 18:57:04.268106 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 28 18:57:04 crc kubenswrapper[4721]: I0128 18:57:04.268908 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 28 18:57:04 crc kubenswrapper[4721]: I0128 18:57:04.299608 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/807b9edb-ebc0-4d76-87fb-e4c6ff5cd323-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"807b9edb-ebc0-4d76-87fb-e4c6ff5cd323\") " pod="openstack/ceilometer-0" Jan 28 18:57:04 crc kubenswrapper[4721]: I0128 18:57:04.299663 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-988jt\" (UniqueName: \"kubernetes.io/projected/807b9edb-ebc0-4d76-87fb-e4c6ff5cd323-kube-api-access-988jt\") pod \"ceilometer-0\" (UID: \"807b9edb-ebc0-4d76-87fb-e4c6ff5cd323\") " pod="openstack/ceilometer-0" Jan 28 18:57:04 crc kubenswrapper[4721]: I0128 18:57:04.299702 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/807b9edb-ebc0-4d76-87fb-e4c6ff5cd323-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"807b9edb-ebc0-4d76-87fb-e4c6ff5cd323\") " pod="openstack/ceilometer-0" Jan 28 18:57:04 crc kubenswrapper[4721]: I0128 18:57:04.299759 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/807b9edb-ebc0-4d76-87fb-e4c6ff5cd323-log-httpd\") pod 
\"ceilometer-0\" (UID: \"807b9edb-ebc0-4d76-87fb-e4c6ff5cd323\") " pod="openstack/ceilometer-0" Jan 28 18:57:04 crc kubenswrapper[4721]: I0128 18:57:04.299798 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/807b9edb-ebc0-4d76-87fb-e4c6ff5cd323-config-data\") pod \"ceilometer-0\" (UID: \"807b9edb-ebc0-4d76-87fb-e4c6ff5cd323\") " pod="openstack/ceilometer-0" Jan 28 18:57:04 crc kubenswrapper[4721]: I0128 18:57:04.299923 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/807b9edb-ebc0-4d76-87fb-e4c6ff5cd323-scripts\") pod \"ceilometer-0\" (UID: \"807b9edb-ebc0-4d76-87fb-e4c6ff5cd323\") " pod="openstack/ceilometer-0" Jan 28 18:57:04 crc kubenswrapper[4721]: I0128 18:57:04.300043 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/807b9edb-ebc0-4d76-87fb-e4c6ff5cd323-run-httpd\") pod \"ceilometer-0\" (UID: \"807b9edb-ebc0-4d76-87fb-e4c6ff5cd323\") " pod="openstack/ceilometer-0" Jan 28 18:57:04 crc kubenswrapper[4721]: I0128 18:57:04.338382 4721 scope.go:117] "RemoveContainer" containerID="ec7392092be0b4d7058afaaf2ccf94295382c3f46b771108433f33cec8eb6808" Jan 28 18:57:04 crc kubenswrapper[4721]: I0128 18:57:04.365426 4721 scope.go:117] "RemoveContainer" containerID="f79cfc617ae7daaa34901eda520de9fb2b97517065120855411013e0bd9d6d63" Jan 28 18:57:04 crc kubenswrapper[4721]: E0128 18:57:04.366157 4721 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f79cfc617ae7daaa34901eda520de9fb2b97517065120855411013e0bd9d6d63\": container with ID starting with f79cfc617ae7daaa34901eda520de9fb2b97517065120855411013e0bd9d6d63 not found: ID does not exist" containerID="f79cfc617ae7daaa34901eda520de9fb2b97517065120855411013e0bd9d6d63" Jan 28 18:57:04 crc kubenswrapper[4721]: I0128 18:57:04.366228 4721 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f79cfc617ae7daaa34901eda520de9fb2b97517065120855411013e0bd9d6d63"} err="failed to get container status \"f79cfc617ae7daaa34901eda520de9fb2b97517065120855411013e0bd9d6d63\": rpc error: code = NotFound desc = could not find container \"f79cfc617ae7daaa34901eda520de9fb2b97517065120855411013e0bd9d6d63\": container with ID starting with f79cfc617ae7daaa34901eda520de9fb2b97517065120855411013e0bd9d6d63 not found: ID does not exist" Jan 28 18:57:04 crc kubenswrapper[4721]: I0128 18:57:04.366259 4721 scope.go:117] "RemoveContainer" containerID="abf850f4ae22c9fec3f4c282ab4bd56934eebcea0df9178d58052ccafdc38801" Jan 28 18:57:04 crc kubenswrapper[4721]: E0128 18:57:04.371299 4721 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"abf850f4ae22c9fec3f4c282ab4bd56934eebcea0df9178d58052ccafdc38801\": container with ID starting with abf850f4ae22c9fec3f4c282ab4bd56934eebcea0df9178d58052ccafdc38801 not found: ID does not exist" containerID="abf850f4ae22c9fec3f4c282ab4bd56934eebcea0df9178d58052ccafdc38801" Jan 28 18:57:04 crc kubenswrapper[4721]: I0128 18:57:04.371357 4721 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"abf850f4ae22c9fec3f4c282ab4bd56934eebcea0df9178d58052ccafdc38801"} err="failed to get container status 
\"abf850f4ae22c9fec3f4c282ab4bd56934eebcea0df9178d58052ccafdc38801\": rpc error: code = NotFound desc = could not find container \"abf850f4ae22c9fec3f4c282ab4bd56934eebcea0df9178d58052ccafdc38801\": container with ID starting with abf850f4ae22c9fec3f4c282ab4bd56934eebcea0df9178d58052ccafdc38801 not found: ID does not exist" Jan 28 18:57:04 crc kubenswrapper[4721]: I0128 18:57:04.371388 4721 scope.go:117] "RemoveContainer" containerID="be9e1314767a57461e84cd26110b3ec5b09af4d8980f592fe0bc9973cf149856" Jan 28 18:57:04 crc kubenswrapper[4721]: E0128 18:57:04.376345 4721 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"be9e1314767a57461e84cd26110b3ec5b09af4d8980f592fe0bc9973cf149856\": container with ID starting with be9e1314767a57461e84cd26110b3ec5b09af4d8980f592fe0bc9973cf149856 not found: ID does not exist" containerID="be9e1314767a57461e84cd26110b3ec5b09af4d8980f592fe0bc9973cf149856" Jan 28 18:57:04 crc kubenswrapper[4721]: I0128 18:57:04.376406 4721 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"be9e1314767a57461e84cd26110b3ec5b09af4d8980f592fe0bc9973cf149856"} err="failed to get container status \"be9e1314767a57461e84cd26110b3ec5b09af4d8980f592fe0bc9973cf149856\": rpc error: code = NotFound desc = could not find container \"be9e1314767a57461e84cd26110b3ec5b09af4d8980f592fe0bc9973cf149856\": container with ID starting with be9e1314767a57461e84cd26110b3ec5b09af4d8980f592fe0bc9973cf149856 not found: ID does not exist" Jan 28 18:57:04 crc kubenswrapper[4721]: I0128 18:57:04.376442 4721 scope.go:117] "RemoveContainer" containerID="ec7392092be0b4d7058afaaf2ccf94295382c3f46b771108433f33cec8eb6808" Jan 28 18:57:04 crc kubenswrapper[4721]: E0128 18:57:04.381363 4721 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ec7392092be0b4d7058afaaf2ccf94295382c3f46b771108433f33cec8eb6808\": container with ID starting with ec7392092be0b4d7058afaaf2ccf94295382c3f46b771108433f33cec8eb6808 not found: ID does not exist" containerID="ec7392092be0b4d7058afaaf2ccf94295382c3f46b771108433f33cec8eb6808" Jan 28 18:57:04 crc kubenswrapper[4721]: I0128 18:57:04.381417 4721 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ec7392092be0b4d7058afaaf2ccf94295382c3f46b771108433f33cec8eb6808"} err="failed to get container status \"ec7392092be0b4d7058afaaf2ccf94295382c3f46b771108433f33cec8eb6808\": rpc error: code = NotFound desc = could not find container \"ec7392092be0b4d7058afaaf2ccf94295382c3f46b771108433f33cec8eb6808\": container with ID starting with ec7392092be0b4d7058afaaf2ccf94295382c3f46b771108433f33cec8eb6808 not found: ID does not exist" Jan 28 18:57:04 crc kubenswrapper[4721]: I0128 18:57:04.402011 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/807b9edb-ebc0-4d76-87fb-e4c6ff5cd323-scripts\") pod \"ceilometer-0\" (UID: \"807b9edb-ebc0-4d76-87fb-e4c6ff5cd323\") " pod="openstack/ceilometer-0" Jan 28 18:57:04 crc kubenswrapper[4721]: I0128 18:57:04.402151 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/807b9edb-ebc0-4d76-87fb-e4c6ff5cd323-run-httpd\") pod \"ceilometer-0\" (UID: \"807b9edb-ebc0-4d76-87fb-e4c6ff5cd323\") " pod="openstack/ceilometer-0" Jan 28 18:57:04 crc kubenswrapper[4721]: I0128 18:57:04.402192 4721 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/807b9edb-ebc0-4d76-87fb-e4c6ff5cd323-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"807b9edb-ebc0-4d76-87fb-e4c6ff5cd323\") " pod="openstack/ceilometer-0" Jan 28 18:57:04 crc kubenswrapper[4721]: I0128 18:57:04.402213 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-988jt\" (UniqueName: \"kubernetes.io/projected/807b9edb-ebc0-4d76-87fb-e4c6ff5cd323-kube-api-access-988jt\") pod \"ceilometer-0\" (UID: \"807b9edb-ebc0-4d76-87fb-e4c6ff5cd323\") " pod="openstack/ceilometer-0" Jan 28 18:57:04 crc kubenswrapper[4721]: I0128 18:57:04.402237 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/807b9edb-ebc0-4d76-87fb-e4c6ff5cd323-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"807b9edb-ebc0-4d76-87fb-e4c6ff5cd323\") " pod="openstack/ceilometer-0" Jan 28 18:57:04 crc kubenswrapper[4721]: I0128 18:57:04.402291 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/807b9edb-ebc0-4d76-87fb-e4c6ff5cd323-log-httpd\") pod \"ceilometer-0\" (UID: \"807b9edb-ebc0-4d76-87fb-e4c6ff5cd323\") " pod="openstack/ceilometer-0" Jan 28 18:57:04 crc kubenswrapper[4721]: I0128 18:57:04.402328 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/807b9edb-ebc0-4d76-87fb-e4c6ff5cd323-config-data\") pod \"ceilometer-0\" (UID: \"807b9edb-ebc0-4d76-87fb-e4c6ff5cd323\") " pod="openstack/ceilometer-0" Jan 28 18:57:04 crc kubenswrapper[4721]: I0128 18:57:04.407855 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/807b9edb-ebc0-4d76-87fb-e4c6ff5cd323-config-data\") pod \"ceilometer-0\" (UID: \"807b9edb-ebc0-4d76-87fb-e4c6ff5cd323\") " pod="openstack/ceilometer-0" Jan 28 18:57:04 crc kubenswrapper[4721]: I0128 18:57:04.413078 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/807b9edb-ebc0-4d76-87fb-e4c6ff5cd323-log-httpd\") pod \"ceilometer-0\" (UID: \"807b9edb-ebc0-4d76-87fb-e4c6ff5cd323\") " pod="openstack/ceilometer-0" Jan 28 18:57:04 crc kubenswrapper[4721]: I0128 18:57:04.420369 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/807b9edb-ebc0-4d76-87fb-e4c6ff5cd323-run-httpd\") pod \"ceilometer-0\" (UID: \"807b9edb-ebc0-4d76-87fb-e4c6ff5cd323\") " pod="openstack/ceilometer-0" Jan 28 18:57:04 crc kubenswrapper[4721]: I0128 18:57:04.424697 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/807b9edb-ebc0-4d76-87fb-e4c6ff5cd323-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"807b9edb-ebc0-4d76-87fb-e4c6ff5cd323\") " pod="openstack/ceilometer-0" Jan 28 18:57:04 crc kubenswrapper[4721]: I0128 18:57:04.434064 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/807b9edb-ebc0-4d76-87fb-e4c6ff5cd323-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"807b9edb-ebc0-4d76-87fb-e4c6ff5cd323\") " pod="openstack/ceilometer-0" Jan 28 18:57:04 crc kubenswrapper[4721]: I0128 18:57:04.444636 4721 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/807b9edb-ebc0-4d76-87fb-e4c6ff5cd323-scripts\") pod \"ceilometer-0\" (UID: \"807b9edb-ebc0-4d76-87fb-e4c6ff5cd323\") " pod="openstack/ceilometer-0" Jan 28 18:57:04 crc kubenswrapper[4721]: I0128 18:57:04.445966 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-988jt\" (UniqueName: \"kubernetes.io/projected/807b9edb-ebc0-4d76-87fb-e4c6ff5cd323-kube-api-access-988jt\") pod \"ceilometer-0\" (UID: \"807b9edb-ebc0-4d76-87fb-e4c6ff5cd323\") " pod="openstack/ceilometer-0" Jan 28 18:57:04 crc kubenswrapper[4721]: I0128 18:57:04.565527 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 28 18:57:04 crc kubenswrapper[4721]: I0128 18:57:04.767121 4721 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Jan 28 18:57:05 crc kubenswrapper[4721]: I0128 18:57:05.180603 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 28 18:57:05 crc kubenswrapper[4721]: I0128 18:57:05.543775 4721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="506fcc96-87e5-4718-82bd-7ae3c4919ff5" path="/var/lib/kubelet/pods/506fcc96-87e5-4718-82bd-7ae3c4919ff5/volumes" Jan 28 18:57:07 crc kubenswrapper[4721]: I0128 18:57:07.633403 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 28 18:57:07 crc kubenswrapper[4721]: I0128 18:57:07.633978 4721 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="fa8caeb1-7fd6-4493-afba-149f6ad5cfcd" containerName="glance-log" containerID="cri-o://f132365db32cb07f0b65459435bcf0a76c2b12f00abda76f77d0e25e4c241c69" gracePeriod=30 Jan 28 18:57:07 crc kubenswrapper[4721]: I0128 18:57:07.634156 4721 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="fa8caeb1-7fd6-4493-afba-149f6ad5cfcd" containerName="glance-httpd" containerID="cri-o://b0e18ae764a18f24a821f96bb6325b0cfafe8c25620954220b490048d7b70276" gracePeriod=30 Jan 28 18:57:07 crc kubenswrapper[4721]: I0128 18:57:07.696227 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-xlhjz"] Jan 28 18:57:07 crc kubenswrapper[4721]: I0128 18:57:07.698120 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-xlhjz" Jan 28 18:57:07 crc kubenswrapper[4721]: I0128 18:57:07.732242 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-xlhjz"] Jan 28 18:57:07 crc kubenswrapper[4721]: I0128 18:57:07.806893 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fd0e7c7c-c624-4b67-ae51-1a40265dfeb9-operator-scripts\") pod \"nova-api-db-create-xlhjz\" (UID: \"fd0e7c7c-c624-4b67-ae51-1a40265dfeb9\") " pod="openstack/nova-api-db-create-xlhjz" Jan 28 18:57:07 crc kubenswrapper[4721]: I0128 18:57:07.807231 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5clsl\" (UniqueName: \"kubernetes.io/projected/fd0e7c7c-c624-4b67-ae51-1a40265dfeb9-kube-api-access-5clsl\") pod \"nova-api-db-create-xlhjz\" (UID: \"fd0e7c7c-c624-4b67-ae51-1a40265dfeb9\") " pod="openstack/nova-api-db-create-xlhjz" Jan 28 18:57:07 crc kubenswrapper[4721]: I0128 18:57:07.878821 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-6mx9s"] Jan 28 18:57:07 crc kubenswrapper[4721]: I0128 18:57:07.880631 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-6mx9s" Jan 28 18:57:07 crc kubenswrapper[4721]: I0128 18:57:07.923655 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fd0e7c7c-c624-4b67-ae51-1a40265dfeb9-operator-scripts\") pod \"nova-api-db-create-xlhjz\" (UID: \"fd0e7c7c-c624-4b67-ae51-1a40265dfeb9\") " pod="openstack/nova-api-db-create-xlhjz" Jan 28 18:57:07 crc kubenswrapper[4721]: I0128 18:57:07.923715 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5clsl\" (UniqueName: \"kubernetes.io/projected/fd0e7c7c-c624-4b67-ae51-1a40265dfeb9-kube-api-access-5clsl\") pod \"nova-api-db-create-xlhjz\" (UID: \"fd0e7c7c-c624-4b67-ae51-1a40265dfeb9\") " pod="openstack/nova-api-db-create-xlhjz" Jan 28 18:57:07 crc kubenswrapper[4721]: I0128 18:57:07.924936 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fd0e7c7c-c624-4b67-ae51-1a40265dfeb9-operator-scripts\") pod \"nova-api-db-create-xlhjz\" (UID: \"fd0e7c7c-c624-4b67-ae51-1a40265dfeb9\") " pod="openstack/nova-api-db-create-xlhjz" Jan 28 18:57:08 crc kubenswrapper[4721]: I0128 18:57:08.031070 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xds47\" (UniqueName: \"kubernetes.io/projected/0c518e64-69b5-4360-a219-407693412130-kube-api-access-xds47\") pod \"nova-cell0-db-create-6mx9s\" (UID: \"0c518e64-69b5-4360-a219-407693412130\") " pod="openstack/nova-cell0-db-create-6mx9s" Jan 28 18:57:08 crc kubenswrapper[4721]: I0128 18:57:08.032278 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0c518e64-69b5-4360-a219-407693412130-operator-scripts\") pod \"nova-cell0-db-create-6mx9s\" (UID: \"0c518e64-69b5-4360-a219-407693412130\") " pod="openstack/nova-cell0-db-create-6mx9s" Jan 28 18:57:08 crc kubenswrapper[4721]: I0128 18:57:08.060895 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5clsl\" (UniqueName: 
\"kubernetes.io/projected/fd0e7c7c-c624-4b67-ae51-1a40265dfeb9-kube-api-access-5clsl\") pod \"nova-api-db-create-xlhjz\" (UID: \"fd0e7c7c-c624-4b67-ae51-1a40265dfeb9\") " pod="openstack/nova-api-db-create-xlhjz" Jan 28 18:57:08 crc kubenswrapper[4721]: I0128 18:57:08.070806 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-6mx9s"] Jan 28 18:57:08 crc kubenswrapper[4721]: I0128 18:57:08.092303 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-db-create-mpszx"] Jan 28 18:57:08 crc kubenswrapper[4721]: I0128 18:57:08.093822 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-mpszx" Jan 28 18:57:08 crc kubenswrapper[4721]: I0128 18:57:08.102949 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-xlhjz" Jan 28 18:57:08 crc kubenswrapper[4721]: I0128 18:57:08.117534 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-mpszx"] Jan 28 18:57:08 crc kubenswrapper[4721]: I0128 18:57:08.134423 4721 generic.go:334] "Generic (PLEG): container finished" podID="fa8caeb1-7fd6-4493-afba-149f6ad5cfcd" containerID="f132365db32cb07f0b65459435bcf0a76c2b12f00abda76f77d0e25e4c241c69" exitCode=143 Jan 28 18:57:08 crc kubenswrapper[4721]: I0128 18:57:08.134514 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"fa8caeb1-7fd6-4493-afba-149f6ad5cfcd","Type":"ContainerDied","Data":"f132365db32cb07f0b65459435bcf0a76c2b12f00abda76f77d0e25e4c241c69"} Jan 28 18:57:08 crc kubenswrapper[4721]: I0128 18:57:08.137264 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xds47\" (UniqueName: \"kubernetes.io/projected/0c518e64-69b5-4360-a219-407693412130-kube-api-access-xds47\") pod \"nova-cell0-db-create-6mx9s\" (UID: \"0c518e64-69b5-4360-a219-407693412130\") " pod="openstack/nova-cell0-db-create-6mx9s" Jan 28 18:57:08 crc kubenswrapper[4721]: I0128 18:57:08.137527 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0c518e64-69b5-4360-a219-407693412130-operator-scripts\") pod \"nova-cell0-db-create-6mx9s\" (UID: \"0c518e64-69b5-4360-a219-407693412130\") " pod="openstack/nova-cell0-db-create-6mx9s" Jan 28 18:57:08 crc kubenswrapper[4721]: I0128 18:57:08.139112 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0c518e64-69b5-4360-a219-407693412130-operator-scripts\") pod \"nova-cell0-db-create-6mx9s\" (UID: \"0c518e64-69b5-4360-a219-407693412130\") " pod="openstack/nova-cell0-db-create-6mx9s" Jan 28 18:57:08 crc kubenswrapper[4721]: I0128 18:57:08.161040 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-26e2-account-create-update-lb8jh"] Jan 28 18:57:08 crc kubenswrapper[4721]: I0128 18:57:08.162399 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xds47\" (UniqueName: \"kubernetes.io/projected/0c518e64-69b5-4360-a219-407693412130-kube-api-access-xds47\") pod \"nova-cell0-db-create-6mx9s\" (UID: \"0c518e64-69b5-4360-a219-407693412130\") " pod="openstack/nova-cell0-db-create-6mx9s" Jan 28 18:57:08 crc kubenswrapper[4721]: I0128 18:57:08.175539 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-26e2-account-create-update-lb8jh"] Jan 28 18:57:08 
crc kubenswrapper[4721]: I0128 18:57:08.175975 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-26e2-account-create-update-lb8jh" Jan 28 18:57:08 crc kubenswrapper[4721]: I0128 18:57:08.178520 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret" Jan 28 18:57:08 crc kubenswrapper[4721]: I0128 18:57:08.229549 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-6mx9s" Jan 28 18:57:08 crc kubenswrapper[4721]: I0128 18:57:08.241543 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1a4b4c90-d4cd-4ccd-ae6d-9e071c6d3ed4-operator-scripts\") pod \"nova-cell1-db-create-mpszx\" (UID: \"1a4b4c90-d4cd-4ccd-ae6d-9e071c6d3ed4\") " pod="openstack/nova-cell1-db-create-mpszx" Jan 28 18:57:08 crc kubenswrapper[4721]: I0128 18:57:08.241651 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tnqvd\" (UniqueName: \"kubernetes.io/projected/1a4b4c90-d4cd-4ccd-ae6d-9e071c6d3ed4-kube-api-access-tnqvd\") pod \"nova-cell1-db-create-mpszx\" (UID: \"1a4b4c90-d4cd-4ccd-ae6d-9e071c6d3ed4\") " pod="openstack/nova-cell1-db-create-mpszx" Jan 28 18:57:08 crc kubenswrapper[4721]: I0128 18:57:08.241740 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w2cr8\" (UniqueName: \"kubernetes.io/projected/866bb191-d801-4191-b725-52648c9d38bf-kube-api-access-w2cr8\") pod \"nova-api-26e2-account-create-update-lb8jh\" (UID: \"866bb191-d801-4191-b725-52648c9d38bf\") " pod="openstack/nova-api-26e2-account-create-update-lb8jh" Jan 28 18:57:08 crc kubenswrapper[4721]: I0128 18:57:08.241797 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/866bb191-d801-4191-b725-52648c9d38bf-operator-scripts\") pod \"nova-api-26e2-account-create-update-lb8jh\" (UID: \"866bb191-d801-4191-b725-52648c9d38bf\") " pod="openstack/nova-api-26e2-account-create-update-lb8jh" Jan 28 18:57:08 crc kubenswrapper[4721]: I0128 18:57:08.279250 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-5b4d-account-create-update-8nt6r"] Jan 28 18:57:08 crc kubenswrapper[4721]: I0128 18:57:08.281080 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-5b4d-account-create-update-8nt6r" Jan 28 18:57:08 crc kubenswrapper[4721]: I0128 18:57:08.285589 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret" Jan 28 18:57:08 crc kubenswrapper[4721]: I0128 18:57:08.309262 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-5b4d-account-create-update-8nt6r"] Jan 28 18:57:08 crc kubenswrapper[4721]: I0128 18:57:08.346878 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1a4b4c90-d4cd-4ccd-ae6d-9e071c6d3ed4-operator-scripts\") pod \"nova-cell1-db-create-mpszx\" (UID: \"1a4b4c90-d4cd-4ccd-ae6d-9e071c6d3ed4\") " pod="openstack/nova-cell1-db-create-mpszx" Jan 28 18:57:08 crc kubenswrapper[4721]: I0128 18:57:08.346969 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tnqvd\" (UniqueName: \"kubernetes.io/projected/1a4b4c90-d4cd-4ccd-ae6d-9e071c6d3ed4-kube-api-access-tnqvd\") pod \"nova-cell1-db-create-mpszx\" (UID: \"1a4b4c90-d4cd-4ccd-ae6d-9e071c6d3ed4\") " pod="openstack/nova-cell1-db-create-mpszx" Jan 28 18:57:08 crc kubenswrapper[4721]: I0128 18:57:08.347028 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w2cr8\" (UniqueName: \"kubernetes.io/projected/866bb191-d801-4191-b725-52648c9d38bf-kube-api-access-w2cr8\") pod \"nova-api-26e2-account-create-update-lb8jh\" (UID: \"866bb191-d801-4191-b725-52648c9d38bf\") " pod="openstack/nova-api-26e2-account-create-update-lb8jh" Jan 28 18:57:08 crc kubenswrapper[4721]: I0128 18:57:08.347078 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/866bb191-d801-4191-b725-52648c9d38bf-operator-scripts\") pod \"nova-api-26e2-account-create-update-lb8jh\" (UID: \"866bb191-d801-4191-b725-52648c9d38bf\") " pod="openstack/nova-api-26e2-account-create-update-lb8jh" Jan 28 18:57:08 crc kubenswrapper[4721]: I0128 18:57:08.347108 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9fe76996-48bb-4656-8ce3-ac8098700636-operator-scripts\") pod \"nova-cell0-5b4d-account-create-update-8nt6r\" (UID: \"9fe76996-48bb-4656-8ce3-ac8098700636\") " pod="openstack/nova-cell0-5b4d-account-create-update-8nt6r" Jan 28 18:57:08 crc kubenswrapper[4721]: I0128 18:57:08.347151 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q2jtx\" (UniqueName: \"kubernetes.io/projected/9fe76996-48bb-4656-8ce3-ac8098700636-kube-api-access-q2jtx\") pod \"nova-cell0-5b4d-account-create-update-8nt6r\" (UID: \"9fe76996-48bb-4656-8ce3-ac8098700636\") " pod="openstack/nova-cell0-5b4d-account-create-update-8nt6r" Jan 28 18:57:08 crc kubenswrapper[4721]: I0128 18:57:08.347831 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1a4b4c90-d4cd-4ccd-ae6d-9e071c6d3ed4-operator-scripts\") pod \"nova-cell1-db-create-mpszx\" (UID: \"1a4b4c90-d4cd-4ccd-ae6d-9e071c6d3ed4\") " pod="openstack/nova-cell1-db-create-mpszx" Jan 28 18:57:08 crc kubenswrapper[4721]: I0128 18:57:08.348450 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/866bb191-d801-4191-b725-52648c9d38bf-operator-scripts\") pod \"nova-api-26e2-account-create-update-lb8jh\" (UID: \"866bb191-d801-4191-b725-52648c9d38bf\") " pod="openstack/nova-api-26e2-account-create-update-lb8jh" Jan 28 18:57:08 crc kubenswrapper[4721]: I0128 18:57:08.364700 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tnqvd\" (UniqueName: \"kubernetes.io/projected/1a4b4c90-d4cd-4ccd-ae6d-9e071c6d3ed4-kube-api-access-tnqvd\") pod \"nova-cell1-db-create-mpszx\" (UID: \"1a4b4c90-d4cd-4ccd-ae6d-9e071c6d3ed4\") " pod="openstack/nova-cell1-db-create-mpszx" Jan 28 18:57:08 crc kubenswrapper[4721]: I0128 18:57:08.379819 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w2cr8\" (UniqueName: \"kubernetes.io/projected/866bb191-d801-4191-b725-52648c9d38bf-kube-api-access-w2cr8\") pod \"nova-api-26e2-account-create-update-lb8jh\" (UID: \"866bb191-d801-4191-b725-52648c9d38bf\") " pod="openstack/nova-api-26e2-account-create-update-lb8jh" Jan 28 18:57:08 crc kubenswrapper[4721]: I0128 18:57:08.445933 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-mpszx" Jan 28 18:57:08 crc kubenswrapper[4721]: I0128 18:57:08.448652 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q2jtx\" (UniqueName: \"kubernetes.io/projected/9fe76996-48bb-4656-8ce3-ac8098700636-kube-api-access-q2jtx\") pod \"nova-cell0-5b4d-account-create-update-8nt6r\" (UID: \"9fe76996-48bb-4656-8ce3-ac8098700636\") " pod="openstack/nova-cell0-5b4d-account-create-update-8nt6r" Jan 28 18:57:08 crc kubenswrapper[4721]: I0128 18:57:08.448855 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9fe76996-48bb-4656-8ce3-ac8098700636-operator-scripts\") pod \"nova-cell0-5b4d-account-create-update-8nt6r\" (UID: \"9fe76996-48bb-4656-8ce3-ac8098700636\") " pod="openstack/nova-cell0-5b4d-account-create-update-8nt6r" Jan 28 18:57:08 crc kubenswrapper[4721]: I0128 18:57:08.449557 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9fe76996-48bb-4656-8ce3-ac8098700636-operator-scripts\") pod \"nova-cell0-5b4d-account-create-update-8nt6r\" (UID: \"9fe76996-48bb-4656-8ce3-ac8098700636\") " pod="openstack/nova-cell0-5b4d-account-create-update-8nt6r" Jan 28 18:57:08 crc kubenswrapper[4721]: I0128 18:57:08.474615 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q2jtx\" (UniqueName: \"kubernetes.io/projected/9fe76996-48bb-4656-8ce3-ac8098700636-kube-api-access-q2jtx\") pod \"nova-cell0-5b4d-account-create-update-8nt6r\" (UID: \"9fe76996-48bb-4656-8ce3-ac8098700636\") " pod="openstack/nova-cell0-5b4d-account-create-update-8nt6r" Jan 28 18:57:08 crc kubenswrapper[4721]: I0128 18:57:08.488014 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-6060-account-create-update-6nn4d"] Jan 28 18:57:08 crc kubenswrapper[4721]: I0128 18:57:08.489730 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-6060-account-create-update-6nn4d" Jan 28 18:57:08 crc kubenswrapper[4721]: I0128 18:57:08.493585 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret" Jan 28 18:57:08 crc kubenswrapper[4721]: I0128 18:57:08.504244 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-6060-account-create-update-6nn4d"] Jan 28 18:57:08 crc kubenswrapper[4721]: I0128 18:57:08.551605 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pht9d\" (UniqueName: \"kubernetes.io/projected/f49e85fc-9126-4151-980f-56517e1752c1-kube-api-access-pht9d\") pod \"nova-cell1-6060-account-create-update-6nn4d\" (UID: \"f49e85fc-9126-4151-980f-56517e1752c1\") " pod="openstack/nova-cell1-6060-account-create-update-6nn4d" Jan 28 18:57:08 crc kubenswrapper[4721]: I0128 18:57:08.551876 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f49e85fc-9126-4151-980f-56517e1752c1-operator-scripts\") pod \"nova-cell1-6060-account-create-update-6nn4d\" (UID: \"f49e85fc-9126-4151-980f-56517e1752c1\") " pod="openstack/nova-cell1-6060-account-create-update-6nn4d" Jan 28 18:57:08 crc kubenswrapper[4721]: I0128 18:57:08.556222 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-26e2-account-create-update-lb8jh" Jan 28 18:57:08 crc kubenswrapper[4721]: I0128 18:57:08.606585 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-5b4d-account-create-update-8nt6r" Jan 28 18:57:08 crc kubenswrapper[4721]: I0128 18:57:08.654733 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pht9d\" (UniqueName: \"kubernetes.io/projected/f49e85fc-9126-4151-980f-56517e1752c1-kube-api-access-pht9d\") pod \"nova-cell1-6060-account-create-update-6nn4d\" (UID: \"f49e85fc-9126-4151-980f-56517e1752c1\") " pod="openstack/nova-cell1-6060-account-create-update-6nn4d" Jan 28 18:57:08 crc kubenswrapper[4721]: I0128 18:57:08.654910 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f49e85fc-9126-4151-980f-56517e1752c1-operator-scripts\") pod \"nova-cell1-6060-account-create-update-6nn4d\" (UID: \"f49e85fc-9126-4151-980f-56517e1752c1\") " pod="openstack/nova-cell1-6060-account-create-update-6nn4d" Jan 28 18:57:08 crc kubenswrapper[4721]: I0128 18:57:08.655647 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f49e85fc-9126-4151-980f-56517e1752c1-operator-scripts\") pod \"nova-cell1-6060-account-create-update-6nn4d\" (UID: \"f49e85fc-9126-4151-980f-56517e1752c1\") " pod="openstack/nova-cell1-6060-account-create-update-6nn4d" Jan 28 18:57:08 crc kubenswrapper[4721]: I0128 18:57:08.676152 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pht9d\" (UniqueName: \"kubernetes.io/projected/f49e85fc-9126-4151-980f-56517e1752c1-kube-api-access-pht9d\") pod \"nova-cell1-6060-account-create-update-6nn4d\" (UID: \"f49e85fc-9126-4151-980f-56517e1752c1\") " pod="openstack/nova-cell1-6060-account-create-update-6nn4d" Jan 28 18:57:08 crc kubenswrapper[4721]: E0128 18:57:08.868458 4721 cadvisor_stats_provider.go:516] "Partial failure issuing 
cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcb2f69be_cd3d_44ef_80af_f0d4ac766305.slice\": RecentStats: unable to find data in memory cache]" Jan 28 18:57:08 crc kubenswrapper[4721]: I0128 18:57:08.875245 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-6060-account-create-update-6nn4d" Jan 28 18:57:09 crc kubenswrapper[4721]: I0128 18:57:09.701585 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-6895f7fb8c-vmmw7" Jan 28 18:57:09 crc kubenswrapper[4721]: I0128 18:57:09.707502 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-6895f7fb8c-vmmw7" Jan 28 18:57:12 crc kubenswrapper[4721]: I0128 18:57:12.308344 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 28 18:57:12 crc kubenswrapper[4721]: I0128 18:57:12.309283 4721 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="12589ff0-ab4c-4a16-b5bd-7cd433a85c86" containerName="glance-log" containerID="cri-o://d0a3b0f5bafa310ef1c26b32ed945bc0cf2e5768f59b7604edbfc419aed0d741" gracePeriod=30 Jan 28 18:57:12 crc kubenswrapper[4721]: I0128 18:57:12.309582 4721 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="12589ff0-ab4c-4a16-b5bd-7cd433a85c86" containerName="glance-httpd" containerID="cri-o://ebcb2127a9e30e02fcc080615f038598b4bc3f9233adcf353ffe6173ec7b1276" gracePeriod=30 Jan 28 18:57:12 crc kubenswrapper[4721]: I0128 18:57:12.633901 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 28 18:57:12 crc kubenswrapper[4721]: W0128 18:57:12.932559 4721 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod807b9edb_ebc0_4d76_87fb_e4c6ff5cd323.slice/crio-f0a0a87a711843203613f9029b2657e0104287587a77dcda939d8838fef0f727 WatchSource:0}: Error finding container f0a0a87a711843203613f9029b2657e0104287587a77dcda939d8838fef0f727: Status 404 returned error can't find the container with id f0a0a87a711843203613f9029b2657e0104287587a77dcda939d8838fef0f727 Jan 28 18:57:13 crc kubenswrapper[4721]: I0128 18:57:13.235192 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"807b9edb-ebc0-4d76-87fb-e4c6ff5cd323","Type":"ContainerStarted","Data":"f0a0a87a711843203613f9029b2657e0104287587a77dcda939d8838fef0f727"} Jan 28 18:57:13 crc kubenswrapper[4721]: I0128 18:57:13.240046 4721 generic.go:334] "Generic (PLEG): container finished" podID="fa8caeb1-7fd6-4493-afba-149f6ad5cfcd" containerID="b0e18ae764a18f24a821f96bb6325b0cfafe8c25620954220b490048d7b70276" exitCode=0 Jan 28 18:57:13 crc kubenswrapper[4721]: I0128 18:57:13.240119 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"fa8caeb1-7fd6-4493-afba-149f6ad5cfcd","Type":"ContainerDied","Data":"b0e18ae764a18f24a821f96bb6325b0cfafe8c25620954220b490048d7b70276"} Jan 28 18:57:13 crc kubenswrapper[4721]: I0128 18:57:13.267216 4721 generic.go:334] "Generic (PLEG): container finished" podID="12589ff0-ab4c-4a16-b5bd-7cd433a85c86" containerID="d0a3b0f5bafa310ef1c26b32ed945bc0cf2e5768f59b7604edbfc419aed0d741" exitCode=143 Jan 28 18:57:13 crc kubenswrapper[4721]: I0128 18:57:13.267300 4721 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"12589ff0-ab4c-4a16-b5bd-7cd433a85c86","Type":"ContainerDied","Data":"d0a3b0f5bafa310ef1c26b32ed945bc0cf2e5768f59b7604edbfc419aed0d741"} Jan 28 18:57:14 crc kubenswrapper[4721]: I0128 18:57:14.137446 4721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 28 18:57:14 crc kubenswrapper[4721]: I0128 18:57:14.216087 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-6mx9s"] Jan 28 18:57:14 crc kubenswrapper[4721]: I0128 18:57:14.238308 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fa8caeb1-7fd6-4493-afba-149f6ad5cfcd-logs\") pod \"fa8caeb1-7fd6-4493-afba-149f6ad5cfcd\" (UID: \"fa8caeb1-7fd6-4493-afba-149f6ad5cfcd\") " Jan 28 18:57:14 crc kubenswrapper[4721]: I0128 18:57:14.238381 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa8caeb1-7fd6-4493-afba-149f6ad5cfcd-combined-ca-bundle\") pod \"fa8caeb1-7fd6-4493-afba-149f6ad5cfcd\" (UID: \"fa8caeb1-7fd6-4493-afba-149f6ad5cfcd\") " Jan 28 18:57:14 crc kubenswrapper[4721]: I0128 18:57:14.238729 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-dada3fff-f4a9-4795-a6e6-d294171ec4bb\") pod \"fa8caeb1-7fd6-4493-afba-149f6ad5cfcd\" (UID: \"fa8caeb1-7fd6-4493-afba-149f6ad5cfcd\") " Jan 28 18:57:14 crc kubenswrapper[4721]: I0128 18:57:14.238756 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fa8caeb1-7fd6-4493-afba-149f6ad5cfcd-scripts\") pod \"fa8caeb1-7fd6-4493-afba-149f6ad5cfcd\" (UID: \"fa8caeb1-7fd6-4493-afba-149f6ad5cfcd\") " Jan 28 18:57:14 crc kubenswrapper[4721]: I0128 18:57:14.238777 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/fa8caeb1-7fd6-4493-afba-149f6ad5cfcd-public-tls-certs\") pod \"fa8caeb1-7fd6-4493-afba-149f6ad5cfcd\" (UID: \"fa8caeb1-7fd6-4493-afba-149f6ad5cfcd\") " Jan 28 18:57:14 crc kubenswrapper[4721]: I0128 18:57:14.238908 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fa8caeb1-7fd6-4493-afba-149f6ad5cfcd-config-data\") pod \"fa8caeb1-7fd6-4493-afba-149f6ad5cfcd\" (UID: \"fa8caeb1-7fd6-4493-afba-149f6ad5cfcd\") " Jan 28 18:57:14 crc kubenswrapper[4721]: I0128 18:57:14.238932 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bctlc\" (UniqueName: \"kubernetes.io/projected/fa8caeb1-7fd6-4493-afba-149f6ad5cfcd-kube-api-access-bctlc\") pod \"fa8caeb1-7fd6-4493-afba-149f6ad5cfcd\" (UID: \"fa8caeb1-7fd6-4493-afba-149f6ad5cfcd\") " Jan 28 18:57:14 crc kubenswrapper[4721]: I0128 18:57:14.238962 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/fa8caeb1-7fd6-4493-afba-149f6ad5cfcd-httpd-run\") pod \"fa8caeb1-7fd6-4493-afba-149f6ad5cfcd\" (UID: \"fa8caeb1-7fd6-4493-afba-149f6ad5cfcd\") " Jan 28 18:57:14 crc kubenswrapper[4721]: I0128 18:57:14.239869 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/fa8caeb1-7fd6-4493-afba-149f6ad5cfcd-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "fa8caeb1-7fd6-4493-afba-149f6ad5cfcd" (UID: "fa8caeb1-7fd6-4493-afba-149f6ad5cfcd"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 18:57:14 crc kubenswrapper[4721]: I0128 18:57:14.240417 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fa8caeb1-7fd6-4493-afba-149f6ad5cfcd-logs" (OuterVolumeSpecName: "logs") pod "fa8caeb1-7fd6-4493-afba-149f6ad5cfcd" (UID: "fa8caeb1-7fd6-4493-afba-149f6ad5cfcd"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 18:57:14 crc kubenswrapper[4721]: I0128 18:57:14.250137 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fa8caeb1-7fd6-4493-afba-149f6ad5cfcd-scripts" (OuterVolumeSpecName: "scripts") pod "fa8caeb1-7fd6-4493-afba-149f6ad5cfcd" (UID: "fa8caeb1-7fd6-4493-afba-149f6ad5cfcd"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:57:14 crc kubenswrapper[4721]: I0128 18:57:14.260423 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fa8caeb1-7fd6-4493-afba-149f6ad5cfcd-kube-api-access-bctlc" (OuterVolumeSpecName: "kube-api-access-bctlc") pod "fa8caeb1-7fd6-4493-afba-149f6ad5cfcd" (UID: "fa8caeb1-7fd6-4493-afba-149f6ad5cfcd"). InnerVolumeSpecName "kube-api-access-bctlc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:57:14 crc kubenswrapper[4721]: I0128 18:57:14.303046 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"807b9edb-ebc0-4d76-87fb-e4c6ff5cd323","Type":"ContainerStarted","Data":"ba278f41d159e0ca302e23e9cd67578366ac8d2c7e81c0df34e0472d037a4c27"} Jan 28 18:57:14 crc kubenswrapper[4721]: I0128 18:57:14.316551 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-dada3fff-f4a9-4795-a6e6-d294171ec4bb" (OuterVolumeSpecName: "glance") pod "fa8caeb1-7fd6-4493-afba-149f6ad5cfcd" (UID: "fa8caeb1-7fd6-4493-afba-149f6ad5cfcd"). InnerVolumeSpecName "pvc-dada3fff-f4a9-4795-a6e6-d294171ec4bb". PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 28 18:57:14 crc kubenswrapper[4721]: I0128 18:57:14.323097 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fa8caeb1-7fd6-4493-afba-149f6ad5cfcd-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fa8caeb1-7fd6-4493-afba-149f6ad5cfcd" (UID: "fa8caeb1-7fd6-4493-afba-149f6ad5cfcd"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:57:14 crc kubenswrapper[4721]: I0128 18:57:14.323412 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"85f51b69-4069-4da4-895c-0f92ad51506c","Type":"ContainerStarted","Data":"f1e007ca1b38a96d35da6db8672c23ba9142127b8d8cc033a235d26af609c48e"} Jan 28 18:57:14 crc kubenswrapper[4721]: I0128 18:57:14.349204 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bctlc\" (UniqueName: \"kubernetes.io/projected/fa8caeb1-7fd6-4493-afba-149f6ad5cfcd-kube-api-access-bctlc\") on node \"crc\" DevicePath \"\"" Jan 28 18:57:14 crc kubenswrapper[4721]: I0128 18:57:14.349240 4721 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/fa8caeb1-7fd6-4493-afba-149f6ad5cfcd-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 28 18:57:14 crc kubenswrapper[4721]: I0128 18:57:14.349266 4721 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fa8caeb1-7fd6-4493-afba-149f6ad5cfcd-logs\") on node \"crc\" DevicePath \"\"" Jan 28 18:57:14 crc kubenswrapper[4721]: I0128 18:57:14.349275 4721 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa8caeb1-7fd6-4493-afba-149f6ad5cfcd-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 18:57:14 crc kubenswrapper[4721]: I0128 18:57:14.349301 4721 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-dada3fff-f4a9-4795-a6e6-d294171ec4bb\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-dada3fff-f4a9-4795-a6e6-d294171ec4bb\") on node \"crc\" " Jan 28 18:57:14 crc kubenswrapper[4721]: I0128 18:57:14.349312 4721 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fa8caeb1-7fd6-4493-afba-149f6ad5cfcd-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 18:57:14 crc kubenswrapper[4721]: I0128 18:57:14.356307 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-6mx9s" event={"ID":"0c518e64-69b5-4360-a219-407693412130","Type":"ContainerStarted","Data":"e679b98bc5c5dd6d7e28aaf64fbb69db8ad07bcaa98a171267a47820942ef36f"} Jan 28 18:57:14 crc kubenswrapper[4721]: I0128 18:57:14.365672 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=2.400140413 podStartE2EDuration="27.365645093s" podCreationTimestamp="2026-01-28 18:56:47 +0000 UTC" firstStartedPulling="2026-01-28 18:56:48.155528834 +0000 UTC m=+1373.880834394" lastFinishedPulling="2026-01-28 18:57:13.121033514 +0000 UTC m=+1398.846339074" observedRunningTime="2026-01-28 18:57:14.358503775 +0000 UTC m=+1400.083809335" watchObservedRunningTime="2026-01-28 18:57:14.365645093 +0000 UTC m=+1400.090950653" Jan 28 18:57:14 crc kubenswrapper[4721]: I0128 18:57:14.371354 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"fa8caeb1-7fd6-4493-afba-149f6ad5cfcd","Type":"ContainerDied","Data":"ee83c9c5487709a65c7ba20b834c50dbce1f21f41300852f1c0dd07d7bbca8d3"} Jan 28 18:57:14 crc kubenswrapper[4721]: I0128 18:57:14.371411 4721 scope.go:117] "RemoveContainer" containerID="b0e18ae764a18f24a821f96bb6325b0cfafe8c25620954220b490048d7b70276" Jan 28 18:57:14 crc kubenswrapper[4721]: I0128 18:57:14.371563 4721 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 28 18:57:14 crc kubenswrapper[4721]: I0128 18:57:14.376219 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fa8caeb1-7fd6-4493-afba-149f6ad5cfcd-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "fa8caeb1-7fd6-4493-afba-149f6ad5cfcd" (UID: "fa8caeb1-7fd6-4493-afba-149f6ad5cfcd"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:57:14 crc kubenswrapper[4721]: I0128 18:57:14.422747 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-5b4d-account-create-update-8nt6r"] Jan 28 18:57:14 crc kubenswrapper[4721]: I0128 18:57:14.431718 4721 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Jan 28 18:57:14 crc kubenswrapper[4721]: I0128 18:57:14.432293 4721 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-dada3fff-f4a9-4795-a6e6-d294171ec4bb" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-dada3fff-f4a9-4795-a6e6-d294171ec4bb") on node "crc" Jan 28 18:57:14 crc kubenswrapper[4721]: I0128 18:57:14.445375 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fa8caeb1-7fd6-4493-afba-149f6ad5cfcd-config-data" (OuterVolumeSpecName: "config-data") pod "fa8caeb1-7fd6-4493-afba-149f6ad5cfcd" (UID: "fa8caeb1-7fd6-4493-afba-149f6ad5cfcd"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:57:14 crc kubenswrapper[4721]: I0128 18:57:14.451852 4721 reconciler_common.go:293] "Volume detached for volume \"pvc-dada3fff-f4a9-4795-a6e6-d294171ec4bb\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-dada3fff-f4a9-4795-a6e6-d294171ec4bb\") on node \"crc\" DevicePath \"\"" Jan 28 18:57:14 crc kubenswrapper[4721]: I0128 18:57:14.451888 4721 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/fa8caeb1-7fd6-4493-afba-149f6ad5cfcd-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 28 18:57:14 crc kubenswrapper[4721]: I0128 18:57:14.451908 4721 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fa8caeb1-7fd6-4493-afba-149f6ad5cfcd-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 18:57:14 crc kubenswrapper[4721]: I0128 18:57:14.484583 4721 scope.go:117] "RemoveContainer" containerID="f132365db32cb07f0b65459435bcf0a76c2b12f00abda76f77d0e25e4c241c69" Jan 28 18:57:14 crc kubenswrapper[4721]: I0128 18:57:14.717334 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-mpszx"] Jan 28 18:57:14 crc kubenswrapper[4721]: I0128 18:57:14.760785 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-26e2-account-create-update-lb8jh"] Jan 28 18:57:14 crc kubenswrapper[4721]: I0128 18:57:14.790325 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 28 18:57:14 crc kubenswrapper[4721]: I0128 18:57:14.821250 4721 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 28 18:57:14 crc kubenswrapper[4721]: I0128 18:57:14.860578 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-xlhjz"] Jan 28 18:57:14 crc kubenswrapper[4721]: I0128 18:57:14.879698 4721 kubelet.go:2428] "SyncLoop 
UPDATE" source="api" pods=["openstack/nova-cell1-6060-account-create-update-6nn4d"] Jan 28 18:57:14 crc kubenswrapper[4721]: I0128 18:57:14.901253 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Jan 28 18:57:14 crc kubenswrapper[4721]: E0128 18:57:14.901813 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fa8caeb1-7fd6-4493-afba-149f6ad5cfcd" containerName="glance-log" Jan 28 18:57:14 crc kubenswrapper[4721]: I0128 18:57:14.901830 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="fa8caeb1-7fd6-4493-afba-149f6ad5cfcd" containerName="glance-log" Jan 28 18:57:14 crc kubenswrapper[4721]: E0128 18:57:14.901870 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fa8caeb1-7fd6-4493-afba-149f6ad5cfcd" containerName="glance-httpd" Jan 28 18:57:14 crc kubenswrapper[4721]: I0128 18:57:14.901877 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="fa8caeb1-7fd6-4493-afba-149f6ad5cfcd" containerName="glance-httpd" Jan 28 18:57:14 crc kubenswrapper[4721]: I0128 18:57:14.902084 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="fa8caeb1-7fd6-4493-afba-149f6ad5cfcd" containerName="glance-httpd" Jan 28 18:57:14 crc kubenswrapper[4721]: I0128 18:57:14.902109 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="fa8caeb1-7fd6-4493-afba-149f6ad5cfcd" containerName="glance-log" Jan 28 18:57:14 crc kubenswrapper[4721]: I0128 18:57:14.903410 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 28 18:57:14 crc kubenswrapper[4721]: I0128 18:57:14.908265 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Jan 28 18:57:14 crc kubenswrapper[4721]: I0128 18:57:14.909201 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Jan 28 18:57:14 crc kubenswrapper[4721]: I0128 18:57:14.913456 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 28 18:57:14 crc kubenswrapper[4721]: I0128 18:57:14.970810 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5hkmf\" (UniqueName: \"kubernetes.io/projected/08521a5c-e90f-4cf1-a64d-f0bc0bdf7b3d-kube-api-access-5hkmf\") pod \"glance-default-external-api-0\" (UID: \"08521a5c-e90f-4cf1-a64d-f0bc0bdf7b3d\") " pod="openstack/glance-default-external-api-0" Jan 28 18:57:14 crc kubenswrapper[4721]: I0128 18:57:14.970886 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-dada3fff-f4a9-4795-a6e6-d294171ec4bb\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-dada3fff-f4a9-4795-a6e6-d294171ec4bb\") pod \"glance-default-external-api-0\" (UID: \"08521a5c-e90f-4cf1-a64d-f0bc0bdf7b3d\") " pod="openstack/glance-default-external-api-0" Jan 28 18:57:14 crc kubenswrapper[4721]: I0128 18:57:14.970921 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/08521a5c-e90f-4cf1-a64d-f0bc0bdf7b3d-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"08521a5c-e90f-4cf1-a64d-f0bc0bdf7b3d\") " pod="openstack/glance-default-external-api-0" Jan 28 18:57:14 crc kubenswrapper[4721]: I0128 18:57:14.970982 4721 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/08521a5c-e90f-4cf1-a64d-f0bc0bdf7b3d-logs\") pod \"glance-default-external-api-0\" (UID: \"08521a5c-e90f-4cf1-a64d-f0bc0bdf7b3d\") " pod="openstack/glance-default-external-api-0" Jan 28 18:57:14 crc kubenswrapper[4721]: I0128 18:57:14.971039 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/08521a5c-e90f-4cf1-a64d-f0bc0bdf7b3d-config-data\") pod \"glance-default-external-api-0\" (UID: \"08521a5c-e90f-4cf1-a64d-f0bc0bdf7b3d\") " pod="openstack/glance-default-external-api-0" Jan 28 18:57:14 crc kubenswrapper[4721]: I0128 18:57:14.971226 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/08521a5c-e90f-4cf1-a64d-f0bc0bdf7b3d-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"08521a5c-e90f-4cf1-a64d-f0bc0bdf7b3d\") " pod="openstack/glance-default-external-api-0" Jan 28 18:57:14 crc kubenswrapper[4721]: I0128 18:57:14.971363 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/08521a5c-e90f-4cf1-a64d-f0bc0bdf7b3d-scripts\") pod \"glance-default-external-api-0\" (UID: \"08521a5c-e90f-4cf1-a64d-f0bc0bdf7b3d\") " pod="openstack/glance-default-external-api-0" Jan 28 18:57:14 crc kubenswrapper[4721]: I0128 18:57:14.971443 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/08521a5c-e90f-4cf1-a64d-f0bc0bdf7b3d-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"08521a5c-e90f-4cf1-a64d-f0bc0bdf7b3d\") " pod="openstack/glance-default-external-api-0" Jan 28 18:57:15 crc kubenswrapper[4721]: I0128 18:57:15.073060 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/08521a5c-e90f-4cf1-a64d-f0bc0bdf7b3d-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"08521a5c-e90f-4cf1-a64d-f0bc0bdf7b3d\") " pod="openstack/glance-default-external-api-0" Jan 28 18:57:15 crc kubenswrapper[4721]: I0128 18:57:15.073148 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5hkmf\" (UniqueName: \"kubernetes.io/projected/08521a5c-e90f-4cf1-a64d-f0bc0bdf7b3d-kube-api-access-5hkmf\") pod \"glance-default-external-api-0\" (UID: \"08521a5c-e90f-4cf1-a64d-f0bc0bdf7b3d\") " pod="openstack/glance-default-external-api-0" Jan 28 18:57:15 crc kubenswrapper[4721]: I0128 18:57:15.073279 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-dada3fff-f4a9-4795-a6e6-d294171ec4bb\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-dada3fff-f4a9-4795-a6e6-d294171ec4bb\") pod \"glance-default-external-api-0\" (UID: \"08521a5c-e90f-4cf1-a64d-f0bc0bdf7b3d\") " pod="openstack/glance-default-external-api-0" Jan 28 18:57:15 crc kubenswrapper[4721]: I0128 18:57:15.073308 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/08521a5c-e90f-4cf1-a64d-f0bc0bdf7b3d-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"08521a5c-e90f-4cf1-a64d-f0bc0bdf7b3d\") " pod="openstack/glance-default-external-api-0" Jan 28 18:57:15 crc 
kubenswrapper[4721]: I0128 18:57:15.073838 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/08521a5c-e90f-4cf1-a64d-f0bc0bdf7b3d-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"08521a5c-e90f-4cf1-a64d-f0bc0bdf7b3d\") " pod="openstack/glance-default-external-api-0" Jan 28 18:57:15 crc kubenswrapper[4721]: I0128 18:57:15.074447 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/08521a5c-e90f-4cf1-a64d-f0bc0bdf7b3d-logs\") pod \"glance-default-external-api-0\" (UID: \"08521a5c-e90f-4cf1-a64d-f0bc0bdf7b3d\") " pod="openstack/glance-default-external-api-0" Jan 28 18:57:15 crc kubenswrapper[4721]: I0128 18:57:15.074520 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/08521a5c-e90f-4cf1-a64d-f0bc0bdf7b3d-config-data\") pod \"glance-default-external-api-0\" (UID: \"08521a5c-e90f-4cf1-a64d-f0bc0bdf7b3d\") " pod="openstack/glance-default-external-api-0" Jan 28 18:57:15 crc kubenswrapper[4721]: I0128 18:57:15.074682 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/08521a5c-e90f-4cf1-a64d-f0bc0bdf7b3d-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"08521a5c-e90f-4cf1-a64d-f0bc0bdf7b3d\") " pod="openstack/glance-default-external-api-0" Jan 28 18:57:15 crc kubenswrapper[4721]: I0128 18:57:15.074720 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/08521a5c-e90f-4cf1-a64d-f0bc0bdf7b3d-scripts\") pod \"glance-default-external-api-0\" (UID: \"08521a5c-e90f-4cf1-a64d-f0bc0bdf7b3d\") " pod="openstack/glance-default-external-api-0" Jan 28 18:57:15 crc kubenswrapper[4721]: I0128 18:57:15.078119 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/08521a5c-e90f-4cf1-a64d-f0bc0bdf7b3d-logs\") pod \"glance-default-external-api-0\" (UID: \"08521a5c-e90f-4cf1-a64d-f0bc0bdf7b3d\") " pod="openstack/glance-default-external-api-0" Jan 28 18:57:15 crc kubenswrapper[4721]: I0128 18:57:15.080011 4721 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 28 18:57:15 crc kubenswrapper[4721]: I0128 18:57:15.080076 4721 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-dada3fff-f4a9-4795-a6e6-d294171ec4bb\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-dada3fff-f4a9-4795-a6e6-d294171ec4bb\") pod \"glance-default-external-api-0\" (UID: \"08521a5c-e90f-4cf1-a64d-f0bc0bdf7b3d\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/cbf5faf63d16a5d12a5e9b11b66b2cf989de626a136bdd39a47c0348964ea03b/globalmount\"" pod="openstack/glance-default-external-api-0" Jan 28 18:57:15 crc kubenswrapper[4721]: I0128 18:57:15.084502 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/08521a5c-e90f-4cf1-a64d-f0bc0bdf7b3d-config-data\") pod \"glance-default-external-api-0\" (UID: \"08521a5c-e90f-4cf1-a64d-f0bc0bdf7b3d\") " pod="openstack/glance-default-external-api-0" Jan 28 18:57:15 crc kubenswrapper[4721]: I0128 18:57:15.085913 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/08521a5c-e90f-4cf1-a64d-f0bc0bdf7b3d-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"08521a5c-e90f-4cf1-a64d-f0bc0bdf7b3d\") " pod="openstack/glance-default-external-api-0" Jan 28 18:57:15 crc kubenswrapper[4721]: I0128 18:57:15.086472 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/08521a5c-e90f-4cf1-a64d-f0bc0bdf7b3d-scripts\") pod \"glance-default-external-api-0\" (UID: \"08521a5c-e90f-4cf1-a64d-f0bc0bdf7b3d\") " pod="openstack/glance-default-external-api-0" Jan 28 18:57:15 crc kubenswrapper[4721]: I0128 18:57:15.098003 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/08521a5c-e90f-4cf1-a64d-f0bc0bdf7b3d-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"08521a5c-e90f-4cf1-a64d-f0bc0bdf7b3d\") " pod="openstack/glance-default-external-api-0" Jan 28 18:57:15 crc kubenswrapper[4721]: I0128 18:57:15.105887 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5hkmf\" (UniqueName: \"kubernetes.io/projected/08521a5c-e90f-4cf1-a64d-f0bc0bdf7b3d-kube-api-access-5hkmf\") pod \"glance-default-external-api-0\" (UID: \"08521a5c-e90f-4cf1-a64d-f0bc0bdf7b3d\") " pod="openstack/glance-default-external-api-0" Jan 28 18:57:15 crc kubenswrapper[4721]: I0128 18:57:15.211634 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-dada3fff-f4a9-4795-a6e6-d294171ec4bb\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-dada3fff-f4a9-4795-a6e6-d294171ec4bb\") pod \"glance-default-external-api-0\" (UID: \"08521a5c-e90f-4cf1-a64d-f0bc0bdf7b3d\") " pod="openstack/glance-default-external-api-0" Jan 28 18:57:15 crc kubenswrapper[4721]: I0128 18:57:15.239543 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-b9kc7"] Jan 28 18:57:15 crc kubenswrapper[4721]: I0128 18:57:15.241855 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-b9kc7" Jan 28 18:57:15 crc kubenswrapper[4721]: I0128 18:57:15.275624 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-b9kc7"] Jan 28 18:57:15 crc kubenswrapper[4721]: I0128 18:57:15.281512 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/653fc683-6178-446d-9cf2-4ae9e3e0029e-utilities\") pod \"redhat-operators-b9kc7\" (UID: \"653fc683-6178-446d-9cf2-4ae9e3e0029e\") " pod="openshift-marketplace/redhat-operators-b9kc7" Jan 28 18:57:15 crc kubenswrapper[4721]: I0128 18:57:15.281954 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/653fc683-6178-446d-9cf2-4ae9e3e0029e-catalog-content\") pod \"redhat-operators-b9kc7\" (UID: \"653fc683-6178-446d-9cf2-4ae9e3e0029e\") " pod="openshift-marketplace/redhat-operators-b9kc7" Jan 28 18:57:15 crc kubenswrapper[4721]: I0128 18:57:15.282018 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-44k8k\" (UniqueName: \"kubernetes.io/projected/653fc683-6178-446d-9cf2-4ae9e3e0029e-kube-api-access-44k8k\") pod \"redhat-operators-b9kc7\" (UID: \"653fc683-6178-446d-9cf2-4ae9e3e0029e\") " pod="openshift-marketplace/redhat-operators-b9kc7" Jan 28 18:57:15 crc kubenswrapper[4721]: I0128 18:57:15.424024 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/653fc683-6178-446d-9cf2-4ae9e3e0029e-catalog-content\") pod \"redhat-operators-b9kc7\" (UID: \"653fc683-6178-446d-9cf2-4ae9e3e0029e\") " pod="openshift-marketplace/redhat-operators-b9kc7" Jan 28 18:57:15 crc kubenswrapper[4721]: I0128 18:57:15.424787 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-44k8k\" (UniqueName: \"kubernetes.io/projected/653fc683-6178-446d-9cf2-4ae9e3e0029e-kube-api-access-44k8k\") pod \"redhat-operators-b9kc7\" (UID: \"653fc683-6178-446d-9cf2-4ae9e3e0029e\") " pod="openshift-marketplace/redhat-operators-b9kc7" Jan 28 18:57:15 crc kubenswrapper[4721]: I0128 18:57:15.441085 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/653fc683-6178-446d-9cf2-4ae9e3e0029e-utilities\") pod \"redhat-operators-b9kc7\" (UID: \"653fc683-6178-446d-9cf2-4ae9e3e0029e\") " pod="openshift-marketplace/redhat-operators-b9kc7" Jan 28 18:57:15 crc kubenswrapper[4721]: I0128 18:57:15.433104 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/653fc683-6178-446d-9cf2-4ae9e3e0029e-catalog-content\") pod \"redhat-operators-b9kc7\" (UID: \"653fc683-6178-446d-9cf2-4ae9e3e0029e\") " pod="openshift-marketplace/redhat-operators-b9kc7" Jan 28 18:57:15 crc kubenswrapper[4721]: I0128 18:57:15.432404 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 28 18:57:15 crc kubenswrapper[4721]: I0128 18:57:15.442495 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/653fc683-6178-446d-9cf2-4ae9e3e0029e-utilities\") pod \"redhat-operators-b9kc7\" (UID: \"653fc683-6178-446d-9cf2-4ae9e3e0029e\") " pod="openshift-marketplace/redhat-operators-b9kc7" Jan 28 18:57:15 crc kubenswrapper[4721]: I0128 18:57:15.446879 4721 generic.go:334] "Generic (PLEG): container finished" podID="0c518e64-69b5-4360-a219-407693412130" containerID="b445bc2491348a67304e8419dbea9a9ee5a3764ff161fd483a703bc1ebe6f122" exitCode=0 Jan 28 18:57:15 crc kubenswrapper[4721]: I0128 18:57:15.447036 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-6mx9s" event={"ID":"0c518e64-69b5-4360-a219-407693412130","Type":"ContainerDied","Data":"b445bc2491348a67304e8419dbea9a9ee5a3764ff161fd483a703bc1ebe6f122"} Jan 28 18:57:15 crc kubenswrapper[4721]: I0128 18:57:15.449143 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-6060-account-create-update-6nn4d" event={"ID":"f49e85fc-9126-4151-980f-56517e1752c1","Type":"ContainerStarted","Data":"85eb8ef4f2ddc6533d5ce0d7f926fcb630c8c6eb078353ec573ff7646faf65b2"} Jan 28 18:57:15 crc kubenswrapper[4721]: I0128 18:57:15.462261 4721 generic.go:334] "Generic (PLEG): container finished" podID="9fe76996-48bb-4656-8ce3-ac8098700636" containerID="1335bd2be285e63cf4500de7e09d4d6f7b0ac2396bb6a2229984b8ca5236b6a3" exitCode=0 Jan 28 18:57:15 crc kubenswrapper[4721]: I0128 18:57:15.462399 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-5b4d-account-create-update-8nt6r" event={"ID":"9fe76996-48bb-4656-8ce3-ac8098700636","Type":"ContainerDied","Data":"1335bd2be285e63cf4500de7e09d4d6f7b0ac2396bb6a2229984b8ca5236b6a3"} Jan 28 18:57:15 crc kubenswrapper[4721]: I0128 18:57:15.462430 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-5b4d-account-create-update-8nt6r" event={"ID":"9fe76996-48bb-4656-8ce3-ac8098700636","Type":"ContainerStarted","Data":"995799b57b11d7d6046c825418d9b2558d65ccdf5b10d6f487181ed3b90cd635"} Jan 28 18:57:15 crc kubenswrapper[4721]: I0128 18:57:15.468496 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-44k8k\" (UniqueName: \"kubernetes.io/projected/653fc683-6178-446d-9cf2-4ae9e3e0029e-kube-api-access-44k8k\") pod \"redhat-operators-b9kc7\" (UID: \"653fc683-6178-446d-9cf2-4ae9e3e0029e\") " pod="openshift-marketplace/redhat-operators-b9kc7" Jan 28 18:57:15 crc kubenswrapper[4721]: I0128 18:57:15.479620 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-mpszx" event={"ID":"1a4b4c90-d4cd-4ccd-ae6d-9e071c6d3ed4","Type":"ContainerStarted","Data":"bd7625ab8d8562d57fde5332b878bb21f2d2a56eec4d0465a4db58ae71e3ddd1"} Jan 28 18:57:15 crc kubenswrapper[4721]: I0128 18:57:15.493723 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-26e2-account-create-update-lb8jh" event={"ID":"866bb191-d801-4191-b725-52648c9d38bf","Type":"ContainerStarted","Data":"9462b735a95b7a04595eaad9061723f6c923ad72124798d342e7333e2aabfbd9"} Jan 28 18:57:15 crc kubenswrapper[4721]: I0128 18:57:15.507787 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-xlhjz" 
event={"ID":"fd0e7c7c-c624-4b67-ae51-1a40265dfeb9","Type":"ContainerStarted","Data":"24fbb430019f3a00685e2b6dd0a15e9732969e4dacfee6f89ecbe0b3c79609f0"} Jan 28 18:57:15 crc kubenswrapper[4721]: I0128 18:57:15.575950 4721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fa8caeb1-7fd6-4493-afba-149f6ad5cfcd" path="/var/lib/kubelet/pods/fa8caeb1-7fd6-4493-afba-149f6ad5cfcd/volumes" Jan 28 18:57:15 crc kubenswrapper[4721]: I0128 18:57:15.803348 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-b9kc7" Jan 28 18:57:15 crc kubenswrapper[4721]: I0128 18:57:15.863635 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-7b5b4f6d96-q5gf8" Jan 28 18:57:16 crc kubenswrapper[4721]: I0128 18:57:16.090251 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-7b5b4f6d96-q5gf8" Jan 28 18:57:16 crc kubenswrapper[4721]: I0128 18:57:16.252006 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-649bf84c5b-p55hh"] Jan 28 18:57:16 crc kubenswrapper[4721]: I0128 18:57:16.252491 4721 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/placement-649bf84c5b-p55hh" podUID="65d3ed26-a43e-491f-8170-7d65eb15bd4f" containerName="placement-api" containerID="cri-o://19fac1308ae337004fdf3cfda1dfe901ebfa56b69b065d1dc73b4ebce61bd354" gracePeriod=30 Jan 28 18:57:16 crc kubenswrapper[4721]: I0128 18:57:16.252690 4721 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/placement-649bf84c5b-p55hh" podUID="65d3ed26-a43e-491f-8170-7d65eb15bd4f" containerName="placement-log" containerID="cri-o://7d3647343ea1bb010bb6f756bbe8c043bb3ef2a9dd83b66f0a3cedfcc37239cf" gracePeriod=30 Jan 28 18:57:16 crc kubenswrapper[4721]: I0128 18:57:16.395247 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 28 18:57:16 crc kubenswrapper[4721]: I0128 18:57:16.566059 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"807b9edb-ebc0-4d76-87fb-e4c6ff5cd323","Type":"ContainerStarted","Data":"225c8bd88bf22af97ab093fc7386fcf8c76a85d9483520062bfd7941d2676b28"} Jan 28 18:57:16 crc kubenswrapper[4721]: I0128 18:57:16.580260 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-6060-account-create-update-6nn4d" event={"ID":"f49e85fc-9126-4151-980f-56517e1752c1","Type":"ContainerStarted","Data":"1f98430a5afea1fb88ac875610693f286a0dfea7a93072492ab2b95a8d1c1b91"} Jan 28 18:57:16 crc kubenswrapper[4721]: I0128 18:57:16.583652 4721 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 28 18:57:16 crc kubenswrapper[4721]: I0128 18:57:16.585104 4721 generic.go:334] "Generic (PLEG): container finished" podID="1a4b4c90-d4cd-4ccd-ae6d-9e071c6d3ed4" containerID="597c8ff5bbfa741fc36a77a13a481561fd9d1f9c1b4b2f6d4b1ec4fc5311f690" exitCode=0 Jan 28 18:57:16 crc kubenswrapper[4721]: I0128 18:57:16.585209 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-mpszx" event={"ID":"1a4b4c90-d4cd-4ccd-ae6d-9e071c6d3ed4","Type":"ContainerDied","Data":"597c8ff5bbfa741fc36a77a13a481561fd9d1f9c1b4b2f6d4b1ec4fc5311f690"} Jan 28 18:57:16 crc kubenswrapper[4721]: I0128 18:57:16.589061 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-26e2-account-create-update-lb8jh" event={"ID":"866bb191-d801-4191-b725-52648c9d38bf","Type":"ContainerStarted","Data":"dbeb009e175800373d048d66becbda38ceaa5b0de078d9eb7ef46ea812bb4f48"} Jan 28 18:57:16 crc kubenswrapper[4721]: I0128 18:57:16.602335 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-6060-account-create-update-6nn4d" podStartSLOduration=8.60231475 podStartE2EDuration="8.60231475s" podCreationTimestamp="2026-01-28 18:57:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:57:16.601016038 +0000 UTC m=+1402.326321598" watchObservedRunningTime="2026-01-28 18:57:16.60231475 +0000 UTC m=+1402.327620310" Jan 28 18:57:16 crc kubenswrapper[4721]: I0128 18:57:16.605786 4721 generic.go:334] "Generic (PLEG): container finished" podID="65d3ed26-a43e-491f-8170-7d65eb15bd4f" containerID="7d3647343ea1bb010bb6f756bbe8c043bb3ef2a9dd83b66f0a3cedfcc37239cf" exitCode=143 Jan 28 18:57:16 crc kubenswrapper[4721]: I0128 18:57:16.605851 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-649bf84c5b-p55hh" event={"ID":"65d3ed26-a43e-491f-8170-7d65eb15bd4f","Type":"ContainerDied","Data":"7d3647343ea1bb010bb6f756bbe8c043bb3ef2a9dd83b66f0a3cedfcc37239cf"} Jan 28 18:57:16 crc kubenswrapper[4721]: I0128 18:57:16.609289 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-xlhjz" event={"ID":"fd0e7c7c-c624-4b67-ae51-1a40265dfeb9","Type":"ContainerStarted","Data":"34155a1134cb338c0cce6443e34dd2d6f34691df46c0b62211d1a871d7d4ba4f"} Jan 28 18:57:16 crc kubenswrapper[4721]: I0128 18:57:16.622265 4721 generic.go:334] "Generic (PLEG): container finished" podID="12589ff0-ab4c-4a16-b5bd-7cd433a85c86" containerID="ebcb2127a9e30e02fcc080615f038598b4bc3f9233adcf353ffe6173ec7b1276" exitCode=0 Jan 28 18:57:16 crc kubenswrapper[4721]: I0128 18:57:16.622332 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"12589ff0-ab4c-4a16-b5bd-7cd433a85c86","Type":"ContainerDied","Data":"ebcb2127a9e30e02fcc080615f038598b4bc3f9233adcf353ffe6173ec7b1276"} Jan 28 18:57:16 crc kubenswrapper[4721]: I0128 18:57:16.622337 4721 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 28 18:57:16 crc kubenswrapper[4721]: I0128 18:57:16.622358 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"12589ff0-ab4c-4a16-b5bd-7cd433a85c86","Type":"ContainerDied","Data":"99dbb7626917d81fe3ed180404d6a6b936b170f76b4096a2f20bf341c4279b78"} Jan 28 18:57:16 crc kubenswrapper[4721]: I0128 18:57:16.622375 4721 scope.go:117] "RemoveContainer" containerID="ebcb2127a9e30e02fcc080615f038598b4bc3f9233adcf353ffe6173ec7b1276" Jan 28 18:57:16 crc kubenswrapper[4721]: I0128 18:57:16.625854 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"08521a5c-e90f-4cf1-a64d-f0bc0bdf7b3d","Type":"ContainerStarted","Data":"64341dba4bdd5285f974bd6135559d908bab405541b2719c74408927acc0b7df"} Jan 28 18:57:16 crc kubenswrapper[4721]: I0128 18:57:16.673598 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-26e2-account-create-update-lb8jh" podStartSLOduration=9.673577381 podStartE2EDuration="9.673577381s" podCreationTimestamp="2026-01-28 18:57:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:57:16.670286257 +0000 UTC m=+1402.395591817" watchObservedRunningTime="2026-01-28 18:57:16.673577381 +0000 UTC m=+1402.398882941" Jan 28 18:57:16 crc kubenswrapper[4721]: I0128 18:57:16.720961 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/12589ff0-ab4c-4a16-b5bd-7cd433a85c86-scripts\") pod \"12589ff0-ab4c-4a16-b5bd-7cd433a85c86\" (UID: \"12589ff0-ab4c-4a16-b5bd-7cd433a85c86\") " Jan 28 18:57:16 crc kubenswrapper[4721]: I0128 18:57:16.721020 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/12589ff0-ab4c-4a16-b5bd-7cd433a85c86-config-data\") pod \"12589ff0-ab4c-4a16-b5bd-7cd433a85c86\" (UID: \"12589ff0-ab4c-4a16-b5bd-7cd433a85c86\") " Jan 28 18:57:16 crc kubenswrapper[4721]: I0128 18:57:16.721075 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/12589ff0-ab4c-4a16-b5bd-7cd433a85c86-internal-tls-certs\") pod \"12589ff0-ab4c-4a16-b5bd-7cd433a85c86\" (UID: \"12589ff0-ab4c-4a16-b5bd-7cd433a85c86\") " Jan 28 18:57:16 crc kubenswrapper[4721]: I0128 18:57:16.721119 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/12589ff0-ab4c-4a16-b5bd-7cd433a85c86-combined-ca-bundle\") pod \"12589ff0-ab4c-4a16-b5bd-7cd433a85c86\" (UID: \"12589ff0-ab4c-4a16-b5bd-7cd433a85c86\") " Jan 28 18:57:16 crc kubenswrapper[4721]: I0128 18:57:16.721148 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lvnrp\" (UniqueName: \"kubernetes.io/projected/12589ff0-ab4c-4a16-b5bd-7cd433a85c86-kube-api-access-lvnrp\") pod \"12589ff0-ab4c-4a16-b5bd-7cd433a85c86\" (UID: \"12589ff0-ab4c-4a16-b5bd-7cd433a85c86\") " Jan 28 18:57:16 crc kubenswrapper[4721]: I0128 18:57:16.721412 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-37cf9bee-9d53-43de-b70c-dc3f1890f4f2\") pod \"12589ff0-ab4c-4a16-b5bd-7cd433a85c86\" (UID: 
\"12589ff0-ab4c-4a16-b5bd-7cd433a85c86\") " Jan 28 18:57:16 crc kubenswrapper[4721]: I0128 18:57:16.721507 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/12589ff0-ab4c-4a16-b5bd-7cd433a85c86-logs\") pod \"12589ff0-ab4c-4a16-b5bd-7cd433a85c86\" (UID: \"12589ff0-ab4c-4a16-b5bd-7cd433a85c86\") " Jan 28 18:57:16 crc kubenswrapper[4721]: I0128 18:57:16.721554 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/12589ff0-ab4c-4a16-b5bd-7cd433a85c86-httpd-run\") pod \"12589ff0-ab4c-4a16-b5bd-7cd433a85c86\" (UID: \"12589ff0-ab4c-4a16-b5bd-7cd433a85c86\") " Jan 28 18:57:16 crc kubenswrapper[4721]: I0128 18:57:16.729384 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/12589ff0-ab4c-4a16-b5bd-7cd433a85c86-scripts" (OuterVolumeSpecName: "scripts") pod "12589ff0-ab4c-4a16-b5bd-7cd433a85c86" (UID: "12589ff0-ab4c-4a16-b5bd-7cd433a85c86"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:57:16 crc kubenswrapper[4721]: I0128 18:57:16.733737 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/12589ff0-ab4c-4a16-b5bd-7cd433a85c86-logs" (OuterVolumeSpecName: "logs") pod "12589ff0-ab4c-4a16-b5bd-7cd433a85c86" (UID: "12589ff0-ab4c-4a16-b5bd-7cd433a85c86"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 18:57:16 crc kubenswrapper[4721]: I0128 18:57:16.733985 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/12589ff0-ab4c-4a16-b5bd-7cd433a85c86-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "12589ff0-ab4c-4a16-b5bd-7cd433a85c86" (UID: "12589ff0-ab4c-4a16-b5bd-7cd433a85c86"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 18:57:16 crc kubenswrapper[4721]: I0128 18:57:16.755976 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/12589ff0-ab4c-4a16-b5bd-7cd433a85c86-kube-api-access-lvnrp" (OuterVolumeSpecName: "kube-api-access-lvnrp") pod "12589ff0-ab4c-4a16-b5bd-7cd433a85c86" (UID: "12589ff0-ab4c-4a16-b5bd-7cd433a85c86"). InnerVolumeSpecName "kube-api-access-lvnrp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:57:16 crc kubenswrapper[4721]: I0128 18:57:16.796998 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/12589ff0-ab4c-4a16-b5bd-7cd433a85c86-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "12589ff0-ab4c-4a16-b5bd-7cd433a85c86" (UID: "12589ff0-ab4c-4a16-b5bd-7cd433a85c86"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:57:16 crc kubenswrapper[4721]: I0128 18:57:16.797952 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-db-create-xlhjz" podStartSLOduration=9.797936905 podStartE2EDuration="9.797936905s" podCreationTimestamp="2026-01-28 18:57:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:57:16.797375547 +0000 UTC m=+1402.522681107" watchObservedRunningTime="2026-01-28 18:57:16.797936905 +0000 UTC m=+1402.523242465" Jan 28 18:57:16 crc kubenswrapper[4721]: I0128 18:57:16.837402 4721 scope.go:117] "RemoveContainer" containerID="d0a3b0f5bafa310ef1c26b32ed945bc0cf2e5768f59b7604edbfc419aed0d741" Jan 28 18:57:16 crc kubenswrapper[4721]: I0128 18:57:16.850881 4721 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/12589ff0-ab4c-4a16-b5bd-7cd433a85c86-logs\") on node \"crc\" DevicePath \"\"" Jan 28 18:57:16 crc kubenswrapper[4721]: I0128 18:57:16.850914 4721 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/12589ff0-ab4c-4a16-b5bd-7cd433a85c86-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 28 18:57:16 crc kubenswrapper[4721]: I0128 18:57:16.850923 4721 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/12589ff0-ab4c-4a16-b5bd-7cd433a85c86-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 18:57:16 crc kubenswrapper[4721]: I0128 18:57:16.850933 4721 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/12589ff0-ab4c-4a16-b5bd-7cd433a85c86-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 18:57:16 crc kubenswrapper[4721]: I0128 18:57:16.850943 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lvnrp\" (UniqueName: \"kubernetes.io/projected/12589ff0-ab4c-4a16-b5bd-7cd433a85c86-kube-api-access-lvnrp\") on node \"crc\" DevicePath \"\"" Jan 28 18:57:16 crc kubenswrapper[4721]: I0128 18:57:16.859474 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/12589ff0-ab4c-4a16-b5bd-7cd433a85c86-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "12589ff0-ab4c-4a16-b5bd-7cd433a85c86" (UID: "12589ff0-ab4c-4a16-b5bd-7cd433a85c86"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:57:16 crc kubenswrapper[4721]: I0128 18:57:16.905203 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/12589ff0-ab4c-4a16-b5bd-7cd433a85c86-config-data" (OuterVolumeSpecName: "config-data") pod "12589ff0-ab4c-4a16-b5bd-7cd433a85c86" (UID: "12589ff0-ab4c-4a16-b5bd-7cd433a85c86"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:57:16 crc kubenswrapper[4721]: I0128 18:57:16.909289 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-b9kc7"] Jan 28 18:57:16 crc kubenswrapper[4721]: I0128 18:57:16.984059 4721 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/12589ff0-ab4c-4a16-b5bd-7cd433a85c86-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 18:57:16 crc kubenswrapper[4721]: I0128 18:57:16.984090 4721 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/12589ff0-ab4c-4a16-b5bd-7cd433a85c86-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 28 18:57:17 crc kubenswrapper[4721]: I0128 18:57:17.098533 4721 scope.go:117] "RemoveContainer" containerID="ebcb2127a9e30e02fcc080615f038598b4bc3f9233adcf353ffe6173ec7b1276" Jan 28 18:57:17 crc kubenswrapper[4721]: E0128 18:57:17.113565 4721 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ebcb2127a9e30e02fcc080615f038598b4bc3f9233adcf353ffe6173ec7b1276\": container with ID starting with ebcb2127a9e30e02fcc080615f038598b4bc3f9233adcf353ffe6173ec7b1276 not found: ID does not exist" containerID="ebcb2127a9e30e02fcc080615f038598b4bc3f9233adcf353ffe6173ec7b1276" Jan 28 18:57:17 crc kubenswrapper[4721]: I0128 18:57:17.113607 4721 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ebcb2127a9e30e02fcc080615f038598b4bc3f9233adcf353ffe6173ec7b1276"} err="failed to get container status \"ebcb2127a9e30e02fcc080615f038598b4bc3f9233adcf353ffe6173ec7b1276\": rpc error: code = NotFound desc = could not find container \"ebcb2127a9e30e02fcc080615f038598b4bc3f9233adcf353ffe6173ec7b1276\": container with ID starting with ebcb2127a9e30e02fcc080615f038598b4bc3f9233adcf353ffe6173ec7b1276 not found: ID does not exist" Jan 28 18:57:17 crc kubenswrapper[4721]: I0128 18:57:17.113635 4721 scope.go:117] "RemoveContainer" containerID="d0a3b0f5bafa310ef1c26b32ed945bc0cf2e5768f59b7604edbfc419aed0d741" Jan 28 18:57:17 crc kubenswrapper[4721]: E0128 18:57:17.114952 4721 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d0a3b0f5bafa310ef1c26b32ed945bc0cf2e5768f59b7604edbfc419aed0d741\": container with ID starting with d0a3b0f5bafa310ef1c26b32ed945bc0cf2e5768f59b7604edbfc419aed0d741 not found: ID does not exist" containerID="d0a3b0f5bafa310ef1c26b32ed945bc0cf2e5768f59b7604edbfc419aed0d741" Jan 28 18:57:17 crc kubenswrapper[4721]: I0128 18:57:17.114980 4721 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d0a3b0f5bafa310ef1c26b32ed945bc0cf2e5768f59b7604edbfc419aed0d741"} err="failed to get container status \"d0a3b0f5bafa310ef1c26b32ed945bc0cf2e5768f59b7604edbfc419aed0d741\": rpc error: code = NotFound desc = could not find container \"d0a3b0f5bafa310ef1c26b32ed945bc0cf2e5768f59b7604edbfc419aed0d741\": container with ID starting with d0a3b0f5bafa310ef1c26b32ed945bc0cf2e5768f59b7604edbfc419aed0d741 not found: ID does not exist" Jan 28 18:57:17 crc kubenswrapper[4721]: I0128 18:57:17.150666 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-37cf9bee-9d53-43de-b70c-dc3f1890f4f2" (OuterVolumeSpecName: "glance") pod "12589ff0-ab4c-4a16-b5bd-7cd433a85c86" (UID: 
"12589ff0-ab4c-4a16-b5bd-7cd433a85c86"). InnerVolumeSpecName "pvc-37cf9bee-9d53-43de-b70c-dc3f1890f4f2". PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 28 18:57:17 crc kubenswrapper[4721]: I0128 18:57:17.188737 4721 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-37cf9bee-9d53-43de-b70c-dc3f1890f4f2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-37cf9bee-9d53-43de-b70c-dc3f1890f4f2\") on node \"crc\" " Jan 28 18:57:17 crc kubenswrapper[4721]: I0128 18:57:17.366828 4721 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Jan 28 18:57:17 crc kubenswrapper[4721]: I0128 18:57:17.368412 4721 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-37cf9bee-9d53-43de-b70c-dc3f1890f4f2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-37cf9bee-9d53-43de-b70c-dc3f1890f4f2") on node "crc" Jan 28 18:57:17 crc kubenswrapper[4721]: I0128 18:57:17.392980 4721 reconciler_common.go:293] "Volume detached for volume \"pvc-37cf9bee-9d53-43de-b70c-dc3f1890f4f2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-37cf9bee-9d53-43de-b70c-dc3f1890f4f2\") on node \"crc\" DevicePath \"\"" Jan 28 18:57:17 crc kubenswrapper[4721]: I0128 18:57:17.477709 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 28 18:57:17 crc kubenswrapper[4721]: I0128 18:57:17.515285 4721 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 28 18:57:17 crc kubenswrapper[4721]: I0128 18:57:17.564726 4721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="12589ff0-ab4c-4a16-b5bd-7cd433a85c86" path="/var/lib/kubelet/pods/12589ff0-ab4c-4a16-b5bd-7cd433a85c86/volumes" Jan 28 18:57:17 crc kubenswrapper[4721]: I0128 18:57:17.565816 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 28 18:57:17 crc kubenswrapper[4721]: E0128 18:57:17.566305 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="12589ff0-ab4c-4a16-b5bd-7cd433a85c86" containerName="glance-httpd" Jan 28 18:57:17 crc kubenswrapper[4721]: I0128 18:57:17.566332 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="12589ff0-ab4c-4a16-b5bd-7cd433a85c86" containerName="glance-httpd" Jan 28 18:57:17 crc kubenswrapper[4721]: E0128 18:57:17.566374 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="12589ff0-ab4c-4a16-b5bd-7cd433a85c86" containerName="glance-log" Jan 28 18:57:17 crc kubenswrapper[4721]: I0128 18:57:17.566384 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="12589ff0-ab4c-4a16-b5bd-7cd433a85c86" containerName="glance-log" Jan 28 18:57:17 crc kubenswrapper[4721]: I0128 18:57:17.566671 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="12589ff0-ab4c-4a16-b5bd-7cd433a85c86" containerName="glance-log" Jan 28 18:57:17 crc kubenswrapper[4721]: I0128 18:57:17.566715 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="12589ff0-ab4c-4a16-b5bd-7cd433a85c86" containerName="glance-httpd" Jan 28 18:57:17 crc kubenswrapper[4721]: I0128 18:57:17.574480 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 28 18:57:17 crc kubenswrapper[4721]: I0128 18:57:17.574650 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 28 18:57:17 crc kubenswrapper[4721]: I0128 18:57:17.577651 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Jan 28 18:57:17 crc kubenswrapper[4721]: I0128 18:57:17.577836 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Jan 28 18:57:17 crc kubenswrapper[4721]: I0128 18:57:17.673142 4721 generic.go:334] "Generic (PLEG): container finished" podID="f49e85fc-9126-4151-980f-56517e1752c1" containerID="1f98430a5afea1fb88ac875610693f286a0dfea7a93072492ab2b95a8d1c1b91" exitCode=0 Jan 28 18:57:17 crc kubenswrapper[4721]: I0128 18:57:17.673262 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-6060-account-create-update-6nn4d" event={"ID":"f49e85fc-9126-4151-980f-56517e1752c1","Type":"ContainerDied","Data":"1f98430a5afea1fb88ac875610693f286a0dfea7a93072492ab2b95a8d1c1b91"} Jan 28 18:57:17 crc kubenswrapper[4721]: I0128 18:57:17.681876 4721 generic.go:334] "Generic (PLEG): container finished" podID="fd0e7c7c-c624-4b67-ae51-1a40265dfeb9" containerID="34155a1134cb338c0cce6443e34dd2d6f34691df46c0b62211d1a871d7d4ba4f" exitCode=0 Jan 28 18:57:17 crc kubenswrapper[4721]: I0128 18:57:17.681975 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-xlhjz" event={"ID":"fd0e7c7c-c624-4b67-ae51-1a40265dfeb9","Type":"ContainerDied","Data":"34155a1134cb338c0cce6443e34dd2d6f34691df46c0b62211d1a871d7d4ba4f"} Jan 28 18:57:17 crc kubenswrapper[4721]: I0128 18:57:17.705021 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/dd8b9a50-f75d-4b67-8b19-4f5d1e702cd9-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"dd8b9a50-f75d-4b67-8b19-4f5d1e702cd9\") " pod="openstack/glance-default-internal-api-0" Jan 28 18:57:17 crc kubenswrapper[4721]: I0128 18:57:17.705121 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-37cf9bee-9d53-43de-b70c-dc3f1890f4f2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-37cf9bee-9d53-43de-b70c-dc3f1890f4f2\") pod \"glance-default-internal-api-0\" (UID: \"dd8b9a50-f75d-4b67-8b19-4f5d1e702cd9\") " pod="openstack/glance-default-internal-api-0" Jan 28 18:57:17 crc kubenswrapper[4721]: I0128 18:57:17.705196 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zvgl9\" (UniqueName: \"kubernetes.io/projected/dd8b9a50-f75d-4b67-8b19-4f5d1e702cd9-kube-api-access-zvgl9\") pod \"glance-default-internal-api-0\" (UID: \"dd8b9a50-f75d-4b67-8b19-4f5d1e702cd9\") " pod="openstack/glance-default-internal-api-0" Jan 28 18:57:17 crc kubenswrapper[4721]: I0128 18:57:17.705263 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dd8b9a50-f75d-4b67-8b19-4f5d1e702cd9-config-data\") pod \"glance-default-internal-api-0\" (UID: \"dd8b9a50-f75d-4b67-8b19-4f5d1e702cd9\") " pod="openstack/glance-default-internal-api-0" Jan 28 18:57:17 crc kubenswrapper[4721]: I0128 18:57:17.705305 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dd8b9a50-f75d-4b67-8b19-4f5d1e702cd9-scripts\") pod 
\"glance-default-internal-api-0\" (UID: \"dd8b9a50-f75d-4b67-8b19-4f5d1e702cd9\") " pod="openstack/glance-default-internal-api-0" Jan 28 18:57:17 crc kubenswrapper[4721]: I0128 18:57:17.705391 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/dd8b9a50-f75d-4b67-8b19-4f5d1e702cd9-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"dd8b9a50-f75d-4b67-8b19-4f5d1e702cd9\") " pod="openstack/glance-default-internal-api-0" Jan 28 18:57:17 crc kubenswrapper[4721]: I0128 18:57:17.705424 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dd8b9a50-f75d-4b67-8b19-4f5d1e702cd9-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"dd8b9a50-f75d-4b67-8b19-4f5d1e702cd9\") " pod="openstack/glance-default-internal-api-0" Jan 28 18:57:17 crc kubenswrapper[4721]: I0128 18:57:17.705521 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dd8b9a50-f75d-4b67-8b19-4f5d1e702cd9-logs\") pod \"glance-default-internal-api-0\" (UID: \"dd8b9a50-f75d-4b67-8b19-4f5d1e702cd9\") " pod="openstack/glance-default-internal-api-0" Jan 28 18:57:17 crc kubenswrapper[4721]: I0128 18:57:17.709237 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-b9kc7" event={"ID":"653fc683-6178-446d-9cf2-4ae9e3e0029e","Type":"ContainerStarted","Data":"86a65502734bbd30b3b720ee4a1f53fe7ba4aede62d8824fee48f7780134aacd"} Jan 28 18:57:17 crc kubenswrapper[4721]: I0128 18:57:17.807349 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/dd8b9a50-f75d-4b67-8b19-4f5d1e702cd9-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"dd8b9a50-f75d-4b67-8b19-4f5d1e702cd9\") " pod="openstack/glance-default-internal-api-0" Jan 28 18:57:17 crc kubenswrapper[4721]: I0128 18:57:17.807411 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-37cf9bee-9d53-43de-b70c-dc3f1890f4f2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-37cf9bee-9d53-43de-b70c-dc3f1890f4f2\") pod \"glance-default-internal-api-0\" (UID: \"dd8b9a50-f75d-4b67-8b19-4f5d1e702cd9\") " pod="openstack/glance-default-internal-api-0" Jan 28 18:57:17 crc kubenswrapper[4721]: I0128 18:57:17.807456 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zvgl9\" (UniqueName: \"kubernetes.io/projected/dd8b9a50-f75d-4b67-8b19-4f5d1e702cd9-kube-api-access-zvgl9\") pod \"glance-default-internal-api-0\" (UID: \"dd8b9a50-f75d-4b67-8b19-4f5d1e702cd9\") " pod="openstack/glance-default-internal-api-0" Jan 28 18:57:17 crc kubenswrapper[4721]: I0128 18:57:17.807492 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dd8b9a50-f75d-4b67-8b19-4f5d1e702cd9-config-data\") pod \"glance-default-internal-api-0\" (UID: \"dd8b9a50-f75d-4b67-8b19-4f5d1e702cd9\") " pod="openstack/glance-default-internal-api-0" Jan 28 18:57:17 crc kubenswrapper[4721]: I0128 18:57:17.807511 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dd8b9a50-f75d-4b67-8b19-4f5d1e702cd9-scripts\") pod \"glance-default-internal-api-0\" 
(UID: \"dd8b9a50-f75d-4b67-8b19-4f5d1e702cd9\") " pod="openstack/glance-default-internal-api-0" Jan 28 18:57:17 crc kubenswrapper[4721]: I0128 18:57:17.807577 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/dd8b9a50-f75d-4b67-8b19-4f5d1e702cd9-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"dd8b9a50-f75d-4b67-8b19-4f5d1e702cd9\") " pod="openstack/glance-default-internal-api-0" Jan 28 18:57:17 crc kubenswrapper[4721]: I0128 18:57:17.807598 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dd8b9a50-f75d-4b67-8b19-4f5d1e702cd9-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"dd8b9a50-f75d-4b67-8b19-4f5d1e702cd9\") " pod="openstack/glance-default-internal-api-0" Jan 28 18:57:17 crc kubenswrapper[4721]: I0128 18:57:17.807654 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dd8b9a50-f75d-4b67-8b19-4f5d1e702cd9-logs\") pod \"glance-default-internal-api-0\" (UID: \"dd8b9a50-f75d-4b67-8b19-4f5d1e702cd9\") " pod="openstack/glance-default-internal-api-0" Jan 28 18:57:17 crc kubenswrapper[4721]: I0128 18:57:17.808407 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dd8b9a50-f75d-4b67-8b19-4f5d1e702cd9-logs\") pod \"glance-default-internal-api-0\" (UID: \"dd8b9a50-f75d-4b67-8b19-4f5d1e702cd9\") " pod="openstack/glance-default-internal-api-0" Jan 28 18:57:17 crc kubenswrapper[4721]: I0128 18:57:17.809502 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/dd8b9a50-f75d-4b67-8b19-4f5d1e702cd9-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"dd8b9a50-f75d-4b67-8b19-4f5d1e702cd9\") " pod="openstack/glance-default-internal-api-0" Jan 28 18:57:17 crc kubenswrapper[4721]: I0128 18:57:17.818442 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dd8b9a50-f75d-4b67-8b19-4f5d1e702cd9-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"dd8b9a50-f75d-4b67-8b19-4f5d1e702cd9\") " pod="openstack/glance-default-internal-api-0" Jan 28 18:57:17 crc kubenswrapper[4721]: I0128 18:57:17.823076 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dd8b9a50-f75d-4b67-8b19-4f5d1e702cd9-scripts\") pod \"glance-default-internal-api-0\" (UID: \"dd8b9a50-f75d-4b67-8b19-4f5d1e702cd9\") " pod="openstack/glance-default-internal-api-0" Jan 28 18:57:17 crc kubenswrapper[4721]: I0128 18:57:17.823290 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dd8b9a50-f75d-4b67-8b19-4f5d1e702cd9-config-data\") pod \"glance-default-internal-api-0\" (UID: \"dd8b9a50-f75d-4b67-8b19-4f5d1e702cd9\") " pod="openstack/glance-default-internal-api-0" Jan 28 18:57:17 crc kubenswrapper[4721]: I0128 18:57:17.823662 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/dd8b9a50-f75d-4b67-8b19-4f5d1e702cd9-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"dd8b9a50-f75d-4b67-8b19-4f5d1e702cd9\") " pod="openstack/glance-default-internal-api-0" Jan 28 18:57:17 crc kubenswrapper[4721]: I0128 
18:57:17.827259 4721 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 28 18:57:17 crc kubenswrapper[4721]: I0128 18:57:17.827297 4721 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-37cf9bee-9d53-43de-b70c-dc3f1890f4f2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-37cf9bee-9d53-43de-b70c-dc3f1890f4f2\") pod \"glance-default-internal-api-0\" (UID: \"dd8b9a50-f75d-4b67-8b19-4f5d1e702cd9\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/e0fb27f96ed2a0ff9a552b58e2db95cb7dc681ae95f2f3784ea1f011e1d9aaa2/globalmount\"" pod="openstack/glance-default-internal-api-0" Jan 28 18:57:17 crc kubenswrapper[4721]: I0128 18:57:17.834159 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zvgl9\" (UniqueName: \"kubernetes.io/projected/dd8b9a50-f75d-4b67-8b19-4f5d1e702cd9-kube-api-access-zvgl9\") pod \"glance-default-internal-api-0\" (UID: \"dd8b9a50-f75d-4b67-8b19-4f5d1e702cd9\") " pod="openstack/glance-default-internal-api-0" Jan 28 18:57:17 crc kubenswrapper[4721]: I0128 18:57:17.912748 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-37cf9bee-9d53-43de-b70c-dc3f1890f4f2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-37cf9bee-9d53-43de-b70c-dc3f1890f4f2\") pod \"glance-default-internal-api-0\" (UID: \"dd8b9a50-f75d-4b67-8b19-4f5d1e702cd9\") " pod="openstack/glance-default-internal-api-0" Jan 28 18:57:18 crc kubenswrapper[4721]: I0128 18:57:18.113658 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 28 18:57:18 crc kubenswrapper[4721]: I0128 18:57:18.144973 4721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-6mx9s" Jan 28 18:57:18 crc kubenswrapper[4721]: I0128 18:57:18.230301 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xds47\" (UniqueName: \"kubernetes.io/projected/0c518e64-69b5-4360-a219-407693412130-kube-api-access-xds47\") pod \"0c518e64-69b5-4360-a219-407693412130\" (UID: \"0c518e64-69b5-4360-a219-407693412130\") " Jan 28 18:57:18 crc kubenswrapper[4721]: I0128 18:57:18.230367 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0c518e64-69b5-4360-a219-407693412130-operator-scripts\") pod \"0c518e64-69b5-4360-a219-407693412130\" (UID: \"0c518e64-69b5-4360-a219-407693412130\") " Jan 28 18:57:18 crc kubenswrapper[4721]: I0128 18:57:18.232987 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0c518e64-69b5-4360-a219-407693412130-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "0c518e64-69b5-4360-a219-407693412130" (UID: "0c518e64-69b5-4360-a219-407693412130"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:57:18 crc kubenswrapper[4721]: I0128 18:57:18.248811 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0c518e64-69b5-4360-a219-407693412130-kube-api-access-xds47" (OuterVolumeSpecName: "kube-api-access-xds47") pod "0c518e64-69b5-4360-a219-407693412130" (UID: "0c518e64-69b5-4360-a219-407693412130"). InnerVolumeSpecName "kube-api-access-xds47". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:57:18 crc kubenswrapper[4721]: I0128 18:57:18.333256 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xds47\" (UniqueName: \"kubernetes.io/projected/0c518e64-69b5-4360-a219-407693412130-kube-api-access-xds47\") on node \"crc\" DevicePath \"\"" Jan 28 18:57:18 crc kubenswrapper[4721]: I0128 18:57:18.333287 4721 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0c518e64-69b5-4360-a219-407693412130-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 18:57:18 crc kubenswrapper[4721]: I0128 18:57:18.394509 4721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-5b4d-account-create-update-8nt6r" Jan 28 18:57:18 crc kubenswrapper[4721]: I0128 18:57:18.444068 4721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-mpszx" Jan 28 18:57:18 crc kubenswrapper[4721]: I0128 18:57:18.538920 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1a4b4c90-d4cd-4ccd-ae6d-9e071c6d3ed4-operator-scripts\") pod \"1a4b4c90-d4cd-4ccd-ae6d-9e071c6d3ed4\" (UID: \"1a4b4c90-d4cd-4ccd-ae6d-9e071c6d3ed4\") " Jan 28 18:57:18 crc kubenswrapper[4721]: I0128 18:57:18.539100 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tnqvd\" (UniqueName: \"kubernetes.io/projected/1a4b4c90-d4cd-4ccd-ae6d-9e071c6d3ed4-kube-api-access-tnqvd\") pod \"1a4b4c90-d4cd-4ccd-ae6d-9e071c6d3ed4\" (UID: \"1a4b4c90-d4cd-4ccd-ae6d-9e071c6d3ed4\") " Jan 28 18:57:18 crc kubenswrapper[4721]: I0128 18:57:18.539268 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q2jtx\" (UniqueName: \"kubernetes.io/projected/9fe76996-48bb-4656-8ce3-ac8098700636-kube-api-access-q2jtx\") pod \"9fe76996-48bb-4656-8ce3-ac8098700636\" (UID: \"9fe76996-48bb-4656-8ce3-ac8098700636\") " Jan 28 18:57:18 crc kubenswrapper[4721]: I0128 18:57:18.539535 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9fe76996-48bb-4656-8ce3-ac8098700636-operator-scripts\") pod \"9fe76996-48bb-4656-8ce3-ac8098700636\" (UID: \"9fe76996-48bb-4656-8ce3-ac8098700636\") " Jan 28 18:57:18 crc kubenswrapper[4721]: I0128 18:57:18.541410 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9fe76996-48bb-4656-8ce3-ac8098700636-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "9fe76996-48bb-4656-8ce3-ac8098700636" (UID: "9fe76996-48bb-4656-8ce3-ac8098700636"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:57:18 crc kubenswrapper[4721]: I0128 18:57:18.541824 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1a4b4c90-d4cd-4ccd-ae6d-9e071c6d3ed4-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "1a4b4c90-d4cd-4ccd-ae6d-9e071c6d3ed4" (UID: "1a4b4c90-d4cd-4ccd-ae6d-9e071c6d3ed4"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:57:18 crc kubenswrapper[4721]: I0128 18:57:18.550896 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9fe76996-48bb-4656-8ce3-ac8098700636-kube-api-access-q2jtx" (OuterVolumeSpecName: "kube-api-access-q2jtx") pod "9fe76996-48bb-4656-8ce3-ac8098700636" (UID: "9fe76996-48bb-4656-8ce3-ac8098700636"). InnerVolumeSpecName "kube-api-access-q2jtx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:57:18 crc kubenswrapper[4721]: I0128 18:57:18.552266 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1a4b4c90-d4cd-4ccd-ae6d-9e071c6d3ed4-kube-api-access-tnqvd" (OuterVolumeSpecName: "kube-api-access-tnqvd") pod "1a4b4c90-d4cd-4ccd-ae6d-9e071c6d3ed4" (UID: "1a4b4c90-d4cd-4ccd-ae6d-9e071c6d3ed4"). InnerVolumeSpecName "kube-api-access-tnqvd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:57:18 crc kubenswrapper[4721]: I0128 18:57:18.643757 4721 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1a4b4c90-d4cd-4ccd-ae6d-9e071c6d3ed4-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 18:57:18 crc kubenswrapper[4721]: I0128 18:57:18.643791 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tnqvd\" (UniqueName: \"kubernetes.io/projected/1a4b4c90-d4cd-4ccd-ae6d-9e071c6d3ed4-kube-api-access-tnqvd\") on node \"crc\" DevicePath \"\"" Jan 28 18:57:18 crc kubenswrapper[4721]: I0128 18:57:18.643803 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q2jtx\" (UniqueName: \"kubernetes.io/projected/9fe76996-48bb-4656-8ce3-ac8098700636-kube-api-access-q2jtx\") on node \"crc\" DevicePath \"\"" Jan 28 18:57:18 crc kubenswrapper[4721]: I0128 18:57:18.643814 4721 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9fe76996-48bb-4656-8ce3-ac8098700636-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 18:57:18 crc kubenswrapper[4721]: I0128 18:57:18.727799 4721 generic.go:334] "Generic (PLEG): container finished" podID="866bb191-d801-4191-b725-52648c9d38bf" containerID="dbeb009e175800373d048d66becbda38ceaa5b0de078d9eb7ef46ea812bb4f48" exitCode=0 Jan 28 18:57:18 crc kubenswrapper[4721]: I0128 18:57:18.727838 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-26e2-account-create-update-lb8jh" event={"ID":"866bb191-d801-4191-b725-52648c9d38bf","Type":"ContainerDied","Data":"dbeb009e175800373d048d66becbda38ceaa5b0de078d9eb7ef46ea812bb4f48"} Jan 28 18:57:18 crc kubenswrapper[4721]: I0128 18:57:18.739478 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"08521a5c-e90f-4cf1-a64d-f0bc0bdf7b3d","Type":"ContainerStarted","Data":"2d5845074e72981c60ccaa6f2aaa3c0101e590a565245ed74805aef614ef6e7f"} Jan 28 18:57:18 crc kubenswrapper[4721]: I0128 18:57:18.748212 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"807b9edb-ebc0-4d76-87fb-e4c6ff5cd323","Type":"ContainerStarted","Data":"b4352c0b82d89472410fc6c745d60f82b8d94b435ee064e53eae51e38c01ebb7"} Jan 28 18:57:18 crc kubenswrapper[4721]: I0128 18:57:18.764017 4721 generic.go:334] "Generic (PLEG): container finished" podID="653fc683-6178-446d-9cf2-4ae9e3e0029e" containerID="871264cd7070bf5cb51271c73c163eb594fe9be2bc6f57700783c97bf1e87720" exitCode=0 Jan 28 
18:57:18 crc kubenswrapper[4721]: I0128 18:57:18.764128 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-b9kc7" event={"ID":"653fc683-6178-446d-9cf2-4ae9e3e0029e","Type":"ContainerDied","Data":"871264cd7070bf5cb51271c73c163eb594fe9be2bc6f57700783c97bf1e87720"} Jan 28 18:57:18 crc kubenswrapper[4721]: I0128 18:57:18.780521 4721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-6mx9s" Jan 28 18:57:18 crc kubenswrapper[4721]: I0128 18:57:18.780619 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-6mx9s" event={"ID":"0c518e64-69b5-4360-a219-407693412130","Type":"ContainerDied","Data":"e679b98bc5c5dd6d7e28aaf64fbb69db8ad07bcaa98a171267a47820942ef36f"} Jan 28 18:57:18 crc kubenswrapper[4721]: I0128 18:57:18.780662 4721 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e679b98bc5c5dd6d7e28aaf64fbb69db8ad07bcaa98a171267a47820942ef36f" Jan 28 18:57:18 crc kubenswrapper[4721]: I0128 18:57:18.804350 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-5b4d-account-create-update-8nt6r" event={"ID":"9fe76996-48bb-4656-8ce3-ac8098700636","Type":"ContainerDied","Data":"995799b57b11d7d6046c825418d9b2558d65ccdf5b10d6f487181ed3b90cd635"} Jan 28 18:57:18 crc kubenswrapper[4721]: I0128 18:57:18.804386 4721 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="995799b57b11d7d6046c825418d9b2558d65ccdf5b10d6f487181ed3b90cd635" Jan 28 18:57:18 crc kubenswrapper[4721]: I0128 18:57:18.804442 4721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-5b4d-account-create-update-8nt6r" Jan 28 18:57:18 crc kubenswrapper[4721]: I0128 18:57:18.823587 4721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-mpszx" Jan 28 18:57:18 crc kubenswrapper[4721]: I0128 18:57:18.823628 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-mpszx" event={"ID":"1a4b4c90-d4cd-4ccd-ae6d-9e071c6d3ed4","Type":"ContainerDied","Data":"bd7625ab8d8562d57fde5332b878bb21f2d2a56eec4d0465a4db58ae71e3ddd1"} Jan 28 18:57:18 crc kubenswrapper[4721]: I0128 18:57:18.823660 4721 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bd7625ab8d8562d57fde5332b878bb21f2d2a56eec4d0465a4db58ae71e3ddd1" Jan 28 18:57:18 crc kubenswrapper[4721]: I0128 18:57:18.936840 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 28 18:57:19 crc kubenswrapper[4721]: I0128 18:57:19.477820 4721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-xlhjz" Jan 28 18:57:19 crc kubenswrapper[4721]: I0128 18:57:19.483262 4721 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-6060-account-create-update-6nn4d" Jan 28 18:57:19 crc kubenswrapper[4721]: I0128 18:57:19.538008 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fd0e7c7c-c624-4b67-ae51-1a40265dfeb9-operator-scripts\") pod \"fd0e7c7c-c624-4b67-ae51-1a40265dfeb9\" (UID: \"fd0e7c7c-c624-4b67-ae51-1a40265dfeb9\") " Jan 28 18:57:19 crc kubenswrapper[4721]: I0128 18:57:19.538537 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f49e85fc-9126-4151-980f-56517e1752c1-operator-scripts\") pod \"f49e85fc-9126-4151-980f-56517e1752c1\" (UID: \"f49e85fc-9126-4151-980f-56517e1752c1\") " Jan 28 18:57:19 crc kubenswrapper[4721]: I0128 18:57:19.538808 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pht9d\" (UniqueName: \"kubernetes.io/projected/f49e85fc-9126-4151-980f-56517e1752c1-kube-api-access-pht9d\") pod \"f49e85fc-9126-4151-980f-56517e1752c1\" (UID: \"f49e85fc-9126-4151-980f-56517e1752c1\") " Jan 28 18:57:19 crc kubenswrapper[4721]: I0128 18:57:19.539027 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5clsl\" (UniqueName: \"kubernetes.io/projected/fd0e7c7c-c624-4b67-ae51-1a40265dfeb9-kube-api-access-5clsl\") pod \"fd0e7c7c-c624-4b67-ae51-1a40265dfeb9\" (UID: \"fd0e7c7c-c624-4b67-ae51-1a40265dfeb9\") " Jan 28 18:57:19 crc kubenswrapper[4721]: I0128 18:57:19.542218 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f49e85fc-9126-4151-980f-56517e1752c1-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "f49e85fc-9126-4151-980f-56517e1752c1" (UID: "f49e85fc-9126-4151-980f-56517e1752c1"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:57:19 crc kubenswrapper[4721]: I0128 18:57:19.543622 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fd0e7c7c-c624-4b67-ae51-1a40265dfeb9-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "fd0e7c7c-c624-4b67-ae51-1a40265dfeb9" (UID: "fd0e7c7c-c624-4b67-ae51-1a40265dfeb9"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:57:19 crc kubenswrapper[4721]: I0128 18:57:19.550344 4721 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fd0e7c7c-c624-4b67-ae51-1a40265dfeb9-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 18:57:19 crc kubenswrapper[4721]: I0128 18:57:19.550397 4721 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f49e85fc-9126-4151-980f-56517e1752c1-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 18:57:19 crc kubenswrapper[4721]: I0128 18:57:19.554529 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fd0e7c7c-c624-4b67-ae51-1a40265dfeb9-kube-api-access-5clsl" (OuterVolumeSpecName: "kube-api-access-5clsl") pod "fd0e7c7c-c624-4b67-ae51-1a40265dfeb9" (UID: "fd0e7c7c-c624-4b67-ae51-1a40265dfeb9"). InnerVolumeSpecName "kube-api-access-5clsl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:57:19 crc kubenswrapper[4721]: I0128 18:57:19.554727 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f49e85fc-9126-4151-980f-56517e1752c1-kube-api-access-pht9d" (OuterVolumeSpecName: "kube-api-access-pht9d") pod "f49e85fc-9126-4151-980f-56517e1752c1" (UID: "f49e85fc-9126-4151-980f-56517e1752c1"). InnerVolumeSpecName "kube-api-access-pht9d". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:57:19 crc kubenswrapper[4721]: E0128 18:57:19.601448 4721 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcb2f69be_cd3d_44ef_80af_f0d4ac766305.slice\": RecentStats: unable to find data in memory cache]" Jan 28 18:57:19 crc kubenswrapper[4721]: I0128 18:57:19.652561 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5clsl\" (UniqueName: \"kubernetes.io/projected/fd0e7c7c-c624-4b67-ae51-1a40265dfeb9-kube-api-access-5clsl\") on node \"crc\" DevicePath \"\"" Jan 28 18:57:19 crc kubenswrapper[4721]: I0128 18:57:19.652855 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pht9d\" (UniqueName: \"kubernetes.io/projected/f49e85fc-9126-4151-980f-56517e1752c1-kube-api-access-pht9d\") on node \"crc\" DevicePath \"\"" Jan 28 18:57:19 crc kubenswrapper[4721]: I0128 18:57:19.909483 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"08521a5c-e90f-4cf1-a64d-f0bc0bdf7b3d","Type":"ContainerStarted","Data":"5d74178094c369de148f827696925de5eb286bfdc2e52373ac9b0c3a1a41c427"} Jan 28 18:57:19 crc kubenswrapper[4721]: I0128 18:57:19.971900 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=5.971873394 podStartE2EDuration="5.971873394s" podCreationTimestamp="2026-01-28 18:57:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:57:19.954069197 +0000 UTC m=+1405.679374757" watchObservedRunningTime="2026-01-28 18:57:19.971873394 +0000 UTC m=+1405.697178954" Jan 28 18:57:20 crc kubenswrapper[4721]: I0128 18:57:20.000358 4721 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-6060-account-create-update-6nn4d" Jan 28 18:57:20 crc kubenswrapper[4721]: I0128 18:57:20.000358 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-6060-account-create-update-6nn4d" event={"ID":"f49e85fc-9126-4151-980f-56517e1752c1","Type":"ContainerDied","Data":"85eb8ef4f2ddc6533d5ce0d7f926fcb630c8c6eb078353ec573ff7646faf65b2"} Jan 28 18:57:20 crc kubenswrapper[4721]: I0128 18:57:20.001817 4721 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="85eb8ef4f2ddc6533d5ce0d7f926fcb630c8c6eb078353ec573ff7646faf65b2" Jan 28 18:57:20 crc kubenswrapper[4721]: I0128 18:57:20.016128 4721 generic.go:334] "Generic (PLEG): container finished" podID="65d3ed26-a43e-491f-8170-7d65eb15bd4f" containerID="19fac1308ae337004fdf3cfda1dfe901ebfa56b69b065d1dc73b4ebce61bd354" exitCode=0 Jan 28 18:57:20 crc kubenswrapper[4721]: I0128 18:57:20.016276 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-649bf84c5b-p55hh" event={"ID":"65d3ed26-a43e-491f-8170-7d65eb15bd4f","Type":"ContainerDied","Data":"19fac1308ae337004fdf3cfda1dfe901ebfa56b69b065d1dc73b4ebce61bd354"} Jan 28 18:57:20 crc kubenswrapper[4721]: I0128 18:57:20.026479 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"dd8b9a50-f75d-4b67-8b19-4f5d1e702cd9","Type":"ContainerStarted","Data":"48c99905a25001fc81556211293337d3b3fd2da5b4551835437f373823522ab5"} Jan 28 18:57:20 crc kubenswrapper[4721]: I0128 18:57:20.038830 4721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-xlhjz" Jan 28 18:57:20 crc kubenswrapper[4721]: I0128 18:57:20.040105 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-xlhjz" event={"ID":"fd0e7c7c-c624-4b67-ae51-1a40265dfeb9","Type":"ContainerDied","Data":"24fbb430019f3a00685e2b6dd0a15e9732969e4dacfee6f89ecbe0b3c79609f0"} Jan 28 18:57:20 crc kubenswrapper[4721]: I0128 18:57:20.040140 4721 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="24fbb430019f3a00685e2b6dd0a15e9732969e4dacfee6f89ecbe0b3c79609f0" Jan 28 18:57:20 crc kubenswrapper[4721]: I0128 18:57:20.203106 4721 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-649bf84c5b-p55hh" Jan 28 18:57:20 crc kubenswrapper[4721]: I0128 18:57:20.395148 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/65d3ed26-a43e-491f-8170-7d65eb15bd4f-public-tls-certs\") pod \"65d3ed26-a43e-491f-8170-7d65eb15bd4f\" (UID: \"65d3ed26-a43e-491f-8170-7d65eb15bd4f\") " Jan 28 18:57:20 crc kubenswrapper[4721]: I0128 18:57:20.395265 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/65d3ed26-a43e-491f-8170-7d65eb15bd4f-config-data\") pod \"65d3ed26-a43e-491f-8170-7d65eb15bd4f\" (UID: \"65d3ed26-a43e-491f-8170-7d65eb15bd4f\") " Jan 28 18:57:20 crc kubenswrapper[4721]: I0128 18:57:20.395384 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/65d3ed26-a43e-491f-8170-7d65eb15bd4f-internal-tls-certs\") pod \"65d3ed26-a43e-491f-8170-7d65eb15bd4f\" (UID: \"65d3ed26-a43e-491f-8170-7d65eb15bd4f\") " Jan 28 18:57:20 crc kubenswrapper[4721]: I0128 18:57:20.395463 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/65d3ed26-a43e-491f-8170-7d65eb15bd4f-scripts\") pod \"65d3ed26-a43e-491f-8170-7d65eb15bd4f\" (UID: \"65d3ed26-a43e-491f-8170-7d65eb15bd4f\") " Jan 28 18:57:20 crc kubenswrapper[4721]: I0128 18:57:20.395501 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/65d3ed26-a43e-491f-8170-7d65eb15bd4f-logs\") pod \"65d3ed26-a43e-491f-8170-7d65eb15bd4f\" (UID: \"65d3ed26-a43e-491f-8170-7d65eb15bd4f\") " Jan 28 18:57:20 crc kubenswrapper[4721]: I0128 18:57:20.395525 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2tbt6\" (UniqueName: \"kubernetes.io/projected/65d3ed26-a43e-491f-8170-7d65eb15bd4f-kube-api-access-2tbt6\") pod \"65d3ed26-a43e-491f-8170-7d65eb15bd4f\" (UID: \"65d3ed26-a43e-491f-8170-7d65eb15bd4f\") " Jan 28 18:57:20 crc kubenswrapper[4721]: I0128 18:57:20.395553 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/65d3ed26-a43e-491f-8170-7d65eb15bd4f-combined-ca-bundle\") pod \"65d3ed26-a43e-491f-8170-7d65eb15bd4f\" (UID: \"65d3ed26-a43e-491f-8170-7d65eb15bd4f\") " Jan 28 18:57:20 crc kubenswrapper[4721]: I0128 18:57:20.401619 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/65d3ed26-a43e-491f-8170-7d65eb15bd4f-logs" (OuterVolumeSpecName: "logs") pod "65d3ed26-a43e-491f-8170-7d65eb15bd4f" (UID: "65d3ed26-a43e-491f-8170-7d65eb15bd4f"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 18:57:20 crc kubenswrapper[4721]: I0128 18:57:20.415766 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/65d3ed26-a43e-491f-8170-7d65eb15bd4f-scripts" (OuterVolumeSpecName: "scripts") pod "65d3ed26-a43e-491f-8170-7d65eb15bd4f" (UID: "65d3ed26-a43e-491f-8170-7d65eb15bd4f"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:57:20 crc kubenswrapper[4721]: I0128 18:57:20.436490 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/65d3ed26-a43e-491f-8170-7d65eb15bd4f-kube-api-access-2tbt6" (OuterVolumeSpecName: "kube-api-access-2tbt6") pod "65d3ed26-a43e-491f-8170-7d65eb15bd4f" (UID: "65d3ed26-a43e-491f-8170-7d65eb15bd4f"). InnerVolumeSpecName "kube-api-access-2tbt6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:57:20 crc kubenswrapper[4721]: I0128 18:57:20.485544 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/65d3ed26-a43e-491f-8170-7d65eb15bd4f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "65d3ed26-a43e-491f-8170-7d65eb15bd4f" (UID: "65d3ed26-a43e-491f-8170-7d65eb15bd4f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:57:20 crc kubenswrapper[4721]: I0128 18:57:20.502846 4721 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/65d3ed26-a43e-491f-8170-7d65eb15bd4f-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 18:57:20 crc kubenswrapper[4721]: I0128 18:57:20.502883 4721 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/65d3ed26-a43e-491f-8170-7d65eb15bd4f-logs\") on node \"crc\" DevicePath \"\"" Jan 28 18:57:20 crc kubenswrapper[4721]: I0128 18:57:20.502895 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2tbt6\" (UniqueName: \"kubernetes.io/projected/65d3ed26-a43e-491f-8170-7d65eb15bd4f-kube-api-access-2tbt6\") on node \"crc\" DevicePath \"\"" Jan 28 18:57:20 crc kubenswrapper[4721]: I0128 18:57:20.502909 4721 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/65d3ed26-a43e-491f-8170-7d65eb15bd4f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 18:57:20 crc kubenswrapper[4721]: I0128 18:57:20.575150 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/65d3ed26-a43e-491f-8170-7d65eb15bd4f-config-data" (OuterVolumeSpecName: "config-data") pod "65d3ed26-a43e-491f-8170-7d65eb15bd4f" (UID: "65d3ed26-a43e-491f-8170-7d65eb15bd4f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:57:20 crc kubenswrapper[4721]: I0128 18:57:20.604798 4721 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/65d3ed26-a43e-491f-8170-7d65eb15bd4f-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 18:57:20 crc kubenswrapper[4721]: I0128 18:57:20.611012 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/65d3ed26-a43e-491f-8170-7d65eb15bd4f-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "65d3ed26-a43e-491f-8170-7d65eb15bd4f" (UID: "65d3ed26-a43e-491f-8170-7d65eb15bd4f"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:57:20 crc kubenswrapper[4721]: I0128 18:57:20.657302 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/65d3ed26-a43e-491f-8170-7d65eb15bd4f-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "65d3ed26-a43e-491f-8170-7d65eb15bd4f" (UID: "65d3ed26-a43e-491f-8170-7d65eb15bd4f"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:57:20 crc kubenswrapper[4721]: I0128 18:57:20.708415 4721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-26e2-account-create-update-lb8jh" Jan 28 18:57:20 crc kubenswrapper[4721]: I0128 18:57:20.708825 4721 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/65d3ed26-a43e-491f-8170-7d65eb15bd4f-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 28 18:57:20 crc kubenswrapper[4721]: I0128 18:57:20.708855 4721 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/65d3ed26-a43e-491f-8170-7d65eb15bd4f-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 28 18:57:20 crc kubenswrapper[4721]: I0128 18:57:20.809786 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/866bb191-d801-4191-b725-52648c9d38bf-operator-scripts\") pod \"866bb191-d801-4191-b725-52648c9d38bf\" (UID: \"866bb191-d801-4191-b725-52648c9d38bf\") " Jan 28 18:57:20 crc kubenswrapper[4721]: I0128 18:57:20.809938 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w2cr8\" (UniqueName: \"kubernetes.io/projected/866bb191-d801-4191-b725-52648c9d38bf-kube-api-access-w2cr8\") pod \"866bb191-d801-4191-b725-52648c9d38bf\" (UID: \"866bb191-d801-4191-b725-52648c9d38bf\") " Jan 28 18:57:20 crc kubenswrapper[4721]: I0128 18:57:20.810996 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/866bb191-d801-4191-b725-52648c9d38bf-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "866bb191-d801-4191-b725-52648c9d38bf" (UID: "866bb191-d801-4191-b725-52648c9d38bf"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:57:20 crc kubenswrapper[4721]: I0128 18:57:20.821500 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/866bb191-d801-4191-b725-52648c9d38bf-kube-api-access-w2cr8" (OuterVolumeSpecName: "kube-api-access-w2cr8") pod "866bb191-d801-4191-b725-52648c9d38bf" (UID: "866bb191-d801-4191-b725-52648c9d38bf"). InnerVolumeSpecName "kube-api-access-w2cr8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:57:20 crc kubenswrapper[4721]: I0128 18:57:20.914137 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w2cr8\" (UniqueName: \"kubernetes.io/projected/866bb191-d801-4191-b725-52648c9d38bf-kube-api-access-w2cr8\") on node \"crc\" DevicePath \"\"" Jan 28 18:57:20 crc kubenswrapper[4721]: I0128 18:57:20.914198 4721 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/866bb191-d801-4191-b725-52648c9d38bf-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 18:57:21 crc kubenswrapper[4721]: I0128 18:57:21.058716 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-b9kc7" event={"ID":"653fc683-6178-446d-9cf2-4ae9e3e0029e","Type":"ContainerStarted","Data":"9890688ebc3ee921a428803bbd6dcad304daf1868a81b11d4a6efd30763fb146"} Jan 28 18:57:21 crc kubenswrapper[4721]: I0128 18:57:21.070969 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-26e2-account-create-update-lb8jh" event={"ID":"866bb191-d801-4191-b725-52648c9d38bf","Type":"ContainerDied","Data":"9462b735a95b7a04595eaad9061723f6c923ad72124798d342e7333e2aabfbd9"} Jan 28 18:57:21 crc kubenswrapper[4721]: I0128 18:57:21.071011 4721 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9462b735a95b7a04595eaad9061723f6c923ad72124798d342e7333e2aabfbd9" Jan 28 18:57:21 crc kubenswrapper[4721]: I0128 18:57:21.071066 4721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-26e2-account-create-update-lb8jh" Jan 28 18:57:21 crc kubenswrapper[4721]: I0128 18:57:21.077007 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-649bf84c5b-p55hh" event={"ID":"65d3ed26-a43e-491f-8170-7d65eb15bd4f","Type":"ContainerDied","Data":"de1a9183d43d58f63ac17658d2b3b7ef878e28497594abd669a6ca25ce0afbec"} Jan 28 18:57:21 crc kubenswrapper[4721]: I0128 18:57:21.077080 4721 scope.go:117] "RemoveContainer" containerID="19fac1308ae337004fdf3cfda1dfe901ebfa56b69b065d1dc73b4ebce61bd354" Jan 28 18:57:21 crc kubenswrapper[4721]: I0128 18:57:21.077284 4721 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-649bf84c5b-p55hh" Jan 28 18:57:21 crc kubenswrapper[4721]: I0128 18:57:21.101883 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"dd8b9a50-f75d-4b67-8b19-4f5d1e702cd9","Type":"ContainerStarted","Data":"b8f0f305d2d1132f1461016e3c74743498ca63c1b833ff8f2603ac912e745d7c"} Jan 28 18:57:21 crc kubenswrapper[4721]: I0128 18:57:21.122263 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"807b9edb-ebc0-4d76-87fb-e4c6ff5cd323","Type":"ContainerStarted","Data":"031eb5150e801aa3267dfb4e52165cc5e7cebea34c0f5c7ef89c8c8b6c353bd3"} Jan 28 18:57:21 crc kubenswrapper[4721]: I0128 18:57:21.122538 4721 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="807b9edb-ebc0-4d76-87fb-e4c6ff5cd323" containerName="ceilometer-central-agent" containerID="cri-o://ba278f41d159e0ca302e23e9cd67578366ac8d2c7e81c0df34e0472d037a4c27" gracePeriod=30 Jan 28 18:57:21 crc kubenswrapper[4721]: I0128 18:57:21.122733 4721 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="807b9edb-ebc0-4d76-87fb-e4c6ff5cd323" containerName="proxy-httpd" containerID="cri-o://031eb5150e801aa3267dfb4e52165cc5e7cebea34c0f5c7ef89c8c8b6c353bd3" gracePeriod=30 Jan 28 18:57:21 crc kubenswrapper[4721]: I0128 18:57:21.122788 4721 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="807b9edb-ebc0-4d76-87fb-e4c6ff5cd323" containerName="sg-core" containerID="cri-o://b4352c0b82d89472410fc6c745d60f82b8d94b435ee064e53eae51e38c01ebb7" gracePeriod=30 Jan 28 18:57:21 crc kubenswrapper[4721]: I0128 18:57:21.122826 4721 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="807b9edb-ebc0-4d76-87fb-e4c6ff5cd323" containerName="ceilometer-notification-agent" containerID="cri-o://225c8bd88bf22af97ab093fc7386fcf8c76a85d9483520062bfd7941d2676b28" gracePeriod=30 Jan 28 18:57:21 crc kubenswrapper[4721]: I0128 18:57:21.177102 4721 scope.go:117] "RemoveContainer" containerID="7d3647343ea1bb010bb6f756bbe8c043bb3ef2a9dd83b66f0a3cedfcc37239cf" Jan 28 18:57:21 crc kubenswrapper[4721]: I0128 18:57:21.221133 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-649bf84c5b-p55hh"] Jan 28 18:57:21 crc kubenswrapper[4721]: I0128 18:57:21.255472 4721 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-649bf84c5b-p55hh"] Jan 28 18:57:21 crc kubenswrapper[4721]: I0128 18:57:21.263157 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=10.22249042 podStartE2EDuration="17.263121739s" podCreationTimestamp="2026-01-28 18:57:04 +0000 UTC" firstStartedPulling="2026-01-28 18:57:12.946242124 +0000 UTC m=+1398.671547684" lastFinishedPulling="2026-01-28 18:57:19.986873443 +0000 UTC m=+1405.712179003" observedRunningTime="2026-01-28 18:57:21.21012869 +0000 UTC m=+1406.935434250" watchObservedRunningTime="2026-01-28 18:57:21.263121739 +0000 UTC m=+1406.988427299" Jan 28 18:57:21 crc kubenswrapper[4721]: I0128 18:57:21.543709 4721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="65d3ed26-a43e-491f-8170-7d65eb15bd4f" path="/var/lib/kubelet/pods/65d3ed26-a43e-491f-8170-7d65eb15bd4f/volumes" Jan 28 18:57:22 crc kubenswrapper[4721]: I0128 18:57:22.137619 4721 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"dd8b9a50-f75d-4b67-8b19-4f5d1e702cd9","Type":"ContainerStarted","Data":"f8f0cca91ea172864f6df0b01dfb77146e273297d0d02ba73f3d2e107f00dd20"} Jan 28 18:57:22 crc kubenswrapper[4721]: I0128 18:57:22.141126 4721 generic.go:334] "Generic (PLEG): container finished" podID="807b9edb-ebc0-4d76-87fb-e4c6ff5cd323" containerID="031eb5150e801aa3267dfb4e52165cc5e7cebea34c0f5c7ef89c8c8b6c353bd3" exitCode=0 Jan 28 18:57:22 crc kubenswrapper[4721]: I0128 18:57:22.141191 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"807b9edb-ebc0-4d76-87fb-e4c6ff5cd323","Type":"ContainerDied","Data":"031eb5150e801aa3267dfb4e52165cc5e7cebea34c0f5c7ef89c8c8b6c353bd3"} Jan 28 18:57:22 crc kubenswrapper[4721]: I0128 18:57:22.141222 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"807b9edb-ebc0-4d76-87fb-e4c6ff5cd323","Type":"ContainerDied","Data":"b4352c0b82d89472410fc6c745d60f82b8d94b435ee064e53eae51e38c01ebb7"} Jan 28 18:57:22 crc kubenswrapper[4721]: I0128 18:57:22.141240 4721 generic.go:334] "Generic (PLEG): container finished" podID="807b9edb-ebc0-4d76-87fb-e4c6ff5cd323" containerID="b4352c0b82d89472410fc6c745d60f82b8d94b435ee064e53eae51e38c01ebb7" exitCode=2 Jan 28 18:57:22 crc kubenswrapper[4721]: I0128 18:57:22.141255 4721 generic.go:334] "Generic (PLEG): container finished" podID="807b9edb-ebc0-4d76-87fb-e4c6ff5cd323" containerID="225c8bd88bf22af97ab093fc7386fcf8c76a85d9483520062bfd7941d2676b28" exitCode=0 Jan 28 18:57:22 crc kubenswrapper[4721]: I0128 18:57:22.141300 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"807b9edb-ebc0-4d76-87fb-e4c6ff5cd323","Type":"ContainerDied","Data":"225c8bd88bf22af97ab093fc7386fcf8c76a85d9483520062bfd7941d2676b28"} Jan 28 18:57:22 crc kubenswrapper[4721]: I0128 18:57:22.144711 4721 generic.go:334] "Generic (PLEG): container finished" podID="653fc683-6178-446d-9cf2-4ae9e3e0029e" containerID="9890688ebc3ee921a428803bbd6dcad304daf1868a81b11d4a6efd30763fb146" exitCode=0 Jan 28 18:57:22 crc kubenswrapper[4721]: I0128 18:57:22.144749 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-b9kc7" event={"ID":"653fc683-6178-446d-9cf2-4ae9e3e0029e","Type":"ContainerDied","Data":"9890688ebc3ee921a428803bbd6dcad304daf1868a81b11d4a6efd30763fb146"} Jan 28 18:57:22 crc kubenswrapper[4721]: I0128 18:57:22.170366 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=5.170339064 podStartE2EDuration="5.170339064s" podCreationTimestamp="2026-01-28 18:57:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:57:22.167215654 +0000 UTC m=+1407.892521214" watchObservedRunningTime="2026-01-28 18:57:22.170339064 +0000 UTC m=+1407.895644624" Jan 28 18:57:23 crc kubenswrapper[4721]: I0128 18:57:23.158611 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-b9kc7" event={"ID":"653fc683-6178-446d-9cf2-4ae9e3e0029e","Type":"ContainerStarted","Data":"5cf2e95afca5566c0c889ae9550e0bc1cbeca0804fe3e6e64d6f5f8d6133594d"} Jan 28 18:57:23 crc kubenswrapper[4721]: I0128 18:57:23.190313 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-b9kc7" podStartSLOduration=4.431111568 
podStartE2EDuration="8.190288752s" podCreationTimestamp="2026-01-28 18:57:15 +0000 UTC" firstStartedPulling="2026-01-28 18:57:18.773400956 +0000 UTC m=+1404.498706516" lastFinishedPulling="2026-01-28 18:57:22.53257814 +0000 UTC m=+1408.257883700" observedRunningTime="2026-01-28 18:57:23.179508979 +0000 UTC m=+1408.904814559" watchObservedRunningTime="2026-01-28 18:57:23.190288752 +0000 UTC m=+1408.915594312" Jan 28 18:57:23 crc kubenswrapper[4721]: I0128 18:57:23.725319 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-lfg9f"] Jan 28 18:57:23 crc kubenswrapper[4721]: E0128 18:57:23.725818 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="65d3ed26-a43e-491f-8170-7d65eb15bd4f" containerName="placement-api" Jan 28 18:57:23 crc kubenswrapper[4721]: I0128 18:57:23.725835 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="65d3ed26-a43e-491f-8170-7d65eb15bd4f" containerName="placement-api" Jan 28 18:57:23 crc kubenswrapper[4721]: E0128 18:57:23.725848 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1a4b4c90-d4cd-4ccd-ae6d-9e071c6d3ed4" containerName="mariadb-database-create" Jan 28 18:57:23 crc kubenswrapper[4721]: I0128 18:57:23.725855 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="1a4b4c90-d4cd-4ccd-ae6d-9e071c6d3ed4" containerName="mariadb-database-create" Jan 28 18:57:23 crc kubenswrapper[4721]: E0128 18:57:23.725867 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fd0e7c7c-c624-4b67-ae51-1a40265dfeb9" containerName="mariadb-database-create" Jan 28 18:57:23 crc kubenswrapper[4721]: I0128 18:57:23.725874 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="fd0e7c7c-c624-4b67-ae51-1a40265dfeb9" containerName="mariadb-database-create" Jan 28 18:57:23 crc kubenswrapper[4721]: E0128 18:57:23.725885 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9fe76996-48bb-4656-8ce3-ac8098700636" containerName="mariadb-account-create-update" Jan 28 18:57:23 crc kubenswrapper[4721]: I0128 18:57:23.725891 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="9fe76996-48bb-4656-8ce3-ac8098700636" containerName="mariadb-account-create-update" Jan 28 18:57:23 crc kubenswrapper[4721]: E0128 18:57:23.725913 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f49e85fc-9126-4151-980f-56517e1752c1" containerName="mariadb-account-create-update" Jan 28 18:57:23 crc kubenswrapper[4721]: I0128 18:57:23.725920 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="f49e85fc-9126-4151-980f-56517e1752c1" containerName="mariadb-account-create-update" Jan 28 18:57:23 crc kubenswrapper[4721]: E0128 18:57:23.725930 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="866bb191-d801-4191-b725-52648c9d38bf" containerName="mariadb-account-create-update" Jan 28 18:57:23 crc kubenswrapper[4721]: I0128 18:57:23.725937 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="866bb191-d801-4191-b725-52648c9d38bf" containerName="mariadb-account-create-update" Jan 28 18:57:23 crc kubenswrapper[4721]: E0128 18:57:23.725956 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="65d3ed26-a43e-491f-8170-7d65eb15bd4f" containerName="placement-log" Jan 28 18:57:23 crc kubenswrapper[4721]: I0128 18:57:23.725963 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="65d3ed26-a43e-491f-8170-7d65eb15bd4f" containerName="placement-log" Jan 28 18:57:23 crc kubenswrapper[4721]: E0128 18:57:23.725975 4721 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="0c518e64-69b5-4360-a219-407693412130" containerName="mariadb-database-create" Jan 28 18:57:23 crc kubenswrapper[4721]: I0128 18:57:23.725981 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="0c518e64-69b5-4360-a219-407693412130" containerName="mariadb-database-create" Jan 28 18:57:23 crc kubenswrapper[4721]: I0128 18:57:23.726183 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="65d3ed26-a43e-491f-8170-7d65eb15bd4f" containerName="placement-log" Jan 28 18:57:23 crc kubenswrapper[4721]: I0128 18:57:23.726200 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="65d3ed26-a43e-491f-8170-7d65eb15bd4f" containerName="placement-api" Jan 28 18:57:23 crc kubenswrapper[4721]: I0128 18:57:23.726209 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="9fe76996-48bb-4656-8ce3-ac8098700636" containerName="mariadb-account-create-update" Jan 28 18:57:23 crc kubenswrapper[4721]: I0128 18:57:23.726216 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="1a4b4c90-d4cd-4ccd-ae6d-9e071c6d3ed4" containerName="mariadb-database-create" Jan 28 18:57:23 crc kubenswrapper[4721]: I0128 18:57:23.726224 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="f49e85fc-9126-4151-980f-56517e1752c1" containerName="mariadb-account-create-update" Jan 28 18:57:23 crc kubenswrapper[4721]: I0128 18:57:23.726234 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="fd0e7c7c-c624-4b67-ae51-1a40265dfeb9" containerName="mariadb-database-create" Jan 28 18:57:23 crc kubenswrapper[4721]: I0128 18:57:23.726243 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="866bb191-d801-4191-b725-52648c9d38bf" containerName="mariadb-account-create-update" Jan 28 18:57:23 crc kubenswrapper[4721]: I0128 18:57:23.726255 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="0c518e64-69b5-4360-a219-407693412130" containerName="mariadb-database-create" Jan 28 18:57:23 crc kubenswrapper[4721]: I0128 18:57:23.727076 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-lfg9f" Jan 28 18:57:23 crc kubenswrapper[4721]: I0128 18:57:23.730367 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Jan 28 18:57:23 crc kubenswrapper[4721]: I0128 18:57:23.730619 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts" Jan 28 18:57:23 crc kubenswrapper[4721]: I0128 18:57:23.730777 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-tg5qd" Jan 28 18:57:23 crc kubenswrapper[4721]: I0128 18:57:23.759332 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-lfg9f"] Jan 28 18:57:23 crc kubenswrapper[4721]: I0128 18:57:23.889968 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tczzz\" (UniqueName: \"kubernetes.io/projected/d055f2af-0a9e-4a1e-af6b-b15c0287fc72-kube-api-access-tczzz\") pod \"nova-cell0-conductor-db-sync-lfg9f\" (UID: \"d055f2af-0a9e-4a1e-af6b-b15c0287fc72\") " pod="openstack/nova-cell0-conductor-db-sync-lfg9f" Jan 28 18:57:23 crc kubenswrapper[4721]: I0128 18:57:23.890445 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d055f2af-0a9e-4a1e-af6b-b15c0287fc72-scripts\") pod \"nova-cell0-conductor-db-sync-lfg9f\" (UID: \"d055f2af-0a9e-4a1e-af6b-b15c0287fc72\") " pod="openstack/nova-cell0-conductor-db-sync-lfg9f" Jan 28 18:57:23 crc kubenswrapper[4721]: I0128 18:57:23.890811 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d055f2af-0a9e-4a1e-af6b-b15c0287fc72-config-data\") pod \"nova-cell0-conductor-db-sync-lfg9f\" (UID: \"d055f2af-0a9e-4a1e-af6b-b15c0287fc72\") " pod="openstack/nova-cell0-conductor-db-sync-lfg9f" Jan 28 18:57:23 crc kubenswrapper[4721]: I0128 18:57:23.890892 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d055f2af-0a9e-4a1e-af6b-b15c0287fc72-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-lfg9f\" (UID: \"d055f2af-0a9e-4a1e-af6b-b15c0287fc72\") " pod="openstack/nova-cell0-conductor-db-sync-lfg9f" Jan 28 18:57:23 crc kubenswrapper[4721]: I0128 18:57:23.993491 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d055f2af-0a9e-4a1e-af6b-b15c0287fc72-config-data\") pod \"nova-cell0-conductor-db-sync-lfg9f\" (UID: \"d055f2af-0a9e-4a1e-af6b-b15c0287fc72\") " pod="openstack/nova-cell0-conductor-db-sync-lfg9f" Jan 28 18:57:23 crc kubenswrapper[4721]: I0128 18:57:23.993940 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d055f2af-0a9e-4a1e-af6b-b15c0287fc72-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-lfg9f\" (UID: \"d055f2af-0a9e-4a1e-af6b-b15c0287fc72\") " pod="openstack/nova-cell0-conductor-db-sync-lfg9f" Jan 28 18:57:23 crc kubenswrapper[4721]: I0128 18:57:23.994268 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tczzz\" (UniqueName: \"kubernetes.io/projected/d055f2af-0a9e-4a1e-af6b-b15c0287fc72-kube-api-access-tczzz\") pod \"nova-cell0-conductor-db-sync-lfg9f\" 
(UID: \"d055f2af-0a9e-4a1e-af6b-b15c0287fc72\") " pod="openstack/nova-cell0-conductor-db-sync-lfg9f" Jan 28 18:57:23 crc kubenswrapper[4721]: I0128 18:57:23.994623 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d055f2af-0a9e-4a1e-af6b-b15c0287fc72-scripts\") pod \"nova-cell0-conductor-db-sync-lfg9f\" (UID: \"d055f2af-0a9e-4a1e-af6b-b15c0287fc72\") " pod="openstack/nova-cell0-conductor-db-sync-lfg9f" Jan 28 18:57:24 crc kubenswrapper[4721]: I0128 18:57:24.010931 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d055f2af-0a9e-4a1e-af6b-b15c0287fc72-scripts\") pod \"nova-cell0-conductor-db-sync-lfg9f\" (UID: \"d055f2af-0a9e-4a1e-af6b-b15c0287fc72\") " pod="openstack/nova-cell0-conductor-db-sync-lfg9f" Jan 28 18:57:24 crc kubenswrapper[4721]: I0128 18:57:24.011028 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d055f2af-0a9e-4a1e-af6b-b15c0287fc72-config-data\") pod \"nova-cell0-conductor-db-sync-lfg9f\" (UID: \"d055f2af-0a9e-4a1e-af6b-b15c0287fc72\") " pod="openstack/nova-cell0-conductor-db-sync-lfg9f" Jan 28 18:57:24 crc kubenswrapper[4721]: I0128 18:57:24.011086 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d055f2af-0a9e-4a1e-af6b-b15c0287fc72-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-lfg9f\" (UID: \"d055f2af-0a9e-4a1e-af6b-b15c0287fc72\") " pod="openstack/nova-cell0-conductor-db-sync-lfg9f" Jan 28 18:57:24 crc kubenswrapper[4721]: I0128 18:57:24.017980 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tczzz\" (UniqueName: \"kubernetes.io/projected/d055f2af-0a9e-4a1e-af6b-b15c0287fc72-kube-api-access-tczzz\") pod \"nova-cell0-conductor-db-sync-lfg9f\" (UID: \"d055f2af-0a9e-4a1e-af6b-b15c0287fc72\") " pod="openstack/nova-cell0-conductor-db-sync-lfg9f" Jan 28 18:57:24 crc kubenswrapper[4721]: I0128 18:57:24.055224 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-lfg9f" Jan 28 18:57:24 crc kubenswrapper[4721]: I0128 18:57:24.668116 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-lfg9f"] Jan 28 18:57:25 crc kubenswrapper[4721]: I0128 18:57:25.205856 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-lfg9f" event={"ID":"d055f2af-0a9e-4a1e-af6b-b15c0287fc72","Type":"ContainerStarted","Data":"90cb41b0f49bd7d9f09b81f6e3f14555290ecc34eddb5d6ba20f2507f5e951f7"} Jan 28 18:57:25 crc kubenswrapper[4721]: I0128 18:57:25.433285 4721 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 28 18:57:25 crc kubenswrapper[4721]: I0128 18:57:25.433640 4721 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 28 18:57:25 crc kubenswrapper[4721]: I0128 18:57:25.484790 4721 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 28 18:57:25 crc kubenswrapper[4721]: I0128 18:57:25.489947 4721 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 28 18:57:25 crc kubenswrapper[4721]: I0128 18:57:25.805326 4721 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-b9kc7" Jan 28 18:57:25 crc kubenswrapper[4721]: I0128 18:57:25.805370 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-b9kc7" Jan 28 18:57:26 crc kubenswrapper[4721]: I0128 18:57:26.159107 4721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 28 18:57:26 crc kubenswrapper[4721]: I0128 18:57:26.242999 4721 generic.go:334] "Generic (PLEG): container finished" podID="807b9edb-ebc0-4d76-87fb-e4c6ff5cd323" containerID="ba278f41d159e0ca302e23e9cd67578366ac8d2c7e81c0df34e0472d037a4c27" exitCode=0 Jan 28 18:57:26 crc kubenswrapper[4721]: I0128 18:57:26.243088 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"807b9edb-ebc0-4d76-87fb-e4c6ff5cd323","Type":"ContainerDied","Data":"ba278f41d159e0ca302e23e9cd67578366ac8d2c7e81c0df34e0472d037a4c27"} Jan 28 18:57:26 crc kubenswrapper[4721]: I0128 18:57:26.243225 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 28 18:57:26 crc kubenswrapper[4721]: I0128 18:57:26.243244 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"807b9edb-ebc0-4d76-87fb-e4c6ff5cd323","Type":"ContainerDied","Data":"f0a0a87a711843203613f9029b2657e0104287587a77dcda939d8838fef0f727"} Jan 28 18:57:26 crc kubenswrapper[4721]: I0128 18:57:26.243306 4721 scope.go:117] "RemoveContainer" containerID="031eb5150e801aa3267dfb4e52165cc5e7cebea34c0f5c7ef89c8c8b6c353bd3" Jan 28 18:57:26 crc kubenswrapper[4721]: I0128 18:57:26.243134 4721 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 28 18:57:26 crc kubenswrapper[4721]: I0128 18:57:26.243692 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 28 18:57:26 crc kubenswrapper[4721]: I0128 18:57:26.274900 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/807b9edb-ebc0-4d76-87fb-e4c6ff5cd323-scripts\") pod \"807b9edb-ebc0-4d76-87fb-e4c6ff5cd323\" (UID: \"807b9edb-ebc0-4d76-87fb-e4c6ff5cd323\") " Jan 28 18:57:26 crc kubenswrapper[4721]: I0128 18:57:26.275046 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/807b9edb-ebc0-4d76-87fb-e4c6ff5cd323-sg-core-conf-yaml\") pod \"807b9edb-ebc0-4d76-87fb-e4c6ff5cd323\" (UID: \"807b9edb-ebc0-4d76-87fb-e4c6ff5cd323\") " Jan 28 18:57:26 crc kubenswrapper[4721]: I0128 18:57:26.275069 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/807b9edb-ebc0-4d76-87fb-e4c6ff5cd323-log-httpd\") pod \"807b9edb-ebc0-4d76-87fb-e4c6ff5cd323\" (UID: \"807b9edb-ebc0-4d76-87fb-e4c6ff5cd323\") " Jan 28 18:57:26 crc kubenswrapper[4721]: I0128 18:57:26.275180 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-988jt\" (UniqueName: \"kubernetes.io/projected/807b9edb-ebc0-4d76-87fb-e4c6ff5cd323-kube-api-access-988jt\") pod \"807b9edb-ebc0-4d76-87fb-e4c6ff5cd323\" (UID: \"807b9edb-ebc0-4d76-87fb-e4c6ff5cd323\") " Jan 28 18:57:26 crc kubenswrapper[4721]: I0128 18:57:26.275229 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/807b9edb-ebc0-4d76-87fb-e4c6ff5cd323-config-data\") pod \"807b9edb-ebc0-4d76-87fb-e4c6ff5cd323\" (UID: \"807b9edb-ebc0-4d76-87fb-e4c6ff5cd323\") " Jan 28 18:57:26 crc kubenswrapper[4721]: I0128 18:57:26.275259 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/807b9edb-ebc0-4d76-87fb-e4c6ff5cd323-run-httpd\") pod \"807b9edb-ebc0-4d76-87fb-e4c6ff5cd323\" (UID: \"807b9edb-ebc0-4d76-87fb-e4c6ff5cd323\") " Jan 28 18:57:26 crc kubenswrapper[4721]: I0128 18:57:26.275358 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/807b9edb-ebc0-4d76-87fb-e4c6ff5cd323-combined-ca-bundle\") pod \"807b9edb-ebc0-4d76-87fb-e4c6ff5cd323\" (UID: \"807b9edb-ebc0-4d76-87fb-e4c6ff5cd323\") " Jan 28 18:57:26 crc kubenswrapper[4721]: I0128 18:57:26.277211 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/807b9edb-ebc0-4d76-87fb-e4c6ff5cd323-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "807b9edb-ebc0-4d76-87fb-e4c6ff5cd323" (UID: "807b9edb-ebc0-4d76-87fb-e4c6ff5cd323"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 18:57:26 crc kubenswrapper[4721]: I0128 18:57:26.279270 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/807b9edb-ebc0-4d76-87fb-e4c6ff5cd323-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "807b9edb-ebc0-4d76-87fb-e4c6ff5cd323" (UID: "807b9edb-ebc0-4d76-87fb-e4c6ff5cd323"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 18:57:26 crc kubenswrapper[4721]: I0128 18:57:26.283459 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/807b9edb-ebc0-4d76-87fb-e4c6ff5cd323-scripts" (OuterVolumeSpecName: "scripts") pod "807b9edb-ebc0-4d76-87fb-e4c6ff5cd323" (UID: "807b9edb-ebc0-4d76-87fb-e4c6ff5cd323"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:57:26 crc kubenswrapper[4721]: I0128 18:57:26.287386 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/807b9edb-ebc0-4d76-87fb-e4c6ff5cd323-kube-api-access-988jt" (OuterVolumeSpecName: "kube-api-access-988jt") pod "807b9edb-ebc0-4d76-87fb-e4c6ff5cd323" (UID: "807b9edb-ebc0-4d76-87fb-e4c6ff5cd323"). InnerVolumeSpecName "kube-api-access-988jt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:57:26 crc kubenswrapper[4721]: I0128 18:57:26.322196 4721 scope.go:117] "RemoveContainer" containerID="b4352c0b82d89472410fc6c745d60f82b8d94b435ee064e53eae51e38c01ebb7" Jan 28 18:57:26 crc kubenswrapper[4721]: I0128 18:57:26.360730 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/807b9edb-ebc0-4d76-87fb-e4c6ff5cd323-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "807b9edb-ebc0-4d76-87fb-e4c6ff5cd323" (UID: "807b9edb-ebc0-4d76-87fb-e4c6ff5cd323"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:57:26 crc kubenswrapper[4721]: I0128 18:57:26.381593 4721 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/807b9edb-ebc0-4d76-87fb-e4c6ff5cd323-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 28 18:57:26 crc kubenswrapper[4721]: I0128 18:57:26.381630 4721 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/807b9edb-ebc0-4d76-87fb-e4c6ff5cd323-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 28 18:57:26 crc kubenswrapper[4721]: I0128 18:57:26.381644 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-988jt\" (UniqueName: \"kubernetes.io/projected/807b9edb-ebc0-4d76-87fb-e4c6ff5cd323-kube-api-access-988jt\") on node \"crc\" DevicePath \"\"" Jan 28 18:57:26 crc kubenswrapper[4721]: I0128 18:57:26.381657 4721 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/807b9edb-ebc0-4d76-87fb-e4c6ff5cd323-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 28 18:57:26 crc kubenswrapper[4721]: I0128 18:57:26.381669 4721 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/807b9edb-ebc0-4d76-87fb-e4c6ff5cd323-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 18:57:26 crc kubenswrapper[4721]: I0128 18:57:26.478954 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/807b9edb-ebc0-4d76-87fb-e4c6ff5cd323-config-data" (OuterVolumeSpecName: "config-data") pod "807b9edb-ebc0-4d76-87fb-e4c6ff5cd323" (UID: "807b9edb-ebc0-4d76-87fb-e4c6ff5cd323"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:57:26 crc kubenswrapper[4721]: I0128 18:57:26.483612 4721 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/807b9edb-ebc0-4d76-87fb-e4c6ff5cd323-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 18:57:26 crc kubenswrapper[4721]: I0128 18:57:26.499368 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/807b9edb-ebc0-4d76-87fb-e4c6ff5cd323-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "807b9edb-ebc0-4d76-87fb-e4c6ff5cd323" (UID: "807b9edb-ebc0-4d76-87fb-e4c6ff5cd323"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:57:26 crc kubenswrapper[4721]: I0128 18:57:26.585450 4721 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/807b9edb-ebc0-4d76-87fb-e4c6ff5cd323-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 18:57:26 crc kubenswrapper[4721]: I0128 18:57:26.615369 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 28 18:57:26 crc kubenswrapper[4721]: I0128 18:57:26.629666 4721 scope.go:117] "RemoveContainer" containerID="225c8bd88bf22af97ab093fc7386fcf8c76a85d9483520062bfd7941d2676b28" Jan 28 18:57:26 crc kubenswrapper[4721]: I0128 18:57:26.630458 4721 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 28 18:57:26 crc kubenswrapper[4721]: I0128 18:57:26.657444 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 28 18:57:26 crc kubenswrapper[4721]: E0128 18:57:26.657919 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="807b9edb-ebc0-4d76-87fb-e4c6ff5cd323" containerName="sg-core" Jan 28 18:57:26 crc kubenswrapper[4721]: I0128 18:57:26.657937 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="807b9edb-ebc0-4d76-87fb-e4c6ff5cd323" containerName="sg-core" Jan 28 18:57:26 crc kubenswrapper[4721]: E0128 18:57:26.657953 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="807b9edb-ebc0-4d76-87fb-e4c6ff5cd323" containerName="ceilometer-central-agent" Jan 28 18:57:26 crc kubenswrapper[4721]: I0128 18:57:26.657961 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="807b9edb-ebc0-4d76-87fb-e4c6ff5cd323" containerName="ceilometer-central-agent" Jan 28 18:57:26 crc kubenswrapper[4721]: E0128 18:57:26.657980 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="807b9edb-ebc0-4d76-87fb-e4c6ff5cd323" containerName="proxy-httpd" Jan 28 18:57:26 crc kubenswrapper[4721]: I0128 18:57:26.657987 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="807b9edb-ebc0-4d76-87fb-e4c6ff5cd323" containerName="proxy-httpd" Jan 28 18:57:26 crc kubenswrapper[4721]: E0128 18:57:26.658026 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="807b9edb-ebc0-4d76-87fb-e4c6ff5cd323" containerName="ceilometer-notification-agent" Jan 28 18:57:26 crc kubenswrapper[4721]: I0128 18:57:26.658033 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="807b9edb-ebc0-4d76-87fb-e4c6ff5cd323" containerName="ceilometer-notification-agent" Jan 28 18:57:26 crc kubenswrapper[4721]: I0128 18:57:26.666628 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="807b9edb-ebc0-4d76-87fb-e4c6ff5cd323" containerName="sg-core" Jan 28 18:57:26 crc kubenswrapper[4721]: I0128 18:57:26.666674 4721 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="807b9edb-ebc0-4d76-87fb-e4c6ff5cd323" containerName="ceilometer-notification-agent" Jan 28 18:57:26 crc kubenswrapper[4721]: I0128 18:57:26.666685 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="807b9edb-ebc0-4d76-87fb-e4c6ff5cd323" containerName="proxy-httpd" Jan 28 18:57:26 crc kubenswrapper[4721]: I0128 18:57:26.666715 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="807b9edb-ebc0-4d76-87fb-e4c6ff5cd323" containerName="ceilometer-central-agent" Jan 28 18:57:26 crc kubenswrapper[4721]: I0128 18:57:26.668835 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 28 18:57:26 crc kubenswrapper[4721]: I0128 18:57:26.681432 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 28 18:57:26 crc kubenswrapper[4721]: I0128 18:57:26.681868 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 28 18:57:26 crc kubenswrapper[4721]: I0128 18:57:26.754260 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 28 18:57:26 crc kubenswrapper[4721]: I0128 18:57:26.793865 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pw4ch\" (UniqueName: \"kubernetes.io/projected/883eb3c5-eb62-45e4-b9af-39b9c6437de6-kube-api-access-pw4ch\") pod \"ceilometer-0\" (UID: \"883eb3c5-eb62-45e4-b9af-39b9c6437de6\") " pod="openstack/ceilometer-0" Jan 28 18:57:26 crc kubenswrapper[4721]: I0128 18:57:26.793921 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/883eb3c5-eb62-45e4-b9af-39b9c6437de6-run-httpd\") pod \"ceilometer-0\" (UID: \"883eb3c5-eb62-45e4-b9af-39b9c6437de6\") " pod="openstack/ceilometer-0" Jan 28 18:57:26 crc kubenswrapper[4721]: I0128 18:57:26.793943 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/883eb3c5-eb62-45e4-b9af-39b9c6437de6-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"883eb3c5-eb62-45e4-b9af-39b9c6437de6\") " pod="openstack/ceilometer-0" Jan 28 18:57:26 crc kubenswrapper[4721]: I0128 18:57:26.793974 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/883eb3c5-eb62-45e4-b9af-39b9c6437de6-scripts\") pod \"ceilometer-0\" (UID: \"883eb3c5-eb62-45e4-b9af-39b9c6437de6\") " pod="openstack/ceilometer-0" Jan 28 18:57:26 crc kubenswrapper[4721]: I0128 18:57:26.793994 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/883eb3c5-eb62-45e4-b9af-39b9c6437de6-config-data\") pod \"ceilometer-0\" (UID: \"883eb3c5-eb62-45e4-b9af-39b9c6437de6\") " pod="openstack/ceilometer-0" Jan 28 18:57:26 crc kubenswrapper[4721]: I0128 18:57:26.794030 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/883eb3c5-eb62-45e4-b9af-39b9c6437de6-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"883eb3c5-eb62-45e4-b9af-39b9c6437de6\") " pod="openstack/ceilometer-0" Jan 28 18:57:26 crc kubenswrapper[4721]: I0128 18:57:26.794141 4721 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/883eb3c5-eb62-45e4-b9af-39b9c6437de6-log-httpd\") pod \"ceilometer-0\" (UID: \"883eb3c5-eb62-45e4-b9af-39b9c6437de6\") " pod="openstack/ceilometer-0" Jan 28 18:57:26 crc kubenswrapper[4721]: I0128 18:57:26.826370 4721 scope.go:117] "RemoveContainer" containerID="ba278f41d159e0ca302e23e9cd67578366ac8d2c7e81c0df34e0472d037a4c27" Jan 28 18:57:26 crc kubenswrapper[4721]: I0128 18:57:26.899309 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/883eb3c5-eb62-45e4-b9af-39b9c6437de6-scripts\") pod \"ceilometer-0\" (UID: \"883eb3c5-eb62-45e4-b9af-39b9c6437de6\") " pod="openstack/ceilometer-0" Jan 28 18:57:26 crc kubenswrapper[4721]: I0128 18:57:26.899360 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/883eb3c5-eb62-45e4-b9af-39b9c6437de6-config-data\") pod \"ceilometer-0\" (UID: \"883eb3c5-eb62-45e4-b9af-39b9c6437de6\") " pod="openstack/ceilometer-0" Jan 28 18:57:26 crc kubenswrapper[4721]: I0128 18:57:26.899405 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/883eb3c5-eb62-45e4-b9af-39b9c6437de6-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"883eb3c5-eb62-45e4-b9af-39b9c6437de6\") " pod="openstack/ceilometer-0" Jan 28 18:57:26 crc kubenswrapper[4721]: I0128 18:57:26.899538 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/883eb3c5-eb62-45e4-b9af-39b9c6437de6-log-httpd\") pod \"ceilometer-0\" (UID: \"883eb3c5-eb62-45e4-b9af-39b9c6437de6\") " pod="openstack/ceilometer-0" Jan 28 18:57:26 crc kubenswrapper[4721]: I0128 18:57:26.899572 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pw4ch\" (UniqueName: \"kubernetes.io/projected/883eb3c5-eb62-45e4-b9af-39b9c6437de6-kube-api-access-pw4ch\") pod \"ceilometer-0\" (UID: \"883eb3c5-eb62-45e4-b9af-39b9c6437de6\") " pod="openstack/ceilometer-0" Jan 28 18:57:26 crc kubenswrapper[4721]: I0128 18:57:26.899599 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/883eb3c5-eb62-45e4-b9af-39b9c6437de6-run-httpd\") pod \"ceilometer-0\" (UID: \"883eb3c5-eb62-45e4-b9af-39b9c6437de6\") " pod="openstack/ceilometer-0" Jan 28 18:57:26 crc kubenswrapper[4721]: I0128 18:57:26.899620 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/883eb3c5-eb62-45e4-b9af-39b9c6437de6-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"883eb3c5-eb62-45e4-b9af-39b9c6437de6\") " pod="openstack/ceilometer-0" Jan 28 18:57:26 crc kubenswrapper[4721]: I0128 18:57:26.900898 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/883eb3c5-eb62-45e4-b9af-39b9c6437de6-log-httpd\") pod \"ceilometer-0\" (UID: \"883eb3c5-eb62-45e4-b9af-39b9c6437de6\") " pod="openstack/ceilometer-0" Jan 28 18:57:26 crc kubenswrapper[4721]: I0128 18:57:26.901141 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/883eb3c5-eb62-45e4-b9af-39b9c6437de6-run-httpd\") pod \"ceilometer-0\" (UID: 
\"883eb3c5-eb62-45e4-b9af-39b9c6437de6\") " pod="openstack/ceilometer-0" Jan 28 18:57:26 crc kubenswrapper[4721]: I0128 18:57:26.909153 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/883eb3c5-eb62-45e4-b9af-39b9c6437de6-scripts\") pod \"ceilometer-0\" (UID: \"883eb3c5-eb62-45e4-b9af-39b9c6437de6\") " pod="openstack/ceilometer-0" Jan 28 18:57:26 crc kubenswrapper[4721]: I0128 18:57:26.922422 4721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-b9kc7" podUID="653fc683-6178-446d-9cf2-4ae9e3e0029e" containerName="registry-server" probeResult="failure" output=< Jan 28 18:57:26 crc kubenswrapper[4721]: timeout: failed to connect service ":50051" within 1s Jan 28 18:57:26 crc kubenswrapper[4721]: > Jan 28 18:57:26 crc kubenswrapper[4721]: I0128 18:57:26.937325 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/883eb3c5-eb62-45e4-b9af-39b9c6437de6-config-data\") pod \"ceilometer-0\" (UID: \"883eb3c5-eb62-45e4-b9af-39b9c6437de6\") " pod="openstack/ceilometer-0" Jan 28 18:57:26 crc kubenswrapper[4721]: I0128 18:57:26.955014 4721 scope.go:117] "RemoveContainer" containerID="031eb5150e801aa3267dfb4e52165cc5e7cebea34c0f5c7ef89c8c8b6c353bd3" Jan 28 18:57:26 crc kubenswrapper[4721]: E0128 18:57:26.956348 4721 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"031eb5150e801aa3267dfb4e52165cc5e7cebea34c0f5c7ef89c8c8b6c353bd3\": container with ID starting with 031eb5150e801aa3267dfb4e52165cc5e7cebea34c0f5c7ef89c8c8b6c353bd3 not found: ID does not exist" containerID="031eb5150e801aa3267dfb4e52165cc5e7cebea34c0f5c7ef89c8c8b6c353bd3" Jan 28 18:57:26 crc kubenswrapper[4721]: I0128 18:57:26.956508 4721 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"031eb5150e801aa3267dfb4e52165cc5e7cebea34c0f5c7ef89c8c8b6c353bd3"} err="failed to get container status \"031eb5150e801aa3267dfb4e52165cc5e7cebea34c0f5c7ef89c8c8b6c353bd3\": rpc error: code = NotFound desc = could not find container \"031eb5150e801aa3267dfb4e52165cc5e7cebea34c0f5c7ef89c8c8b6c353bd3\": container with ID starting with 031eb5150e801aa3267dfb4e52165cc5e7cebea34c0f5c7ef89c8c8b6c353bd3 not found: ID does not exist" Jan 28 18:57:26 crc kubenswrapper[4721]: I0128 18:57:26.956657 4721 scope.go:117] "RemoveContainer" containerID="b4352c0b82d89472410fc6c745d60f82b8d94b435ee064e53eae51e38c01ebb7" Jan 28 18:57:26 crc kubenswrapper[4721]: I0128 18:57:26.957124 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/883eb3c5-eb62-45e4-b9af-39b9c6437de6-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"883eb3c5-eb62-45e4-b9af-39b9c6437de6\") " pod="openstack/ceilometer-0" Jan 28 18:57:26 crc kubenswrapper[4721]: E0128 18:57:26.965615 4721 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b4352c0b82d89472410fc6c745d60f82b8d94b435ee064e53eae51e38c01ebb7\": container with ID starting with b4352c0b82d89472410fc6c745d60f82b8d94b435ee064e53eae51e38c01ebb7 not found: ID does not exist" containerID="b4352c0b82d89472410fc6c745d60f82b8d94b435ee064e53eae51e38c01ebb7" Jan 28 18:57:26 crc kubenswrapper[4721]: I0128 18:57:26.965945 4721 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"b4352c0b82d89472410fc6c745d60f82b8d94b435ee064e53eae51e38c01ebb7"} err="failed to get container status \"b4352c0b82d89472410fc6c745d60f82b8d94b435ee064e53eae51e38c01ebb7\": rpc error: code = NotFound desc = could not find container \"b4352c0b82d89472410fc6c745d60f82b8d94b435ee064e53eae51e38c01ebb7\": container with ID starting with b4352c0b82d89472410fc6c745d60f82b8d94b435ee064e53eae51e38c01ebb7 not found: ID does not exist" Jan 28 18:57:26 crc kubenswrapper[4721]: I0128 18:57:26.966062 4721 scope.go:117] "RemoveContainer" containerID="225c8bd88bf22af97ab093fc7386fcf8c76a85d9483520062bfd7941d2676b28" Jan 28 18:57:26 crc kubenswrapper[4721]: E0128 18:57:26.966644 4721 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"225c8bd88bf22af97ab093fc7386fcf8c76a85d9483520062bfd7941d2676b28\": container with ID starting with 225c8bd88bf22af97ab093fc7386fcf8c76a85d9483520062bfd7941d2676b28 not found: ID does not exist" containerID="225c8bd88bf22af97ab093fc7386fcf8c76a85d9483520062bfd7941d2676b28" Jan 28 18:57:26 crc kubenswrapper[4721]: I0128 18:57:26.966692 4721 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"225c8bd88bf22af97ab093fc7386fcf8c76a85d9483520062bfd7941d2676b28"} err="failed to get container status \"225c8bd88bf22af97ab093fc7386fcf8c76a85d9483520062bfd7941d2676b28\": rpc error: code = NotFound desc = could not find container \"225c8bd88bf22af97ab093fc7386fcf8c76a85d9483520062bfd7941d2676b28\": container with ID starting with 225c8bd88bf22af97ab093fc7386fcf8c76a85d9483520062bfd7941d2676b28 not found: ID does not exist" Jan 28 18:57:26 crc kubenswrapper[4721]: I0128 18:57:26.966720 4721 scope.go:117] "RemoveContainer" containerID="ba278f41d159e0ca302e23e9cd67578366ac8d2c7e81c0df34e0472d037a4c27" Jan 28 18:57:26 crc kubenswrapper[4721]: I0128 18:57:26.966880 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pw4ch\" (UniqueName: \"kubernetes.io/projected/883eb3c5-eb62-45e4-b9af-39b9c6437de6-kube-api-access-pw4ch\") pod \"ceilometer-0\" (UID: \"883eb3c5-eb62-45e4-b9af-39b9c6437de6\") " pod="openstack/ceilometer-0" Jan 28 18:57:26 crc kubenswrapper[4721]: E0128 18:57:26.967194 4721 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ba278f41d159e0ca302e23e9cd67578366ac8d2c7e81c0df34e0472d037a4c27\": container with ID starting with ba278f41d159e0ca302e23e9cd67578366ac8d2c7e81c0df34e0472d037a4c27 not found: ID does not exist" containerID="ba278f41d159e0ca302e23e9cd67578366ac8d2c7e81c0df34e0472d037a4c27" Jan 28 18:57:26 crc kubenswrapper[4721]: I0128 18:57:26.967572 4721 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ba278f41d159e0ca302e23e9cd67578366ac8d2c7e81c0df34e0472d037a4c27"} err="failed to get container status \"ba278f41d159e0ca302e23e9cd67578366ac8d2c7e81c0df34e0472d037a4c27\": rpc error: code = NotFound desc = could not find container \"ba278f41d159e0ca302e23e9cd67578366ac8d2c7e81c0df34e0472d037a4c27\": container with ID starting with ba278f41d159e0ca302e23e9cd67578366ac8d2c7e81c0df34e0472d037a4c27 not found: ID does not exist" Jan 28 18:57:26 crc kubenswrapper[4721]: I0128 18:57:26.968904 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/883eb3c5-eb62-45e4-b9af-39b9c6437de6-combined-ca-bundle\") 
pod \"ceilometer-0\" (UID: \"883eb3c5-eb62-45e4-b9af-39b9c6437de6\") " pod="openstack/ceilometer-0" Jan 28 18:57:27 crc kubenswrapper[4721]: I0128 18:57:27.051791 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 28 18:57:27 crc kubenswrapper[4721]: I0128 18:57:27.337612 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cloudkitty-api-0" Jan 28 18:57:27 crc kubenswrapper[4721]: I0128 18:57:27.556421 4721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="807b9edb-ebc0-4d76-87fb-e4c6ff5cd323" path="/var/lib/kubelet/pods/807b9edb-ebc0-4d76-87fb-e4c6ff5cd323/volumes" Jan 28 18:57:27 crc kubenswrapper[4721]: I0128 18:57:27.848315 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 28 18:57:28 crc kubenswrapper[4721]: I0128 18:57:28.116388 4721 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 28 18:57:28 crc kubenswrapper[4721]: I0128 18:57:28.116457 4721 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 28 18:57:28 crc kubenswrapper[4721]: I0128 18:57:28.218878 4721 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 28 18:57:28 crc kubenswrapper[4721]: I0128 18:57:28.223838 4721 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 28 18:57:28 crc kubenswrapper[4721]: I0128 18:57:28.300952 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"883eb3c5-eb62-45e4-b9af-39b9c6437de6","Type":"ContainerStarted","Data":"3d71efecbb8b4872856bcc51592a53e12e8d4612bb26244c5be6557506166774"} Jan 28 18:57:28 crc kubenswrapper[4721]: I0128 18:57:28.301334 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 28 18:57:28 crc kubenswrapper[4721]: I0128 18:57:28.301972 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 28 18:57:29 crc kubenswrapper[4721]: E0128 18:57:29.963702 4721 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcb2f69be_cd3d_44ef_80af_f0d4ac766305.slice\": RecentStats: unable to find data in memory cache]" Jan 28 18:57:30 crc kubenswrapper[4721]: I0128 18:57:30.338989 4721 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 28 18:57:30 crc kubenswrapper[4721]: I0128 18:57:30.340292 4721 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 28 18:57:32 crc kubenswrapper[4721]: I0128 18:57:32.384805 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"883eb3c5-eb62-45e4-b9af-39b9c6437de6","Type":"ContainerStarted","Data":"ba86daddbf9212ba4eefad7b20d6c0a8f7a0e515f3f58489b7e1081fa31488db"} Jan 28 18:57:33 crc kubenswrapper[4721]: I0128 18:57:33.483393 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 28 18:57:33 crc kubenswrapper[4721]: I0128 18:57:33.483849 4721 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 28 18:57:33 crc kubenswrapper[4721]: I0128 18:57:33.512953 4721 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 28 18:57:33 crc kubenswrapper[4721]: I0128 18:57:33.513058 4721 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 28 18:57:33 crc kubenswrapper[4721]: I0128 18:57:33.514508 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 28 18:57:33 crc kubenswrapper[4721]: I0128 18:57:33.516046 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 28 18:57:36 crc kubenswrapper[4721]: I0128 18:57:36.882950 4721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-b9kc7" podUID="653fc683-6178-446d-9cf2-4ae9e3e0029e" containerName="registry-server" probeResult="failure" output=< Jan 28 18:57:36 crc kubenswrapper[4721]: timeout: failed to connect service ":50051" within 1s Jan 28 18:57:36 crc kubenswrapper[4721]: > Jan 28 18:57:39 crc kubenswrapper[4721]: I0128 18:57:39.630067 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 28 18:57:40 crc kubenswrapper[4721]: E0128 18:57:40.256291 4721 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcb2f69be_cd3d_44ef_80af_f0d4ac766305.slice\": RecentStats: unable to find data in memory cache]" Jan 28 18:57:41 crc kubenswrapper[4721]: I0128 18:57:41.492766 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"883eb3c5-eb62-45e4-b9af-39b9c6437de6","Type":"ContainerStarted","Data":"cf3af854a057280755f8f50473923bcc398bd0c2e3a2f2ae7d130032b8e6021e"} Jan 28 18:57:41 crc kubenswrapper[4721]: I0128 18:57:41.495137 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-lfg9f" event={"ID":"d055f2af-0a9e-4a1e-af6b-b15c0287fc72","Type":"ContainerStarted","Data":"e9f5996bd09b4c3e2461f29607268b2a47c043cb6c31391e226db5a728ba00ec"} Jan 28 18:57:41 crc kubenswrapper[4721]: I0128 18:57:41.526972 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-db-sync-lfg9f" podStartSLOduration=2.350015396 podStartE2EDuration="18.526943418s" podCreationTimestamp="2026-01-28 18:57:23 +0000 UTC" firstStartedPulling="2026-01-28 18:57:24.680497457 +0000 UTC m=+1410.405803017" lastFinishedPulling="2026-01-28 18:57:40.857425479 +0000 UTC m=+1426.582731039" observedRunningTime="2026-01-28 18:57:41.509852503 +0000 UTC m=+1427.235158073" watchObservedRunningTime="2026-01-28 18:57:41.526943418 +0000 UTC m=+1427.252248978" Jan 28 18:57:44 crc kubenswrapper[4721]: I0128 18:57:44.540734 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"883eb3c5-eb62-45e4-b9af-39b9c6437de6","Type":"ContainerStarted","Data":"c1e4bf775f744f29326bc1c66ecb3c023abc9a7555836d3c36f7d3f7e0f47f46"} Jan 28 18:57:46 crc kubenswrapper[4721]: I0128 18:57:46.851318 4721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-b9kc7" podUID="653fc683-6178-446d-9cf2-4ae9e3e0029e" containerName="registry-server" probeResult="failure" output=< Jan 28 18:57:46 crc kubenswrapper[4721]: timeout: failed to connect service ":50051" within 1s Jan 28 18:57:46 crc kubenswrapper[4721]: > Jan 28 18:57:48 crc kubenswrapper[4721]: I0128 
18:57:48.583942 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"883eb3c5-eb62-45e4-b9af-39b9c6437de6","Type":"ContainerStarted","Data":"444a6b9edefc80cc0e60bfb85ef86d6347d9b1e85862578cb307b58bf3a4a22c"} Jan 28 18:57:48 crc kubenswrapper[4721]: I0128 18:57:48.584248 4721 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="883eb3c5-eb62-45e4-b9af-39b9c6437de6" containerName="ceilometer-central-agent" containerID="cri-o://ba86daddbf9212ba4eefad7b20d6c0a8f7a0e515f3f58489b7e1081fa31488db" gracePeriod=30 Jan 28 18:57:48 crc kubenswrapper[4721]: I0128 18:57:48.584346 4721 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="883eb3c5-eb62-45e4-b9af-39b9c6437de6" containerName="proxy-httpd" containerID="cri-o://444a6b9edefc80cc0e60bfb85ef86d6347d9b1e85862578cb307b58bf3a4a22c" gracePeriod=30 Jan 28 18:57:48 crc kubenswrapper[4721]: I0128 18:57:48.584378 4721 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="883eb3c5-eb62-45e4-b9af-39b9c6437de6" containerName="ceilometer-notification-agent" containerID="cri-o://cf3af854a057280755f8f50473923bcc398bd0c2e3a2f2ae7d130032b8e6021e" gracePeriod=30 Jan 28 18:57:48 crc kubenswrapper[4721]: I0128 18:57:48.584378 4721 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="883eb3c5-eb62-45e4-b9af-39b9c6437de6" containerName="sg-core" containerID="cri-o://c1e4bf775f744f29326bc1c66ecb3c023abc9a7555836d3c36f7d3f7e0f47f46" gracePeriod=30 Jan 28 18:57:48 crc kubenswrapper[4721]: I0128 18:57:48.584492 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 28 18:57:48 crc kubenswrapper[4721]: I0128 18:57:48.620055 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.5187379 podStartE2EDuration="22.620025899s" podCreationTimestamp="2026-01-28 18:57:26 +0000 UTC" firstStartedPulling="2026-01-28 18:57:27.907438827 +0000 UTC m=+1413.632744387" lastFinishedPulling="2026-01-28 18:57:48.008726826 +0000 UTC m=+1433.734032386" observedRunningTime="2026-01-28 18:57:48.611821847 +0000 UTC m=+1434.337127427" watchObservedRunningTime="2026-01-28 18:57:48.620025899 +0000 UTC m=+1434.345331459" Jan 28 18:57:49 crc kubenswrapper[4721]: I0128 18:57:49.598560 4721 generic.go:334] "Generic (PLEG): container finished" podID="883eb3c5-eb62-45e4-b9af-39b9c6437de6" containerID="444a6b9edefc80cc0e60bfb85ef86d6347d9b1e85862578cb307b58bf3a4a22c" exitCode=0 Jan 28 18:57:49 crc kubenswrapper[4721]: I0128 18:57:49.598869 4721 generic.go:334] "Generic (PLEG): container finished" podID="883eb3c5-eb62-45e4-b9af-39b9c6437de6" containerID="c1e4bf775f744f29326bc1c66ecb3c023abc9a7555836d3c36f7d3f7e0f47f46" exitCode=2 Jan 28 18:57:49 crc kubenswrapper[4721]: I0128 18:57:49.598880 4721 generic.go:334] "Generic (PLEG): container finished" podID="883eb3c5-eb62-45e4-b9af-39b9c6437de6" containerID="cf3af854a057280755f8f50473923bcc398bd0c2e3a2f2ae7d130032b8e6021e" exitCode=0 Jan 28 18:57:49 crc kubenswrapper[4721]: I0128 18:57:49.598887 4721 generic.go:334] "Generic (PLEG): container finished" podID="883eb3c5-eb62-45e4-b9af-39b9c6437de6" containerID="ba86daddbf9212ba4eefad7b20d6c0a8f7a0e515f3f58489b7e1081fa31488db" exitCode=0 Jan 28 18:57:49 crc kubenswrapper[4721]: I0128 18:57:49.598640 4721 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openstack/ceilometer-0" event={"ID":"883eb3c5-eb62-45e4-b9af-39b9c6437de6","Type":"ContainerDied","Data":"444a6b9edefc80cc0e60bfb85ef86d6347d9b1e85862578cb307b58bf3a4a22c"} Jan 28 18:57:49 crc kubenswrapper[4721]: I0128 18:57:49.598927 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"883eb3c5-eb62-45e4-b9af-39b9c6437de6","Type":"ContainerDied","Data":"c1e4bf775f744f29326bc1c66ecb3c023abc9a7555836d3c36f7d3f7e0f47f46"} Jan 28 18:57:49 crc kubenswrapper[4721]: I0128 18:57:49.598943 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"883eb3c5-eb62-45e4-b9af-39b9c6437de6","Type":"ContainerDied","Data":"cf3af854a057280755f8f50473923bcc398bd0c2e3a2f2ae7d130032b8e6021e"} Jan 28 18:57:49 crc kubenswrapper[4721]: I0128 18:57:49.598953 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"883eb3c5-eb62-45e4-b9af-39b9c6437de6","Type":"ContainerDied","Data":"ba86daddbf9212ba4eefad7b20d6c0a8f7a0e515f3f58489b7e1081fa31488db"} Jan 28 18:57:50 crc kubenswrapper[4721]: I0128 18:57:50.174595 4721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 28 18:57:50 crc kubenswrapper[4721]: I0128 18:57:50.244550 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/883eb3c5-eb62-45e4-b9af-39b9c6437de6-run-httpd\") pod \"883eb3c5-eb62-45e4-b9af-39b9c6437de6\" (UID: \"883eb3c5-eb62-45e4-b9af-39b9c6437de6\") " Jan 28 18:57:50 crc kubenswrapper[4721]: I0128 18:57:50.244709 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/883eb3c5-eb62-45e4-b9af-39b9c6437de6-combined-ca-bundle\") pod \"883eb3c5-eb62-45e4-b9af-39b9c6437de6\" (UID: \"883eb3c5-eb62-45e4-b9af-39b9c6437de6\") " Jan 28 18:57:50 crc kubenswrapper[4721]: I0128 18:57:50.244846 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pw4ch\" (UniqueName: \"kubernetes.io/projected/883eb3c5-eb62-45e4-b9af-39b9c6437de6-kube-api-access-pw4ch\") pod \"883eb3c5-eb62-45e4-b9af-39b9c6437de6\" (UID: \"883eb3c5-eb62-45e4-b9af-39b9c6437de6\") " Jan 28 18:57:50 crc kubenswrapper[4721]: I0128 18:57:50.245004 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/883eb3c5-eb62-45e4-b9af-39b9c6437de6-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "883eb3c5-eb62-45e4-b9af-39b9c6437de6" (UID: "883eb3c5-eb62-45e4-b9af-39b9c6437de6"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 18:57:50 crc kubenswrapper[4721]: I0128 18:57:50.245801 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/883eb3c5-eb62-45e4-b9af-39b9c6437de6-sg-core-conf-yaml\") pod \"883eb3c5-eb62-45e4-b9af-39b9c6437de6\" (UID: \"883eb3c5-eb62-45e4-b9af-39b9c6437de6\") " Jan 28 18:57:50 crc kubenswrapper[4721]: I0128 18:57:50.245905 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/883eb3c5-eb62-45e4-b9af-39b9c6437de6-log-httpd\") pod \"883eb3c5-eb62-45e4-b9af-39b9c6437de6\" (UID: \"883eb3c5-eb62-45e4-b9af-39b9c6437de6\") " Jan 28 18:57:50 crc kubenswrapper[4721]: I0128 18:57:50.246052 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/883eb3c5-eb62-45e4-b9af-39b9c6437de6-config-data\") pod \"883eb3c5-eb62-45e4-b9af-39b9c6437de6\" (UID: \"883eb3c5-eb62-45e4-b9af-39b9c6437de6\") " Jan 28 18:57:50 crc kubenswrapper[4721]: I0128 18:57:50.246132 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/883eb3c5-eb62-45e4-b9af-39b9c6437de6-scripts\") pod \"883eb3c5-eb62-45e4-b9af-39b9c6437de6\" (UID: \"883eb3c5-eb62-45e4-b9af-39b9c6437de6\") " Jan 28 18:57:50 crc kubenswrapper[4721]: I0128 18:57:50.246492 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/883eb3c5-eb62-45e4-b9af-39b9c6437de6-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "883eb3c5-eb62-45e4-b9af-39b9c6437de6" (UID: "883eb3c5-eb62-45e4-b9af-39b9c6437de6"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 18:57:50 crc kubenswrapper[4721]: I0128 18:57:50.247198 4721 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/883eb3c5-eb62-45e4-b9af-39b9c6437de6-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 28 18:57:50 crc kubenswrapper[4721]: I0128 18:57:50.247226 4721 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/883eb3c5-eb62-45e4-b9af-39b9c6437de6-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 28 18:57:50 crc kubenswrapper[4721]: I0128 18:57:50.250861 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/883eb3c5-eb62-45e4-b9af-39b9c6437de6-scripts" (OuterVolumeSpecName: "scripts") pod "883eb3c5-eb62-45e4-b9af-39b9c6437de6" (UID: "883eb3c5-eb62-45e4-b9af-39b9c6437de6"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:57:50 crc kubenswrapper[4721]: I0128 18:57:50.251295 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/883eb3c5-eb62-45e4-b9af-39b9c6437de6-kube-api-access-pw4ch" (OuterVolumeSpecName: "kube-api-access-pw4ch") pod "883eb3c5-eb62-45e4-b9af-39b9c6437de6" (UID: "883eb3c5-eb62-45e4-b9af-39b9c6437de6"). InnerVolumeSpecName "kube-api-access-pw4ch". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:57:50 crc kubenswrapper[4721]: I0128 18:57:50.282417 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/883eb3c5-eb62-45e4-b9af-39b9c6437de6-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "883eb3c5-eb62-45e4-b9af-39b9c6437de6" (UID: "883eb3c5-eb62-45e4-b9af-39b9c6437de6"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:57:50 crc kubenswrapper[4721]: I0128 18:57:50.329242 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/883eb3c5-eb62-45e4-b9af-39b9c6437de6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "883eb3c5-eb62-45e4-b9af-39b9c6437de6" (UID: "883eb3c5-eb62-45e4-b9af-39b9c6437de6"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:57:50 crc kubenswrapper[4721]: I0128 18:57:50.349234 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pw4ch\" (UniqueName: \"kubernetes.io/projected/883eb3c5-eb62-45e4-b9af-39b9c6437de6-kube-api-access-pw4ch\") on node \"crc\" DevicePath \"\"" Jan 28 18:57:50 crc kubenswrapper[4721]: I0128 18:57:50.349262 4721 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/883eb3c5-eb62-45e4-b9af-39b9c6437de6-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 28 18:57:50 crc kubenswrapper[4721]: I0128 18:57:50.349271 4721 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/883eb3c5-eb62-45e4-b9af-39b9c6437de6-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 18:57:50 crc kubenswrapper[4721]: I0128 18:57:50.349285 4721 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/883eb3c5-eb62-45e4-b9af-39b9c6437de6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 18:57:50 crc kubenswrapper[4721]: I0128 18:57:50.363612 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/883eb3c5-eb62-45e4-b9af-39b9c6437de6-config-data" (OuterVolumeSpecName: "config-data") pod "883eb3c5-eb62-45e4-b9af-39b9c6437de6" (UID: "883eb3c5-eb62-45e4-b9af-39b9c6437de6"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:57:50 crc kubenswrapper[4721]: I0128 18:57:50.451874 4721 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/883eb3c5-eb62-45e4-b9af-39b9c6437de6-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 18:57:50 crc kubenswrapper[4721]: E0128 18:57:50.540781 4721 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcb2f69be_cd3d_44ef_80af_f0d4ac766305.slice\": RecentStats: unable to find data in memory cache]" Jan 28 18:57:50 crc kubenswrapper[4721]: I0128 18:57:50.615957 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"883eb3c5-eb62-45e4-b9af-39b9c6437de6","Type":"ContainerDied","Data":"3d71efecbb8b4872856bcc51592a53e12e8d4612bb26244c5be6557506166774"} Jan 28 18:57:50 crc kubenswrapper[4721]: I0128 18:57:50.616104 4721 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 28 18:57:50 crc kubenswrapper[4721]: I0128 18:57:50.616950 4721 scope.go:117] "RemoveContainer" containerID="444a6b9edefc80cc0e60bfb85ef86d6347d9b1e85862578cb307b58bf3a4a22c" Jan 28 18:57:50 crc kubenswrapper[4721]: I0128 18:57:50.651455 4721 scope.go:117] "RemoveContainer" containerID="c1e4bf775f744f29326bc1c66ecb3c023abc9a7555836d3c36f7d3f7e0f47f46" Jan 28 18:57:50 crc kubenswrapper[4721]: I0128 18:57:50.665855 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 28 18:57:50 crc kubenswrapper[4721]: I0128 18:57:50.687295 4721 scope.go:117] "RemoveContainer" containerID="cf3af854a057280755f8f50473923bcc398bd0c2e3a2f2ae7d130032b8e6021e" Jan 28 18:57:50 crc kubenswrapper[4721]: I0128 18:57:50.719580 4721 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 28 18:57:50 crc kubenswrapper[4721]: I0128 18:57:50.727260 4721 scope.go:117] "RemoveContainer" containerID="ba86daddbf9212ba4eefad7b20d6c0a8f7a0e515f3f58489b7e1081fa31488db" Jan 28 18:57:50 crc kubenswrapper[4721]: I0128 18:57:50.744453 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 28 18:57:50 crc kubenswrapper[4721]: E0128 18:57:50.745608 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="883eb3c5-eb62-45e4-b9af-39b9c6437de6" containerName="ceilometer-notification-agent" Jan 28 18:57:50 crc kubenswrapper[4721]: I0128 18:57:50.745633 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="883eb3c5-eb62-45e4-b9af-39b9c6437de6" containerName="ceilometer-notification-agent" Jan 28 18:57:50 crc kubenswrapper[4721]: E0128 18:57:50.745660 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="883eb3c5-eb62-45e4-b9af-39b9c6437de6" containerName="sg-core" Jan 28 18:57:50 crc kubenswrapper[4721]: I0128 18:57:50.745668 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="883eb3c5-eb62-45e4-b9af-39b9c6437de6" containerName="sg-core" Jan 28 18:57:50 crc kubenswrapper[4721]: E0128 18:57:50.745678 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="883eb3c5-eb62-45e4-b9af-39b9c6437de6" containerName="proxy-httpd" Jan 28 18:57:50 crc kubenswrapper[4721]: I0128 18:57:50.745684 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="883eb3c5-eb62-45e4-b9af-39b9c6437de6" containerName="proxy-httpd" Jan 28 18:57:50 crc kubenswrapper[4721]: E0128 18:57:50.745728 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="883eb3c5-eb62-45e4-b9af-39b9c6437de6" containerName="ceilometer-central-agent" Jan 28 18:57:50 crc kubenswrapper[4721]: I0128 18:57:50.745768 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="883eb3c5-eb62-45e4-b9af-39b9c6437de6" containerName="ceilometer-central-agent" Jan 28 18:57:50 crc kubenswrapper[4721]: I0128 18:57:50.746361 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="883eb3c5-eb62-45e4-b9af-39b9c6437de6" containerName="ceilometer-notification-agent" Jan 28 18:57:50 crc kubenswrapper[4721]: I0128 18:57:50.746396 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="883eb3c5-eb62-45e4-b9af-39b9c6437de6" containerName="ceilometer-central-agent" Jan 28 18:57:50 crc kubenswrapper[4721]: I0128 18:57:50.746420 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="883eb3c5-eb62-45e4-b9af-39b9c6437de6" containerName="sg-core" Jan 28 18:57:50 crc kubenswrapper[4721]: I0128 18:57:50.746435 4721 memory_manager.go:354] "RemoveStaleState 
removing state" podUID="883eb3c5-eb62-45e4-b9af-39b9c6437de6" containerName="proxy-httpd" Jan 28 18:57:50 crc kubenswrapper[4721]: I0128 18:57:50.778614 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 28 18:57:50 crc kubenswrapper[4721]: I0128 18:57:50.778858 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 28 18:57:50 crc kubenswrapper[4721]: I0128 18:57:50.782255 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 28 18:57:50 crc kubenswrapper[4721]: I0128 18:57:50.782507 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 28 18:57:50 crc kubenswrapper[4721]: I0128 18:57:50.863154 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f8b0f370-ca44-4fcb-bed3-63f4d45dcd21-run-httpd\") pod \"ceilometer-0\" (UID: \"f8b0f370-ca44-4fcb-bed3-63f4d45dcd21\") " pod="openstack/ceilometer-0" Jan 28 18:57:50 crc kubenswrapper[4721]: I0128 18:57:50.863233 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f8b0f370-ca44-4fcb-bed3-63f4d45dcd21-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f8b0f370-ca44-4fcb-bed3-63f4d45dcd21\") " pod="openstack/ceilometer-0" Jan 28 18:57:50 crc kubenswrapper[4721]: I0128 18:57:50.863324 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f8b0f370-ca44-4fcb-bed3-63f4d45dcd21-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f8b0f370-ca44-4fcb-bed3-63f4d45dcd21\") " pod="openstack/ceilometer-0" Jan 28 18:57:50 crc kubenswrapper[4721]: I0128 18:57:50.863371 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p78f2\" (UniqueName: \"kubernetes.io/projected/f8b0f370-ca44-4fcb-bed3-63f4d45dcd21-kube-api-access-p78f2\") pod \"ceilometer-0\" (UID: \"f8b0f370-ca44-4fcb-bed3-63f4d45dcd21\") " pod="openstack/ceilometer-0" Jan 28 18:57:50 crc kubenswrapper[4721]: I0128 18:57:50.863429 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f8b0f370-ca44-4fcb-bed3-63f4d45dcd21-scripts\") pod \"ceilometer-0\" (UID: \"f8b0f370-ca44-4fcb-bed3-63f4d45dcd21\") " pod="openstack/ceilometer-0" Jan 28 18:57:50 crc kubenswrapper[4721]: I0128 18:57:50.863471 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f8b0f370-ca44-4fcb-bed3-63f4d45dcd21-config-data\") pod \"ceilometer-0\" (UID: \"f8b0f370-ca44-4fcb-bed3-63f4d45dcd21\") " pod="openstack/ceilometer-0" Jan 28 18:57:50 crc kubenswrapper[4721]: I0128 18:57:50.863504 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f8b0f370-ca44-4fcb-bed3-63f4d45dcd21-log-httpd\") pod \"ceilometer-0\" (UID: \"f8b0f370-ca44-4fcb-bed3-63f4d45dcd21\") " pod="openstack/ceilometer-0" Jan 28 18:57:50 crc kubenswrapper[4721]: I0128 18:57:50.965812 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/f8b0f370-ca44-4fcb-bed3-63f4d45dcd21-scripts\") pod \"ceilometer-0\" (UID: \"f8b0f370-ca44-4fcb-bed3-63f4d45dcd21\") " pod="openstack/ceilometer-0" Jan 28 18:57:50 crc kubenswrapper[4721]: I0128 18:57:50.966209 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f8b0f370-ca44-4fcb-bed3-63f4d45dcd21-config-data\") pod \"ceilometer-0\" (UID: \"f8b0f370-ca44-4fcb-bed3-63f4d45dcd21\") " pod="openstack/ceilometer-0" Jan 28 18:57:50 crc kubenswrapper[4721]: I0128 18:57:50.966362 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f8b0f370-ca44-4fcb-bed3-63f4d45dcd21-log-httpd\") pod \"ceilometer-0\" (UID: \"f8b0f370-ca44-4fcb-bed3-63f4d45dcd21\") " pod="openstack/ceilometer-0" Jan 28 18:57:50 crc kubenswrapper[4721]: I0128 18:57:50.966542 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f8b0f370-ca44-4fcb-bed3-63f4d45dcd21-run-httpd\") pod \"ceilometer-0\" (UID: \"f8b0f370-ca44-4fcb-bed3-63f4d45dcd21\") " pod="openstack/ceilometer-0" Jan 28 18:57:50 crc kubenswrapper[4721]: I0128 18:57:50.966675 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f8b0f370-ca44-4fcb-bed3-63f4d45dcd21-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f8b0f370-ca44-4fcb-bed3-63f4d45dcd21\") " pod="openstack/ceilometer-0" Jan 28 18:57:50 crc kubenswrapper[4721]: I0128 18:57:50.966856 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f8b0f370-ca44-4fcb-bed3-63f4d45dcd21-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f8b0f370-ca44-4fcb-bed3-63f4d45dcd21\") " pod="openstack/ceilometer-0" Jan 28 18:57:50 crc kubenswrapper[4721]: I0128 18:57:50.966954 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f8b0f370-ca44-4fcb-bed3-63f4d45dcd21-run-httpd\") pod \"ceilometer-0\" (UID: \"f8b0f370-ca44-4fcb-bed3-63f4d45dcd21\") " pod="openstack/ceilometer-0" Jan 28 18:57:50 crc kubenswrapper[4721]: I0128 18:57:50.967050 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f8b0f370-ca44-4fcb-bed3-63f4d45dcd21-log-httpd\") pod \"ceilometer-0\" (UID: \"f8b0f370-ca44-4fcb-bed3-63f4d45dcd21\") " pod="openstack/ceilometer-0" Jan 28 18:57:50 crc kubenswrapper[4721]: I0128 18:57:50.967215 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p78f2\" (UniqueName: \"kubernetes.io/projected/f8b0f370-ca44-4fcb-bed3-63f4d45dcd21-kube-api-access-p78f2\") pod \"ceilometer-0\" (UID: \"f8b0f370-ca44-4fcb-bed3-63f4d45dcd21\") " pod="openstack/ceilometer-0" Jan 28 18:57:50 crc kubenswrapper[4721]: I0128 18:57:50.970740 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f8b0f370-ca44-4fcb-bed3-63f4d45dcd21-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f8b0f370-ca44-4fcb-bed3-63f4d45dcd21\") " pod="openstack/ceilometer-0" Jan 28 18:57:50 crc kubenswrapper[4721]: I0128 18:57:50.971243 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/f8b0f370-ca44-4fcb-bed3-63f4d45dcd21-scripts\") pod \"ceilometer-0\" (UID: \"f8b0f370-ca44-4fcb-bed3-63f4d45dcd21\") " pod="openstack/ceilometer-0" Jan 28 18:57:50 crc kubenswrapper[4721]: I0128 18:57:50.971608 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f8b0f370-ca44-4fcb-bed3-63f4d45dcd21-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f8b0f370-ca44-4fcb-bed3-63f4d45dcd21\") " pod="openstack/ceilometer-0" Jan 28 18:57:50 crc kubenswrapper[4721]: I0128 18:57:50.973782 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f8b0f370-ca44-4fcb-bed3-63f4d45dcd21-config-data\") pod \"ceilometer-0\" (UID: \"f8b0f370-ca44-4fcb-bed3-63f4d45dcd21\") " pod="openstack/ceilometer-0" Jan 28 18:57:50 crc kubenswrapper[4721]: I0128 18:57:50.991658 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p78f2\" (UniqueName: \"kubernetes.io/projected/f8b0f370-ca44-4fcb-bed3-63f4d45dcd21-kube-api-access-p78f2\") pod \"ceilometer-0\" (UID: \"f8b0f370-ca44-4fcb-bed3-63f4d45dcd21\") " pod="openstack/ceilometer-0" Jan 28 18:57:51 crc kubenswrapper[4721]: I0128 18:57:51.107814 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 28 18:57:51 crc kubenswrapper[4721]: I0128 18:57:51.542001 4721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="883eb3c5-eb62-45e4-b9af-39b9c6437de6" path="/var/lib/kubelet/pods/883eb3c5-eb62-45e4-b9af-39b9c6437de6/volumes" Jan 28 18:57:51 crc kubenswrapper[4721]: I0128 18:57:51.625008 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 28 18:57:52 crc kubenswrapper[4721]: I0128 18:57:52.646617 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f8b0f370-ca44-4fcb-bed3-63f4d45dcd21","Type":"ContainerStarted","Data":"b2754e6a6378675e073300aa4db0ba5469f87d571247341c61288e7c2bb506e7"} Jan 28 18:57:52 crc kubenswrapper[4721]: I0128 18:57:52.647252 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f8b0f370-ca44-4fcb-bed3-63f4d45dcd21","Type":"ContainerStarted","Data":"3d1ed8fa10b9c7ca20ffa273ea3cb972c4876721bae475d5e2d55851c8121da8"} Jan 28 18:57:53 crc kubenswrapper[4721]: I0128 18:57:53.660067 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f8b0f370-ca44-4fcb-bed3-63f4d45dcd21","Type":"ContainerStarted","Data":"4dcc8344383e75462e745d7e290edf92cf03128fc5048679c7605c7a83a88809"} Jan 28 18:57:54 crc kubenswrapper[4721]: I0128 18:57:54.676519 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f8b0f370-ca44-4fcb-bed3-63f4d45dcd21","Type":"ContainerStarted","Data":"be812b814d31196564bdf751eed2821a2040ddc225244b4426a5cd7d15164c42"} Jan 28 18:57:55 crc kubenswrapper[4721]: I0128 18:57:55.861972 4721 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-b9kc7" Jan 28 18:57:55 crc kubenswrapper[4721]: I0128 18:57:55.931806 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-b9kc7" Jan 28 18:57:56 crc kubenswrapper[4721]: I0128 18:57:56.549074 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-b9kc7"] Jan 28 18:57:57 crc 
kubenswrapper[4721]: I0128 18:57:57.747982 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f8b0f370-ca44-4fcb-bed3-63f4d45dcd21","Type":"ContainerStarted","Data":"cccb61a874157bc40e5b92f4c098c727f3b5b8630409a9d679306d696cb1df8c"} Jan 28 18:57:57 crc kubenswrapper[4721]: I0128 18:57:57.748205 4721 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-b9kc7" podUID="653fc683-6178-446d-9cf2-4ae9e3e0029e" containerName="registry-server" containerID="cri-o://5cf2e95afca5566c0c889ae9550e0bc1cbeca0804fe3e6e64d6f5f8d6133594d" gracePeriod=2 Jan 28 18:57:57 crc kubenswrapper[4721]: I0128 18:57:57.784896 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.591846799 podStartE2EDuration="7.784862511s" podCreationTimestamp="2026-01-28 18:57:50 +0000 UTC" firstStartedPulling="2026-01-28 18:57:51.648836323 +0000 UTC m=+1437.374141883" lastFinishedPulling="2026-01-28 18:57:56.841852035 +0000 UTC m=+1442.567157595" observedRunningTime="2026-01-28 18:57:57.781847845 +0000 UTC m=+1443.507153415" watchObservedRunningTime="2026-01-28 18:57:57.784862511 +0000 UTC m=+1443.510168071" Jan 28 18:57:58 crc kubenswrapper[4721]: I0128 18:57:58.445097 4721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-b9kc7" Jan 28 18:57:58 crc kubenswrapper[4721]: I0128 18:57:58.564475 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-44k8k\" (UniqueName: \"kubernetes.io/projected/653fc683-6178-446d-9cf2-4ae9e3e0029e-kube-api-access-44k8k\") pod \"653fc683-6178-446d-9cf2-4ae9e3e0029e\" (UID: \"653fc683-6178-446d-9cf2-4ae9e3e0029e\") " Jan 28 18:57:58 crc kubenswrapper[4721]: I0128 18:57:58.564565 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/653fc683-6178-446d-9cf2-4ae9e3e0029e-catalog-content\") pod \"653fc683-6178-446d-9cf2-4ae9e3e0029e\" (UID: \"653fc683-6178-446d-9cf2-4ae9e3e0029e\") " Jan 28 18:57:58 crc kubenswrapper[4721]: I0128 18:57:58.564719 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/653fc683-6178-446d-9cf2-4ae9e3e0029e-utilities\") pod \"653fc683-6178-446d-9cf2-4ae9e3e0029e\" (UID: \"653fc683-6178-446d-9cf2-4ae9e3e0029e\") " Jan 28 18:57:58 crc kubenswrapper[4721]: I0128 18:57:58.565769 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/653fc683-6178-446d-9cf2-4ae9e3e0029e-utilities" (OuterVolumeSpecName: "utilities") pod "653fc683-6178-446d-9cf2-4ae9e3e0029e" (UID: "653fc683-6178-446d-9cf2-4ae9e3e0029e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 18:57:58 crc kubenswrapper[4721]: I0128 18:57:58.570466 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/653fc683-6178-446d-9cf2-4ae9e3e0029e-kube-api-access-44k8k" (OuterVolumeSpecName: "kube-api-access-44k8k") pod "653fc683-6178-446d-9cf2-4ae9e3e0029e" (UID: "653fc683-6178-446d-9cf2-4ae9e3e0029e"). InnerVolumeSpecName "kube-api-access-44k8k". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:57:58 crc kubenswrapper[4721]: I0128 18:57:58.667288 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-44k8k\" (UniqueName: \"kubernetes.io/projected/653fc683-6178-446d-9cf2-4ae9e3e0029e-kube-api-access-44k8k\") on node \"crc\" DevicePath \"\"" Jan 28 18:57:58 crc kubenswrapper[4721]: I0128 18:57:58.667614 4721 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/653fc683-6178-446d-9cf2-4ae9e3e0029e-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 18:57:58 crc kubenswrapper[4721]: I0128 18:57:58.694926 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/653fc683-6178-446d-9cf2-4ae9e3e0029e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "653fc683-6178-446d-9cf2-4ae9e3e0029e" (UID: "653fc683-6178-446d-9cf2-4ae9e3e0029e"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 18:57:58 crc kubenswrapper[4721]: I0128 18:57:58.761878 4721 generic.go:334] "Generic (PLEG): container finished" podID="653fc683-6178-446d-9cf2-4ae9e3e0029e" containerID="5cf2e95afca5566c0c889ae9550e0bc1cbeca0804fe3e6e64d6f5f8d6133594d" exitCode=0 Jan 28 18:57:58 crc kubenswrapper[4721]: I0128 18:57:58.762004 4721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-b9kc7" Jan 28 18:57:58 crc kubenswrapper[4721]: I0128 18:57:58.762007 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-b9kc7" event={"ID":"653fc683-6178-446d-9cf2-4ae9e3e0029e","Type":"ContainerDied","Data":"5cf2e95afca5566c0c889ae9550e0bc1cbeca0804fe3e6e64d6f5f8d6133594d"} Jan 28 18:57:58 crc kubenswrapper[4721]: I0128 18:57:58.763112 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-b9kc7" event={"ID":"653fc683-6178-446d-9cf2-4ae9e3e0029e","Type":"ContainerDied","Data":"86a65502734bbd30b3b720ee4a1f53fe7ba4aede62d8824fee48f7780134aacd"} Jan 28 18:57:58 crc kubenswrapper[4721]: I0128 18:57:58.763159 4721 scope.go:117] "RemoveContainer" containerID="5cf2e95afca5566c0c889ae9550e0bc1cbeca0804fe3e6e64d6f5f8d6133594d" Jan 28 18:57:58 crc kubenswrapper[4721]: I0128 18:57:58.763943 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 28 18:57:58 crc kubenswrapper[4721]: I0128 18:57:58.769858 4721 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/653fc683-6178-446d-9cf2-4ae9e3e0029e-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 18:57:58 crc kubenswrapper[4721]: I0128 18:57:58.799466 4721 scope.go:117] "RemoveContainer" containerID="9890688ebc3ee921a428803bbd6dcad304daf1868a81b11d4a6efd30763fb146" Jan 28 18:57:58 crc kubenswrapper[4721]: I0128 18:57:58.806867 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-b9kc7"] Jan 28 18:57:58 crc kubenswrapper[4721]: I0128 18:57:58.819459 4721 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-b9kc7"] Jan 28 18:57:58 crc kubenswrapper[4721]: I0128 18:57:58.829123 4721 scope.go:117] "RemoveContainer" containerID="871264cd7070bf5cb51271c73c163eb594fe9be2bc6f57700783c97bf1e87720" Jan 28 18:57:58 crc kubenswrapper[4721]: I0128 18:57:58.875688 4721 scope.go:117] "RemoveContainer" 
containerID="5cf2e95afca5566c0c889ae9550e0bc1cbeca0804fe3e6e64d6f5f8d6133594d" Jan 28 18:57:58 crc kubenswrapper[4721]: E0128 18:57:58.876389 4721 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5cf2e95afca5566c0c889ae9550e0bc1cbeca0804fe3e6e64d6f5f8d6133594d\": container with ID starting with 5cf2e95afca5566c0c889ae9550e0bc1cbeca0804fe3e6e64d6f5f8d6133594d not found: ID does not exist" containerID="5cf2e95afca5566c0c889ae9550e0bc1cbeca0804fe3e6e64d6f5f8d6133594d" Jan 28 18:57:58 crc kubenswrapper[4721]: I0128 18:57:58.876450 4721 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5cf2e95afca5566c0c889ae9550e0bc1cbeca0804fe3e6e64d6f5f8d6133594d"} err="failed to get container status \"5cf2e95afca5566c0c889ae9550e0bc1cbeca0804fe3e6e64d6f5f8d6133594d\": rpc error: code = NotFound desc = could not find container \"5cf2e95afca5566c0c889ae9550e0bc1cbeca0804fe3e6e64d6f5f8d6133594d\": container with ID starting with 5cf2e95afca5566c0c889ae9550e0bc1cbeca0804fe3e6e64d6f5f8d6133594d not found: ID does not exist" Jan 28 18:57:58 crc kubenswrapper[4721]: I0128 18:57:58.876512 4721 scope.go:117] "RemoveContainer" containerID="9890688ebc3ee921a428803bbd6dcad304daf1868a81b11d4a6efd30763fb146" Jan 28 18:57:58 crc kubenswrapper[4721]: E0128 18:57:58.877225 4721 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9890688ebc3ee921a428803bbd6dcad304daf1868a81b11d4a6efd30763fb146\": container with ID starting with 9890688ebc3ee921a428803bbd6dcad304daf1868a81b11d4a6efd30763fb146 not found: ID does not exist" containerID="9890688ebc3ee921a428803bbd6dcad304daf1868a81b11d4a6efd30763fb146" Jan 28 18:57:58 crc kubenswrapper[4721]: I0128 18:57:58.877282 4721 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9890688ebc3ee921a428803bbd6dcad304daf1868a81b11d4a6efd30763fb146"} err="failed to get container status \"9890688ebc3ee921a428803bbd6dcad304daf1868a81b11d4a6efd30763fb146\": rpc error: code = NotFound desc = could not find container \"9890688ebc3ee921a428803bbd6dcad304daf1868a81b11d4a6efd30763fb146\": container with ID starting with 9890688ebc3ee921a428803bbd6dcad304daf1868a81b11d4a6efd30763fb146 not found: ID does not exist" Jan 28 18:57:58 crc kubenswrapper[4721]: I0128 18:57:58.877315 4721 scope.go:117] "RemoveContainer" containerID="871264cd7070bf5cb51271c73c163eb594fe9be2bc6f57700783c97bf1e87720" Jan 28 18:57:58 crc kubenswrapper[4721]: E0128 18:57:58.878082 4721 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"871264cd7070bf5cb51271c73c163eb594fe9be2bc6f57700783c97bf1e87720\": container with ID starting with 871264cd7070bf5cb51271c73c163eb594fe9be2bc6f57700783c97bf1e87720 not found: ID does not exist" containerID="871264cd7070bf5cb51271c73c163eb594fe9be2bc6f57700783c97bf1e87720" Jan 28 18:57:58 crc kubenswrapper[4721]: I0128 18:57:58.878128 4721 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"871264cd7070bf5cb51271c73c163eb594fe9be2bc6f57700783c97bf1e87720"} err="failed to get container status \"871264cd7070bf5cb51271c73c163eb594fe9be2bc6f57700783c97bf1e87720\": rpc error: code = NotFound desc = could not find container \"871264cd7070bf5cb51271c73c163eb594fe9be2bc6f57700783c97bf1e87720\": container with ID starting with 
871264cd7070bf5cb51271c73c163eb594fe9be2bc6f57700783c97bf1e87720 not found: ID does not exist" Jan 28 18:57:59 crc kubenswrapper[4721]: I0128 18:57:59.542741 4721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="653fc683-6178-446d-9cf2-4ae9e3e0029e" path="/var/lib/kubelet/pods/653fc683-6178-446d-9cf2-4ae9e3e0029e/volumes" Jan 28 18:58:00 crc kubenswrapper[4721]: I0128 18:58:00.093933 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 28 18:58:00 crc kubenswrapper[4721]: I0128 18:58:00.796026 4721 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f8b0f370-ca44-4fcb-bed3-63f4d45dcd21" containerName="ceilometer-central-agent" containerID="cri-o://b2754e6a6378675e073300aa4db0ba5469f87d571247341c61288e7c2bb506e7" gracePeriod=30 Jan 28 18:58:00 crc kubenswrapper[4721]: I0128 18:58:00.796906 4721 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f8b0f370-ca44-4fcb-bed3-63f4d45dcd21" containerName="proxy-httpd" containerID="cri-o://cccb61a874157bc40e5b92f4c098c727f3b5b8630409a9d679306d696cb1df8c" gracePeriod=30 Jan 28 18:58:00 crc kubenswrapper[4721]: I0128 18:58:00.796988 4721 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f8b0f370-ca44-4fcb-bed3-63f4d45dcd21" containerName="ceilometer-notification-agent" containerID="cri-o://4dcc8344383e75462e745d7e290edf92cf03128fc5048679c7605c7a83a88809" gracePeriod=30 Jan 28 18:58:00 crc kubenswrapper[4721]: I0128 18:58:00.797429 4721 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f8b0f370-ca44-4fcb-bed3-63f4d45dcd21" containerName="sg-core" containerID="cri-o://be812b814d31196564bdf751eed2821a2040ddc225244b4426a5cd7d15164c42" gracePeriod=30 Jan 28 18:58:01 crc kubenswrapper[4721]: I0128 18:58:01.224642 4721 patch_prober.go:28] interesting pod/machine-config-daemon-76rx2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 18:58:01 crc kubenswrapper[4721]: I0128 18:58:01.224718 4721 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-76rx2" podUID="6e3427a4-9a03-4a08-bf7f-7a5e96290ad6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 18:58:01 crc kubenswrapper[4721]: I0128 18:58:01.822892 4721 generic.go:334] "Generic (PLEG): container finished" podID="f8b0f370-ca44-4fcb-bed3-63f4d45dcd21" containerID="cccb61a874157bc40e5b92f4c098c727f3b5b8630409a9d679306d696cb1df8c" exitCode=0 Jan 28 18:58:01 crc kubenswrapper[4721]: I0128 18:58:01.823337 4721 generic.go:334] "Generic (PLEG): container finished" podID="f8b0f370-ca44-4fcb-bed3-63f4d45dcd21" containerID="be812b814d31196564bdf751eed2821a2040ddc225244b4426a5cd7d15164c42" exitCode=2 Jan 28 18:58:01 crc kubenswrapper[4721]: I0128 18:58:01.823355 4721 generic.go:334] "Generic (PLEG): container finished" podID="f8b0f370-ca44-4fcb-bed3-63f4d45dcd21" containerID="4dcc8344383e75462e745d7e290edf92cf03128fc5048679c7605c7a83a88809" exitCode=0 Jan 28 18:58:01 crc kubenswrapper[4721]: I0128 18:58:01.822990 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"f8b0f370-ca44-4fcb-bed3-63f4d45dcd21","Type":"ContainerDied","Data":"cccb61a874157bc40e5b92f4c098c727f3b5b8630409a9d679306d696cb1df8c"} Jan 28 18:58:01 crc kubenswrapper[4721]: I0128 18:58:01.823409 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f8b0f370-ca44-4fcb-bed3-63f4d45dcd21","Type":"ContainerDied","Data":"be812b814d31196564bdf751eed2821a2040ddc225244b4426a5cd7d15164c42"} Jan 28 18:58:01 crc kubenswrapper[4721]: I0128 18:58:01.823429 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f8b0f370-ca44-4fcb-bed3-63f4d45dcd21","Type":"ContainerDied","Data":"4dcc8344383e75462e745d7e290edf92cf03128fc5048679c7605c7a83a88809"} Jan 28 18:58:02 crc kubenswrapper[4721]: I0128 18:58:02.836414 4721 generic.go:334] "Generic (PLEG): container finished" podID="d055f2af-0a9e-4a1e-af6b-b15c0287fc72" containerID="e9f5996bd09b4c3e2461f29607268b2a47c043cb6c31391e226db5a728ba00ec" exitCode=0 Jan 28 18:58:02 crc kubenswrapper[4721]: I0128 18:58:02.836520 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-lfg9f" event={"ID":"d055f2af-0a9e-4a1e-af6b-b15c0287fc72","Type":"ContainerDied","Data":"e9f5996bd09b4c3e2461f29607268b2a47c043cb6c31391e226db5a728ba00ec"} Jan 28 18:58:03 crc kubenswrapper[4721]: I0128 18:58:03.500680 4721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 28 18:58:03 crc kubenswrapper[4721]: I0128 18:58:03.598828 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f8b0f370-ca44-4fcb-bed3-63f4d45dcd21-log-httpd\") pod \"f8b0f370-ca44-4fcb-bed3-63f4d45dcd21\" (UID: \"f8b0f370-ca44-4fcb-bed3-63f4d45dcd21\") " Jan 28 18:58:03 crc kubenswrapper[4721]: I0128 18:58:03.598894 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f8b0f370-ca44-4fcb-bed3-63f4d45dcd21-config-data\") pod \"f8b0f370-ca44-4fcb-bed3-63f4d45dcd21\" (UID: \"f8b0f370-ca44-4fcb-bed3-63f4d45dcd21\") " Jan 28 18:58:03 crc kubenswrapper[4721]: I0128 18:58:03.599063 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f8b0f370-ca44-4fcb-bed3-63f4d45dcd21-scripts\") pod \"f8b0f370-ca44-4fcb-bed3-63f4d45dcd21\" (UID: \"f8b0f370-ca44-4fcb-bed3-63f4d45dcd21\") " Jan 28 18:58:03 crc kubenswrapper[4721]: I0128 18:58:03.599124 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p78f2\" (UniqueName: \"kubernetes.io/projected/f8b0f370-ca44-4fcb-bed3-63f4d45dcd21-kube-api-access-p78f2\") pod \"f8b0f370-ca44-4fcb-bed3-63f4d45dcd21\" (UID: \"f8b0f370-ca44-4fcb-bed3-63f4d45dcd21\") " Jan 28 18:58:03 crc kubenswrapper[4721]: I0128 18:58:03.599193 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f8b0f370-ca44-4fcb-bed3-63f4d45dcd21-combined-ca-bundle\") pod \"f8b0f370-ca44-4fcb-bed3-63f4d45dcd21\" (UID: \"f8b0f370-ca44-4fcb-bed3-63f4d45dcd21\") " Jan 28 18:58:03 crc kubenswrapper[4721]: I0128 18:58:03.599327 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f8b0f370-ca44-4fcb-bed3-63f4d45dcd21-run-httpd\") pod \"f8b0f370-ca44-4fcb-bed3-63f4d45dcd21\" (UID: 
\"f8b0f370-ca44-4fcb-bed3-63f4d45dcd21\") " Jan 28 18:58:03 crc kubenswrapper[4721]: I0128 18:58:03.599362 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f8b0f370-ca44-4fcb-bed3-63f4d45dcd21-sg-core-conf-yaml\") pod \"f8b0f370-ca44-4fcb-bed3-63f4d45dcd21\" (UID: \"f8b0f370-ca44-4fcb-bed3-63f4d45dcd21\") " Jan 28 18:58:03 crc kubenswrapper[4721]: I0128 18:58:03.599392 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f8b0f370-ca44-4fcb-bed3-63f4d45dcd21-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "f8b0f370-ca44-4fcb-bed3-63f4d45dcd21" (UID: "f8b0f370-ca44-4fcb-bed3-63f4d45dcd21"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 18:58:03 crc kubenswrapper[4721]: I0128 18:58:03.599871 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f8b0f370-ca44-4fcb-bed3-63f4d45dcd21-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "f8b0f370-ca44-4fcb-bed3-63f4d45dcd21" (UID: "f8b0f370-ca44-4fcb-bed3-63f4d45dcd21"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 18:58:03 crc kubenswrapper[4721]: I0128 18:58:03.600986 4721 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f8b0f370-ca44-4fcb-bed3-63f4d45dcd21-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 28 18:58:03 crc kubenswrapper[4721]: I0128 18:58:03.601016 4721 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f8b0f370-ca44-4fcb-bed3-63f4d45dcd21-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 28 18:58:03 crc kubenswrapper[4721]: I0128 18:58:03.610509 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f8b0f370-ca44-4fcb-bed3-63f4d45dcd21-kube-api-access-p78f2" (OuterVolumeSpecName: "kube-api-access-p78f2") pod "f8b0f370-ca44-4fcb-bed3-63f4d45dcd21" (UID: "f8b0f370-ca44-4fcb-bed3-63f4d45dcd21"). InnerVolumeSpecName "kube-api-access-p78f2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:58:03 crc kubenswrapper[4721]: I0128 18:58:03.617642 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f8b0f370-ca44-4fcb-bed3-63f4d45dcd21-scripts" (OuterVolumeSpecName: "scripts") pod "f8b0f370-ca44-4fcb-bed3-63f4d45dcd21" (UID: "f8b0f370-ca44-4fcb-bed3-63f4d45dcd21"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:58:03 crc kubenswrapper[4721]: I0128 18:58:03.637049 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f8b0f370-ca44-4fcb-bed3-63f4d45dcd21-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "f8b0f370-ca44-4fcb-bed3-63f4d45dcd21" (UID: "f8b0f370-ca44-4fcb-bed3-63f4d45dcd21"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:58:03 crc kubenswrapper[4721]: I0128 18:58:03.703204 4721 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f8b0f370-ca44-4fcb-bed3-63f4d45dcd21-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 18:58:03 crc kubenswrapper[4721]: I0128 18:58:03.703258 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p78f2\" (UniqueName: \"kubernetes.io/projected/f8b0f370-ca44-4fcb-bed3-63f4d45dcd21-kube-api-access-p78f2\") on node \"crc\" DevicePath \"\"" Jan 28 18:58:03 crc kubenswrapper[4721]: I0128 18:58:03.703276 4721 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f8b0f370-ca44-4fcb-bed3-63f4d45dcd21-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 28 18:58:03 crc kubenswrapper[4721]: I0128 18:58:03.709769 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f8b0f370-ca44-4fcb-bed3-63f4d45dcd21-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f8b0f370-ca44-4fcb-bed3-63f4d45dcd21" (UID: "f8b0f370-ca44-4fcb-bed3-63f4d45dcd21"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:58:03 crc kubenswrapper[4721]: I0128 18:58:03.765444 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f8b0f370-ca44-4fcb-bed3-63f4d45dcd21-config-data" (OuterVolumeSpecName: "config-data") pod "f8b0f370-ca44-4fcb-bed3-63f4d45dcd21" (UID: "f8b0f370-ca44-4fcb-bed3-63f4d45dcd21"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:58:03 crc kubenswrapper[4721]: I0128 18:58:03.805617 4721 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f8b0f370-ca44-4fcb-bed3-63f4d45dcd21-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 18:58:03 crc kubenswrapper[4721]: I0128 18:58:03.805661 4721 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f8b0f370-ca44-4fcb-bed3-63f4d45dcd21-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 18:58:03 crc kubenswrapper[4721]: I0128 18:58:03.851450 4721 generic.go:334] "Generic (PLEG): container finished" podID="f8b0f370-ca44-4fcb-bed3-63f4d45dcd21" containerID="b2754e6a6378675e073300aa4db0ba5469f87d571247341c61288e7c2bb506e7" exitCode=0 Jan 28 18:58:03 crc kubenswrapper[4721]: I0128 18:58:03.851523 4721 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 28 18:58:03 crc kubenswrapper[4721]: I0128 18:58:03.851569 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f8b0f370-ca44-4fcb-bed3-63f4d45dcd21","Type":"ContainerDied","Data":"b2754e6a6378675e073300aa4db0ba5469f87d571247341c61288e7c2bb506e7"} Jan 28 18:58:03 crc kubenswrapper[4721]: I0128 18:58:03.851607 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f8b0f370-ca44-4fcb-bed3-63f4d45dcd21","Type":"ContainerDied","Data":"3d1ed8fa10b9c7ca20ffa273ea3cb972c4876721bae475d5e2d55851c8121da8"} Jan 28 18:58:03 crc kubenswrapper[4721]: I0128 18:58:03.851626 4721 scope.go:117] "RemoveContainer" containerID="cccb61a874157bc40e5b92f4c098c727f3b5b8630409a9d679306d696cb1df8c" Jan 28 18:58:03 crc kubenswrapper[4721]: I0128 18:58:03.918475 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 28 18:58:03 crc kubenswrapper[4721]: I0128 18:58:03.920345 4721 scope.go:117] "RemoveContainer" containerID="be812b814d31196564bdf751eed2821a2040ddc225244b4426a5cd7d15164c42" Jan 28 18:58:03 crc kubenswrapper[4721]: I0128 18:58:03.934749 4721 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 28 18:58:03 crc kubenswrapper[4721]: I0128 18:58:03.965149 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 28 18:58:03 crc kubenswrapper[4721]: E0128 18:58:03.965794 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f8b0f370-ca44-4fcb-bed3-63f4d45dcd21" containerName="proxy-httpd" Jan 28 18:58:03 crc kubenswrapper[4721]: I0128 18:58:03.965820 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="f8b0f370-ca44-4fcb-bed3-63f4d45dcd21" containerName="proxy-httpd" Jan 28 18:58:03 crc kubenswrapper[4721]: E0128 18:58:03.965849 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="653fc683-6178-446d-9cf2-4ae9e3e0029e" containerName="extract-content" Jan 28 18:58:03 crc kubenswrapper[4721]: I0128 18:58:03.965857 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="653fc683-6178-446d-9cf2-4ae9e3e0029e" containerName="extract-content" Jan 28 18:58:03 crc kubenswrapper[4721]: E0128 18:58:03.965875 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f8b0f370-ca44-4fcb-bed3-63f4d45dcd21" containerName="sg-core" Jan 28 18:58:03 crc kubenswrapper[4721]: I0128 18:58:03.965883 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="f8b0f370-ca44-4fcb-bed3-63f4d45dcd21" containerName="sg-core" Jan 28 18:58:03 crc kubenswrapper[4721]: E0128 18:58:03.965901 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f8b0f370-ca44-4fcb-bed3-63f4d45dcd21" containerName="ceilometer-notification-agent" Jan 28 18:58:03 crc kubenswrapper[4721]: I0128 18:58:03.965910 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="f8b0f370-ca44-4fcb-bed3-63f4d45dcd21" containerName="ceilometer-notification-agent" Jan 28 18:58:03 crc kubenswrapper[4721]: E0128 18:58:03.965950 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="653fc683-6178-446d-9cf2-4ae9e3e0029e" containerName="extract-utilities" Jan 28 18:58:03 crc kubenswrapper[4721]: I0128 18:58:03.965960 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="653fc683-6178-446d-9cf2-4ae9e3e0029e" containerName="extract-utilities" Jan 28 18:58:03 crc kubenswrapper[4721]: E0128 18:58:03.965981 4721 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="f8b0f370-ca44-4fcb-bed3-63f4d45dcd21" containerName="ceilometer-central-agent" Jan 28 18:58:03 crc kubenswrapper[4721]: I0128 18:58:03.965989 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="f8b0f370-ca44-4fcb-bed3-63f4d45dcd21" containerName="ceilometer-central-agent" Jan 28 18:58:03 crc kubenswrapper[4721]: E0128 18:58:03.966009 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="653fc683-6178-446d-9cf2-4ae9e3e0029e" containerName="registry-server" Jan 28 18:58:03 crc kubenswrapper[4721]: I0128 18:58:03.966017 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="653fc683-6178-446d-9cf2-4ae9e3e0029e" containerName="registry-server" Jan 28 18:58:03 crc kubenswrapper[4721]: I0128 18:58:03.966279 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="f8b0f370-ca44-4fcb-bed3-63f4d45dcd21" containerName="sg-core" Jan 28 18:58:03 crc kubenswrapper[4721]: I0128 18:58:03.966301 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="653fc683-6178-446d-9cf2-4ae9e3e0029e" containerName="registry-server" Jan 28 18:58:03 crc kubenswrapper[4721]: I0128 18:58:03.966318 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="f8b0f370-ca44-4fcb-bed3-63f4d45dcd21" containerName="ceilometer-central-agent" Jan 28 18:58:03 crc kubenswrapper[4721]: I0128 18:58:03.966334 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="f8b0f370-ca44-4fcb-bed3-63f4d45dcd21" containerName="ceilometer-notification-agent" Jan 28 18:58:03 crc kubenswrapper[4721]: I0128 18:58:03.966354 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="f8b0f370-ca44-4fcb-bed3-63f4d45dcd21" containerName="proxy-httpd" Jan 28 18:58:03 crc kubenswrapper[4721]: I0128 18:58:03.968898 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 28 18:58:03 crc kubenswrapper[4721]: I0128 18:58:03.971678 4721 scope.go:117] "RemoveContainer" containerID="4dcc8344383e75462e745d7e290edf92cf03128fc5048679c7605c7a83a88809" Jan 28 18:58:03 crc kubenswrapper[4721]: I0128 18:58:03.972500 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 28 18:58:03 crc kubenswrapper[4721]: I0128 18:58:03.978325 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 28 18:58:03 crc kubenswrapper[4721]: I0128 18:58:03.987728 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 28 18:58:04 crc kubenswrapper[4721]: I0128 18:58:04.011448 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bd88c773-2665-43ab-a9b4-e0f740fda3c7-run-httpd\") pod \"ceilometer-0\" (UID: \"bd88c773-2665-43ab-a9b4-e0f740fda3c7\") " pod="openstack/ceilometer-0" Jan 28 18:58:04 crc kubenswrapper[4721]: I0128 18:58:04.011512 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bd88c773-2665-43ab-a9b4-e0f740fda3c7-config-data\") pod \"ceilometer-0\" (UID: \"bd88c773-2665-43ab-a9b4-e0f740fda3c7\") " pod="openstack/ceilometer-0" Jan 28 18:58:04 crc kubenswrapper[4721]: I0128 18:58:04.011578 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/bd88c773-2665-43ab-a9b4-e0f740fda3c7-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"bd88c773-2665-43ab-a9b4-e0f740fda3c7\") " pod="openstack/ceilometer-0" Jan 28 18:58:04 crc kubenswrapper[4721]: I0128 18:58:04.011630 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd88c773-2665-43ab-a9b4-e0f740fda3c7-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"bd88c773-2665-43ab-a9b4-e0f740fda3c7\") " pod="openstack/ceilometer-0" Jan 28 18:58:04 crc kubenswrapper[4721]: I0128 18:58:04.011665 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bd88c773-2665-43ab-a9b4-e0f740fda3c7-log-httpd\") pod \"ceilometer-0\" (UID: \"bd88c773-2665-43ab-a9b4-e0f740fda3c7\") " pod="openstack/ceilometer-0" Jan 28 18:58:04 crc kubenswrapper[4721]: I0128 18:58:04.011699 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bd88c773-2665-43ab-a9b4-e0f740fda3c7-scripts\") pod \"ceilometer-0\" (UID: \"bd88c773-2665-43ab-a9b4-e0f740fda3c7\") " pod="openstack/ceilometer-0" Jan 28 18:58:04 crc kubenswrapper[4721]: I0128 18:58:04.011730 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9x8zh\" (UniqueName: \"kubernetes.io/projected/bd88c773-2665-43ab-a9b4-e0f740fda3c7-kube-api-access-9x8zh\") pod \"ceilometer-0\" (UID: \"bd88c773-2665-43ab-a9b4-e0f740fda3c7\") " pod="openstack/ceilometer-0" Jan 28 18:58:04 crc kubenswrapper[4721]: I0128 18:58:04.076631 4721 scope.go:117] "RemoveContainer" containerID="b2754e6a6378675e073300aa4db0ba5469f87d571247341c61288e7c2bb506e7" Jan 28 18:58:04 crc kubenswrapper[4721]: I0128 
18:58:04.114468 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bd88c773-2665-43ab-a9b4-e0f740fda3c7-log-httpd\") pod \"ceilometer-0\" (UID: \"bd88c773-2665-43ab-a9b4-e0f740fda3c7\") " pod="openstack/ceilometer-0" Jan 28 18:58:04 crc kubenswrapper[4721]: I0128 18:58:04.114550 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bd88c773-2665-43ab-a9b4-e0f740fda3c7-scripts\") pod \"ceilometer-0\" (UID: \"bd88c773-2665-43ab-a9b4-e0f740fda3c7\") " pod="openstack/ceilometer-0" Jan 28 18:58:04 crc kubenswrapper[4721]: I0128 18:58:04.114596 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9x8zh\" (UniqueName: \"kubernetes.io/projected/bd88c773-2665-43ab-a9b4-e0f740fda3c7-kube-api-access-9x8zh\") pod \"ceilometer-0\" (UID: \"bd88c773-2665-43ab-a9b4-e0f740fda3c7\") " pod="openstack/ceilometer-0" Jan 28 18:58:04 crc kubenswrapper[4721]: I0128 18:58:04.114634 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bd88c773-2665-43ab-a9b4-e0f740fda3c7-run-httpd\") pod \"ceilometer-0\" (UID: \"bd88c773-2665-43ab-a9b4-e0f740fda3c7\") " pod="openstack/ceilometer-0" Jan 28 18:58:04 crc kubenswrapper[4721]: I0128 18:58:04.115018 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bd88c773-2665-43ab-a9b4-e0f740fda3c7-config-data\") pod \"ceilometer-0\" (UID: \"bd88c773-2665-43ab-a9b4-e0f740fda3c7\") " pod="openstack/ceilometer-0" Jan 28 18:58:04 crc kubenswrapper[4721]: I0128 18:58:04.115085 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/bd88c773-2665-43ab-a9b4-e0f740fda3c7-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"bd88c773-2665-43ab-a9b4-e0f740fda3c7\") " pod="openstack/ceilometer-0" Jan 28 18:58:04 crc kubenswrapper[4721]: I0128 18:58:04.115137 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd88c773-2665-43ab-a9b4-e0f740fda3c7-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"bd88c773-2665-43ab-a9b4-e0f740fda3c7\") " pod="openstack/ceilometer-0" Jan 28 18:58:04 crc kubenswrapper[4721]: I0128 18:58:04.119774 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bd88c773-2665-43ab-a9b4-e0f740fda3c7-run-httpd\") pod \"ceilometer-0\" (UID: \"bd88c773-2665-43ab-a9b4-e0f740fda3c7\") " pod="openstack/ceilometer-0" Jan 28 18:58:04 crc kubenswrapper[4721]: I0128 18:58:04.120073 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bd88c773-2665-43ab-a9b4-e0f740fda3c7-log-httpd\") pod \"ceilometer-0\" (UID: \"bd88c773-2665-43ab-a9b4-e0f740fda3c7\") " pod="openstack/ceilometer-0" Jan 28 18:58:04 crc kubenswrapper[4721]: I0128 18:58:04.120704 4721 scope.go:117] "RemoveContainer" containerID="cccb61a874157bc40e5b92f4c098c727f3b5b8630409a9d679306d696cb1df8c" Jan 28 18:58:04 crc kubenswrapper[4721]: E0128 18:58:04.126044 4721 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cccb61a874157bc40e5b92f4c098c727f3b5b8630409a9d679306d696cb1df8c\": container 
with ID starting with cccb61a874157bc40e5b92f4c098c727f3b5b8630409a9d679306d696cb1df8c not found: ID does not exist" containerID="cccb61a874157bc40e5b92f4c098c727f3b5b8630409a9d679306d696cb1df8c" Jan 28 18:58:04 crc kubenswrapper[4721]: I0128 18:58:04.126108 4721 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cccb61a874157bc40e5b92f4c098c727f3b5b8630409a9d679306d696cb1df8c"} err="failed to get container status \"cccb61a874157bc40e5b92f4c098c727f3b5b8630409a9d679306d696cb1df8c\": rpc error: code = NotFound desc = could not find container \"cccb61a874157bc40e5b92f4c098c727f3b5b8630409a9d679306d696cb1df8c\": container with ID starting with cccb61a874157bc40e5b92f4c098c727f3b5b8630409a9d679306d696cb1df8c not found: ID does not exist" Jan 28 18:58:04 crc kubenswrapper[4721]: I0128 18:58:04.126148 4721 scope.go:117] "RemoveContainer" containerID="be812b814d31196564bdf751eed2821a2040ddc225244b4426a5cd7d15164c42" Jan 28 18:58:04 crc kubenswrapper[4721]: E0128 18:58:04.127639 4721 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"be812b814d31196564bdf751eed2821a2040ddc225244b4426a5cd7d15164c42\": container with ID starting with be812b814d31196564bdf751eed2821a2040ddc225244b4426a5cd7d15164c42 not found: ID does not exist" containerID="be812b814d31196564bdf751eed2821a2040ddc225244b4426a5cd7d15164c42" Jan 28 18:58:04 crc kubenswrapper[4721]: I0128 18:58:04.127676 4721 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"be812b814d31196564bdf751eed2821a2040ddc225244b4426a5cd7d15164c42"} err="failed to get container status \"be812b814d31196564bdf751eed2821a2040ddc225244b4426a5cd7d15164c42\": rpc error: code = NotFound desc = could not find container \"be812b814d31196564bdf751eed2821a2040ddc225244b4426a5cd7d15164c42\": container with ID starting with be812b814d31196564bdf751eed2821a2040ddc225244b4426a5cd7d15164c42 not found: ID does not exist" Jan 28 18:58:04 crc kubenswrapper[4721]: I0128 18:58:04.127702 4721 scope.go:117] "RemoveContainer" containerID="4dcc8344383e75462e745d7e290edf92cf03128fc5048679c7605c7a83a88809" Jan 28 18:58:04 crc kubenswrapper[4721]: E0128 18:58:04.149820 4721 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4dcc8344383e75462e745d7e290edf92cf03128fc5048679c7605c7a83a88809\": container with ID starting with 4dcc8344383e75462e745d7e290edf92cf03128fc5048679c7605c7a83a88809 not found: ID does not exist" containerID="4dcc8344383e75462e745d7e290edf92cf03128fc5048679c7605c7a83a88809" Jan 28 18:58:04 crc kubenswrapper[4721]: I0128 18:58:04.149888 4721 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4dcc8344383e75462e745d7e290edf92cf03128fc5048679c7605c7a83a88809"} err="failed to get container status \"4dcc8344383e75462e745d7e290edf92cf03128fc5048679c7605c7a83a88809\": rpc error: code = NotFound desc = could not find container \"4dcc8344383e75462e745d7e290edf92cf03128fc5048679c7605c7a83a88809\": container with ID starting with 4dcc8344383e75462e745d7e290edf92cf03128fc5048679c7605c7a83a88809 not found: ID does not exist" Jan 28 18:58:04 crc kubenswrapper[4721]: I0128 18:58:04.149927 4721 scope.go:117] "RemoveContainer" containerID="b2754e6a6378675e073300aa4db0ba5469f87d571247341c61288e7c2bb506e7" Jan 28 18:58:04 crc kubenswrapper[4721]: E0128 18:58:04.150412 4721 log.go:32] "ContainerStatus from runtime 
service failed" err="rpc error: code = NotFound desc = could not find container \"b2754e6a6378675e073300aa4db0ba5469f87d571247341c61288e7c2bb506e7\": container with ID starting with b2754e6a6378675e073300aa4db0ba5469f87d571247341c61288e7c2bb506e7 not found: ID does not exist" containerID="b2754e6a6378675e073300aa4db0ba5469f87d571247341c61288e7c2bb506e7" Jan 28 18:58:04 crc kubenswrapper[4721]: I0128 18:58:04.150447 4721 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b2754e6a6378675e073300aa4db0ba5469f87d571247341c61288e7c2bb506e7"} err="failed to get container status \"b2754e6a6378675e073300aa4db0ba5469f87d571247341c61288e7c2bb506e7\": rpc error: code = NotFound desc = could not find container \"b2754e6a6378675e073300aa4db0ba5469f87d571247341c61288e7c2bb506e7\": container with ID starting with b2754e6a6378675e073300aa4db0ba5469f87d571247341c61288e7c2bb506e7 not found: ID does not exist" Jan 28 18:58:04 crc kubenswrapper[4721]: I0128 18:58:04.185501 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/bd88c773-2665-43ab-a9b4-e0f740fda3c7-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"bd88c773-2665-43ab-a9b4-e0f740fda3c7\") " pod="openstack/ceilometer-0" Jan 28 18:58:04 crc kubenswrapper[4721]: I0128 18:58:04.186623 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bd88c773-2665-43ab-a9b4-e0f740fda3c7-scripts\") pod \"ceilometer-0\" (UID: \"bd88c773-2665-43ab-a9b4-e0f740fda3c7\") " pod="openstack/ceilometer-0" Jan 28 18:58:04 crc kubenswrapper[4721]: I0128 18:58:04.189690 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bd88c773-2665-43ab-a9b4-e0f740fda3c7-config-data\") pod \"ceilometer-0\" (UID: \"bd88c773-2665-43ab-a9b4-e0f740fda3c7\") " pod="openstack/ceilometer-0" Jan 28 18:58:04 crc kubenswrapper[4721]: I0128 18:58:04.192542 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd88c773-2665-43ab-a9b4-e0f740fda3c7-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"bd88c773-2665-43ab-a9b4-e0f740fda3c7\") " pod="openstack/ceilometer-0" Jan 28 18:58:04 crc kubenswrapper[4721]: I0128 18:58:04.210571 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9x8zh\" (UniqueName: \"kubernetes.io/projected/bd88c773-2665-43ab-a9b4-e0f740fda3c7-kube-api-access-9x8zh\") pod \"ceilometer-0\" (UID: \"bd88c773-2665-43ab-a9b4-e0f740fda3c7\") " pod="openstack/ceilometer-0" Jan 28 18:58:04 crc kubenswrapper[4721]: I0128 18:58:04.321305 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 28 18:58:04 crc kubenswrapper[4721]: I0128 18:58:04.430404 4721 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-lfg9f" Jan 28 18:58:04 crc kubenswrapper[4721]: I0128 18:58:04.626958 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d055f2af-0a9e-4a1e-af6b-b15c0287fc72-scripts\") pod \"d055f2af-0a9e-4a1e-af6b-b15c0287fc72\" (UID: \"d055f2af-0a9e-4a1e-af6b-b15c0287fc72\") " Jan 28 18:58:04 crc kubenswrapper[4721]: I0128 18:58:04.627336 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d055f2af-0a9e-4a1e-af6b-b15c0287fc72-config-data\") pod \"d055f2af-0a9e-4a1e-af6b-b15c0287fc72\" (UID: \"d055f2af-0a9e-4a1e-af6b-b15c0287fc72\") " Jan 28 18:58:04 crc kubenswrapper[4721]: I0128 18:58:04.627361 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d055f2af-0a9e-4a1e-af6b-b15c0287fc72-combined-ca-bundle\") pod \"d055f2af-0a9e-4a1e-af6b-b15c0287fc72\" (UID: \"d055f2af-0a9e-4a1e-af6b-b15c0287fc72\") " Jan 28 18:58:04 crc kubenswrapper[4721]: I0128 18:58:04.628218 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tczzz\" (UniqueName: \"kubernetes.io/projected/d055f2af-0a9e-4a1e-af6b-b15c0287fc72-kube-api-access-tczzz\") pod \"d055f2af-0a9e-4a1e-af6b-b15c0287fc72\" (UID: \"d055f2af-0a9e-4a1e-af6b-b15c0287fc72\") " Jan 28 18:58:04 crc kubenswrapper[4721]: I0128 18:58:04.633347 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d055f2af-0a9e-4a1e-af6b-b15c0287fc72-scripts" (OuterVolumeSpecName: "scripts") pod "d055f2af-0a9e-4a1e-af6b-b15c0287fc72" (UID: "d055f2af-0a9e-4a1e-af6b-b15c0287fc72"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:58:04 crc kubenswrapper[4721]: I0128 18:58:04.633604 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d055f2af-0a9e-4a1e-af6b-b15c0287fc72-kube-api-access-tczzz" (OuterVolumeSpecName: "kube-api-access-tczzz") pod "d055f2af-0a9e-4a1e-af6b-b15c0287fc72" (UID: "d055f2af-0a9e-4a1e-af6b-b15c0287fc72"). InnerVolumeSpecName "kube-api-access-tczzz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:58:04 crc kubenswrapper[4721]: I0128 18:58:04.658566 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d055f2af-0a9e-4a1e-af6b-b15c0287fc72-config-data" (OuterVolumeSpecName: "config-data") pod "d055f2af-0a9e-4a1e-af6b-b15c0287fc72" (UID: "d055f2af-0a9e-4a1e-af6b-b15c0287fc72"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:58:04 crc kubenswrapper[4721]: I0128 18:58:04.658590 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d055f2af-0a9e-4a1e-af6b-b15c0287fc72-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d055f2af-0a9e-4a1e-af6b-b15c0287fc72" (UID: "d055f2af-0a9e-4a1e-af6b-b15c0287fc72"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:58:04 crc kubenswrapper[4721]: I0128 18:58:04.731058 4721 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d055f2af-0a9e-4a1e-af6b-b15c0287fc72-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 18:58:04 crc kubenswrapper[4721]: I0128 18:58:04.731103 4721 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d055f2af-0a9e-4a1e-af6b-b15c0287fc72-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 18:58:04 crc kubenswrapper[4721]: I0128 18:58:04.731115 4721 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d055f2af-0a9e-4a1e-af6b-b15c0287fc72-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 18:58:04 crc kubenswrapper[4721]: I0128 18:58:04.731126 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tczzz\" (UniqueName: \"kubernetes.io/projected/d055f2af-0a9e-4a1e-af6b-b15c0287fc72-kube-api-access-tczzz\") on node \"crc\" DevicePath \"\"" Jan 28 18:58:04 crc kubenswrapper[4721]: W0128 18:58:04.831379 4721 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbd88c773_2665_43ab_a9b4_e0f740fda3c7.slice/crio-ebdcd3c14f4bb81668a28d39c6f76cce862921042a7abb344db29bbf7de1b012 WatchSource:0}: Error finding container ebdcd3c14f4bb81668a28d39c6f76cce862921042a7abb344db29bbf7de1b012: Status 404 returned error can't find the container with id ebdcd3c14f4bb81668a28d39c6f76cce862921042a7abb344db29bbf7de1b012 Jan 28 18:58:04 crc kubenswrapper[4721]: I0128 18:58:04.834948 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 28 18:58:04 crc kubenswrapper[4721]: I0128 18:58:04.865381 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-lfg9f" event={"ID":"d055f2af-0a9e-4a1e-af6b-b15c0287fc72","Type":"ContainerDied","Data":"90cb41b0f49bd7d9f09b81f6e3f14555290ecc34eddb5d6ba20f2507f5e951f7"} Jan 28 18:58:04 crc kubenswrapper[4721]: I0128 18:58:04.865429 4721 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="90cb41b0f49bd7d9f09b81f6e3f14555290ecc34eddb5d6ba20f2507f5e951f7" Jan 28 18:58:04 crc kubenswrapper[4721]: I0128 18:58:04.865494 4721 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-lfg9f" Jan 28 18:58:04 crc kubenswrapper[4721]: I0128 18:58:04.871450 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bd88c773-2665-43ab-a9b4-e0f740fda3c7","Type":"ContainerStarted","Data":"ebdcd3c14f4bb81668a28d39c6f76cce862921042a7abb344db29bbf7de1b012"} Jan 28 18:58:04 crc kubenswrapper[4721]: I0128 18:58:04.973695 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 28 18:58:04 crc kubenswrapper[4721]: E0128 18:58:04.974322 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d055f2af-0a9e-4a1e-af6b-b15c0287fc72" containerName="nova-cell0-conductor-db-sync" Jan 28 18:58:04 crc kubenswrapper[4721]: I0128 18:58:04.974344 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="d055f2af-0a9e-4a1e-af6b-b15c0287fc72" containerName="nova-cell0-conductor-db-sync" Jan 28 18:58:04 crc kubenswrapper[4721]: I0128 18:58:04.974630 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="d055f2af-0a9e-4a1e-af6b-b15c0287fc72" containerName="nova-cell0-conductor-db-sync" Jan 28 18:58:04 crc kubenswrapper[4721]: I0128 18:58:04.975741 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Jan 28 18:58:04 crc kubenswrapper[4721]: I0128 18:58:04.979844 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-tg5qd" Jan 28 18:58:04 crc kubenswrapper[4721]: I0128 18:58:04.983613 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Jan 28 18:58:04 crc kubenswrapper[4721]: I0128 18:58:04.985574 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 28 18:58:05 crc kubenswrapper[4721]: I0128 18:58:05.037681 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/083e541d-08f4-4dce-985e-341d17008dd4-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"083e541d-08f4-4dce-985e-341d17008dd4\") " pod="openstack/nova-cell0-conductor-0" Jan 28 18:58:05 crc kubenswrapper[4721]: I0128 18:58:05.038416 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xwl45\" (UniqueName: \"kubernetes.io/projected/083e541d-08f4-4dce-985e-341d17008dd4-kube-api-access-xwl45\") pod \"nova-cell0-conductor-0\" (UID: \"083e541d-08f4-4dce-985e-341d17008dd4\") " pod="openstack/nova-cell0-conductor-0" Jan 28 18:58:05 crc kubenswrapper[4721]: I0128 18:58:05.038513 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/083e541d-08f4-4dce-985e-341d17008dd4-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"083e541d-08f4-4dce-985e-341d17008dd4\") " pod="openstack/nova-cell0-conductor-0" Jan 28 18:58:05 crc kubenswrapper[4721]: I0128 18:58:05.140939 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xwl45\" (UniqueName: \"kubernetes.io/projected/083e541d-08f4-4dce-985e-341d17008dd4-kube-api-access-xwl45\") pod \"nova-cell0-conductor-0\" (UID: \"083e541d-08f4-4dce-985e-341d17008dd4\") " pod="openstack/nova-cell0-conductor-0" Jan 28 18:58:05 crc kubenswrapper[4721]: I0128 18:58:05.141040 4721 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/083e541d-08f4-4dce-985e-341d17008dd4-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"083e541d-08f4-4dce-985e-341d17008dd4\") " pod="openstack/nova-cell0-conductor-0" Jan 28 18:58:05 crc kubenswrapper[4721]: I0128 18:58:05.141104 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/083e541d-08f4-4dce-985e-341d17008dd4-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"083e541d-08f4-4dce-985e-341d17008dd4\") " pod="openstack/nova-cell0-conductor-0" Jan 28 18:58:05 crc kubenswrapper[4721]: I0128 18:58:05.148349 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/083e541d-08f4-4dce-985e-341d17008dd4-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"083e541d-08f4-4dce-985e-341d17008dd4\") " pod="openstack/nova-cell0-conductor-0" Jan 28 18:58:05 crc kubenswrapper[4721]: I0128 18:58:05.148472 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/083e541d-08f4-4dce-985e-341d17008dd4-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"083e541d-08f4-4dce-985e-341d17008dd4\") " pod="openstack/nova-cell0-conductor-0" Jan 28 18:58:05 crc kubenswrapper[4721]: I0128 18:58:05.162952 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xwl45\" (UniqueName: \"kubernetes.io/projected/083e541d-08f4-4dce-985e-341d17008dd4-kube-api-access-xwl45\") pod \"nova-cell0-conductor-0\" (UID: \"083e541d-08f4-4dce-985e-341d17008dd4\") " pod="openstack/nova-cell0-conductor-0" Jan 28 18:58:05 crc kubenswrapper[4721]: I0128 18:58:05.308707 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Jan 28 18:58:05 crc kubenswrapper[4721]: I0128 18:58:05.542496 4721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f8b0f370-ca44-4fcb-bed3-63f4d45dcd21" path="/var/lib/kubelet/pods/f8b0f370-ca44-4fcb-bed3-63f4d45dcd21/volumes" Jan 28 18:58:05 crc kubenswrapper[4721]: W0128 18:58:05.789391 4721 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod083e541d_08f4_4dce_985e_341d17008dd4.slice/crio-f4c61f308e2fd31785b53874608d2f8532e7a52a08a381f1c5bc42c3502aebe6 WatchSource:0}: Error finding container f4c61f308e2fd31785b53874608d2f8532e7a52a08a381f1c5bc42c3502aebe6: Status 404 returned error can't find the container with id f4c61f308e2fd31785b53874608d2f8532e7a52a08a381f1c5bc42c3502aebe6 Jan 28 18:58:05 crc kubenswrapper[4721]: I0128 18:58:05.789847 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 28 18:58:05 crc kubenswrapper[4721]: I0128 18:58:05.886527 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"083e541d-08f4-4dce-985e-341d17008dd4","Type":"ContainerStarted","Data":"f4c61f308e2fd31785b53874608d2f8532e7a52a08a381f1c5bc42c3502aebe6"} Jan 28 18:58:05 crc kubenswrapper[4721]: I0128 18:58:05.888498 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bd88c773-2665-43ab-a9b4-e0f740fda3c7","Type":"ContainerStarted","Data":"96350fa578c10e8810ee06cdbeadc483339a26faf1bd54638d3de9ca6206c90e"} Jan 28 18:58:06 crc kubenswrapper[4721]: I0128 18:58:06.903983 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"083e541d-08f4-4dce-985e-341d17008dd4","Type":"ContainerStarted","Data":"1965eb5aaec68f9c77fc69ae8ffbf7e22282387dd73b41fe9e2a44527bbc4007"} Jan 28 18:58:06 crc kubenswrapper[4721]: I0128 18:58:06.904599 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Jan 28 18:58:06 crc kubenswrapper[4721]: I0128 18:58:06.906930 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bd88c773-2665-43ab-a9b4-e0f740fda3c7","Type":"ContainerStarted","Data":"16d9fa6893834f59dd9a671a85afa694bb7b58f37deacc02176d804c0ff30f43"} Jan 28 18:58:06 crc kubenswrapper[4721]: I0128 18:58:06.925380 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.925348457 podStartE2EDuration="2.925348457s" podCreationTimestamp="2026-01-28 18:58:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:58:06.921487303 +0000 UTC m=+1452.646792873" watchObservedRunningTime="2026-01-28 18:58:06.925348457 +0000 UTC m=+1452.650654017" Jan 28 18:58:07 crc kubenswrapper[4721]: I0128 18:58:07.936936 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bd88c773-2665-43ab-a9b4-e0f740fda3c7","Type":"ContainerStarted","Data":"5cd7d09468c06345a0ec72a8cb77bed2790c5125162182f605ebbd3e36b7b177"} Jan 28 18:58:10 crc kubenswrapper[4721]: I0128 18:58:10.970379 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"bd88c773-2665-43ab-a9b4-e0f740fda3c7","Type":"ContainerStarted","Data":"51d2b16faa498da52e7da63f3733407f6b97819d405e5aa375a08a01b228f72e"} Jan 28 18:58:10 crc kubenswrapper[4721]: I0128 18:58:10.971028 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 28 18:58:11 crc kubenswrapper[4721]: I0128 18:58:11.000467 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=3.022468271 podStartE2EDuration="8.000436018s" podCreationTimestamp="2026-01-28 18:58:03 +0000 UTC" firstStartedPulling="2026-01-28 18:58:04.834479607 +0000 UTC m=+1450.559785167" lastFinishedPulling="2026-01-28 18:58:09.812447354 +0000 UTC m=+1455.537752914" observedRunningTime="2026-01-28 18:58:10.990494431 +0000 UTC m=+1456.715800001" watchObservedRunningTime="2026-01-28 18:58:11.000436018 +0000 UTC m=+1456.725741578" Jan 28 18:58:15 crc kubenswrapper[4721]: I0128 18:58:15.338844 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0" Jan 28 18:58:15 crc kubenswrapper[4721]: I0128 18:58:15.844722 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cell-mapping-gnjqs"] Jan 28 18:58:15 crc kubenswrapper[4721]: I0128 18:58:15.846606 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-gnjqs" Jan 28 18:58:15 crc kubenswrapper[4721]: I0128 18:58:15.849324 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts" Jan 28 18:58:15 crc kubenswrapper[4721]: I0128 18:58:15.849565 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data" Jan 28 18:58:15 crc kubenswrapper[4721]: I0128 18:58:15.860520 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-gnjqs"] Jan 28 18:58:16 crc kubenswrapper[4721]: I0128 18:58:16.011257 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1a0545b1-8866-4f13-b0a4-3425a39e103d-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-gnjqs\" (UID: \"1a0545b1-8866-4f13-b0a4-3425a39e103d\") " pod="openstack/nova-cell0-cell-mapping-gnjqs" Jan 28 18:58:16 crc kubenswrapper[4721]: I0128 18:58:16.011311 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1a0545b1-8866-4f13-b0a4-3425a39e103d-scripts\") pod \"nova-cell0-cell-mapping-gnjqs\" (UID: \"1a0545b1-8866-4f13-b0a4-3425a39e103d\") " pod="openstack/nova-cell0-cell-mapping-gnjqs" Jan 28 18:58:16 crc kubenswrapper[4721]: I0128 18:58:16.011385 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qmmjq\" (UniqueName: \"kubernetes.io/projected/1a0545b1-8866-4f13-b0a4-3425a39e103d-kube-api-access-qmmjq\") pod \"nova-cell0-cell-mapping-gnjqs\" (UID: \"1a0545b1-8866-4f13-b0a4-3425a39e103d\") " pod="openstack/nova-cell0-cell-mapping-gnjqs" Jan 28 18:58:16 crc kubenswrapper[4721]: I0128 18:58:16.011625 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1a0545b1-8866-4f13-b0a4-3425a39e103d-config-data\") pod \"nova-cell0-cell-mapping-gnjqs\" (UID: \"1a0545b1-8866-4f13-b0a4-3425a39e103d\") " 
pod="openstack/nova-cell0-cell-mapping-gnjqs" Jan 28 18:58:16 crc kubenswrapper[4721]: I0128 18:58:16.055961 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 28 18:58:16 crc kubenswrapper[4721]: I0128 18:58:16.058014 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 28 18:58:16 crc kubenswrapper[4721]: I0128 18:58:16.073665 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 28 18:58:16 crc kubenswrapper[4721]: I0128 18:58:16.105219 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 28 18:58:16 crc kubenswrapper[4721]: I0128 18:58:16.114849 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1a0545b1-8866-4f13-b0a4-3425a39e103d-config-data\") pod \"nova-cell0-cell-mapping-gnjqs\" (UID: \"1a0545b1-8866-4f13-b0a4-3425a39e103d\") " pod="openstack/nova-cell0-cell-mapping-gnjqs" Jan 28 18:58:16 crc kubenswrapper[4721]: I0128 18:58:16.114906 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/42e66a8e-cc43-4a51-8a08-029ecf2ec8dd-config-data\") pod \"nova-api-0\" (UID: \"42e66a8e-cc43-4a51-8a08-029ecf2ec8dd\") " pod="openstack/nova-api-0" Jan 28 18:58:16 crc kubenswrapper[4721]: I0128 18:58:16.114968 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/42e66a8e-cc43-4a51-8a08-029ecf2ec8dd-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"42e66a8e-cc43-4a51-8a08-029ecf2ec8dd\") " pod="openstack/nova-api-0" Jan 28 18:58:16 crc kubenswrapper[4721]: I0128 18:58:16.115009 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zj88l\" (UniqueName: \"kubernetes.io/projected/42e66a8e-cc43-4a51-8a08-029ecf2ec8dd-kube-api-access-zj88l\") pod \"nova-api-0\" (UID: \"42e66a8e-cc43-4a51-8a08-029ecf2ec8dd\") " pod="openstack/nova-api-0" Jan 28 18:58:16 crc kubenswrapper[4721]: I0128 18:58:16.115058 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1a0545b1-8866-4f13-b0a4-3425a39e103d-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-gnjqs\" (UID: \"1a0545b1-8866-4f13-b0a4-3425a39e103d\") " pod="openstack/nova-cell0-cell-mapping-gnjqs" Jan 28 18:58:16 crc kubenswrapper[4721]: I0128 18:58:16.115079 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1a0545b1-8866-4f13-b0a4-3425a39e103d-scripts\") pod \"nova-cell0-cell-mapping-gnjqs\" (UID: \"1a0545b1-8866-4f13-b0a4-3425a39e103d\") " pod="openstack/nova-cell0-cell-mapping-gnjqs" Jan 28 18:58:16 crc kubenswrapper[4721]: I0128 18:58:16.115110 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/42e66a8e-cc43-4a51-8a08-029ecf2ec8dd-logs\") pod \"nova-api-0\" (UID: \"42e66a8e-cc43-4a51-8a08-029ecf2ec8dd\") " pod="openstack/nova-api-0" Jan 28 18:58:16 crc kubenswrapper[4721]: I0128 18:58:16.115130 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qmmjq\" (UniqueName: 
\"kubernetes.io/projected/1a0545b1-8866-4f13-b0a4-3425a39e103d-kube-api-access-qmmjq\") pod \"nova-cell0-cell-mapping-gnjqs\" (UID: \"1a0545b1-8866-4f13-b0a4-3425a39e103d\") " pod="openstack/nova-cell0-cell-mapping-gnjqs" Jan 28 18:58:16 crc kubenswrapper[4721]: I0128 18:58:16.129383 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1a0545b1-8866-4f13-b0a4-3425a39e103d-scripts\") pod \"nova-cell0-cell-mapping-gnjqs\" (UID: \"1a0545b1-8866-4f13-b0a4-3425a39e103d\") " pod="openstack/nova-cell0-cell-mapping-gnjqs" Jan 28 18:58:16 crc kubenswrapper[4721]: I0128 18:58:16.133448 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1a0545b1-8866-4f13-b0a4-3425a39e103d-config-data\") pod \"nova-cell0-cell-mapping-gnjqs\" (UID: \"1a0545b1-8866-4f13-b0a4-3425a39e103d\") " pod="openstack/nova-cell0-cell-mapping-gnjqs" Jan 28 18:58:16 crc kubenswrapper[4721]: I0128 18:58:16.134713 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1a0545b1-8866-4f13-b0a4-3425a39e103d-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-gnjqs\" (UID: \"1a0545b1-8866-4f13-b0a4-3425a39e103d\") " pod="openstack/nova-cell0-cell-mapping-gnjqs" Jan 28 18:58:16 crc kubenswrapper[4721]: I0128 18:58:16.144295 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 28 18:58:16 crc kubenswrapper[4721]: I0128 18:58:16.146263 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 28 18:58:16 crc kubenswrapper[4721]: I0128 18:58:16.154701 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 28 18:58:16 crc kubenswrapper[4721]: I0128 18:58:16.167782 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 28 18:58:16 crc kubenswrapper[4721]: I0128 18:58:16.194790 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qmmjq\" (UniqueName: \"kubernetes.io/projected/1a0545b1-8866-4f13-b0a4-3425a39e103d-kube-api-access-qmmjq\") pod \"nova-cell0-cell-mapping-gnjqs\" (UID: \"1a0545b1-8866-4f13-b0a4-3425a39e103d\") " pod="openstack/nova-cell0-cell-mapping-gnjqs" Jan 28 18:58:16 crc kubenswrapper[4721]: I0128 18:58:16.215897 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/42e66a8e-cc43-4a51-8a08-029ecf2ec8dd-config-data\") pod \"nova-api-0\" (UID: \"42e66a8e-cc43-4a51-8a08-029ecf2ec8dd\") " pod="openstack/nova-api-0" Jan 28 18:58:16 crc kubenswrapper[4721]: I0128 18:58:16.215942 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5ec7e3b7-65e9-4c19-9beb-438ac0303aab-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"5ec7e3b7-65e9-4c19-9beb-438ac0303aab\") " pod="openstack/nova-metadata-0" Jan 28 18:58:16 crc kubenswrapper[4721]: I0128 18:58:16.216009 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/42e66a8e-cc43-4a51-8a08-029ecf2ec8dd-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"42e66a8e-cc43-4a51-8a08-029ecf2ec8dd\") " pod="openstack/nova-api-0" Jan 28 18:58:16 crc kubenswrapper[4721]: I0128 18:58:16.216037 4721 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5ec7e3b7-65e9-4c19-9beb-438ac0303aab-logs\") pod \"nova-metadata-0\" (UID: \"5ec7e3b7-65e9-4c19-9beb-438ac0303aab\") " pod="openstack/nova-metadata-0" Jan 28 18:58:16 crc kubenswrapper[4721]: I0128 18:58:16.216057 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zj88l\" (UniqueName: \"kubernetes.io/projected/42e66a8e-cc43-4a51-8a08-029ecf2ec8dd-kube-api-access-zj88l\") pod \"nova-api-0\" (UID: \"42e66a8e-cc43-4a51-8a08-029ecf2ec8dd\") " pod="openstack/nova-api-0" Jan 28 18:58:16 crc kubenswrapper[4721]: I0128 18:58:16.216080 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vdw8z\" (UniqueName: \"kubernetes.io/projected/5ec7e3b7-65e9-4c19-9beb-438ac0303aab-kube-api-access-vdw8z\") pod \"nova-metadata-0\" (UID: \"5ec7e3b7-65e9-4c19-9beb-438ac0303aab\") " pod="openstack/nova-metadata-0" Jan 28 18:58:16 crc kubenswrapper[4721]: I0128 18:58:16.216185 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/42e66a8e-cc43-4a51-8a08-029ecf2ec8dd-logs\") pod \"nova-api-0\" (UID: \"42e66a8e-cc43-4a51-8a08-029ecf2ec8dd\") " pod="openstack/nova-api-0" Jan 28 18:58:16 crc kubenswrapper[4721]: I0128 18:58:16.216212 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5ec7e3b7-65e9-4c19-9beb-438ac0303aab-config-data\") pod \"nova-metadata-0\" (UID: \"5ec7e3b7-65e9-4c19-9beb-438ac0303aab\") " pod="openstack/nova-metadata-0" Jan 28 18:58:16 crc kubenswrapper[4721]: I0128 18:58:16.219291 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/42e66a8e-cc43-4a51-8a08-029ecf2ec8dd-logs\") pod \"nova-api-0\" (UID: \"42e66a8e-cc43-4a51-8a08-029ecf2ec8dd\") " pod="openstack/nova-api-0" Jan 28 18:58:16 crc kubenswrapper[4721]: I0128 18:58:16.231991 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/42e66a8e-cc43-4a51-8a08-029ecf2ec8dd-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"42e66a8e-cc43-4a51-8a08-029ecf2ec8dd\") " pod="openstack/nova-api-0" Jan 28 18:58:16 crc kubenswrapper[4721]: I0128 18:58:16.232099 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Jan 28 18:58:16 crc kubenswrapper[4721]: I0128 18:58:16.233780 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 28 18:58:16 crc kubenswrapper[4721]: I0128 18:58:16.238683 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Jan 28 18:58:16 crc kubenswrapper[4721]: I0128 18:58:16.248853 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/42e66a8e-cc43-4a51-8a08-029ecf2ec8dd-config-data\") pod \"nova-api-0\" (UID: \"42e66a8e-cc43-4a51-8a08-029ecf2ec8dd\") " pod="openstack/nova-api-0" Jan 28 18:58:16 crc kubenswrapper[4721]: I0128 18:58:16.310728 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zj88l\" (UniqueName: \"kubernetes.io/projected/42e66a8e-cc43-4a51-8a08-029ecf2ec8dd-kube-api-access-zj88l\") pod \"nova-api-0\" (UID: \"42e66a8e-cc43-4a51-8a08-029ecf2ec8dd\") " pod="openstack/nova-api-0" Jan 28 18:58:16 crc kubenswrapper[4721]: I0128 18:58:16.323808 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5ec7e3b7-65e9-4c19-9beb-438ac0303aab-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"5ec7e3b7-65e9-4c19-9beb-438ac0303aab\") " pod="openstack/nova-metadata-0" Jan 28 18:58:16 crc kubenswrapper[4721]: I0128 18:58:16.324033 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5ec7e3b7-65e9-4c19-9beb-438ac0303aab-logs\") pod \"nova-metadata-0\" (UID: \"5ec7e3b7-65e9-4c19-9beb-438ac0303aab\") " pod="openstack/nova-metadata-0" Jan 28 18:58:16 crc kubenswrapper[4721]: I0128 18:58:16.324087 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vdw8z\" (UniqueName: \"kubernetes.io/projected/5ec7e3b7-65e9-4c19-9beb-438ac0303aab-kube-api-access-vdw8z\") pod \"nova-metadata-0\" (UID: \"5ec7e3b7-65e9-4c19-9beb-438ac0303aab\") " pod="openstack/nova-metadata-0" Jan 28 18:58:16 crc kubenswrapper[4721]: I0128 18:58:16.324205 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5ec7e3b7-65e9-4c19-9beb-438ac0303aab-config-data\") pod \"nova-metadata-0\" (UID: \"5ec7e3b7-65e9-4c19-9beb-438ac0303aab\") " pod="openstack/nova-metadata-0" Jan 28 18:58:16 crc kubenswrapper[4721]: I0128 18:58:16.329929 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5ec7e3b7-65e9-4c19-9beb-438ac0303aab-logs\") pod \"nova-metadata-0\" (UID: \"5ec7e3b7-65e9-4c19-9beb-438ac0303aab\") " pod="openstack/nova-metadata-0" Jan 28 18:58:16 crc kubenswrapper[4721]: I0128 18:58:16.339979 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 28 18:58:16 crc kubenswrapper[4721]: I0128 18:58:16.350379 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5ec7e3b7-65e9-4c19-9beb-438ac0303aab-config-data\") pod \"nova-metadata-0\" (UID: \"5ec7e3b7-65e9-4c19-9beb-438ac0303aab\") " pod="openstack/nova-metadata-0" Jan 28 18:58:16 crc kubenswrapper[4721]: I0128 18:58:16.373740 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vdw8z\" (UniqueName: \"kubernetes.io/projected/5ec7e3b7-65e9-4c19-9beb-438ac0303aab-kube-api-access-vdw8z\") pod \"nova-metadata-0\" (UID: \"5ec7e3b7-65e9-4c19-9beb-438ac0303aab\") " 
pod="openstack/nova-metadata-0" Jan 28 18:58:16 crc kubenswrapper[4721]: I0128 18:58:16.381693 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5ec7e3b7-65e9-4c19-9beb-438ac0303aab-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"5ec7e3b7-65e9-4c19-9beb-438ac0303aab\") " pod="openstack/nova-metadata-0" Jan 28 18:58:16 crc kubenswrapper[4721]: I0128 18:58:16.393013 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 28 18:58:16 crc kubenswrapper[4721]: I0128 18:58:16.407265 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-78cd565959-9cwzn"] Jan 28 18:58:16 crc kubenswrapper[4721]: I0128 18:58:16.412467 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78cd565959-9cwzn" Jan 28 18:58:16 crc kubenswrapper[4721]: I0128 18:58:16.435916 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/da88710a-992e-46e0-abe2-8b7c8390f54f-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"da88710a-992e-46e0-abe2-8b7c8390f54f\") " pod="openstack/nova-scheduler-0" Jan 28 18:58:16 crc kubenswrapper[4721]: I0128 18:58:16.435962 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n9j58\" (UniqueName: \"kubernetes.io/projected/da88710a-992e-46e0-abe2-8b7c8390f54f-kube-api-access-n9j58\") pod \"nova-scheduler-0\" (UID: \"da88710a-992e-46e0-abe2-8b7c8390f54f\") " pod="openstack/nova-scheduler-0" Jan 28 18:58:16 crc kubenswrapper[4721]: I0128 18:58:16.436094 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/da88710a-992e-46e0-abe2-8b7c8390f54f-config-data\") pod \"nova-scheduler-0\" (UID: \"da88710a-992e-46e0-abe2-8b7c8390f54f\") " pod="openstack/nova-scheduler-0" Jan 28 18:58:16 crc kubenswrapper[4721]: I0128 18:58:16.450805 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 28 18:58:16 crc kubenswrapper[4721]: I0128 18:58:16.451320 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78cd565959-9cwzn"] Jan 28 18:58:16 crc kubenswrapper[4721]: I0128 18:58:16.464855 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 28 18:58:16 crc kubenswrapper[4721]: I0128 18:58:16.467235 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 28 18:58:16 crc kubenswrapper[4721]: I0128 18:58:16.482504 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Jan 28 18:58:16 crc kubenswrapper[4721]: I0128 18:58:16.484773 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-gnjqs" Jan 28 18:58:16 crc kubenswrapper[4721]: I0128 18:58:16.502741 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 28 18:58:16 crc kubenswrapper[4721]: I0128 18:58:16.549962 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/da88710a-992e-46e0-abe2-8b7c8390f54f-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"da88710a-992e-46e0-abe2-8b7c8390f54f\") " pod="openstack/nova-scheduler-0" Jan 28 18:58:16 crc kubenswrapper[4721]: I0128 18:58:16.550024 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n9j58\" (UniqueName: \"kubernetes.io/projected/da88710a-992e-46e0-abe2-8b7c8390f54f-kube-api-access-n9j58\") pod \"nova-scheduler-0\" (UID: \"da88710a-992e-46e0-abe2-8b7c8390f54f\") " pod="openstack/nova-scheduler-0" Jan 28 18:58:16 crc kubenswrapper[4721]: I0128 18:58:16.550109 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/49c5ce5d-28b1-4b34-865e-7452b6512fa5-ovsdbserver-nb\") pod \"dnsmasq-dns-78cd565959-9cwzn\" (UID: \"49c5ce5d-28b1-4b34-865e-7452b6512fa5\") " pod="openstack/dnsmasq-dns-78cd565959-9cwzn" Jan 28 18:58:16 crc kubenswrapper[4721]: I0128 18:58:16.550149 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/49c5ce5d-28b1-4b34-865e-7452b6512fa5-dns-swift-storage-0\") pod \"dnsmasq-dns-78cd565959-9cwzn\" (UID: \"49c5ce5d-28b1-4b34-865e-7452b6512fa5\") " pod="openstack/dnsmasq-dns-78cd565959-9cwzn" Jan 28 18:58:16 crc kubenswrapper[4721]: I0128 18:58:16.550264 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/49c5ce5d-28b1-4b34-865e-7452b6512fa5-dns-svc\") pod \"dnsmasq-dns-78cd565959-9cwzn\" (UID: \"49c5ce5d-28b1-4b34-865e-7452b6512fa5\") " pod="openstack/dnsmasq-dns-78cd565959-9cwzn" Jan 28 18:58:16 crc kubenswrapper[4721]: I0128 18:58:16.550362 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v8m5h\" (UniqueName: \"kubernetes.io/projected/49c5ce5d-28b1-4b34-865e-7452b6512fa5-kube-api-access-v8m5h\") pod \"dnsmasq-dns-78cd565959-9cwzn\" (UID: \"49c5ce5d-28b1-4b34-865e-7452b6512fa5\") " pod="openstack/dnsmasq-dns-78cd565959-9cwzn" Jan 28 18:58:16 crc kubenswrapper[4721]: I0128 18:58:16.551875 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/da88710a-992e-46e0-abe2-8b7c8390f54f-config-data\") pod \"nova-scheduler-0\" (UID: \"da88710a-992e-46e0-abe2-8b7c8390f54f\") " pod="openstack/nova-scheduler-0" Jan 28 18:58:16 crc kubenswrapper[4721]: I0128 18:58:16.552532 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/49c5ce5d-28b1-4b34-865e-7452b6512fa5-ovsdbserver-sb\") pod \"dnsmasq-dns-78cd565959-9cwzn\" (UID: \"49c5ce5d-28b1-4b34-865e-7452b6512fa5\") " pod="openstack/dnsmasq-dns-78cd565959-9cwzn" Jan 28 18:58:16 crc kubenswrapper[4721]: I0128 18:58:16.552674 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"config\" (UniqueName: \"kubernetes.io/configmap/49c5ce5d-28b1-4b34-865e-7452b6512fa5-config\") pod \"dnsmasq-dns-78cd565959-9cwzn\" (UID: \"49c5ce5d-28b1-4b34-865e-7452b6512fa5\") " pod="openstack/dnsmasq-dns-78cd565959-9cwzn" Jan 28 18:58:16 crc kubenswrapper[4721]: I0128 18:58:16.554119 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/da88710a-992e-46e0-abe2-8b7c8390f54f-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"da88710a-992e-46e0-abe2-8b7c8390f54f\") " pod="openstack/nova-scheduler-0" Jan 28 18:58:16 crc kubenswrapper[4721]: I0128 18:58:16.564730 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/da88710a-992e-46e0-abe2-8b7c8390f54f-config-data\") pod \"nova-scheduler-0\" (UID: \"da88710a-992e-46e0-abe2-8b7c8390f54f\") " pod="openstack/nova-scheduler-0" Jan 28 18:58:16 crc kubenswrapper[4721]: I0128 18:58:16.569154 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n9j58\" (UniqueName: \"kubernetes.io/projected/da88710a-992e-46e0-abe2-8b7c8390f54f-kube-api-access-n9j58\") pod \"nova-scheduler-0\" (UID: \"da88710a-992e-46e0-abe2-8b7c8390f54f\") " pod="openstack/nova-scheduler-0" Jan 28 18:58:16 crc kubenswrapper[4721]: I0128 18:58:16.656689 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/49c5ce5d-28b1-4b34-865e-7452b6512fa5-ovsdbserver-nb\") pod \"dnsmasq-dns-78cd565959-9cwzn\" (UID: \"49c5ce5d-28b1-4b34-865e-7452b6512fa5\") " pod="openstack/dnsmasq-dns-78cd565959-9cwzn" Jan 28 18:58:16 crc kubenswrapper[4721]: I0128 18:58:16.656745 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/49c5ce5d-28b1-4b34-865e-7452b6512fa5-dns-swift-storage-0\") pod \"dnsmasq-dns-78cd565959-9cwzn\" (UID: \"49c5ce5d-28b1-4b34-865e-7452b6512fa5\") " pod="openstack/dnsmasq-dns-78cd565959-9cwzn" Jan 28 18:58:16 crc kubenswrapper[4721]: I0128 18:58:16.656788 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/228ac5c0-6690-4954-837e-952891b36a1d-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"228ac5c0-6690-4954-837e-952891b36a1d\") " pod="openstack/nova-cell1-novncproxy-0" Jan 28 18:58:16 crc kubenswrapper[4721]: I0128 18:58:16.656849 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/228ac5c0-6690-4954-837e-952891b36a1d-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"228ac5c0-6690-4954-837e-952891b36a1d\") " pod="openstack/nova-cell1-novncproxy-0" Jan 28 18:58:16 crc kubenswrapper[4721]: I0128 18:58:16.656877 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/49c5ce5d-28b1-4b34-865e-7452b6512fa5-dns-svc\") pod \"dnsmasq-dns-78cd565959-9cwzn\" (UID: \"49c5ce5d-28b1-4b34-865e-7452b6512fa5\") " pod="openstack/dnsmasq-dns-78cd565959-9cwzn" Jan 28 18:58:16 crc kubenswrapper[4721]: I0128 18:58:16.656945 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v8m5h\" (UniqueName: \"kubernetes.io/projected/49c5ce5d-28b1-4b34-865e-7452b6512fa5-kube-api-access-v8m5h\") pod 
\"dnsmasq-dns-78cd565959-9cwzn\" (UID: \"49c5ce5d-28b1-4b34-865e-7452b6512fa5\") " pod="openstack/dnsmasq-dns-78cd565959-9cwzn" Jan 28 18:58:16 crc kubenswrapper[4721]: I0128 18:58:16.657093 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/49c5ce5d-28b1-4b34-865e-7452b6512fa5-ovsdbserver-sb\") pod \"dnsmasq-dns-78cd565959-9cwzn\" (UID: \"49c5ce5d-28b1-4b34-865e-7452b6512fa5\") " pod="openstack/dnsmasq-dns-78cd565959-9cwzn" Jan 28 18:58:16 crc kubenswrapper[4721]: I0128 18:58:16.657203 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/49c5ce5d-28b1-4b34-865e-7452b6512fa5-config\") pod \"dnsmasq-dns-78cd565959-9cwzn\" (UID: \"49c5ce5d-28b1-4b34-865e-7452b6512fa5\") " pod="openstack/dnsmasq-dns-78cd565959-9cwzn" Jan 28 18:58:16 crc kubenswrapper[4721]: I0128 18:58:16.657276 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hn96b\" (UniqueName: \"kubernetes.io/projected/228ac5c0-6690-4954-837e-952891b36a1d-kube-api-access-hn96b\") pod \"nova-cell1-novncproxy-0\" (UID: \"228ac5c0-6690-4954-837e-952891b36a1d\") " pod="openstack/nova-cell1-novncproxy-0" Jan 28 18:58:16 crc kubenswrapper[4721]: I0128 18:58:16.659123 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/49c5ce5d-28b1-4b34-865e-7452b6512fa5-ovsdbserver-nb\") pod \"dnsmasq-dns-78cd565959-9cwzn\" (UID: \"49c5ce5d-28b1-4b34-865e-7452b6512fa5\") " pod="openstack/dnsmasq-dns-78cd565959-9cwzn" Jan 28 18:58:16 crc kubenswrapper[4721]: I0128 18:58:16.659926 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/49c5ce5d-28b1-4b34-865e-7452b6512fa5-dns-swift-storage-0\") pod \"dnsmasq-dns-78cd565959-9cwzn\" (UID: \"49c5ce5d-28b1-4b34-865e-7452b6512fa5\") " pod="openstack/dnsmasq-dns-78cd565959-9cwzn" Jan 28 18:58:16 crc kubenswrapper[4721]: I0128 18:58:16.660633 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/49c5ce5d-28b1-4b34-865e-7452b6512fa5-dns-svc\") pod \"dnsmasq-dns-78cd565959-9cwzn\" (UID: \"49c5ce5d-28b1-4b34-865e-7452b6512fa5\") " pod="openstack/dnsmasq-dns-78cd565959-9cwzn" Jan 28 18:58:16 crc kubenswrapper[4721]: I0128 18:58:16.663050 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/49c5ce5d-28b1-4b34-865e-7452b6512fa5-ovsdbserver-sb\") pod \"dnsmasq-dns-78cd565959-9cwzn\" (UID: \"49c5ce5d-28b1-4b34-865e-7452b6512fa5\") " pod="openstack/dnsmasq-dns-78cd565959-9cwzn" Jan 28 18:58:16 crc kubenswrapper[4721]: I0128 18:58:16.663631 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/49c5ce5d-28b1-4b34-865e-7452b6512fa5-config\") pod \"dnsmasq-dns-78cd565959-9cwzn\" (UID: \"49c5ce5d-28b1-4b34-865e-7452b6512fa5\") " pod="openstack/dnsmasq-dns-78cd565959-9cwzn" Jan 28 18:58:16 crc kubenswrapper[4721]: I0128 18:58:16.681447 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v8m5h\" (UniqueName: \"kubernetes.io/projected/49c5ce5d-28b1-4b34-865e-7452b6512fa5-kube-api-access-v8m5h\") pod \"dnsmasq-dns-78cd565959-9cwzn\" (UID: \"49c5ce5d-28b1-4b34-865e-7452b6512fa5\") " 
pod="openstack/dnsmasq-dns-78cd565959-9cwzn" Jan 28 18:58:16 crc kubenswrapper[4721]: I0128 18:58:16.748814 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 28 18:58:16 crc kubenswrapper[4721]: I0128 18:58:16.765419 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78cd565959-9cwzn" Jan 28 18:58:16 crc kubenswrapper[4721]: I0128 18:58:16.767497 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hn96b\" (UniqueName: \"kubernetes.io/projected/228ac5c0-6690-4954-837e-952891b36a1d-kube-api-access-hn96b\") pod \"nova-cell1-novncproxy-0\" (UID: \"228ac5c0-6690-4954-837e-952891b36a1d\") " pod="openstack/nova-cell1-novncproxy-0" Jan 28 18:58:16 crc kubenswrapper[4721]: I0128 18:58:16.767623 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/228ac5c0-6690-4954-837e-952891b36a1d-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"228ac5c0-6690-4954-837e-952891b36a1d\") " pod="openstack/nova-cell1-novncproxy-0" Jan 28 18:58:16 crc kubenswrapper[4721]: I0128 18:58:16.767704 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/228ac5c0-6690-4954-837e-952891b36a1d-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"228ac5c0-6690-4954-837e-952891b36a1d\") " pod="openstack/nova-cell1-novncproxy-0" Jan 28 18:58:16 crc kubenswrapper[4721]: I0128 18:58:16.776275 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/228ac5c0-6690-4954-837e-952891b36a1d-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"228ac5c0-6690-4954-837e-952891b36a1d\") " pod="openstack/nova-cell1-novncproxy-0" Jan 28 18:58:16 crc kubenswrapper[4721]: I0128 18:58:16.780896 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/228ac5c0-6690-4954-837e-952891b36a1d-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"228ac5c0-6690-4954-837e-952891b36a1d\") " pod="openstack/nova-cell1-novncproxy-0" Jan 28 18:58:16 crc kubenswrapper[4721]: I0128 18:58:16.809876 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hn96b\" (UniqueName: \"kubernetes.io/projected/228ac5c0-6690-4954-837e-952891b36a1d-kube-api-access-hn96b\") pod \"nova-cell1-novncproxy-0\" (UID: \"228ac5c0-6690-4954-837e-952891b36a1d\") " pod="openstack/nova-cell1-novncproxy-0" Jan 28 18:58:16 crc kubenswrapper[4721]: I0128 18:58:16.826710 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 28 18:58:17 crc kubenswrapper[4721]: I0128 18:58:17.163810 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 28 18:58:17 crc kubenswrapper[4721]: I0128 18:58:17.275414 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-jhzxr"] Jan 28 18:58:17 crc kubenswrapper[4721]: I0128 18:58:17.278220 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-jhzxr" Jan 28 18:58:17 crc kubenswrapper[4721]: I0128 18:58:17.281335 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts" Jan 28 18:58:17 crc kubenswrapper[4721]: I0128 18:58:17.281843 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Jan 28 18:58:17 crc kubenswrapper[4721]: I0128 18:58:17.296266 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-jhzxr"] Jan 28 18:58:17 crc kubenswrapper[4721]: I0128 18:58:17.355880 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 28 18:58:17 crc kubenswrapper[4721]: I0128 18:58:17.381714 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-gnjqs"] Jan 28 18:58:17 crc kubenswrapper[4721]: I0128 18:58:17.396248 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fa94acc5-9ec9-4129-ac88-db06e56fa5e1-config-data\") pod \"nova-cell1-conductor-db-sync-jhzxr\" (UID: \"fa94acc5-9ec9-4129-ac88-db06e56fa5e1\") " pod="openstack/nova-cell1-conductor-db-sync-jhzxr" Jan 28 18:58:17 crc kubenswrapper[4721]: I0128 18:58:17.396362 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z5s2q\" (UniqueName: \"kubernetes.io/projected/fa94acc5-9ec9-4129-ac88-db06e56fa5e1-kube-api-access-z5s2q\") pod \"nova-cell1-conductor-db-sync-jhzxr\" (UID: \"fa94acc5-9ec9-4129-ac88-db06e56fa5e1\") " pod="openstack/nova-cell1-conductor-db-sync-jhzxr" Jan 28 18:58:17 crc kubenswrapper[4721]: I0128 18:58:17.396541 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fa94acc5-9ec9-4129-ac88-db06e56fa5e1-scripts\") pod \"nova-cell1-conductor-db-sync-jhzxr\" (UID: \"fa94acc5-9ec9-4129-ac88-db06e56fa5e1\") " pod="openstack/nova-cell1-conductor-db-sync-jhzxr" Jan 28 18:58:17 crc kubenswrapper[4721]: I0128 18:58:17.396607 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa94acc5-9ec9-4129-ac88-db06e56fa5e1-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-jhzxr\" (UID: \"fa94acc5-9ec9-4129-ac88-db06e56fa5e1\") " pod="openstack/nova-cell1-conductor-db-sync-jhzxr" Jan 28 18:58:17 crc kubenswrapper[4721]: I0128 18:58:17.514596 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fa94acc5-9ec9-4129-ac88-db06e56fa5e1-config-data\") pod \"nova-cell1-conductor-db-sync-jhzxr\" (UID: \"fa94acc5-9ec9-4129-ac88-db06e56fa5e1\") " pod="openstack/nova-cell1-conductor-db-sync-jhzxr" Jan 28 18:58:17 crc kubenswrapper[4721]: I0128 18:58:17.514665 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z5s2q\" (UniqueName: \"kubernetes.io/projected/fa94acc5-9ec9-4129-ac88-db06e56fa5e1-kube-api-access-z5s2q\") pod \"nova-cell1-conductor-db-sync-jhzxr\" (UID: \"fa94acc5-9ec9-4129-ac88-db06e56fa5e1\") " pod="openstack/nova-cell1-conductor-db-sync-jhzxr" Jan 28 18:58:17 crc kubenswrapper[4721]: I0128 18:58:17.514783 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" 
(UniqueName: \"kubernetes.io/secret/fa94acc5-9ec9-4129-ac88-db06e56fa5e1-scripts\") pod \"nova-cell1-conductor-db-sync-jhzxr\" (UID: \"fa94acc5-9ec9-4129-ac88-db06e56fa5e1\") " pod="openstack/nova-cell1-conductor-db-sync-jhzxr" Jan 28 18:58:17 crc kubenswrapper[4721]: I0128 18:58:17.514826 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa94acc5-9ec9-4129-ac88-db06e56fa5e1-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-jhzxr\" (UID: \"fa94acc5-9ec9-4129-ac88-db06e56fa5e1\") " pod="openstack/nova-cell1-conductor-db-sync-jhzxr" Jan 28 18:58:17 crc kubenswrapper[4721]: I0128 18:58:17.519063 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa94acc5-9ec9-4129-ac88-db06e56fa5e1-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-jhzxr\" (UID: \"fa94acc5-9ec9-4129-ac88-db06e56fa5e1\") " pod="openstack/nova-cell1-conductor-db-sync-jhzxr" Jan 28 18:58:17 crc kubenswrapper[4721]: I0128 18:58:17.565877 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fa94acc5-9ec9-4129-ac88-db06e56fa5e1-scripts\") pod \"nova-cell1-conductor-db-sync-jhzxr\" (UID: \"fa94acc5-9ec9-4129-ac88-db06e56fa5e1\") " pod="openstack/nova-cell1-conductor-db-sync-jhzxr" Jan 28 18:58:17 crc kubenswrapper[4721]: I0128 18:58:17.593841 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z5s2q\" (UniqueName: \"kubernetes.io/projected/fa94acc5-9ec9-4129-ac88-db06e56fa5e1-kube-api-access-z5s2q\") pod \"nova-cell1-conductor-db-sync-jhzxr\" (UID: \"fa94acc5-9ec9-4129-ac88-db06e56fa5e1\") " pod="openstack/nova-cell1-conductor-db-sync-jhzxr" Jan 28 18:58:17 crc kubenswrapper[4721]: I0128 18:58:17.594452 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fa94acc5-9ec9-4129-ac88-db06e56fa5e1-config-data\") pod \"nova-cell1-conductor-db-sync-jhzxr\" (UID: \"fa94acc5-9ec9-4129-ac88-db06e56fa5e1\") " pod="openstack/nova-cell1-conductor-db-sync-jhzxr" Jan 28 18:58:17 crc kubenswrapper[4721]: I0128 18:58:17.629468 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-jhzxr" Jan 28 18:58:18 crc kubenswrapper[4721]: I0128 18:58:18.050918 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 28 18:58:18 crc kubenswrapper[4721]: I0128 18:58:18.098405 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 28 18:58:18 crc kubenswrapper[4721]: I0128 18:58:18.172708 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"42e66a8e-cc43-4a51-8a08-029ecf2ec8dd","Type":"ContainerStarted","Data":"0e9e92d12172f3cf5187440a5f3ba0236222ab572811800132fccc10917c880d"} Jan 28 18:58:18 crc kubenswrapper[4721]: I0128 18:58:18.177283 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"5ec7e3b7-65e9-4c19-9beb-438ac0303aab","Type":"ContainerStarted","Data":"e542ad4a764c25af4ba2b1b51cb7eab0157c0079342fab2e8bbba7fe1261a4ba"} Jan 28 18:58:18 crc kubenswrapper[4721]: I0128 18:58:18.199374 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-gnjqs" event={"ID":"1a0545b1-8866-4f13-b0a4-3425a39e103d","Type":"ContainerStarted","Data":"e8bf92191342a10be8c3051651fc40be31879792cebc41e3a4c4b989e620235a"} Jan 28 18:58:18 crc kubenswrapper[4721]: I0128 18:58:18.212631 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"da88710a-992e-46e0-abe2-8b7c8390f54f","Type":"ContainerStarted","Data":"85e3c4229263c60d3af287f42a6c6210b43f08fecef72dc3e4ed48c5778435a4"} Jan 28 18:58:18 crc kubenswrapper[4721]: I0128 18:58:18.215822 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78cd565959-9cwzn"] Jan 28 18:58:18 crc kubenswrapper[4721]: I0128 18:58:18.662224 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-jhzxr"] Jan 28 18:58:18 crc kubenswrapper[4721]: W0128 18:58:18.686444 4721 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfa94acc5_9ec9_4129_ac88_db06e56fa5e1.slice/crio-75f264f2602339855514a95ed802d3f739f23396e590d138c2a898233eb547e5 WatchSource:0}: Error finding container 75f264f2602339855514a95ed802d3f739f23396e590d138c2a898233eb547e5: Status 404 returned error can't find the container with id 75f264f2602339855514a95ed802d3f739f23396e590d138c2a898233eb547e5 Jan 28 18:58:19 crc kubenswrapper[4721]: I0128 18:58:19.244567 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"228ac5c0-6690-4954-837e-952891b36a1d","Type":"ContainerStarted","Data":"298449db1feafb16db43b1584f66afd58efbbdfa0120a98ec056c73f592e4da7"} Jan 28 18:58:19 crc kubenswrapper[4721]: I0128 18:58:19.261659 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-gnjqs" event={"ID":"1a0545b1-8866-4f13-b0a4-3425a39e103d","Type":"ContainerStarted","Data":"c48c7d07a9d5bf6ea57ca99af75f3d29c355b924f6a3414c92fb6d5d564782ed"} Jan 28 18:58:19 crc kubenswrapper[4721]: I0128 18:58:19.269747 4721 generic.go:334] "Generic (PLEG): container finished" podID="49c5ce5d-28b1-4b34-865e-7452b6512fa5" containerID="2d5b90e6a30cc433594ec6b19e88fc4d298215cce89eb24e7cf852b538b363ae" exitCode=0 Jan 28 18:58:19 crc kubenswrapper[4721]: I0128 18:58:19.269895 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78cd565959-9cwzn" 
event={"ID":"49c5ce5d-28b1-4b34-865e-7452b6512fa5","Type":"ContainerDied","Data":"2d5b90e6a30cc433594ec6b19e88fc4d298215cce89eb24e7cf852b538b363ae"} Jan 28 18:58:19 crc kubenswrapper[4721]: I0128 18:58:19.269928 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78cd565959-9cwzn" event={"ID":"49c5ce5d-28b1-4b34-865e-7452b6512fa5","Type":"ContainerStarted","Data":"aed78d1fb1eb6bcd5560ec1fd826b3f7bf214fd436d25586df291762554013cf"} Jan 28 18:58:19 crc kubenswrapper[4721]: I0128 18:58:19.286349 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-jhzxr" event={"ID":"fa94acc5-9ec9-4129-ac88-db06e56fa5e1","Type":"ContainerStarted","Data":"f37bc3c1b8fe009a164f59159e105ce9781f64e2db81f8802fe0c83ee99e7799"} Jan 28 18:58:19 crc kubenswrapper[4721]: I0128 18:58:19.286401 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-jhzxr" event={"ID":"fa94acc5-9ec9-4129-ac88-db06e56fa5e1","Type":"ContainerStarted","Data":"75f264f2602339855514a95ed802d3f739f23396e590d138c2a898233eb547e5"} Jan 28 18:58:19 crc kubenswrapper[4721]: I0128 18:58:19.318971 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-cell-mapping-gnjqs" podStartSLOduration=4.318943925 podStartE2EDuration="4.318943925s" podCreationTimestamp="2026-01-28 18:58:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:58:19.295621972 +0000 UTC m=+1465.020927532" watchObservedRunningTime="2026-01-28 18:58:19.318943925 +0000 UTC m=+1465.044249485" Jan 28 18:58:19 crc kubenswrapper[4721]: I0128 18:58:19.375727 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-jhzxr" podStartSLOduration=2.375706364 podStartE2EDuration="2.375706364s" podCreationTimestamp="2026-01-28 18:58:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:58:19.354755457 +0000 UTC m=+1465.080061017" watchObservedRunningTime="2026-01-28 18:58:19.375706364 +0000 UTC m=+1465.101011924" Jan 28 18:58:19 crc kubenswrapper[4721]: I0128 18:58:19.930278 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 28 18:58:19 crc kubenswrapper[4721]: I0128 18:58:19.945261 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 28 18:58:19 crc kubenswrapper[4721]: I0128 18:58:19.961313 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 28 18:58:19 crc kubenswrapper[4721]: I0128 18:58:19.973138 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 28 18:58:19 crc kubenswrapper[4721]: I0128 18:58:19.973558 4721 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell0-conductor-0" podUID="083e541d-08f4-4dce-985e-341d17008dd4" containerName="nova-cell0-conductor-conductor" containerID="cri-o://1965eb5aaec68f9c77fc69ae8ffbf7e22282387dd73b41fe9e2a44527bbc4007" gracePeriod=30 Jan 28 18:58:19 crc kubenswrapper[4721]: I0128 18:58:19.987567 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 28 18:58:20 crc kubenswrapper[4721]: E0128 18:58:20.311451 4721 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = 
command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="1965eb5aaec68f9c77fc69ae8ffbf7e22282387dd73b41fe9e2a44527bbc4007" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 28 18:58:20 crc kubenswrapper[4721]: I0128 18:58:20.311742 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78cd565959-9cwzn" event={"ID":"49c5ce5d-28b1-4b34-865e-7452b6512fa5","Type":"ContainerStarted","Data":"a2ac82e74e2ec28298b95675c7d0747ddfe6755e7a6d80ee6c02a96d121876e0"} Jan 28 18:58:20 crc kubenswrapper[4721]: I0128 18:58:20.312227 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-78cd565959-9cwzn" Jan 28 18:58:20 crc kubenswrapper[4721]: E0128 18:58:20.315639 4721 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="1965eb5aaec68f9c77fc69ae8ffbf7e22282387dd73b41fe9e2a44527bbc4007" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 28 18:58:20 crc kubenswrapper[4721]: E0128 18:58:20.325056 4721 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="1965eb5aaec68f9c77fc69ae8ffbf7e22282387dd73b41fe9e2a44527bbc4007" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 28 18:58:20 crc kubenswrapper[4721]: E0128 18:58:20.325132 4721 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-cell0-conductor-0" podUID="083e541d-08f4-4dce-985e-341d17008dd4" containerName="nova-cell0-conductor-conductor" Jan 28 18:58:22 crc kubenswrapper[4721]: I0128 18:58:22.475412 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-78cd565959-9cwzn" podStartSLOduration=6.475390477 podStartE2EDuration="6.475390477s" podCreationTimestamp="2026-01-28 18:58:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:58:20.343737038 +0000 UTC m=+1466.069042608" watchObservedRunningTime="2026-01-28 18:58:22.475390477 +0000 UTC m=+1468.200696037" Jan 28 18:58:22 crc kubenswrapper[4721]: I0128 18:58:22.487708 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 28 18:58:22 crc kubenswrapper[4721]: I0128 18:58:22.488152 4721 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="bd88c773-2665-43ab-a9b4-e0f740fda3c7" containerName="sg-core" containerID="cri-o://5cd7d09468c06345a0ec72a8cb77bed2790c5125162182f605ebbd3e36b7b177" gracePeriod=30 Jan 28 18:58:22 crc kubenswrapper[4721]: I0128 18:58:22.488257 4721 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="bd88c773-2665-43ab-a9b4-e0f740fda3c7" containerName="proxy-httpd" containerID="cri-o://51d2b16faa498da52e7da63f3733407f6b97819d405e5aa375a08a01b228f72e" gracePeriod=30 Jan 28 18:58:22 crc kubenswrapper[4721]: I0128 18:58:22.488356 4721 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="bd88c773-2665-43ab-a9b4-e0f740fda3c7" 
containerName="ceilometer-notification-agent" containerID="cri-o://16d9fa6893834f59dd9a671a85afa694bb7b58f37deacc02176d804c0ff30f43" gracePeriod=30 Jan 28 18:58:22 crc kubenswrapper[4721]: I0128 18:58:22.488663 4721 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="bd88c773-2665-43ab-a9b4-e0f740fda3c7" containerName="ceilometer-central-agent" containerID="cri-o://96350fa578c10e8810ee06cdbeadc483339a26faf1bd54638d3de9ca6206c90e" gracePeriod=30 Jan 28 18:58:22 crc kubenswrapper[4721]: I0128 18:58:22.515212 4721 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="bd88c773-2665-43ab-a9b4-e0f740fda3c7" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Jan 28 18:58:23 crc kubenswrapper[4721]: I0128 18:58:23.359106 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"228ac5c0-6690-4954-837e-952891b36a1d","Type":"ContainerStarted","Data":"59782e89c7d2bac01da14d0d61dc4af55574f8789ef5f28d2be51692c1c18438"} Jan 28 18:58:23 crc kubenswrapper[4721]: I0128 18:58:23.359654 4721 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="228ac5c0-6690-4954-837e-952891b36a1d" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://59782e89c7d2bac01da14d0d61dc4af55574f8789ef5f28d2be51692c1c18438" gracePeriod=30 Jan 28 18:58:23 crc kubenswrapper[4721]: I0128 18:58:23.366107 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"da88710a-992e-46e0-abe2-8b7c8390f54f","Type":"ContainerStarted","Data":"cee1bd39ce919d92de23bfdc6ec78393295deb10adae7e6898343d527cb44555"} Jan 28 18:58:23 crc kubenswrapper[4721]: I0128 18:58:23.366350 4721 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="da88710a-992e-46e0-abe2-8b7c8390f54f" containerName="nova-scheduler-scheduler" containerID="cri-o://cee1bd39ce919d92de23bfdc6ec78393295deb10adae7e6898343d527cb44555" gracePeriod=30 Jan 28 18:58:23 crc kubenswrapper[4721]: I0128 18:58:23.386777 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=3.58575325 podStartE2EDuration="7.386749005s" podCreationTimestamp="2026-01-28 18:58:16 +0000 UTC" firstStartedPulling="2026-01-28 18:58:18.182506435 +0000 UTC m=+1463.907811985" lastFinishedPulling="2026-01-28 18:58:21.98350217 +0000 UTC m=+1467.708807740" observedRunningTime="2026-01-28 18:58:23.378881014 +0000 UTC m=+1469.104186574" watchObservedRunningTime="2026-01-28 18:58:23.386749005 +0000 UTC m=+1469.112054565" Jan 28 18:58:23 crc kubenswrapper[4721]: I0128 18:58:23.389614 4721 generic.go:334] "Generic (PLEG): container finished" podID="bd88c773-2665-43ab-a9b4-e0f740fda3c7" containerID="51d2b16faa498da52e7da63f3733407f6b97819d405e5aa375a08a01b228f72e" exitCode=0 Jan 28 18:58:23 crc kubenswrapper[4721]: I0128 18:58:23.389661 4721 generic.go:334] "Generic (PLEG): container finished" podID="bd88c773-2665-43ab-a9b4-e0f740fda3c7" containerID="5cd7d09468c06345a0ec72a8cb77bed2790c5125162182f605ebbd3e36b7b177" exitCode=2 Jan 28 18:58:23 crc kubenswrapper[4721]: I0128 18:58:23.389677 4721 generic.go:334] "Generic (PLEG): container finished" podID="bd88c773-2665-43ab-a9b4-e0f740fda3c7" containerID="96350fa578c10e8810ee06cdbeadc483339a26faf1bd54638d3de9ca6206c90e" exitCode=0 Jan 28 18:58:23 crc 
kubenswrapper[4721]: I0128 18:58:23.389743 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bd88c773-2665-43ab-a9b4-e0f740fda3c7","Type":"ContainerDied","Data":"51d2b16faa498da52e7da63f3733407f6b97819d405e5aa375a08a01b228f72e"} Jan 28 18:58:23 crc kubenswrapper[4721]: I0128 18:58:23.389782 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bd88c773-2665-43ab-a9b4-e0f740fda3c7","Type":"ContainerDied","Data":"5cd7d09468c06345a0ec72a8cb77bed2790c5125162182f605ebbd3e36b7b177"} Jan 28 18:58:23 crc kubenswrapper[4721]: I0128 18:58:23.389801 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bd88c773-2665-43ab-a9b4-e0f740fda3c7","Type":"ContainerDied","Data":"96350fa578c10e8810ee06cdbeadc483339a26faf1bd54638d3de9ca6206c90e"} Jan 28 18:58:23 crc kubenswrapper[4721]: I0128 18:58:23.393417 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"42e66a8e-cc43-4a51-8a08-029ecf2ec8dd","Type":"ContainerStarted","Data":"6b4dbccaf25d5bf0d5ebfadb46828a6b590f666ecb7d9403d579203ce2bdfa14"} Jan 28 18:58:23 crc kubenswrapper[4721]: I0128 18:58:23.393461 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"42e66a8e-cc43-4a51-8a08-029ecf2ec8dd","Type":"ContainerStarted","Data":"fff8b9d22337f0b2a2b88d3b6dd927185ac76f5b4e375c98d82f3c4e04c63ddd"} Jan 28 18:58:23 crc kubenswrapper[4721]: I0128 18:58:23.393629 4721 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="42e66a8e-cc43-4a51-8a08-029ecf2ec8dd" containerName="nova-api-log" containerID="cri-o://fff8b9d22337f0b2a2b88d3b6dd927185ac76f5b4e375c98d82f3c4e04c63ddd" gracePeriod=30 Jan 28 18:58:23 crc kubenswrapper[4721]: I0128 18:58:23.393944 4721 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="42e66a8e-cc43-4a51-8a08-029ecf2ec8dd" containerName="nova-api-api" containerID="cri-o://6b4dbccaf25d5bf0d5ebfadb46828a6b590f666ecb7d9403d579203ce2bdfa14" gracePeriod=30 Jan 28 18:58:23 crc kubenswrapper[4721]: I0128 18:58:23.406724 4721 generic.go:334] "Generic (PLEG): container finished" podID="083e541d-08f4-4dce-985e-341d17008dd4" containerID="1965eb5aaec68f9c77fc69ae8ffbf7e22282387dd73b41fe9e2a44527bbc4007" exitCode=0 Jan 28 18:58:23 crc kubenswrapper[4721]: I0128 18:58:23.406800 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"083e541d-08f4-4dce-985e-341d17008dd4","Type":"ContainerDied","Data":"1965eb5aaec68f9c77fc69ae8ffbf7e22282387dd73b41fe9e2a44527bbc4007"} Jan 28 18:58:23 crc kubenswrapper[4721]: I0128 18:58:23.409513 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=3.521002906 podStartE2EDuration="7.40949438s" podCreationTimestamp="2026-01-28 18:58:16 +0000 UTC" firstStartedPulling="2026-01-28 18:58:18.09923824 +0000 UTC m=+1463.824543800" lastFinishedPulling="2026-01-28 18:58:21.987729714 +0000 UTC m=+1467.713035274" observedRunningTime="2026-01-28 18:58:23.397841798 +0000 UTC m=+1469.123147358" watchObservedRunningTime="2026-01-28 18:58:23.40949438 +0000 UTC m=+1469.134799940" Jan 28 18:58:23 crc kubenswrapper[4721]: I0128 18:58:23.414729 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" 
event={"ID":"5ec7e3b7-65e9-4c19-9beb-438ac0303aab","Type":"ContainerStarted","Data":"7fe3e4e1f563a9d55e3025b3489c7e3d2f059c276267ecf5d64ec23af5127896"} Jan 28 18:58:23 crc kubenswrapper[4721]: I0128 18:58:23.414785 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"5ec7e3b7-65e9-4c19-9beb-438ac0303aab","Type":"ContainerStarted","Data":"f2a5dadd8f5033a8d2ae429e090b5983e199172790eaa5cc6b66e40140d36b56"} Jan 28 18:58:23 crc kubenswrapper[4721]: I0128 18:58:23.414953 4721 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="5ec7e3b7-65e9-4c19-9beb-438ac0303aab" containerName="nova-metadata-log" containerID="cri-o://f2a5dadd8f5033a8d2ae429e090b5983e199172790eaa5cc6b66e40140d36b56" gracePeriod=30 Jan 28 18:58:23 crc kubenswrapper[4721]: I0128 18:58:23.415153 4721 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="5ec7e3b7-65e9-4c19-9beb-438ac0303aab" containerName="nova-metadata-metadata" containerID="cri-o://7fe3e4e1f563a9d55e3025b3489c7e3d2f059c276267ecf5d64ec23af5127896" gracePeriod=30 Jan 28 18:58:23 crc kubenswrapper[4721]: I0128 18:58:23.424515 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.624765781 podStartE2EDuration="7.424482097s" podCreationTimestamp="2026-01-28 18:58:16 +0000 UTC" firstStartedPulling="2026-01-28 18:58:17.182362218 +0000 UTC m=+1462.907667778" lastFinishedPulling="2026-01-28 18:58:21.982078524 +0000 UTC m=+1467.707384094" observedRunningTime="2026-01-28 18:58:23.416194263 +0000 UTC m=+1469.141499843" watchObservedRunningTime="2026-01-28 18:58:23.424482097 +0000 UTC m=+1469.149787667" Jan 28 18:58:23 crc kubenswrapper[4721]: I0128 18:58:23.445420 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.821907645 podStartE2EDuration="7.445390534s" podCreationTimestamp="2026-01-28 18:58:16 +0000 UTC" firstStartedPulling="2026-01-28 18:58:17.359559836 +0000 UTC m=+1463.084865396" lastFinishedPulling="2026-01-28 18:58:21.983042725 +0000 UTC m=+1467.708348285" observedRunningTime="2026-01-28 18:58:23.439512306 +0000 UTC m=+1469.164817866" watchObservedRunningTime="2026-01-28 18:58:23.445390534 +0000 UTC m=+1469.170696094" Jan 28 18:58:23 crc kubenswrapper[4721]: I0128 18:58:23.885238 4721 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Jan 28 18:58:23 crc kubenswrapper[4721]: I0128 18:58:23.927068 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/083e541d-08f4-4dce-985e-341d17008dd4-config-data\") pod \"083e541d-08f4-4dce-985e-341d17008dd4\" (UID: \"083e541d-08f4-4dce-985e-341d17008dd4\") " Jan 28 18:58:23 crc kubenswrapper[4721]: I0128 18:58:23.927339 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/083e541d-08f4-4dce-985e-341d17008dd4-combined-ca-bundle\") pod \"083e541d-08f4-4dce-985e-341d17008dd4\" (UID: \"083e541d-08f4-4dce-985e-341d17008dd4\") " Jan 28 18:58:23 crc kubenswrapper[4721]: I0128 18:58:23.927452 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xwl45\" (UniqueName: \"kubernetes.io/projected/083e541d-08f4-4dce-985e-341d17008dd4-kube-api-access-xwl45\") pod \"083e541d-08f4-4dce-985e-341d17008dd4\" (UID: \"083e541d-08f4-4dce-985e-341d17008dd4\") " Jan 28 18:58:23 crc kubenswrapper[4721]: I0128 18:58:23.935260 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/083e541d-08f4-4dce-985e-341d17008dd4-kube-api-access-xwl45" (OuterVolumeSpecName: "kube-api-access-xwl45") pod "083e541d-08f4-4dce-985e-341d17008dd4" (UID: "083e541d-08f4-4dce-985e-341d17008dd4"). InnerVolumeSpecName "kube-api-access-xwl45". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:58:23 crc kubenswrapper[4721]: I0128 18:58:23.970506 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/083e541d-08f4-4dce-985e-341d17008dd4-config-data" (OuterVolumeSpecName: "config-data") pod "083e541d-08f4-4dce-985e-341d17008dd4" (UID: "083e541d-08f4-4dce-985e-341d17008dd4"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:58:24 crc kubenswrapper[4721]: I0128 18:58:24.029888 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xwl45\" (UniqueName: \"kubernetes.io/projected/083e541d-08f4-4dce-985e-341d17008dd4-kube-api-access-xwl45\") on node \"crc\" DevicePath \"\"" Jan 28 18:58:24 crc kubenswrapper[4721]: I0128 18:58:24.030400 4721 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/083e541d-08f4-4dce-985e-341d17008dd4-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 18:58:24 crc kubenswrapper[4721]: I0128 18:58:24.034598 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/083e541d-08f4-4dce-985e-341d17008dd4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "083e541d-08f4-4dce-985e-341d17008dd4" (UID: "083e541d-08f4-4dce-985e-341d17008dd4"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:58:24 crc kubenswrapper[4721]: I0128 18:58:24.132325 4721 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/083e541d-08f4-4dce-985e-341d17008dd4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 18:58:24 crc kubenswrapper[4721]: I0128 18:58:24.410635 4721 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 28 18:58:24 crc kubenswrapper[4721]: I0128 18:58:24.411553 4721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 28 18:58:24 crc kubenswrapper[4721]: I0128 18:58:24.438038 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bd88c773-2665-43ab-a9b4-e0f740fda3c7-run-httpd\") pod \"bd88c773-2665-43ab-a9b4-e0f740fda3c7\" (UID: \"bd88c773-2665-43ab-a9b4-e0f740fda3c7\") " Jan 28 18:58:24 crc kubenswrapper[4721]: I0128 18:58:24.438115 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd88c773-2665-43ab-a9b4-e0f740fda3c7-combined-ca-bundle\") pod \"bd88c773-2665-43ab-a9b4-e0f740fda3c7\" (UID: \"bd88c773-2665-43ab-a9b4-e0f740fda3c7\") " Jan 28 18:58:24 crc kubenswrapper[4721]: I0128 18:58:24.438208 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/bd88c773-2665-43ab-a9b4-e0f740fda3c7-sg-core-conf-yaml\") pod \"bd88c773-2665-43ab-a9b4-e0f740fda3c7\" (UID: \"bd88c773-2665-43ab-a9b4-e0f740fda3c7\") " Jan 28 18:58:24 crc kubenswrapper[4721]: I0128 18:58:24.438258 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vdw8z\" (UniqueName: \"kubernetes.io/projected/5ec7e3b7-65e9-4c19-9beb-438ac0303aab-kube-api-access-vdw8z\") pod \"5ec7e3b7-65e9-4c19-9beb-438ac0303aab\" (UID: \"5ec7e3b7-65e9-4c19-9beb-438ac0303aab\") " Jan 28 18:58:24 crc kubenswrapper[4721]: I0128 18:58:24.438869 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5ec7e3b7-65e9-4c19-9beb-438ac0303aab-config-data\") pod \"5ec7e3b7-65e9-4c19-9beb-438ac0303aab\" (UID: \"5ec7e3b7-65e9-4c19-9beb-438ac0303aab\") " Jan 28 18:58:24 crc kubenswrapper[4721]: I0128 18:58:24.438906 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bd88c773-2665-43ab-a9b4-e0f740fda3c7-config-data\") pod \"bd88c773-2665-43ab-a9b4-e0f740fda3c7\" (UID: \"bd88c773-2665-43ab-a9b4-e0f740fda3c7\") " Jan 28 18:58:24 crc kubenswrapper[4721]: I0128 18:58:24.438923 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5ec7e3b7-65e9-4c19-9beb-438ac0303aab-combined-ca-bundle\") pod \"5ec7e3b7-65e9-4c19-9beb-438ac0303aab\" (UID: \"5ec7e3b7-65e9-4c19-9beb-438ac0303aab\") " Jan 28 18:58:24 crc kubenswrapper[4721]: I0128 18:58:24.439119 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9x8zh\" (UniqueName: \"kubernetes.io/projected/bd88c773-2665-43ab-a9b4-e0f740fda3c7-kube-api-access-9x8zh\") pod \"bd88c773-2665-43ab-a9b4-e0f740fda3c7\" (UID: \"bd88c773-2665-43ab-a9b4-e0f740fda3c7\") " Jan 28 18:58:24 crc kubenswrapper[4721]: I0128 18:58:24.439121 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bd88c773-2665-43ab-a9b4-e0f740fda3c7-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "bd88c773-2665-43ab-a9b4-e0f740fda3c7" (UID: "bd88c773-2665-43ab-a9b4-e0f740fda3c7"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 18:58:24 crc kubenswrapper[4721]: I0128 18:58:24.439141 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bd88c773-2665-43ab-a9b4-e0f740fda3c7-log-httpd\") pod \"bd88c773-2665-43ab-a9b4-e0f740fda3c7\" (UID: \"bd88c773-2665-43ab-a9b4-e0f740fda3c7\") " Jan 28 18:58:24 crc kubenswrapper[4721]: I0128 18:58:24.439693 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5ec7e3b7-65e9-4c19-9beb-438ac0303aab-logs\") pod \"5ec7e3b7-65e9-4c19-9beb-438ac0303aab\" (UID: \"5ec7e3b7-65e9-4c19-9beb-438ac0303aab\") " Jan 28 18:58:24 crc kubenswrapper[4721]: I0128 18:58:24.439803 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bd88c773-2665-43ab-a9b4-e0f740fda3c7-scripts\") pod \"bd88c773-2665-43ab-a9b4-e0f740fda3c7\" (UID: \"bd88c773-2665-43ab-a9b4-e0f740fda3c7\") " Jan 28 18:58:24 crc kubenswrapper[4721]: I0128 18:58:24.439693 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bd88c773-2665-43ab-a9b4-e0f740fda3c7-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "bd88c773-2665-43ab-a9b4-e0f740fda3c7" (UID: "bd88c773-2665-43ab-a9b4-e0f740fda3c7"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 18:58:24 crc kubenswrapper[4721]: I0128 18:58:24.440224 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5ec7e3b7-65e9-4c19-9beb-438ac0303aab-logs" (OuterVolumeSpecName: "logs") pod "5ec7e3b7-65e9-4c19-9beb-438ac0303aab" (UID: "5ec7e3b7-65e9-4c19-9beb-438ac0303aab"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 18:58:24 crc kubenswrapper[4721]: I0128 18:58:24.440816 4721 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5ec7e3b7-65e9-4c19-9beb-438ac0303aab-logs\") on node \"crc\" DevicePath \"\"" Jan 28 18:58:24 crc kubenswrapper[4721]: I0128 18:58:24.440835 4721 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bd88c773-2665-43ab-a9b4-e0f740fda3c7-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 28 18:58:24 crc kubenswrapper[4721]: I0128 18:58:24.440844 4721 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bd88c773-2665-43ab-a9b4-e0f740fda3c7-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 28 18:58:24 crc kubenswrapper[4721]: I0128 18:58:24.441361 4721 generic.go:334] "Generic (PLEG): container finished" podID="5ec7e3b7-65e9-4c19-9beb-438ac0303aab" containerID="7fe3e4e1f563a9d55e3025b3489c7e3d2f059c276267ecf5d64ec23af5127896" exitCode=0 Jan 28 18:58:24 crc kubenswrapper[4721]: I0128 18:58:24.441403 4721 generic.go:334] "Generic (PLEG): container finished" podID="5ec7e3b7-65e9-4c19-9beb-438ac0303aab" containerID="f2a5dadd8f5033a8d2ae429e090b5983e199172790eaa5cc6b66e40140d36b56" exitCode=143 Jan 28 18:58:24 crc kubenswrapper[4721]: I0128 18:58:24.441458 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"5ec7e3b7-65e9-4c19-9beb-438ac0303aab","Type":"ContainerDied","Data":"7fe3e4e1f563a9d55e3025b3489c7e3d2f059c276267ecf5d64ec23af5127896"} Jan 28 18:58:24 crc kubenswrapper[4721]: I0128 18:58:24.441492 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"5ec7e3b7-65e9-4c19-9beb-438ac0303aab","Type":"ContainerDied","Data":"f2a5dadd8f5033a8d2ae429e090b5983e199172790eaa5cc6b66e40140d36b56"} Jan 28 18:58:24 crc kubenswrapper[4721]: I0128 18:58:24.441503 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"5ec7e3b7-65e9-4c19-9beb-438ac0303aab","Type":"ContainerDied","Data":"e542ad4a764c25af4ba2b1b51cb7eab0157c0079342fab2e8bbba7fe1261a4ba"} Jan 28 18:58:24 crc kubenswrapper[4721]: I0128 18:58:24.441519 4721 scope.go:117] "RemoveContainer" containerID="7fe3e4e1f563a9d55e3025b3489c7e3d2f059c276267ecf5d64ec23af5127896" Jan 28 18:58:24 crc kubenswrapper[4721]: I0128 18:58:24.441674 4721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 28 18:58:24 crc kubenswrapper[4721]: I0128 18:58:24.517419 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5ec7e3b7-65e9-4c19-9beb-438ac0303aab-kube-api-access-vdw8z" (OuterVolumeSpecName: "kube-api-access-vdw8z") pod "5ec7e3b7-65e9-4c19-9beb-438ac0303aab" (UID: "5ec7e3b7-65e9-4c19-9beb-438ac0303aab"). InnerVolumeSpecName "kube-api-access-vdw8z". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:58:24 crc kubenswrapper[4721]: I0128 18:58:24.517522 4721 generic.go:334] "Generic (PLEG): container finished" podID="bd88c773-2665-43ab-a9b4-e0f740fda3c7" containerID="16d9fa6893834f59dd9a671a85afa694bb7b58f37deacc02176d804c0ff30f43" exitCode=0 Jan 28 18:58:24 crc kubenswrapper[4721]: I0128 18:58:24.517604 4721 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 28 18:58:24 crc kubenswrapper[4721]: I0128 18:58:24.517642 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bd88c773-2665-43ab-a9b4-e0f740fda3c7","Type":"ContainerDied","Data":"16d9fa6893834f59dd9a671a85afa694bb7b58f37deacc02176d804c0ff30f43"} Jan 28 18:58:24 crc kubenswrapper[4721]: I0128 18:58:24.517671 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bd88c773-2665-43ab-a9b4-e0f740fda3c7","Type":"ContainerDied","Data":"ebdcd3c14f4bb81668a28d39c6f76cce862921042a7abb344db29bbf7de1b012"} Jan 28 18:58:24 crc kubenswrapper[4721]: I0128 18:58:24.532367 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bd88c773-2665-43ab-a9b4-e0f740fda3c7-scripts" (OuterVolumeSpecName: "scripts") pod "bd88c773-2665-43ab-a9b4-e0f740fda3c7" (UID: "bd88c773-2665-43ab-a9b4-e0f740fda3c7"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:58:24 crc kubenswrapper[4721]: I0128 18:58:24.534412 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd88c773-2665-43ab-a9b4-e0f740fda3c7-kube-api-access-9x8zh" (OuterVolumeSpecName: "kube-api-access-9x8zh") pod "bd88c773-2665-43ab-a9b4-e0f740fda3c7" (UID: "bd88c773-2665-43ab-a9b4-e0f740fda3c7"). InnerVolumeSpecName "kube-api-access-9x8zh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:58:24 crc kubenswrapper[4721]: I0128 18:58:24.542932 4721 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bd88c773-2665-43ab-a9b4-e0f740fda3c7-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 18:58:24 crc kubenswrapper[4721]: I0128 18:58:24.542971 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vdw8z\" (UniqueName: \"kubernetes.io/projected/5ec7e3b7-65e9-4c19-9beb-438ac0303aab-kube-api-access-vdw8z\") on node \"crc\" DevicePath \"\"" Jan 28 18:58:24 crc kubenswrapper[4721]: I0128 18:58:24.542981 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9x8zh\" (UniqueName: \"kubernetes.io/projected/bd88c773-2665-43ab-a9b4-e0f740fda3c7-kube-api-access-9x8zh\") on node \"crc\" DevicePath \"\"" Jan 28 18:58:24 crc kubenswrapper[4721]: I0128 18:58:24.548941 4721 generic.go:334] "Generic (PLEG): container finished" podID="42e66a8e-cc43-4a51-8a08-029ecf2ec8dd" containerID="6b4dbccaf25d5bf0d5ebfadb46828a6b590f666ecb7d9403d579203ce2bdfa14" exitCode=0 Jan 28 18:58:24 crc kubenswrapper[4721]: I0128 18:58:24.548981 4721 generic.go:334] "Generic (PLEG): container finished" podID="42e66a8e-cc43-4a51-8a08-029ecf2ec8dd" containerID="fff8b9d22337f0b2a2b88d3b6dd927185ac76f5b4e375c98d82f3c4e04c63ddd" exitCode=143 Jan 28 18:58:24 crc kubenswrapper[4721]: I0128 18:58:24.549037 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"42e66a8e-cc43-4a51-8a08-029ecf2ec8dd","Type":"ContainerDied","Data":"6b4dbccaf25d5bf0d5ebfadb46828a6b590f666ecb7d9403d579203ce2bdfa14"} Jan 28 18:58:24 crc kubenswrapper[4721]: I0128 18:58:24.549071 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"42e66a8e-cc43-4a51-8a08-029ecf2ec8dd","Type":"ContainerDied","Data":"fff8b9d22337f0b2a2b88d3b6dd927185ac76f5b4e375c98d82f3c4e04c63ddd"} Jan 28 18:58:24 crc kubenswrapper[4721]: I0128 18:58:24.566856 4721 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"083e541d-08f4-4dce-985e-341d17008dd4","Type":"ContainerDied","Data":"f4c61f308e2fd31785b53874608d2f8532e7a52a08a381f1c5bc42c3502aebe6"} Jan 28 18:58:24 crc kubenswrapper[4721]: I0128 18:58:24.566963 4721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Jan 28 18:58:24 crc kubenswrapper[4721]: I0128 18:58:24.600432 4721 scope.go:117] "RemoveContainer" containerID="f2a5dadd8f5033a8d2ae429e090b5983e199172790eaa5cc6b66e40140d36b56" Jan 28 18:58:24 crc kubenswrapper[4721]: I0128 18:58:24.732604 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 28 18:58:24 crc kubenswrapper[4721]: I0128 18:58:24.742109 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5ec7e3b7-65e9-4c19-9beb-438ac0303aab-config-data" (OuterVolumeSpecName: "config-data") pod "5ec7e3b7-65e9-4c19-9beb-438ac0303aab" (UID: "5ec7e3b7-65e9-4c19-9beb-438ac0303aab"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:58:24 crc kubenswrapper[4721]: I0128 18:58:24.753481 4721 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5ec7e3b7-65e9-4c19-9beb-438ac0303aab-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 18:58:24 crc kubenswrapper[4721]: I0128 18:58:24.758459 4721 scope.go:117] "RemoveContainer" containerID="7fe3e4e1f563a9d55e3025b3489c7e3d2f059c276267ecf5d64ec23af5127896" Jan 28 18:58:24 crc kubenswrapper[4721]: E0128 18:58:24.759233 4721 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7fe3e4e1f563a9d55e3025b3489c7e3d2f059c276267ecf5d64ec23af5127896\": container with ID starting with 7fe3e4e1f563a9d55e3025b3489c7e3d2f059c276267ecf5d64ec23af5127896 not found: ID does not exist" containerID="7fe3e4e1f563a9d55e3025b3489c7e3d2f059c276267ecf5d64ec23af5127896" Jan 28 18:58:24 crc kubenswrapper[4721]: I0128 18:58:24.759291 4721 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7fe3e4e1f563a9d55e3025b3489c7e3d2f059c276267ecf5d64ec23af5127896"} err="failed to get container status \"7fe3e4e1f563a9d55e3025b3489c7e3d2f059c276267ecf5d64ec23af5127896\": rpc error: code = NotFound desc = could not find container \"7fe3e4e1f563a9d55e3025b3489c7e3d2f059c276267ecf5d64ec23af5127896\": container with ID starting with 7fe3e4e1f563a9d55e3025b3489c7e3d2f059c276267ecf5d64ec23af5127896 not found: ID does not exist" Jan 28 18:58:24 crc kubenswrapper[4721]: I0128 18:58:24.759331 4721 scope.go:117] "RemoveContainer" containerID="f2a5dadd8f5033a8d2ae429e090b5983e199172790eaa5cc6b66e40140d36b56" Jan 28 18:58:24 crc kubenswrapper[4721]: E0128 18:58:24.760406 4721 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f2a5dadd8f5033a8d2ae429e090b5983e199172790eaa5cc6b66e40140d36b56\": container with ID starting with f2a5dadd8f5033a8d2ae429e090b5983e199172790eaa5cc6b66e40140d36b56 not found: ID does not exist" containerID="f2a5dadd8f5033a8d2ae429e090b5983e199172790eaa5cc6b66e40140d36b56" Jan 28 18:58:24 crc kubenswrapper[4721]: I0128 18:58:24.760452 4721 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f2a5dadd8f5033a8d2ae429e090b5983e199172790eaa5cc6b66e40140d36b56"} err="failed to get container status 
\"f2a5dadd8f5033a8d2ae429e090b5983e199172790eaa5cc6b66e40140d36b56\": rpc error: code = NotFound desc = could not find container \"f2a5dadd8f5033a8d2ae429e090b5983e199172790eaa5cc6b66e40140d36b56\": container with ID starting with f2a5dadd8f5033a8d2ae429e090b5983e199172790eaa5cc6b66e40140d36b56 not found: ID does not exist" Jan 28 18:58:24 crc kubenswrapper[4721]: I0128 18:58:24.760478 4721 scope.go:117] "RemoveContainer" containerID="7fe3e4e1f563a9d55e3025b3489c7e3d2f059c276267ecf5d64ec23af5127896" Jan 28 18:58:24 crc kubenswrapper[4721]: I0128 18:58:24.760994 4721 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7fe3e4e1f563a9d55e3025b3489c7e3d2f059c276267ecf5d64ec23af5127896"} err="failed to get container status \"7fe3e4e1f563a9d55e3025b3489c7e3d2f059c276267ecf5d64ec23af5127896\": rpc error: code = NotFound desc = could not find container \"7fe3e4e1f563a9d55e3025b3489c7e3d2f059c276267ecf5d64ec23af5127896\": container with ID starting with 7fe3e4e1f563a9d55e3025b3489c7e3d2f059c276267ecf5d64ec23af5127896 not found: ID does not exist" Jan 28 18:58:24 crc kubenswrapper[4721]: I0128 18:58:24.761019 4721 scope.go:117] "RemoveContainer" containerID="f2a5dadd8f5033a8d2ae429e090b5983e199172790eaa5cc6b66e40140d36b56" Jan 28 18:58:24 crc kubenswrapper[4721]: I0128 18:58:24.761424 4721 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f2a5dadd8f5033a8d2ae429e090b5983e199172790eaa5cc6b66e40140d36b56"} err="failed to get container status \"f2a5dadd8f5033a8d2ae429e090b5983e199172790eaa5cc6b66e40140d36b56\": rpc error: code = NotFound desc = could not find container \"f2a5dadd8f5033a8d2ae429e090b5983e199172790eaa5cc6b66e40140d36b56\": container with ID starting with f2a5dadd8f5033a8d2ae429e090b5983e199172790eaa5cc6b66e40140d36b56 not found: ID does not exist" Jan 28 18:58:24 crc kubenswrapper[4721]: I0128 18:58:24.761459 4721 scope.go:117] "RemoveContainer" containerID="51d2b16faa498da52e7da63f3733407f6b97819d405e5aa375a08a01b228f72e" Jan 28 18:58:24 crc kubenswrapper[4721]: I0128 18:58:24.786079 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bd88c773-2665-43ab-a9b4-e0f740fda3c7-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "bd88c773-2665-43ab-a9b4-e0f740fda3c7" (UID: "bd88c773-2665-43ab-a9b4-e0f740fda3c7"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:58:24 crc kubenswrapper[4721]: I0128 18:58:24.790881 4721 scope.go:117] "RemoveContainer" containerID="5cd7d09468c06345a0ec72a8cb77bed2790c5125162182f605ebbd3e36b7b177" Jan 28 18:58:24 crc kubenswrapper[4721]: I0128 18:58:24.796708 4721 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 28 18:58:24 crc kubenswrapper[4721]: I0128 18:58:24.848880 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5ec7e3b7-65e9-4c19-9beb-438ac0303aab-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5ec7e3b7-65e9-4c19-9beb-438ac0303aab" (UID: "5ec7e3b7-65e9-4c19-9beb-438ac0303aab"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:58:24 crc kubenswrapper[4721]: I0128 18:58:24.855399 4721 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/bd88c773-2665-43ab-a9b4-e0f740fda3c7-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 28 18:58:24 crc kubenswrapper[4721]: I0128 18:58:24.855435 4721 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5ec7e3b7-65e9-4c19-9beb-438ac0303aab-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 18:58:24 crc kubenswrapper[4721]: I0128 18:58:24.858216 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 28 18:58:24 crc kubenswrapper[4721]: E0128 18:58:24.858744 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bd88c773-2665-43ab-a9b4-e0f740fda3c7" containerName="ceilometer-notification-agent" Jan 28 18:58:24 crc kubenswrapper[4721]: I0128 18:58:24.858762 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd88c773-2665-43ab-a9b4-e0f740fda3c7" containerName="ceilometer-notification-agent" Jan 28 18:58:24 crc kubenswrapper[4721]: E0128 18:58:24.858778 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5ec7e3b7-65e9-4c19-9beb-438ac0303aab" containerName="nova-metadata-metadata" Jan 28 18:58:24 crc kubenswrapper[4721]: I0128 18:58:24.858785 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="5ec7e3b7-65e9-4c19-9beb-438ac0303aab" containerName="nova-metadata-metadata" Jan 28 18:58:24 crc kubenswrapper[4721]: E0128 18:58:24.858802 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bd88c773-2665-43ab-a9b4-e0f740fda3c7" containerName="proxy-httpd" Jan 28 18:58:24 crc kubenswrapper[4721]: I0128 18:58:24.858808 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd88c773-2665-43ab-a9b4-e0f740fda3c7" containerName="proxy-httpd" Jan 28 18:58:24 crc kubenswrapper[4721]: E0128 18:58:24.858829 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="083e541d-08f4-4dce-985e-341d17008dd4" containerName="nova-cell0-conductor-conductor" Jan 28 18:58:24 crc kubenswrapper[4721]: I0128 18:58:24.858835 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="083e541d-08f4-4dce-985e-341d17008dd4" containerName="nova-cell0-conductor-conductor" Jan 28 18:58:24 crc kubenswrapper[4721]: E0128 18:58:24.858851 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bd88c773-2665-43ab-a9b4-e0f740fda3c7" containerName="ceilometer-central-agent" Jan 28 18:58:24 crc kubenswrapper[4721]: I0128 18:58:24.858858 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd88c773-2665-43ab-a9b4-e0f740fda3c7" containerName="ceilometer-central-agent" Jan 28 18:58:24 crc kubenswrapper[4721]: E0128 18:58:24.858876 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bd88c773-2665-43ab-a9b4-e0f740fda3c7" containerName="sg-core" Jan 28 18:58:24 crc kubenswrapper[4721]: I0128 18:58:24.858882 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd88c773-2665-43ab-a9b4-e0f740fda3c7" containerName="sg-core" Jan 28 18:58:24 crc kubenswrapper[4721]: E0128 18:58:24.858894 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5ec7e3b7-65e9-4c19-9beb-438ac0303aab" containerName="nova-metadata-log" Jan 28 18:58:24 crc kubenswrapper[4721]: I0128 18:58:24.858904 4721 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="5ec7e3b7-65e9-4c19-9beb-438ac0303aab" containerName="nova-metadata-log" Jan 28 18:58:24 crc kubenswrapper[4721]: I0128 18:58:24.859109 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="5ec7e3b7-65e9-4c19-9beb-438ac0303aab" containerName="nova-metadata-log" Jan 28 18:58:24 crc kubenswrapper[4721]: I0128 18:58:24.859124 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="bd88c773-2665-43ab-a9b4-e0f740fda3c7" containerName="proxy-httpd" Jan 28 18:58:24 crc kubenswrapper[4721]: I0128 18:58:24.859136 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="bd88c773-2665-43ab-a9b4-e0f740fda3c7" containerName="ceilometer-notification-agent" Jan 28 18:58:24 crc kubenswrapper[4721]: I0128 18:58:24.859149 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="083e541d-08f4-4dce-985e-341d17008dd4" containerName="nova-cell0-conductor-conductor" Jan 28 18:58:24 crc kubenswrapper[4721]: I0128 18:58:24.859164 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="5ec7e3b7-65e9-4c19-9beb-438ac0303aab" containerName="nova-metadata-metadata" Jan 28 18:58:24 crc kubenswrapper[4721]: I0128 18:58:24.859184 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="bd88c773-2665-43ab-a9b4-e0f740fda3c7" containerName="sg-core" Jan 28 18:58:24 crc kubenswrapper[4721]: I0128 18:58:24.859194 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="bd88c773-2665-43ab-a9b4-e0f740fda3c7" containerName="ceilometer-central-agent" Jan 28 18:58:24 crc kubenswrapper[4721]: I0128 18:58:24.860013 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Jan 28 18:58:24 crc kubenswrapper[4721]: I0128 18:58:24.864593 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Jan 28 18:58:24 crc kubenswrapper[4721]: I0128 18:58:24.885777 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 28 18:58:24 crc kubenswrapper[4721]: I0128 18:58:24.902185 4721 scope.go:117] "RemoveContainer" containerID="16d9fa6893834f59dd9a671a85afa694bb7b58f37deacc02176d804c0ff30f43" Jan 28 18:58:24 crc kubenswrapper[4721]: I0128 18:58:24.909342 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bd88c773-2665-43ab-a9b4-e0f740fda3c7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "bd88c773-2665-43ab-a9b4-e0f740fda3c7" (UID: "bd88c773-2665-43ab-a9b4-e0f740fda3c7"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:58:24 crc kubenswrapper[4721]: I0128 18:58:24.910693 4721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 28 18:58:24 crc kubenswrapper[4721]: I0128 18:58:24.931427 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bd88c773-2665-43ab-a9b4-e0f740fda3c7-config-data" (OuterVolumeSpecName: "config-data") pod "bd88c773-2665-43ab-a9b4-e0f740fda3c7" (UID: "bd88c773-2665-43ab-a9b4-e0f740fda3c7"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:58:24 crc kubenswrapper[4721]: I0128 18:58:24.942234 4721 scope.go:117] "RemoveContainer" containerID="96350fa578c10e8810ee06cdbeadc483339a26faf1bd54638d3de9ca6206c90e" Jan 28 18:58:24 crc kubenswrapper[4721]: I0128 18:58:24.957564 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/42e66a8e-cc43-4a51-8a08-029ecf2ec8dd-config-data\") pod \"42e66a8e-cc43-4a51-8a08-029ecf2ec8dd\" (UID: \"42e66a8e-cc43-4a51-8a08-029ecf2ec8dd\") " Jan 28 18:58:24 crc kubenswrapper[4721]: I0128 18:58:24.957654 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zj88l\" (UniqueName: \"kubernetes.io/projected/42e66a8e-cc43-4a51-8a08-029ecf2ec8dd-kube-api-access-zj88l\") pod \"42e66a8e-cc43-4a51-8a08-029ecf2ec8dd\" (UID: \"42e66a8e-cc43-4a51-8a08-029ecf2ec8dd\") " Jan 28 18:58:24 crc kubenswrapper[4721]: I0128 18:58:24.957765 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/42e66a8e-cc43-4a51-8a08-029ecf2ec8dd-logs\") pod \"42e66a8e-cc43-4a51-8a08-029ecf2ec8dd\" (UID: \"42e66a8e-cc43-4a51-8a08-029ecf2ec8dd\") " Jan 28 18:58:24 crc kubenswrapper[4721]: I0128 18:58:24.957821 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/42e66a8e-cc43-4a51-8a08-029ecf2ec8dd-combined-ca-bundle\") pod \"42e66a8e-cc43-4a51-8a08-029ecf2ec8dd\" (UID: \"42e66a8e-cc43-4a51-8a08-029ecf2ec8dd\") " Jan 28 18:58:24 crc kubenswrapper[4721]: I0128 18:58:24.958398 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bqvww\" (UniqueName: \"kubernetes.io/projected/977400c1-f351-4271-b494-25c1bd6dd31f-kube-api-access-bqvww\") pod \"nova-cell0-conductor-0\" (UID: \"977400c1-f351-4271-b494-25c1bd6dd31f\") " pod="openstack/nova-cell0-conductor-0" Jan 28 18:58:24 crc kubenswrapper[4721]: I0128 18:58:24.958447 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/977400c1-f351-4271-b494-25c1bd6dd31f-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"977400c1-f351-4271-b494-25c1bd6dd31f\") " pod="openstack/nova-cell0-conductor-0" Jan 28 18:58:24 crc kubenswrapper[4721]: I0128 18:58:24.958556 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/977400c1-f351-4271-b494-25c1bd6dd31f-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"977400c1-f351-4271-b494-25c1bd6dd31f\") " pod="openstack/nova-cell0-conductor-0" Jan 28 18:58:24 crc kubenswrapper[4721]: I0128 18:58:24.958757 4721 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd88c773-2665-43ab-a9b4-e0f740fda3c7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 18:58:24 crc kubenswrapper[4721]: I0128 18:58:24.958775 4721 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bd88c773-2665-43ab-a9b4-e0f740fda3c7-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 18:58:24 crc kubenswrapper[4721]: I0128 18:58:24.959113 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/42e66a8e-cc43-4a51-8a08-029ecf2ec8dd-logs" (OuterVolumeSpecName: "logs") pod "42e66a8e-cc43-4a51-8a08-029ecf2ec8dd" (UID: "42e66a8e-cc43-4a51-8a08-029ecf2ec8dd"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 18:58:24 crc kubenswrapper[4721]: I0128 18:58:24.962271 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/42e66a8e-cc43-4a51-8a08-029ecf2ec8dd-kube-api-access-zj88l" (OuterVolumeSpecName: "kube-api-access-zj88l") pod "42e66a8e-cc43-4a51-8a08-029ecf2ec8dd" (UID: "42e66a8e-cc43-4a51-8a08-029ecf2ec8dd"). InnerVolumeSpecName "kube-api-access-zj88l". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:58:24 crc kubenswrapper[4721]: I0128 18:58:24.983188 4721 scope.go:117] "RemoveContainer" containerID="51d2b16faa498da52e7da63f3733407f6b97819d405e5aa375a08a01b228f72e" Jan 28 18:58:24 crc kubenswrapper[4721]: E0128 18:58:24.985514 4721 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"51d2b16faa498da52e7da63f3733407f6b97819d405e5aa375a08a01b228f72e\": container with ID starting with 51d2b16faa498da52e7da63f3733407f6b97819d405e5aa375a08a01b228f72e not found: ID does not exist" containerID="51d2b16faa498da52e7da63f3733407f6b97819d405e5aa375a08a01b228f72e" Jan 28 18:58:24 crc kubenswrapper[4721]: I0128 18:58:24.985768 4721 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"51d2b16faa498da52e7da63f3733407f6b97819d405e5aa375a08a01b228f72e"} err="failed to get container status \"51d2b16faa498da52e7da63f3733407f6b97819d405e5aa375a08a01b228f72e\": rpc error: code = NotFound desc = could not find container \"51d2b16faa498da52e7da63f3733407f6b97819d405e5aa375a08a01b228f72e\": container with ID starting with 51d2b16faa498da52e7da63f3733407f6b97819d405e5aa375a08a01b228f72e not found: ID does not exist" Jan 28 18:58:24 crc kubenswrapper[4721]: I0128 18:58:24.985868 4721 scope.go:117] "RemoveContainer" containerID="5cd7d09468c06345a0ec72a8cb77bed2790c5125162182f605ebbd3e36b7b177" Jan 28 18:58:24 crc kubenswrapper[4721]: E0128 18:58:24.990469 4721 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5cd7d09468c06345a0ec72a8cb77bed2790c5125162182f605ebbd3e36b7b177\": container with ID starting with 5cd7d09468c06345a0ec72a8cb77bed2790c5125162182f605ebbd3e36b7b177 not found: ID does not exist" containerID="5cd7d09468c06345a0ec72a8cb77bed2790c5125162182f605ebbd3e36b7b177" Jan 28 18:58:24 crc kubenswrapper[4721]: I0128 18:58:24.990541 4721 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5cd7d09468c06345a0ec72a8cb77bed2790c5125162182f605ebbd3e36b7b177"} err="failed to get container status \"5cd7d09468c06345a0ec72a8cb77bed2790c5125162182f605ebbd3e36b7b177\": rpc error: code = NotFound desc = could not find container \"5cd7d09468c06345a0ec72a8cb77bed2790c5125162182f605ebbd3e36b7b177\": container with ID starting with 5cd7d09468c06345a0ec72a8cb77bed2790c5125162182f605ebbd3e36b7b177 not found: ID does not exist" Jan 28 18:58:24 crc kubenswrapper[4721]: I0128 18:58:24.990579 4721 scope.go:117] "RemoveContainer" containerID="16d9fa6893834f59dd9a671a85afa694bb7b58f37deacc02176d804c0ff30f43" Jan 28 18:58:24 crc kubenswrapper[4721]: I0128 18:58:24.993292 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/42e66a8e-cc43-4a51-8a08-029ecf2ec8dd-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "42e66a8e-cc43-4a51-8a08-029ecf2ec8dd" (UID: "42e66a8e-cc43-4a51-8a08-029ecf2ec8dd"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:58:24 crc kubenswrapper[4721]: E0128 18:58:24.994610 4721 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"16d9fa6893834f59dd9a671a85afa694bb7b58f37deacc02176d804c0ff30f43\": container with ID starting with 16d9fa6893834f59dd9a671a85afa694bb7b58f37deacc02176d804c0ff30f43 not found: ID does not exist" containerID="16d9fa6893834f59dd9a671a85afa694bb7b58f37deacc02176d804c0ff30f43" Jan 28 18:58:24 crc kubenswrapper[4721]: I0128 18:58:24.994666 4721 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"16d9fa6893834f59dd9a671a85afa694bb7b58f37deacc02176d804c0ff30f43"} err="failed to get container status \"16d9fa6893834f59dd9a671a85afa694bb7b58f37deacc02176d804c0ff30f43\": rpc error: code = NotFound desc = could not find container \"16d9fa6893834f59dd9a671a85afa694bb7b58f37deacc02176d804c0ff30f43\": container with ID starting with 16d9fa6893834f59dd9a671a85afa694bb7b58f37deacc02176d804c0ff30f43 not found: ID does not exist" Jan 28 18:58:24 crc kubenswrapper[4721]: I0128 18:58:24.994700 4721 scope.go:117] "RemoveContainer" containerID="96350fa578c10e8810ee06cdbeadc483339a26faf1bd54638d3de9ca6206c90e" Jan 28 18:58:25 crc kubenswrapper[4721]: E0128 18:58:25.005878 4721 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"96350fa578c10e8810ee06cdbeadc483339a26faf1bd54638d3de9ca6206c90e\": container with ID starting with 96350fa578c10e8810ee06cdbeadc483339a26faf1bd54638d3de9ca6206c90e not found: ID does not exist" containerID="96350fa578c10e8810ee06cdbeadc483339a26faf1bd54638d3de9ca6206c90e" Jan 28 18:58:25 crc kubenswrapper[4721]: I0128 18:58:25.005945 4721 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"96350fa578c10e8810ee06cdbeadc483339a26faf1bd54638d3de9ca6206c90e"} err="failed to get container status \"96350fa578c10e8810ee06cdbeadc483339a26faf1bd54638d3de9ca6206c90e\": rpc error: code = NotFound desc = could not find container \"96350fa578c10e8810ee06cdbeadc483339a26faf1bd54638d3de9ca6206c90e\": container with ID starting with 96350fa578c10e8810ee06cdbeadc483339a26faf1bd54638d3de9ca6206c90e not found: ID does not exist" Jan 28 18:58:25 crc kubenswrapper[4721]: I0128 18:58:25.005980 4721 scope.go:117] "RemoveContainer" containerID="1965eb5aaec68f9c77fc69ae8ffbf7e22282387dd73b41fe9e2a44527bbc4007" Jan 28 18:58:25 crc kubenswrapper[4721]: I0128 18:58:25.012379 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/42e66a8e-cc43-4a51-8a08-029ecf2ec8dd-config-data" (OuterVolumeSpecName: "config-data") pod "42e66a8e-cc43-4a51-8a08-029ecf2ec8dd" (UID: "42e66a8e-cc43-4a51-8a08-029ecf2ec8dd"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:58:25 crc kubenswrapper[4721]: I0128 18:58:25.062012 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bqvww\" (UniqueName: \"kubernetes.io/projected/977400c1-f351-4271-b494-25c1bd6dd31f-kube-api-access-bqvww\") pod \"nova-cell0-conductor-0\" (UID: \"977400c1-f351-4271-b494-25c1bd6dd31f\") " pod="openstack/nova-cell0-conductor-0" Jan 28 18:58:25 crc kubenswrapper[4721]: I0128 18:58:25.062208 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/977400c1-f351-4271-b494-25c1bd6dd31f-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"977400c1-f351-4271-b494-25c1bd6dd31f\") " pod="openstack/nova-cell0-conductor-0" Jan 28 18:58:25 crc kubenswrapper[4721]: I0128 18:58:25.062474 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/977400c1-f351-4271-b494-25c1bd6dd31f-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"977400c1-f351-4271-b494-25c1bd6dd31f\") " pod="openstack/nova-cell0-conductor-0" Jan 28 18:58:25 crc kubenswrapper[4721]: I0128 18:58:25.062798 4721 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/42e66a8e-cc43-4a51-8a08-029ecf2ec8dd-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 18:58:25 crc kubenswrapper[4721]: I0128 18:58:25.062817 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zj88l\" (UniqueName: \"kubernetes.io/projected/42e66a8e-cc43-4a51-8a08-029ecf2ec8dd-kube-api-access-zj88l\") on node \"crc\" DevicePath \"\"" Jan 28 18:58:25 crc kubenswrapper[4721]: I0128 18:58:25.062833 4721 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/42e66a8e-cc43-4a51-8a08-029ecf2ec8dd-logs\") on node \"crc\" DevicePath \"\"" Jan 28 18:58:25 crc kubenswrapper[4721]: I0128 18:58:25.062845 4721 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/42e66a8e-cc43-4a51-8a08-029ecf2ec8dd-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 18:58:25 crc kubenswrapper[4721]: I0128 18:58:25.067291 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/977400c1-f351-4271-b494-25c1bd6dd31f-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"977400c1-f351-4271-b494-25c1bd6dd31f\") " pod="openstack/nova-cell0-conductor-0" Jan 28 18:58:25 crc kubenswrapper[4721]: I0128 18:58:25.068947 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/977400c1-f351-4271-b494-25c1bd6dd31f-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"977400c1-f351-4271-b494-25c1bd6dd31f\") " pod="openstack/nova-cell0-conductor-0" Jan 28 18:58:25 crc kubenswrapper[4721]: I0128 18:58:25.087630 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bqvww\" (UniqueName: \"kubernetes.io/projected/977400c1-f351-4271-b494-25c1bd6dd31f-kube-api-access-bqvww\") pod \"nova-cell0-conductor-0\" (UID: \"977400c1-f351-4271-b494-25c1bd6dd31f\") " pod="openstack/nova-cell0-conductor-0" Jan 28 18:58:25 crc kubenswrapper[4721]: I0128 18:58:25.191340 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Jan 28 18:58:25 crc kubenswrapper[4721]: I0128 18:58:25.212763 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 28 18:58:25 crc kubenswrapper[4721]: I0128 18:58:25.231727 4721 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 28 18:58:25 crc kubenswrapper[4721]: I0128 18:58:25.259221 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 28 18:58:25 crc kubenswrapper[4721]: I0128 18:58:25.270091 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 28 18:58:25 crc kubenswrapper[4721]: E0128 18:58:25.271099 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="42e66a8e-cc43-4a51-8a08-029ecf2ec8dd" containerName="nova-api-log" Jan 28 18:58:25 crc kubenswrapper[4721]: I0128 18:58:25.271239 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="42e66a8e-cc43-4a51-8a08-029ecf2ec8dd" containerName="nova-api-log" Jan 28 18:58:25 crc kubenswrapper[4721]: E0128 18:58:25.271334 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="42e66a8e-cc43-4a51-8a08-029ecf2ec8dd" containerName="nova-api-api" Jan 28 18:58:25 crc kubenswrapper[4721]: I0128 18:58:25.271439 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="42e66a8e-cc43-4a51-8a08-029ecf2ec8dd" containerName="nova-api-api" Jan 28 18:58:25 crc kubenswrapper[4721]: I0128 18:58:25.271789 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="42e66a8e-cc43-4a51-8a08-029ecf2ec8dd" containerName="nova-api-log" Jan 28 18:58:25 crc kubenswrapper[4721]: I0128 18:58:25.271888 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="42e66a8e-cc43-4a51-8a08-029ecf2ec8dd" containerName="nova-api-api" Jan 28 18:58:25 crc kubenswrapper[4721]: I0128 18:58:25.324039 4721 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Jan 28 18:58:25 crc kubenswrapper[4721]: I0128 18:58:25.324741 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 28 18:58:25 crc kubenswrapper[4721]: I0128 18:58:25.329618 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 28 18:58:25 crc kubenswrapper[4721]: I0128 18:58:25.332655 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 28 18:58:25 crc kubenswrapper[4721]: I0128 18:58:25.381013 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xdpt6\" (UniqueName: \"kubernetes.io/projected/4f529a56-ccec-4eed-9a56-094d3ada74a3-kube-api-access-xdpt6\") pod \"ceilometer-0\" (UID: \"4f529a56-ccec-4eed-9a56-094d3ada74a3\") " pod="openstack/ceilometer-0" Jan 28 18:58:25 crc kubenswrapper[4721]: I0128 18:58:25.402722 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4f529a56-ccec-4eed-9a56-094d3ada74a3-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"4f529a56-ccec-4eed-9a56-094d3ada74a3\") " pod="openstack/ceilometer-0" Jan 28 18:58:25 crc kubenswrapper[4721]: I0128 18:58:25.402989 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4f529a56-ccec-4eed-9a56-094d3ada74a3-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"4f529a56-ccec-4eed-9a56-094d3ada74a3\") " pod="openstack/ceilometer-0" Jan 28 18:58:25 crc kubenswrapper[4721]: I0128 18:58:25.403128 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4f529a56-ccec-4eed-9a56-094d3ada74a3-run-httpd\") pod \"ceilometer-0\" (UID: \"4f529a56-ccec-4eed-9a56-094d3ada74a3\") " pod="openstack/ceilometer-0" Jan 28 18:58:25 crc kubenswrapper[4721]: I0128 18:58:25.403163 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4f529a56-ccec-4eed-9a56-094d3ada74a3-scripts\") pod \"ceilometer-0\" (UID: \"4f529a56-ccec-4eed-9a56-094d3ada74a3\") " pod="openstack/ceilometer-0" Jan 28 18:58:25 crc kubenswrapper[4721]: I0128 18:58:25.403315 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4f529a56-ccec-4eed-9a56-094d3ada74a3-config-data\") pod \"ceilometer-0\" (UID: \"4f529a56-ccec-4eed-9a56-094d3ada74a3\") " pod="openstack/ceilometer-0" Jan 28 18:58:25 crc kubenswrapper[4721]: I0128 18:58:25.403390 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4f529a56-ccec-4eed-9a56-094d3ada74a3-log-httpd\") pod \"ceilometer-0\" (UID: \"4f529a56-ccec-4eed-9a56-094d3ada74a3\") " pod="openstack/ceilometer-0" Jan 28 18:58:25 crc kubenswrapper[4721]: I0128 18:58:25.431270 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 28 18:58:25 crc kubenswrapper[4721]: I0128 18:58:25.444793 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 28 18:58:25 crc kubenswrapper[4721]: I0128 18:58:25.446673 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 28 18:58:25 crc kubenswrapper[4721]: I0128 18:58:25.449969 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Jan 28 18:58:25 crc kubenswrapper[4721]: I0128 18:58:25.450021 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 28 18:58:25 crc kubenswrapper[4721]: I0128 18:58:25.459961 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 28 18:58:25 crc kubenswrapper[4721]: I0128 18:58:25.507377 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4f529a56-ccec-4eed-9a56-094d3ada74a3-log-httpd\") pod \"ceilometer-0\" (UID: \"4f529a56-ccec-4eed-9a56-094d3ada74a3\") " pod="openstack/ceilometer-0" Jan 28 18:58:25 crc kubenswrapper[4721]: I0128 18:58:25.507509 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/11f3cb78-241f-4c92-8d2e-0bca68c3a7f7-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"11f3cb78-241f-4c92-8d2e-0bca68c3a7f7\") " pod="openstack/nova-metadata-0" Jan 28 18:58:25 crc kubenswrapper[4721]: I0128 18:58:25.507655 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xdpt6\" (UniqueName: \"kubernetes.io/projected/4f529a56-ccec-4eed-9a56-094d3ada74a3-kube-api-access-xdpt6\") pod \"ceilometer-0\" (UID: \"4f529a56-ccec-4eed-9a56-094d3ada74a3\") " pod="openstack/ceilometer-0" Jan 28 18:58:25 crc kubenswrapper[4721]: I0128 18:58:25.507694 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4f529a56-ccec-4eed-9a56-094d3ada74a3-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"4f529a56-ccec-4eed-9a56-094d3ada74a3\") " pod="openstack/ceilometer-0" Jan 28 18:58:25 crc kubenswrapper[4721]: I0128 18:58:25.507715 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/11f3cb78-241f-4c92-8d2e-0bca68c3a7f7-logs\") pod \"nova-metadata-0\" (UID: \"11f3cb78-241f-4c92-8d2e-0bca68c3a7f7\") " pod="openstack/nova-metadata-0" Jan 28 18:58:25 crc kubenswrapper[4721]: I0128 18:58:25.507779 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g5crl\" (UniqueName: \"kubernetes.io/projected/11f3cb78-241f-4c92-8d2e-0bca68c3a7f7-kube-api-access-g5crl\") pod \"nova-metadata-0\" (UID: \"11f3cb78-241f-4c92-8d2e-0bca68c3a7f7\") " pod="openstack/nova-metadata-0" Jan 28 18:58:25 crc kubenswrapper[4721]: I0128 18:58:25.507842 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4f529a56-ccec-4eed-9a56-094d3ada74a3-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"4f529a56-ccec-4eed-9a56-094d3ada74a3\") " pod="openstack/ceilometer-0" Jan 28 18:58:25 crc kubenswrapper[4721]: I0128 18:58:25.507910 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4f529a56-ccec-4eed-9a56-094d3ada74a3-run-httpd\") pod \"ceilometer-0\" (UID: \"4f529a56-ccec-4eed-9a56-094d3ada74a3\") " pod="openstack/ceilometer-0" Jan 28 18:58:25 crc kubenswrapper[4721]: I0128 
18:58:25.507950 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4f529a56-ccec-4eed-9a56-094d3ada74a3-scripts\") pod \"ceilometer-0\" (UID: \"4f529a56-ccec-4eed-9a56-094d3ada74a3\") " pod="openstack/ceilometer-0" Jan 28 18:58:25 crc kubenswrapper[4721]: I0128 18:58:25.507982 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/11f3cb78-241f-4c92-8d2e-0bca68c3a7f7-config-data\") pod \"nova-metadata-0\" (UID: \"11f3cb78-241f-4c92-8d2e-0bca68c3a7f7\") " pod="openstack/nova-metadata-0" Jan 28 18:58:25 crc kubenswrapper[4721]: I0128 18:58:25.508006 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4f529a56-ccec-4eed-9a56-094d3ada74a3-log-httpd\") pod \"ceilometer-0\" (UID: \"4f529a56-ccec-4eed-9a56-094d3ada74a3\") " pod="openstack/ceilometer-0" Jan 28 18:58:25 crc kubenswrapper[4721]: I0128 18:58:25.508046 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/11f3cb78-241f-4c92-8d2e-0bca68c3a7f7-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"11f3cb78-241f-4c92-8d2e-0bca68c3a7f7\") " pod="openstack/nova-metadata-0" Jan 28 18:58:25 crc kubenswrapper[4721]: I0128 18:58:25.508117 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4f529a56-ccec-4eed-9a56-094d3ada74a3-config-data\") pod \"ceilometer-0\" (UID: \"4f529a56-ccec-4eed-9a56-094d3ada74a3\") " pod="openstack/ceilometer-0" Jan 28 18:58:25 crc kubenswrapper[4721]: I0128 18:58:25.512436 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4f529a56-ccec-4eed-9a56-094d3ada74a3-run-httpd\") pod \"ceilometer-0\" (UID: \"4f529a56-ccec-4eed-9a56-094d3ada74a3\") " pod="openstack/ceilometer-0" Jan 28 18:58:25 crc kubenswrapper[4721]: I0128 18:58:25.515220 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4f529a56-ccec-4eed-9a56-094d3ada74a3-config-data\") pod \"ceilometer-0\" (UID: \"4f529a56-ccec-4eed-9a56-094d3ada74a3\") " pod="openstack/ceilometer-0" Jan 28 18:58:25 crc kubenswrapper[4721]: I0128 18:58:25.515241 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4f529a56-ccec-4eed-9a56-094d3ada74a3-scripts\") pod \"ceilometer-0\" (UID: \"4f529a56-ccec-4eed-9a56-094d3ada74a3\") " pod="openstack/ceilometer-0" Jan 28 18:58:25 crc kubenswrapper[4721]: I0128 18:58:25.516901 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4f529a56-ccec-4eed-9a56-094d3ada74a3-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"4f529a56-ccec-4eed-9a56-094d3ada74a3\") " pod="openstack/ceilometer-0" Jan 28 18:58:25 crc kubenswrapper[4721]: I0128 18:58:25.518620 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4f529a56-ccec-4eed-9a56-094d3ada74a3-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"4f529a56-ccec-4eed-9a56-094d3ada74a3\") " pod="openstack/ceilometer-0" Jan 28 18:58:25 crc kubenswrapper[4721]: I0128 18:58:25.530707 4721 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-xdpt6\" (UniqueName: \"kubernetes.io/projected/4f529a56-ccec-4eed-9a56-094d3ada74a3-kube-api-access-xdpt6\") pod \"ceilometer-0\" (UID: \"4f529a56-ccec-4eed-9a56-094d3ada74a3\") " pod="openstack/ceilometer-0" Jan 28 18:58:25 crc kubenswrapper[4721]: I0128 18:58:25.551346 4721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="083e541d-08f4-4dce-985e-341d17008dd4" path="/var/lib/kubelet/pods/083e541d-08f4-4dce-985e-341d17008dd4/volumes" Jan 28 18:58:25 crc kubenswrapper[4721]: I0128 18:58:25.551951 4721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5ec7e3b7-65e9-4c19-9beb-438ac0303aab" path="/var/lib/kubelet/pods/5ec7e3b7-65e9-4c19-9beb-438ac0303aab/volumes" Jan 28 18:58:25 crc kubenswrapper[4721]: I0128 18:58:25.552569 4721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd88c773-2665-43ab-a9b4-e0f740fda3c7" path="/var/lib/kubelet/pods/bd88c773-2665-43ab-a9b4-e0f740fda3c7/volumes" Jan 28 18:58:25 crc kubenswrapper[4721]: I0128 18:58:25.606268 4721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 28 18:58:25 crc kubenswrapper[4721]: I0128 18:58:25.606343 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"42e66a8e-cc43-4a51-8a08-029ecf2ec8dd","Type":"ContainerDied","Data":"0e9e92d12172f3cf5187440a5f3ba0236222ab572811800132fccc10917c880d"} Jan 28 18:58:25 crc kubenswrapper[4721]: I0128 18:58:25.606418 4721 scope.go:117] "RemoveContainer" containerID="6b4dbccaf25d5bf0d5ebfadb46828a6b590f666ecb7d9403d579203ce2bdfa14" Jan 28 18:58:25 crc kubenswrapper[4721]: I0128 18:58:25.610556 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g5crl\" (UniqueName: \"kubernetes.io/projected/11f3cb78-241f-4c92-8d2e-0bca68c3a7f7-kube-api-access-g5crl\") pod \"nova-metadata-0\" (UID: \"11f3cb78-241f-4c92-8d2e-0bca68c3a7f7\") " pod="openstack/nova-metadata-0" Jan 28 18:58:25 crc kubenswrapper[4721]: I0128 18:58:25.610812 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/11f3cb78-241f-4c92-8d2e-0bca68c3a7f7-config-data\") pod \"nova-metadata-0\" (UID: \"11f3cb78-241f-4c92-8d2e-0bca68c3a7f7\") " pod="openstack/nova-metadata-0" Jan 28 18:58:25 crc kubenswrapper[4721]: I0128 18:58:25.610903 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/11f3cb78-241f-4c92-8d2e-0bca68c3a7f7-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"11f3cb78-241f-4c92-8d2e-0bca68c3a7f7\") " pod="openstack/nova-metadata-0" Jan 28 18:58:25 crc kubenswrapper[4721]: I0128 18:58:25.611089 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/11f3cb78-241f-4c92-8d2e-0bca68c3a7f7-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"11f3cb78-241f-4c92-8d2e-0bca68c3a7f7\") " pod="openstack/nova-metadata-0" Jan 28 18:58:25 crc kubenswrapper[4721]: I0128 18:58:25.612878 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/11f3cb78-241f-4c92-8d2e-0bca68c3a7f7-logs\") pod \"nova-metadata-0\" (UID: \"11f3cb78-241f-4c92-8d2e-0bca68c3a7f7\") " pod="openstack/nova-metadata-0" Jan 28 18:58:25 crc kubenswrapper[4721]: I0128 18:58:25.613356 4721 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/11f3cb78-241f-4c92-8d2e-0bca68c3a7f7-logs\") pod \"nova-metadata-0\" (UID: \"11f3cb78-241f-4c92-8d2e-0bca68c3a7f7\") " pod="openstack/nova-metadata-0" Jan 28 18:58:25 crc kubenswrapper[4721]: I0128 18:58:25.618025 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/11f3cb78-241f-4c92-8d2e-0bca68c3a7f7-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"11f3cb78-241f-4c92-8d2e-0bca68c3a7f7\") " pod="openstack/nova-metadata-0" Jan 28 18:58:25 crc kubenswrapper[4721]: I0128 18:58:25.634978 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/11f3cb78-241f-4c92-8d2e-0bca68c3a7f7-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"11f3cb78-241f-4c92-8d2e-0bca68c3a7f7\") " pod="openstack/nova-metadata-0" Jan 28 18:58:25 crc kubenswrapper[4721]: I0128 18:58:25.635580 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g5crl\" (UniqueName: \"kubernetes.io/projected/11f3cb78-241f-4c92-8d2e-0bca68c3a7f7-kube-api-access-g5crl\") pod \"nova-metadata-0\" (UID: \"11f3cb78-241f-4c92-8d2e-0bca68c3a7f7\") " pod="openstack/nova-metadata-0" Jan 28 18:58:25 crc kubenswrapper[4721]: I0128 18:58:25.635681 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/11f3cb78-241f-4c92-8d2e-0bca68c3a7f7-config-data\") pod \"nova-metadata-0\" (UID: \"11f3cb78-241f-4c92-8d2e-0bca68c3a7f7\") " pod="openstack/nova-metadata-0" Jan 28 18:58:25 crc kubenswrapper[4721]: I0128 18:58:25.647574 4721 scope.go:117] "RemoveContainer" containerID="fff8b9d22337f0b2a2b88d3b6dd927185ac76f5b4e375c98d82f3c4e04c63ddd" Jan 28 18:58:25 crc kubenswrapper[4721]: I0128 18:58:25.663255 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 28 18:58:25 crc kubenswrapper[4721]: I0128 18:58:25.694233 4721 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 28 18:58:25 crc kubenswrapper[4721]: I0128 18:58:25.700746 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 28 18:58:25 crc kubenswrapper[4721]: I0128 18:58:25.711269 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 28 18:58:25 crc kubenswrapper[4721]: I0128 18:58:25.715440 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 28 18:58:25 crc kubenswrapper[4721]: I0128 18:58:25.715490 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 28 18:58:25 crc kubenswrapper[4721]: I0128 18:58:25.739487 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 28 18:58:25 crc kubenswrapper[4721]: I0128 18:58:25.772819 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 28 18:58:25 crc kubenswrapper[4721]: I0128 18:58:25.840384 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a7a754e9-d2f7-4f43-ace9-cdc9fe613ca2-config-data\") pod \"nova-api-0\" (UID: \"a7a754e9-d2f7-4f43-ace9-cdc9fe613ca2\") " pod="openstack/nova-api-0" Jan 28 18:58:25 crc kubenswrapper[4721]: I0128 18:58:25.840426 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j29g6\" (UniqueName: \"kubernetes.io/projected/a7a754e9-d2f7-4f43-ace9-cdc9fe613ca2-kube-api-access-j29g6\") pod \"nova-api-0\" (UID: \"a7a754e9-d2f7-4f43-ace9-cdc9fe613ca2\") " pod="openstack/nova-api-0" Jan 28 18:58:25 crc kubenswrapper[4721]: I0128 18:58:25.840500 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a7a754e9-d2f7-4f43-ace9-cdc9fe613ca2-logs\") pod \"nova-api-0\" (UID: \"a7a754e9-d2f7-4f43-ace9-cdc9fe613ca2\") " pod="openstack/nova-api-0" Jan 28 18:58:25 crc kubenswrapper[4721]: I0128 18:58:25.840538 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a7a754e9-d2f7-4f43-ace9-cdc9fe613ca2-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"a7a754e9-d2f7-4f43-ace9-cdc9fe613ca2\") " pod="openstack/nova-api-0" Jan 28 18:58:25 crc kubenswrapper[4721]: I0128 18:58:25.856616 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 28 18:58:25 crc kubenswrapper[4721]: I0128 18:58:25.943709 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a7a754e9-d2f7-4f43-ace9-cdc9fe613ca2-config-data\") pod \"nova-api-0\" (UID: \"a7a754e9-d2f7-4f43-ace9-cdc9fe613ca2\") " pod="openstack/nova-api-0" Jan 28 18:58:25 crc kubenswrapper[4721]: I0128 18:58:25.944980 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j29g6\" (UniqueName: \"kubernetes.io/projected/a7a754e9-d2f7-4f43-ace9-cdc9fe613ca2-kube-api-access-j29g6\") pod \"nova-api-0\" (UID: \"a7a754e9-d2f7-4f43-ace9-cdc9fe613ca2\") " pod="openstack/nova-api-0" Jan 28 18:58:25 crc kubenswrapper[4721]: I0128 18:58:25.945137 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a7a754e9-d2f7-4f43-ace9-cdc9fe613ca2-logs\") pod \"nova-api-0\" (UID: \"a7a754e9-d2f7-4f43-ace9-cdc9fe613ca2\") " pod="openstack/nova-api-0" Jan 28 18:58:25 crc kubenswrapper[4721]: I0128 18:58:25.945270 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a7a754e9-d2f7-4f43-ace9-cdc9fe613ca2-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"a7a754e9-d2f7-4f43-ace9-cdc9fe613ca2\") " pod="openstack/nova-api-0" Jan 28 18:58:25 crc kubenswrapper[4721]: I0128 18:58:25.954831 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a7a754e9-d2f7-4f43-ace9-cdc9fe613ca2-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"a7a754e9-d2f7-4f43-ace9-cdc9fe613ca2\") " pod="openstack/nova-api-0" Jan 28 18:58:25 crc kubenswrapper[4721]: I0128 18:58:25.958661 4721 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a7a754e9-d2f7-4f43-ace9-cdc9fe613ca2-logs\") pod \"nova-api-0\" (UID: \"a7a754e9-d2f7-4f43-ace9-cdc9fe613ca2\") " pod="openstack/nova-api-0" Jan 28 18:58:25 crc kubenswrapper[4721]: I0128 18:58:25.997027 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a7a754e9-d2f7-4f43-ace9-cdc9fe613ca2-config-data\") pod \"nova-api-0\" (UID: \"a7a754e9-d2f7-4f43-ace9-cdc9fe613ca2\") " pod="openstack/nova-api-0" Jan 28 18:58:26 crc kubenswrapper[4721]: I0128 18:58:26.021918 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j29g6\" (UniqueName: \"kubernetes.io/projected/a7a754e9-d2f7-4f43-ace9-cdc9fe613ca2-kube-api-access-j29g6\") pod \"nova-api-0\" (UID: \"a7a754e9-d2f7-4f43-ace9-cdc9fe613ca2\") " pod="openstack/nova-api-0" Jan 28 18:58:26 crc kubenswrapper[4721]: I0128 18:58:26.117301 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 28 18:58:26 crc kubenswrapper[4721]: I0128 18:58:26.599296 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 28 18:58:26 crc kubenswrapper[4721]: I0128 18:58:26.633011 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"977400c1-f351-4271-b494-25c1bd6dd31f","Type":"ContainerStarted","Data":"2cbbb8b3b0eb6ce319faaa7325b4424a2dbf88d4a06ed0bd04eba2ce338e16ee"} Jan 28 18:58:26 crc kubenswrapper[4721]: I0128 18:58:26.633384 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"977400c1-f351-4271-b494-25c1bd6dd31f","Type":"ContainerStarted","Data":"db72cee4a444484fa33627cb4de8f9680d32f0e0236e4e7b277034d9615a2d6d"} Jan 28 18:58:26 crc kubenswrapper[4721]: I0128 18:58:26.634036 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Jan 28 18:58:26 crc kubenswrapper[4721]: I0128 18:58:26.635304 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4f529a56-ccec-4eed-9a56-094d3ada74a3","Type":"ContainerStarted","Data":"0e8abc0859bacd8d4380311bca72de563baae8b8cbe87b10144b896384888ab2"} Jan 28 18:58:26 crc kubenswrapper[4721]: I0128 18:58:26.663910 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.663886873 podStartE2EDuration="2.663886873s" podCreationTimestamp="2026-01-28 18:58:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:58:26.65060627 +0000 UTC m=+1472.375911830" watchObservedRunningTime="2026-01-28 18:58:26.663886873 +0000 UTC m=+1472.389192433" Jan 28 18:58:26 crc kubenswrapper[4721]: W0128 18:58:26.722058 4721 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod11f3cb78_241f_4c92_8d2e_0bca68c3a7f7.slice/crio-a6c922533a6b5e7efba95549240d6c0349f04423d876236681bb022267f2adbe WatchSource:0}: Error finding container a6c922533a6b5e7efba95549240d6c0349f04423d876236681bb022267f2adbe: Status 404 returned error can't find the container with id a6c922533a6b5e7efba95549240d6c0349f04423d876236681bb022267f2adbe Jan 28 18:58:26 crc kubenswrapper[4721]: I0128 18:58:26.722341 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/nova-metadata-0"] Jan 28 18:58:26 crc kubenswrapper[4721]: I0128 18:58:26.749404 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Jan 28 18:58:26 crc kubenswrapper[4721]: I0128 18:58:26.768294 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-78cd565959-9cwzn" Jan 28 18:58:26 crc kubenswrapper[4721]: I0128 18:58:26.826805 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Jan 28 18:58:26 crc kubenswrapper[4721]: I0128 18:58:26.838770 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-67bdc55879-lkzxk"] Jan 28 18:58:26 crc kubenswrapper[4721]: I0128 18:58:26.839126 4721 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-67bdc55879-lkzxk" podUID="9b025d86-6a2c-457b-a88d-b697dabc2d7b" containerName="dnsmasq-dns" containerID="cri-o://ed72807967b0f6393d5dc1397302fdcd534baedb2d5e025c40b8b2b1d3ce949f" gracePeriod=10 Jan 28 18:58:26 crc kubenswrapper[4721]: I0128 18:58:26.853964 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 28 18:58:26 crc kubenswrapper[4721]: W0128 18:58:26.953854 4721 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda7a754e9_d2f7_4f43_ace9_cdc9fe613ca2.slice/crio-0b36fe5a4ec74181c68f580aa464a22e58eda141541b1a06f4013e2eb89b44b9 WatchSource:0}: Error finding container 0b36fe5a4ec74181c68f580aa464a22e58eda141541b1a06f4013e2eb89b44b9: Status 404 returned error can't find the container with id 0b36fe5a4ec74181c68f580aa464a22e58eda141541b1a06f4013e2eb89b44b9 Jan 28 18:58:27 crc kubenswrapper[4721]: I0128 18:58:27.571389 4721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="42e66a8e-cc43-4a51-8a08-029ecf2ec8dd" path="/var/lib/kubelet/pods/42e66a8e-cc43-4a51-8a08-029ecf2ec8dd/volumes" Jan 28 18:58:27 crc kubenswrapper[4721]: I0128 18:58:27.629925 4721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-67bdc55879-lkzxk" Jan 28 18:58:27 crc kubenswrapper[4721]: I0128 18:58:27.687516 4721 generic.go:334] "Generic (PLEG): container finished" podID="9b025d86-6a2c-457b-a88d-b697dabc2d7b" containerID="ed72807967b0f6393d5dc1397302fdcd534baedb2d5e025c40b8b2b1d3ce949f" exitCode=0 Jan 28 18:58:27 crc kubenswrapper[4721]: I0128 18:58:27.687606 4721 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-67bdc55879-lkzxk" Jan 28 18:58:27 crc kubenswrapper[4721]: I0128 18:58:27.687678 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-67bdc55879-lkzxk" event={"ID":"9b025d86-6a2c-457b-a88d-b697dabc2d7b","Type":"ContainerDied","Data":"ed72807967b0f6393d5dc1397302fdcd534baedb2d5e025c40b8b2b1d3ce949f"} Jan 28 18:58:27 crc kubenswrapper[4721]: I0128 18:58:27.687718 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-67bdc55879-lkzxk" event={"ID":"9b025d86-6a2c-457b-a88d-b697dabc2d7b","Type":"ContainerDied","Data":"098780222b7d6da97b376126b343886d8d5b1d0569ea49c8eef10cadc9407b6c"} Jan 28 18:58:27 crc kubenswrapper[4721]: I0128 18:58:27.687774 4721 scope.go:117] "RemoveContainer" containerID="ed72807967b0f6393d5dc1397302fdcd534baedb2d5e025c40b8b2b1d3ce949f" Jan 28 18:58:27 crc kubenswrapper[4721]: I0128 18:58:27.704496 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"11f3cb78-241f-4c92-8d2e-0bca68c3a7f7","Type":"ContainerStarted","Data":"44fd2d4518f998b17b59900ae1c655ecd713f901971a38528d4677bd09299582"} Jan 28 18:58:27 crc kubenswrapper[4721]: I0128 18:58:27.704548 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"11f3cb78-241f-4c92-8d2e-0bca68c3a7f7","Type":"ContainerStarted","Data":"7d75ddaac91c08bfa0369d3822159d3952e23c3379a9366a637a34ff690ee3a9"} Jan 28 18:58:27 crc kubenswrapper[4721]: I0128 18:58:27.704560 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"11f3cb78-241f-4c92-8d2e-0bca68c3a7f7","Type":"ContainerStarted","Data":"a6c922533a6b5e7efba95549240d6c0349f04423d876236681bb022267f2adbe"} Jan 28 18:58:27 crc kubenswrapper[4721]: I0128 18:58:27.708294 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jlr2g\" (UniqueName: \"kubernetes.io/projected/9b025d86-6a2c-457b-a88d-b697dabc2d7b-kube-api-access-jlr2g\") pod \"9b025d86-6a2c-457b-a88d-b697dabc2d7b\" (UID: \"9b025d86-6a2c-457b-a88d-b697dabc2d7b\") " Jan 28 18:58:27 crc kubenswrapper[4721]: I0128 18:58:27.708343 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9b025d86-6a2c-457b-a88d-b697dabc2d7b-dns-svc\") pod \"9b025d86-6a2c-457b-a88d-b697dabc2d7b\" (UID: \"9b025d86-6a2c-457b-a88d-b697dabc2d7b\") " Jan 28 18:58:27 crc kubenswrapper[4721]: I0128 18:58:27.708468 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9b025d86-6a2c-457b-a88d-b697dabc2d7b-ovsdbserver-nb\") pod \"9b025d86-6a2c-457b-a88d-b697dabc2d7b\" (UID: \"9b025d86-6a2c-457b-a88d-b697dabc2d7b\") " Jan 28 18:58:27 crc kubenswrapper[4721]: I0128 18:58:27.708498 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9b025d86-6a2c-457b-a88d-b697dabc2d7b-ovsdbserver-sb\") pod \"9b025d86-6a2c-457b-a88d-b697dabc2d7b\" (UID: \"9b025d86-6a2c-457b-a88d-b697dabc2d7b\") " Jan 28 18:58:27 crc kubenswrapper[4721]: I0128 18:58:27.708538 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9b025d86-6a2c-457b-a88d-b697dabc2d7b-dns-swift-storage-0\") pod \"9b025d86-6a2c-457b-a88d-b697dabc2d7b\" (UID: 
\"9b025d86-6a2c-457b-a88d-b697dabc2d7b\") " Jan 28 18:58:27 crc kubenswrapper[4721]: I0128 18:58:27.708736 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9b025d86-6a2c-457b-a88d-b697dabc2d7b-config\") pod \"9b025d86-6a2c-457b-a88d-b697dabc2d7b\" (UID: \"9b025d86-6a2c-457b-a88d-b697dabc2d7b\") " Jan 28 18:58:27 crc kubenswrapper[4721]: I0128 18:58:27.714060 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"a7a754e9-d2f7-4f43-ace9-cdc9fe613ca2","Type":"ContainerStarted","Data":"8818abd956065b520e8378ab62ead74ede20a8cabf4263d6b313fb7d80392500"} Jan 28 18:58:27 crc kubenswrapper[4721]: I0128 18:58:27.714105 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"a7a754e9-d2f7-4f43-ace9-cdc9fe613ca2","Type":"ContainerStarted","Data":"0b36fe5a4ec74181c68f580aa464a22e58eda141541b1a06f4013e2eb89b44b9"} Jan 28 18:58:27 crc kubenswrapper[4721]: I0128 18:58:27.736813 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.7367871790000002 podStartE2EDuration="2.736787179s" podCreationTimestamp="2026-01-28 18:58:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:58:27.725591892 +0000 UTC m=+1473.450897452" watchObservedRunningTime="2026-01-28 18:58:27.736787179 +0000 UTC m=+1473.462092739" Jan 28 18:58:27 crc kubenswrapper[4721]: I0128 18:58:27.739400 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9b025d86-6a2c-457b-a88d-b697dabc2d7b-kube-api-access-jlr2g" (OuterVolumeSpecName: "kube-api-access-jlr2g") pod "9b025d86-6a2c-457b-a88d-b697dabc2d7b" (UID: "9b025d86-6a2c-457b-a88d-b697dabc2d7b"). InnerVolumeSpecName "kube-api-access-jlr2g". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:58:27 crc kubenswrapper[4721]: I0128 18:58:27.745707 4721 scope.go:117] "RemoveContainer" containerID="ddeff87eff47ea04259d6549f4c0a699ed0604b277ca4208bc4918f6e9689cfa" Jan 28 18:58:27 crc kubenswrapper[4721]: I0128 18:58:27.782055 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9b025d86-6a2c-457b-a88d-b697dabc2d7b-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "9b025d86-6a2c-457b-a88d-b697dabc2d7b" (UID: "9b025d86-6a2c-457b-a88d-b697dabc2d7b"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:58:27 crc kubenswrapper[4721]: I0128 18:58:27.802985 4721 scope.go:117] "RemoveContainer" containerID="ed72807967b0f6393d5dc1397302fdcd534baedb2d5e025c40b8b2b1d3ce949f" Jan 28 18:58:27 crc kubenswrapper[4721]: E0128 18:58:27.805613 4721 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ed72807967b0f6393d5dc1397302fdcd534baedb2d5e025c40b8b2b1d3ce949f\": container with ID starting with ed72807967b0f6393d5dc1397302fdcd534baedb2d5e025c40b8b2b1d3ce949f not found: ID does not exist" containerID="ed72807967b0f6393d5dc1397302fdcd534baedb2d5e025c40b8b2b1d3ce949f" Jan 28 18:58:27 crc kubenswrapper[4721]: I0128 18:58:27.805665 4721 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ed72807967b0f6393d5dc1397302fdcd534baedb2d5e025c40b8b2b1d3ce949f"} err="failed to get container status \"ed72807967b0f6393d5dc1397302fdcd534baedb2d5e025c40b8b2b1d3ce949f\": rpc error: code = NotFound desc = could not find container \"ed72807967b0f6393d5dc1397302fdcd534baedb2d5e025c40b8b2b1d3ce949f\": container with ID starting with ed72807967b0f6393d5dc1397302fdcd534baedb2d5e025c40b8b2b1d3ce949f not found: ID does not exist" Jan 28 18:58:27 crc kubenswrapper[4721]: I0128 18:58:27.805713 4721 scope.go:117] "RemoveContainer" containerID="ddeff87eff47ea04259d6549f4c0a699ed0604b277ca4208bc4918f6e9689cfa" Jan 28 18:58:27 crc kubenswrapper[4721]: E0128 18:58:27.806038 4721 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ddeff87eff47ea04259d6549f4c0a699ed0604b277ca4208bc4918f6e9689cfa\": container with ID starting with ddeff87eff47ea04259d6549f4c0a699ed0604b277ca4208bc4918f6e9689cfa not found: ID does not exist" containerID="ddeff87eff47ea04259d6549f4c0a699ed0604b277ca4208bc4918f6e9689cfa" Jan 28 18:58:27 crc kubenswrapper[4721]: I0128 18:58:27.806074 4721 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ddeff87eff47ea04259d6549f4c0a699ed0604b277ca4208bc4918f6e9689cfa"} err="failed to get container status \"ddeff87eff47ea04259d6549f4c0a699ed0604b277ca4208bc4918f6e9689cfa\": rpc error: code = NotFound desc = could not find container \"ddeff87eff47ea04259d6549f4c0a699ed0604b277ca4208bc4918f6e9689cfa\": container with ID starting with ddeff87eff47ea04259d6549f4c0a699ed0604b277ca4208bc4918f6e9689cfa not found: ID does not exist" Jan 28 18:58:27 crc kubenswrapper[4721]: I0128 18:58:27.808978 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9b025d86-6a2c-457b-a88d-b697dabc2d7b-config" (OuterVolumeSpecName: "config") pod "9b025d86-6a2c-457b-a88d-b697dabc2d7b" (UID: "9b025d86-6a2c-457b-a88d-b697dabc2d7b"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:58:27 crc kubenswrapper[4721]: I0128 18:58:27.811400 4721 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9b025d86-6a2c-457b-a88d-b697dabc2d7b-config\") on node \"crc\" DevicePath \"\"" Jan 28 18:58:27 crc kubenswrapper[4721]: I0128 18:58:27.811435 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jlr2g\" (UniqueName: \"kubernetes.io/projected/9b025d86-6a2c-457b-a88d-b697dabc2d7b-kube-api-access-jlr2g\") on node \"crc\" DevicePath \"\"" Jan 28 18:58:27 crc kubenswrapper[4721]: I0128 18:58:27.811450 4721 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9b025d86-6a2c-457b-a88d-b697dabc2d7b-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 28 18:58:27 crc kubenswrapper[4721]: I0128 18:58:27.818630 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9b025d86-6a2c-457b-a88d-b697dabc2d7b-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "9b025d86-6a2c-457b-a88d-b697dabc2d7b" (UID: "9b025d86-6a2c-457b-a88d-b697dabc2d7b"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:58:27 crc kubenswrapper[4721]: I0128 18:58:27.828436 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9b025d86-6a2c-457b-a88d-b697dabc2d7b-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "9b025d86-6a2c-457b-a88d-b697dabc2d7b" (UID: "9b025d86-6a2c-457b-a88d-b697dabc2d7b"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:58:27 crc kubenswrapper[4721]: I0128 18:58:27.838683 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9b025d86-6a2c-457b-a88d-b697dabc2d7b-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "9b025d86-6a2c-457b-a88d-b697dabc2d7b" (UID: "9b025d86-6a2c-457b-a88d-b697dabc2d7b"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:58:27 crc kubenswrapper[4721]: I0128 18:58:27.914090 4721 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9b025d86-6a2c-457b-a88d-b697dabc2d7b-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 28 18:58:27 crc kubenswrapper[4721]: I0128 18:58:27.914134 4721 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9b025d86-6a2c-457b-a88d-b697dabc2d7b-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 28 18:58:27 crc kubenswrapper[4721]: I0128 18:58:27.914149 4721 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9b025d86-6a2c-457b-a88d-b697dabc2d7b-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 28 18:58:28 crc kubenswrapper[4721]: I0128 18:58:28.140250 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-67bdc55879-lkzxk"] Jan 28 18:58:28 crc kubenswrapper[4721]: I0128 18:58:28.163225 4721 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-67bdc55879-lkzxk"] Jan 28 18:58:28 crc kubenswrapper[4721]: I0128 18:58:28.755819 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4f529a56-ccec-4eed-9a56-094d3ada74a3","Type":"ContainerStarted","Data":"f3b0ab3659ed247871aabb2df13241e794a5c4a42df0dd45504a3775be02575e"} Jan 28 18:58:28 crc kubenswrapper[4721]: I0128 18:58:28.756243 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4f529a56-ccec-4eed-9a56-094d3ada74a3","Type":"ContainerStarted","Data":"b0194738761963470ff9fa51ec111f82fdfec50b9012e36be2c4d645192b8b4f"} Jan 28 18:58:28 crc kubenswrapper[4721]: I0128 18:58:28.760915 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"a7a754e9-d2f7-4f43-ace9-cdc9fe613ca2","Type":"ContainerStarted","Data":"022e98632fa1587a568daddf247f3d09522dd5c52da4f9e1fbd734e546563fba"} Jan 28 18:58:28 crc kubenswrapper[4721]: I0128 18:58:28.789341 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=3.789318755 podStartE2EDuration="3.789318755s" podCreationTimestamp="2026-01-28 18:58:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:58:28.78037825 +0000 UTC m=+1474.505683810" watchObservedRunningTime="2026-01-28 18:58:28.789318755 +0000 UTC m=+1474.514624315" Jan 28 18:58:29 crc kubenswrapper[4721]: I0128 18:58:29.544068 4721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9b025d86-6a2c-457b-a88d-b697dabc2d7b" path="/var/lib/kubelet/pods/9b025d86-6a2c-457b-a88d-b697dabc2d7b/volumes" Jan 28 18:58:29 crc kubenswrapper[4721]: I0128 18:58:29.789259 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4f529a56-ccec-4eed-9a56-094d3ada74a3","Type":"ContainerStarted","Data":"9956ea493de3ce82097b98005fe8757edb8ee71f2d62b7f41ea5e4089f77cfb1"} Jan 28 18:58:30 crc kubenswrapper[4721]: I0128 18:58:30.773758 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 28 18:58:30 crc kubenswrapper[4721]: I0128 18:58:30.773825 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 28 18:58:30 crc 
kubenswrapper[4721]: I0128 18:58:30.799758 4721 generic.go:334] "Generic (PLEG): container finished" podID="fa94acc5-9ec9-4129-ac88-db06e56fa5e1" containerID="f37bc3c1b8fe009a164f59159e105ce9781f64e2db81f8802fe0c83ee99e7799" exitCode=0 Jan 28 18:58:30 crc kubenswrapper[4721]: I0128 18:58:30.799866 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-jhzxr" event={"ID":"fa94acc5-9ec9-4129-ac88-db06e56fa5e1","Type":"ContainerDied","Data":"f37bc3c1b8fe009a164f59159e105ce9781f64e2db81f8802fe0c83ee99e7799"} Jan 28 18:58:30 crc kubenswrapper[4721]: I0128 18:58:30.802081 4721 generic.go:334] "Generic (PLEG): container finished" podID="1a0545b1-8866-4f13-b0a4-3425a39e103d" containerID="c48c7d07a9d5bf6ea57ca99af75f3d29c355b924f6a3414c92fb6d5d564782ed" exitCode=0 Jan 28 18:58:30 crc kubenswrapper[4721]: I0128 18:58:30.802132 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-gnjqs" event={"ID":"1a0545b1-8866-4f13-b0a4-3425a39e103d","Type":"ContainerDied","Data":"c48c7d07a9d5bf6ea57ca99af75f3d29c355b924f6a3414c92fb6d5d564782ed"} Jan 28 18:58:31 crc kubenswrapper[4721]: I0128 18:58:31.225827 4721 patch_prober.go:28] interesting pod/machine-config-daemon-76rx2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 18:58:31 crc kubenswrapper[4721]: I0128 18:58:31.226477 4721 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-76rx2" podUID="6e3427a4-9a03-4a08-bf7f-7a5e96290ad6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 18:58:31 crc kubenswrapper[4721]: I0128 18:58:31.821559 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4f529a56-ccec-4eed-9a56-094d3ada74a3","Type":"ContainerStarted","Data":"037b5cbe0690165c05d05903478b57886af8c03439a3c5afaa03370a27e2c7b3"} Jan 28 18:58:31 crc kubenswrapper[4721]: I0128 18:58:31.822212 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 28 18:58:31 crc kubenswrapper[4721]: I0128 18:58:31.857199 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.332212334 podStartE2EDuration="6.857148913s" podCreationTimestamp="2026-01-28 18:58:25 +0000 UTC" firstStartedPulling="2026-01-28 18:58:26.607998442 +0000 UTC m=+1472.333304002" lastFinishedPulling="2026-01-28 18:58:31.132935021 +0000 UTC m=+1476.858240581" observedRunningTime="2026-01-28 18:58:31.845254764 +0000 UTC m=+1477.570560334" watchObservedRunningTime="2026-01-28 18:58:31.857148913 +0000 UTC m=+1477.582454473" Jan 28 18:58:32 crc kubenswrapper[4721]: I0128 18:58:32.417721 4721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-gnjqs" Jan 28 18:58:32 crc kubenswrapper[4721]: I0128 18:58:32.426065 4721 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-jhzxr" Jan 28 18:58:32 crc kubenswrapper[4721]: I0128 18:58:32.536189 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fa94acc5-9ec9-4129-ac88-db06e56fa5e1-scripts\") pod \"fa94acc5-9ec9-4129-ac88-db06e56fa5e1\" (UID: \"fa94acc5-9ec9-4129-ac88-db06e56fa5e1\") " Jan 28 18:58:32 crc kubenswrapper[4721]: I0128 18:58:32.536302 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qmmjq\" (UniqueName: \"kubernetes.io/projected/1a0545b1-8866-4f13-b0a4-3425a39e103d-kube-api-access-qmmjq\") pod \"1a0545b1-8866-4f13-b0a4-3425a39e103d\" (UID: \"1a0545b1-8866-4f13-b0a4-3425a39e103d\") " Jan 28 18:58:32 crc kubenswrapper[4721]: I0128 18:58:32.536336 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fa94acc5-9ec9-4129-ac88-db06e56fa5e1-config-data\") pod \"fa94acc5-9ec9-4129-ac88-db06e56fa5e1\" (UID: \"fa94acc5-9ec9-4129-ac88-db06e56fa5e1\") " Jan 28 18:58:32 crc kubenswrapper[4721]: I0128 18:58:32.536404 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1a0545b1-8866-4f13-b0a4-3425a39e103d-config-data\") pod \"1a0545b1-8866-4f13-b0a4-3425a39e103d\" (UID: \"1a0545b1-8866-4f13-b0a4-3425a39e103d\") " Jan 28 18:58:32 crc kubenswrapper[4721]: I0128 18:58:32.536426 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1a0545b1-8866-4f13-b0a4-3425a39e103d-combined-ca-bundle\") pod \"1a0545b1-8866-4f13-b0a4-3425a39e103d\" (UID: \"1a0545b1-8866-4f13-b0a4-3425a39e103d\") " Jan 28 18:58:32 crc kubenswrapper[4721]: I0128 18:58:32.536461 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa94acc5-9ec9-4129-ac88-db06e56fa5e1-combined-ca-bundle\") pod \"fa94acc5-9ec9-4129-ac88-db06e56fa5e1\" (UID: \"fa94acc5-9ec9-4129-ac88-db06e56fa5e1\") " Jan 28 18:58:32 crc kubenswrapper[4721]: I0128 18:58:32.536488 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z5s2q\" (UniqueName: \"kubernetes.io/projected/fa94acc5-9ec9-4129-ac88-db06e56fa5e1-kube-api-access-z5s2q\") pod \"fa94acc5-9ec9-4129-ac88-db06e56fa5e1\" (UID: \"fa94acc5-9ec9-4129-ac88-db06e56fa5e1\") " Jan 28 18:58:32 crc kubenswrapper[4721]: I0128 18:58:32.536567 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1a0545b1-8866-4f13-b0a4-3425a39e103d-scripts\") pod \"1a0545b1-8866-4f13-b0a4-3425a39e103d\" (UID: \"1a0545b1-8866-4f13-b0a4-3425a39e103d\") " Jan 28 18:58:32 crc kubenswrapper[4721]: I0128 18:58:32.543688 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1a0545b1-8866-4f13-b0a4-3425a39e103d-kube-api-access-qmmjq" (OuterVolumeSpecName: "kube-api-access-qmmjq") pod "1a0545b1-8866-4f13-b0a4-3425a39e103d" (UID: "1a0545b1-8866-4f13-b0a4-3425a39e103d"). InnerVolumeSpecName "kube-api-access-qmmjq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:58:32 crc kubenswrapper[4721]: I0128 18:58:32.544127 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1a0545b1-8866-4f13-b0a4-3425a39e103d-scripts" (OuterVolumeSpecName: "scripts") pod "1a0545b1-8866-4f13-b0a4-3425a39e103d" (UID: "1a0545b1-8866-4f13-b0a4-3425a39e103d"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:58:32 crc kubenswrapper[4721]: I0128 18:58:32.544351 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fa94acc5-9ec9-4129-ac88-db06e56fa5e1-kube-api-access-z5s2q" (OuterVolumeSpecName: "kube-api-access-z5s2q") pod "fa94acc5-9ec9-4129-ac88-db06e56fa5e1" (UID: "fa94acc5-9ec9-4129-ac88-db06e56fa5e1"). InnerVolumeSpecName "kube-api-access-z5s2q". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:58:32 crc kubenswrapper[4721]: I0128 18:58:32.544575 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fa94acc5-9ec9-4129-ac88-db06e56fa5e1-scripts" (OuterVolumeSpecName: "scripts") pod "fa94acc5-9ec9-4129-ac88-db06e56fa5e1" (UID: "fa94acc5-9ec9-4129-ac88-db06e56fa5e1"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:58:32 crc kubenswrapper[4721]: I0128 18:58:32.571073 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1a0545b1-8866-4f13-b0a4-3425a39e103d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1a0545b1-8866-4f13-b0a4-3425a39e103d" (UID: "1a0545b1-8866-4f13-b0a4-3425a39e103d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:58:32 crc kubenswrapper[4721]: I0128 18:58:32.572589 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fa94acc5-9ec9-4129-ac88-db06e56fa5e1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fa94acc5-9ec9-4129-ac88-db06e56fa5e1" (UID: "fa94acc5-9ec9-4129-ac88-db06e56fa5e1"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:58:32 crc kubenswrapper[4721]: I0128 18:58:32.576405 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1a0545b1-8866-4f13-b0a4-3425a39e103d-config-data" (OuterVolumeSpecName: "config-data") pod "1a0545b1-8866-4f13-b0a4-3425a39e103d" (UID: "1a0545b1-8866-4f13-b0a4-3425a39e103d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:58:32 crc kubenswrapper[4721]: I0128 18:58:32.584463 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fa94acc5-9ec9-4129-ac88-db06e56fa5e1-config-data" (OuterVolumeSpecName: "config-data") pod "fa94acc5-9ec9-4129-ac88-db06e56fa5e1" (UID: "fa94acc5-9ec9-4129-ac88-db06e56fa5e1"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:58:32 crc kubenswrapper[4721]: I0128 18:58:32.639207 4721 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1a0545b1-8866-4f13-b0a4-3425a39e103d-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 18:58:32 crc kubenswrapper[4721]: I0128 18:58:32.639242 4721 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fa94acc5-9ec9-4129-ac88-db06e56fa5e1-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 18:58:32 crc kubenswrapper[4721]: I0128 18:58:32.639252 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qmmjq\" (UniqueName: \"kubernetes.io/projected/1a0545b1-8866-4f13-b0a4-3425a39e103d-kube-api-access-qmmjq\") on node \"crc\" DevicePath \"\"" Jan 28 18:58:32 crc kubenswrapper[4721]: I0128 18:58:32.639265 4721 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fa94acc5-9ec9-4129-ac88-db06e56fa5e1-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 18:58:32 crc kubenswrapper[4721]: I0128 18:58:32.639275 4721 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1a0545b1-8866-4f13-b0a4-3425a39e103d-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 18:58:32 crc kubenswrapper[4721]: I0128 18:58:32.639285 4721 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1a0545b1-8866-4f13-b0a4-3425a39e103d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 18:58:32 crc kubenswrapper[4721]: I0128 18:58:32.639294 4721 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa94acc5-9ec9-4129-ac88-db06e56fa5e1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 18:58:32 crc kubenswrapper[4721]: I0128 18:58:32.639308 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z5s2q\" (UniqueName: \"kubernetes.io/projected/fa94acc5-9ec9-4129-ac88-db06e56fa5e1-kube-api-access-z5s2q\") on node \"crc\" DevicePath \"\"" Jan 28 18:58:32 crc kubenswrapper[4721]: I0128 18:58:32.839663 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-gnjqs" event={"ID":"1a0545b1-8866-4f13-b0a4-3425a39e103d","Type":"ContainerDied","Data":"e8bf92191342a10be8c3051651fc40be31879792cebc41e3a4c4b989e620235a"} Jan 28 18:58:32 crc kubenswrapper[4721]: I0128 18:58:32.839979 4721 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e8bf92191342a10be8c3051651fc40be31879792cebc41e3a4c4b989e620235a" Jan 28 18:58:32 crc kubenswrapper[4721]: I0128 18:58:32.840085 4721 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-gnjqs" Jan 28 18:58:32 crc kubenswrapper[4721]: I0128 18:58:32.843152 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-jhzxr" event={"ID":"fa94acc5-9ec9-4129-ac88-db06e56fa5e1","Type":"ContainerDied","Data":"75f264f2602339855514a95ed802d3f739f23396e590d138c2a898233eb547e5"} Jan 28 18:58:32 crc kubenswrapper[4721]: I0128 18:58:32.843211 4721 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="75f264f2602339855514a95ed802d3f739f23396e590d138c2a898233eb547e5" Jan 28 18:58:32 crc kubenswrapper[4721]: I0128 18:58:32.843546 4721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-jhzxr" Jan 28 18:58:32 crc kubenswrapper[4721]: I0128 18:58:32.929429 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 28 18:58:32 crc kubenswrapper[4721]: E0128 18:58:32.930088 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9b025d86-6a2c-457b-a88d-b697dabc2d7b" containerName="init" Jan 28 18:58:32 crc kubenswrapper[4721]: I0128 18:58:32.930116 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="9b025d86-6a2c-457b-a88d-b697dabc2d7b" containerName="init" Jan 28 18:58:32 crc kubenswrapper[4721]: E0128 18:58:32.930136 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1a0545b1-8866-4f13-b0a4-3425a39e103d" containerName="nova-manage" Jan 28 18:58:32 crc kubenswrapper[4721]: I0128 18:58:32.930145 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="1a0545b1-8866-4f13-b0a4-3425a39e103d" containerName="nova-manage" Jan 28 18:58:32 crc kubenswrapper[4721]: E0128 18:58:32.930159 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9b025d86-6a2c-457b-a88d-b697dabc2d7b" containerName="dnsmasq-dns" Jan 28 18:58:32 crc kubenswrapper[4721]: I0128 18:58:32.930273 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="9b025d86-6a2c-457b-a88d-b697dabc2d7b" containerName="dnsmasq-dns" Jan 28 18:58:32 crc kubenswrapper[4721]: E0128 18:58:32.930326 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fa94acc5-9ec9-4129-ac88-db06e56fa5e1" containerName="nova-cell1-conductor-db-sync" Jan 28 18:58:32 crc kubenswrapper[4721]: I0128 18:58:32.930336 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="fa94acc5-9ec9-4129-ac88-db06e56fa5e1" containerName="nova-cell1-conductor-db-sync" Jan 28 18:58:32 crc kubenswrapper[4721]: I0128 18:58:32.930663 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="1a0545b1-8866-4f13-b0a4-3425a39e103d" containerName="nova-manage" Jan 28 18:58:32 crc kubenswrapper[4721]: I0128 18:58:32.930696 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="9b025d86-6a2c-457b-a88d-b697dabc2d7b" containerName="dnsmasq-dns" Jan 28 18:58:32 crc kubenswrapper[4721]: I0128 18:58:32.930709 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="fa94acc5-9ec9-4129-ac88-db06e56fa5e1" containerName="nova-cell1-conductor-db-sync" Jan 28 18:58:32 crc kubenswrapper[4721]: I0128 18:58:32.931730 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Jan 28 18:58:32 crc kubenswrapper[4721]: I0128 18:58:32.938832 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Jan 28 18:58:32 crc kubenswrapper[4721]: I0128 18:58:32.942256 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 28 18:58:33 crc kubenswrapper[4721]: I0128 18:58:33.050709 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-22qxf\" (UniqueName: \"kubernetes.io/projected/d175789e-d718-4022-86ac-b8b1f9f1d40c-kube-api-access-22qxf\") pod \"nova-cell1-conductor-0\" (UID: \"d175789e-d718-4022-86ac-b8b1f9f1d40c\") " pod="openstack/nova-cell1-conductor-0" Jan 28 18:58:33 crc kubenswrapper[4721]: I0128 18:58:33.051258 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d175789e-d718-4022-86ac-b8b1f9f1d40c-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"d175789e-d718-4022-86ac-b8b1f9f1d40c\") " pod="openstack/nova-cell1-conductor-0" Jan 28 18:58:33 crc kubenswrapper[4721]: I0128 18:58:33.051591 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d175789e-d718-4022-86ac-b8b1f9f1d40c-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"d175789e-d718-4022-86ac-b8b1f9f1d40c\") " pod="openstack/nova-cell1-conductor-0" Jan 28 18:58:33 crc kubenswrapper[4721]: I0128 18:58:33.154435 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d175789e-d718-4022-86ac-b8b1f9f1d40c-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"d175789e-d718-4022-86ac-b8b1f9f1d40c\") " pod="openstack/nova-cell1-conductor-0" Jan 28 18:58:33 crc kubenswrapper[4721]: I0128 18:58:33.154551 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d175789e-d718-4022-86ac-b8b1f9f1d40c-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"d175789e-d718-4022-86ac-b8b1f9f1d40c\") " pod="openstack/nova-cell1-conductor-0" Jan 28 18:58:33 crc kubenswrapper[4721]: I0128 18:58:33.154620 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-22qxf\" (UniqueName: \"kubernetes.io/projected/d175789e-d718-4022-86ac-b8b1f9f1d40c-kube-api-access-22qxf\") pod \"nova-cell1-conductor-0\" (UID: \"d175789e-d718-4022-86ac-b8b1f9f1d40c\") " pod="openstack/nova-cell1-conductor-0" Jan 28 18:58:33 crc kubenswrapper[4721]: I0128 18:58:33.159143 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d175789e-d718-4022-86ac-b8b1f9f1d40c-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"d175789e-d718-4022-86ac-b8b1f9f1d40c\") " pod="openstack/nova-cell1-conductor-0" Jan 28 18:58:33 crc kubenswrapper[4721]: I0128 18:58:33.159459 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d175789e-d718-4022-86ac-b8b1f9f1d40c-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"d175789e-d718-4022-86ac-b8b1f9f1d40c\") " pod="openstack/nova-cell1-conductor-0" Jan 28 18:58:33 crc kubenswrapper[4721]: I0128 18:58:33.177342 4721 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-22qxf\" (UniqueName: \"kubernetes.io/projected/d175789e-d718-4022-86ac-b8b1f9f1d40c-kube-api-access-22qxf\") pod \"nova-cell1-conductor-0\" (UID: \"d175789e-d718-4022-86ac-b8b1f9f1d40c\") " pod="openstack/nova-cell1-conductor-0" Jan 28 18:58:33 crc kubenswrapper[4721]: I0128 18:58:33.257277 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Jan 28 18:58:33 crc kubenswrapper[4721]: I0128 18:58:33.768712 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 28 18:58:33 crc kubenswrapper[4721]: W0128 18:58:33.782413 4721 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd175789e_d718_4022_86ac_b8b1f9f1d40c.slice/crio-8cde4ced4c0471e8ade2424e0aad21154476c8eed8591658f2fb59f9271bae58 WatchSource:0}: Error finding container 8cde4ced4c0471e8ade2424e0aad21154476c8eed8591658f2fb59f9271bae58: Status 404 returned error can't find the container with id 8cde4ced4c0471e8ade2424e0aad21154476c8eed8591658f2fb59f9271bae58 Jan 28 18:58:33 crc kubenswrapper[4721]: I0128 18:58:33.859751 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"d175789e-d718-4022-86ac-b8b1f9f1d40c","Type":"ContainerStarted","Data":"8cde4ced4c0471e8ade2424e0aad21154476c8eed8591658f2fb59f9271bae58"} Jan 28 18:58:34 crc kubenswrapper[4721]: I0128 18:58:34.873207 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"d175789e-d718-4022-86ac-b8b1f9f1d40c","Type":"ContainerStarted","Data":"6fe663d41431c3e055887c5935e50679ff8f41f9ea68afe0445c3b513ce8022d"} Jan 28 18:58:34 crc kubenswrapper[4721]: I0128 18:58:34.873808 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0" Jan 28 18:58:34 crc kubenswrapper[4721]: I0128 18:58:34.895761 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=2.8957339600000003 podStartE2EDuration="2.89573396s" podCreationTimestamp="2026-01-28 18:58:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:58:34.89450056 +0000 UTC m=+1480.619806130" watchObservedRunningTime="2026-01-28 18:58:34.89573396 +0000 UTC m=+1480.621039520" Jan 28 18:58:35 crc kubenswrapper[4721]: I0128 18:58:35.220682 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0" Jan 28 18:58:35 crc kubenswrapper[4721]: I0128 18:58:35.731413 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 28 18:58:35 crc kubenswrapper[4721]: I0128 18:58:35.732003 4721 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="a7a754e9-d2f7-4f43-ace9-cdc9fe613ca2" containerName="nova-api-log" containerID="cri-o://8818abd956065b520e8378ab62ead74ede20a8cabf4263d6b313fb7d80392500" gracePeriod=30 Jan 28 18:58:35 crc kubenswrapper[4721]: I0128 18:58:35.732205 4721 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="a7a754e9-d2f7-4f43-ace9-cdc9fe613ca2" containerName="nova-api-api" containerID="cri-o://022e98632fa1587a568daddf247f3d09522dd5c52da4f9e1fbd734e546563fba" gracePeriod=30 
Jan 28 18:58:35 crc kubenswrapper[4721]: I0128 18:58:35.774440 4721 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0"
Jan 28 18:58:35 crc kubenswrapper[4721]: I0128 18:58:35.774934 4721 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0"
Jan 28 18:58:35 crc kubenswrapper[4721]: I0128 18:58:35.819769 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"]
Jan 28 18:58:35 crc kubenswrapper[4721]: I0128 18:58:35.884496 4721 generic.go:334] "Generic (PLEG): container finished" podID="a7a754e9-d2f7-4f43-ace9-cdc9fe613ca2" containerID="8818abd956065b520e8378ab62ead74ede20a8cabf4263d6b313fb7d80392500" exitCode=143
Jan 28 18:58:35 crc kubenswrapper[4721]: I0128 18:58:35.884578 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"a7a754e9-d2f7-4f43-ace9-cdc9fe613ca2","Type":"ContainerDied","Data":"8818abd956065b520e8378ab62ead74ede20a8cabf4263d6b313fb7d80392500"}
Jan 28 18:58:36 crc kubenswrapper[4721]: I0128 18:58:36.499881 4721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Jan 28 18:58:36 crc kubenswrapper[4721]: I0128 18:58:36.542832 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a7a754e9-d2f7-4f43-ace9-cdc9fe613ca2-logs\") pod \"a7a754e9-d2f7-4f43-ace9-cdc9fe613ca2\" (UID: \"a7a754e9-d2f7-4f43-ace9-cdc9fe613ca2\") "
Jan 28 18:58:36 crc kubenswrapper[4721]: I0128 18:58:36.543082 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a7a754e9-d2f7-4f43-ace9-cdc9fe613ca2-config-data\") pod \"a7a754e9-d2f7-4f43-ace9-cdc9fe613ca2\" (UID: \"a7a754e9-d2f7-4f43-ace9-cdc9fe613ca2\") "
Jan 28 18:58:36 crc kubenswrapper[4721]: I0128 18:58:36.543138 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a7a754e9-d2f7-4f43-ace9-cdc9fe613ca2-combined-ca-bundle\") pod \"a7a754e9-d2f7-4f43-ace9-cdc9fe613ca2\" (UID: \"a7a754e9-d2f7-4f43-ace9-cdc9fe613ca2\") "
Jan 28 18:58:36 crc kubenswrapper[4721]: I0128 18:58:36.543290 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j29g6\" (UniqueName: \"kubernetes.io/projected/a7a754e9-d2f7-4f43-ace9-cdc9fe613ca2-kube-api-access-j29g6\") pod \"a7a754e9-d2f7-4f43-ace9-cdc9fe613ca2\" (UID: \"a7a754e9-d2f7-4f43-ace9-cdc9fe613ca2\") "
Jan 28 18:58:36 crc kubenswrapper[4721]: I0128 18:58:36.550751 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a7a754e9-d2f7-4f43-ace9-cdc9fe613ca2-logs" (OuterVolumeSpecName: "logs") pod "a7a754e9-d2f7-4f43-ace9-cdc9fe613ca2" (UID: "a7a754e9-d2f7-4f43-ace9-cdc9fe613ca2"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 28 18:58:36 crc kubenswrapper[4721]: I0128 18:58:36.555758 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a7a754e9-d2f7-4f43-ace9-cdc9fe613ca2-kube-api-access-j29g6" (OuterVolumeSpecName: "kube-api-access-j29g6") pod "a7a754e9-d2f7-4f43-ace9-cdc9fe613ca2" (UID: "a7a754e9-d2f7-4f43-ace9-cdc9fe613ca2"). InnerVolumeSpecName "kube-api-access-j29g6". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 28 18:58:36 crc kubenswrapper[4721]: I0128 18:58:36.577569 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a7a754e9-d2f7-4f43-ace9-cdc9fe613ca2-config-data" (OuterVolumeSpecName: "config-data") pod "a7a754e9-d2f7-4f43-ace9-cdc9fe613ca2" (UID: "a7a754e9-d2f7-4f43-ace9-cdc9fe613ca2"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 28 18:58:36 crc kubenswrapper[4721]: I0128 18:58:36.591256 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a7a754e9-d2f7-4f43-ace9-cdc9fe613ca2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a7a754e9-d2f7-4f43-ace9-cdc9fe613ca2" (UID: "a7a754e9-d2f7-4f43-ace9-cdc9fe613ca2"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 28 18:58:36 crc kubenswrapper[4721]: I0128 18:58:36.647864 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j29g6\" (UniqueName: \"kubernetes.io/projected/a7a754e9-d2f7-4f43-ace9-cdc9fe613ca2-kube-api-access-j29g6\") on node \"crc\" DevicePath \"\""
Jan 28 18:58:36 crc kubenswrapper[4721]: I0128 18:58:36.647895 4721 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a7a754e9-d2f7-4f43-ace9-cdc9fe613ca2-logs\") on node \"crc\" DevicePath \"\""
Jan 28 18:58:36 crc kubenswrapper[4721]: I0128 18:58:36.647906 4721 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a7a754e9-d2f7-4f43-ace9-cdc9fe613ca2-config-data\") on node \"crc\" DevicePath \"\""
Jan 28 18:58:36 crc kubenswrapper[4721]: I0128 18:58:36.647917 4721 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a7a754e9-d2f7-4f43-ace9-cdc9fe613ca2-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 28 18:58:36 crc kubenswrapper[4721]: I0128 18:58:36.795442 4721 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="11f3cb78-241f-4c92-8d2e-0bca68c3a7f7" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.222:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Jan 28 18:58:36 crc kubenswrapper[4721]: I0128 18:58:36.795439 4721 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="11f3cb78-241f-4c92-8d2e-0bca68c3a7f7" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.222:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Jan 28 18:58:36 crc kubenswrapper[4721]: I0128 18:58:36.899415 4721 generic.go:334] "Generic (PLEG): container finished" podID="a7a754e9-d2f7-4f43-ace9-cdc9fe613ca2" containerID="022e98632fa1587a568daddf247f3d09522dd5c52da4f9e1fbd734e546563fba" exitCode=0
Jan 28 18:58:36 crc kubenswrapper[4721]: I0128 18:58:36.899499 4721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Jan 28 18:58:36 crc kubenswrapper[4721]: I0128 18:58:36.899509 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"a7a754e9-d2f7-4f43-ace9-cdc9fe613ca2","Type":"ContainerDied","Data":"022e98632fa1587a568daddf247f3d09522dd5c52da4f9e1fbd734e546563fba"}
Jan 28 18:58:36 crc kubenswrapper[4721]: I0128 18:58:36.899889 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"a7a754e9-d2f7-4f43-ace9-cdc9fe613ca2","Type":"ContainerDied","Data":"0b36fe5a4ec74181c68f580aa464a22e58eda141541b1a06f4013e2eb89b44b9"}
Jan 28 18:58:36 crc kubenswrapper[4721]: I0128 18:58:36.899922 4721 scope.go:117] "RemoveContainer" containerID="022e98632fa1587a568daddf247f3d09522dd5c52da4f9e1fbd734e546563fba"
Jan 28 18:58:36 crc kubenswrapper[4721]: I0128 18:58:36.900107 4721 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="11f3cb78-241f-4c92-8d2e-0bca68c3a7f7" containerName="nova-metadata-log" containerID="cri-o://7d75ddaac91c08bfa0369d3822159d3952e23c3379a9366a637a34ff690ee3a9" gracePeriod=30
Jan 28 18:58:36 crc kubenswrapper[4721]: I0128 18:58:36.900139 4721 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="11f3cb78-241f-4c92-8d2e-0bca68c3a7f7" containerName="nova-metadata-metadata" containerID="cri-o://44fd2d4518f998b17b59900ae1c655ecd713f901971a38528d4677bd09299582" gracePeriod=30
Jan 28 18:58:36 crc kubenswrapper[4721]: I0128 18:58:36.941457 4721 scope.go:117] "RemoveContainer" containerID="8818abd956065b520e8378ab62ead74ede20a8cabf4263d6b313fb7d80392500"
Jan 28 18:58:36 crc kubenswrapper[4721]: I0128 18:58:36.946913 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"]
Jan 28 18:58:36 crc kubenswrapper[4721]: I0128 18:58:36.959451 4721 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"]
Jan 28 18:58:36 crc kubenswrapper[4721]: I0128 18:58:36.970742 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"]
Jan 28 18:58:36 crc kubenswrapper[4721]: E0128 18:58:36.971196 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a7a754e9-d2f7-4f43-ace9-cdc9fe613ca2" containerName="nova-api-api"
Jan 28 18:58:36 crc kubenswrapper[4721]: I0128 18:58:36.971212 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="a7a754e9-d2f7-4f43-ace9-cdc9fe613ca2" containerName="nova-api-api"
Jan 28 18:58:36 crc kubenswrapper[4721]: E0128 18:58:36.971258 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a7a754e9-d2f7-4f43-ace9-cdc9fe613ca2" containerName="nova-api-log"
Jan 28 18:58:36 crc kubenswrapper[4721]: I0128 18:58:36.971264 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="a7a754e9-d2f7-4f43-ace9-cdc9fe613ca2" containerName="nova-api-log"
Jan 28 18:58:36 crc kubenswrapper[4721]: I0128 18:58:36.971575 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="a7a754e9-d2f7-4f43-ace9-cdc9fe613ca2" containerName="nova-api-api"
Jan 28 18:58:36 crc kubenswrapper[4721]: I0128 18:58:36.971595 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="a7a754e9-d2f7-4f43-ace9-cdc9fe613ca2" containerName="nova-api-log"
Jan 28 18:58:36 crc kubenswrapper[4721]: I0128 18:58:36.973238 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Jan 28 18:58:36 crc kubenswrapper[4721]: I0128 18:58:36.977983 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data"
Jan 28 18:58:36 crc kubenswrapper[4721]: I0128 18:58:36.983772 4721 scope.go:117] "RemoveContainer" containerID="022e98632fa1587a568daddf247f3d09522dd5c52da4f9e1fbd734e546563fba"
Jan 28 18:58:36 crc kubenswrapper[4721]: E0128 18:58:36.988694 4721 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"022e98632fa1587a568daddf247f3d09522dd5c52da4f9e1fbd734e546563fba\": container with ID starting with 022e98632fa1587a568daddf247f3d09522dd5c52da4f9e1fbd734e546563fba not found: ID does not exist" containerID="022e98632fa1587a568daddf247f3d09522dd5c52da4f9e1fbd734e546563fba"
Jan 28 18:58:36 crc kubenswrapper[4721]: I0128 18:58:36.988743 4721 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"022e98632fa1587a568daddf247f3d09522dd5c52da4f9e1fbd734e546563fba"} err="failed to get container status \"022e98632fa1587a568daddf247f3d09522dd5c52da4f9e1fbd734e546563fba\": rpc error: code = NotFound desc = could not find container \"022e98632fa1587a568daddf247f3d09522dd5c52da4f9e1fbd734e546563fba\": container with ID starting with 022e98632fa1587a568daddf247f3d09522dd5c52da4f9e1fbd734e546563fba not found: ID does not exist"
Jan 28 18:58:36 crc kubenswrapper[4721]: I0128 18:58:36.988796 4721 scope.go:117] "RemoveContainer" containerID="8818abd956065b520e8378ab62ead74ede20a8cabf4263d6b313fb7d80392500"
Jan 28 18:58:36 crc kubenswrapper[4721]: E0128 18:58:36.989832 4721 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8818abd956065b520e8378ab62ead74ede20a8cabf4263d6b313fb7d80392500\": container with ID starting with 8818abd956065b520e8378ab62ead74ede20a8cabf4263d6b313fb7d80392500 not found: ID does not exist" containerID="8818abd956065b520e8378ab62ead74ede20a8cabf4263d6b313fb7d80392500"
Jan 28 18:58:36 crc kubenswrapper[4721]: I0128 18:58:36.989897 4721 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8818abd956065b520e8378ab62ead74ede20a8cabf4263d6b313fb7d80392500"} err="failed to get container status \"8818abd956065b520e8378ab62ead74ede20a8cabf4263d6b313fb7d80392500\": rpc error: code = NotFound desc = could not find container \"8818abd956065b520e8378ab62ead74ede20a8cabf4263d6b313fb7d80392500\": container with ID starting with 8818abd956065b520e8378ab62ead74ede20a8cabf4263d6b313fb7d80392500 not found: ID does not exist"
Jan 28 18:58:36 crc kubenswrapper[4721]: I0128 18:58:36.996742 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Jan 28 18:58:37 crc kubenswrapper[4721]: I0128 18:58:37.071836 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3607b401-8924-423a-af9f-4d76cbb67a0b-logs\") pod \"nova-api-0\" (UID: \"3607b401-8924-423a-af9f-4d76cbb67a0b\") " pod="openstack/nova-api-0"
Jan 28 18:58:37 crc kubenswrapper[4721]: I0128 18:58:37.072006 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3607b401-8924-423a-af9f-4d76cbb67a0b-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"3607b401-8924-423a-af9f-4d76cbb67a0b\") " pod="openstack/nova-api-0"
Jan 28 18:58:37 crc kubenswrapper[4721]: I0128 18:58:37.072060 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3607b401-8924-423a-af9f-4d76cbb67a0b-config-data\") pod \"nova-api-0\" (UID: \"3607b401-8924-423a-af9f-4d76cbb67a0b\") " pod="openstack/nova-api-0"
Jan 28 18:58:37 crc kubenswrapper[4721]: I0128 18:58:37.072186 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xc54k\" (UniqueName: \"kubernetes.io/projected/3607b401-8924-423a-af9f-4d76cbb67a0b-kube-api-access-xc54k\") pod \"nova-api-0\" (UID: \"3607b401-8924-423a-af9f-4d76cbb67a0b\") " pod="openstack/nova-api-0"
Jan 28 18:58:37 crc kubenswrapper[4721]: I0128 18:58:37.174799 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3607b401-8924-423a-af9f-4d76cbb67a0b-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"3607b401-8924-423a-af9f-4d76cbb67a0b\") " pod="openstack/nova-api-0"
Jan 28 18:58:37 crc kubenswrapper[4721]: I0128 18:58:37.174875 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3607b401-8924-423a-af9f-4d76cbb67a0b-config-data\") pod \"nova-api-0\" (UID: \"3607b401-8924-423a-af9f-4d76cbb67a0b\") " pod="openstack/nova-api-0"
Jan 28 18:58:37 crc kubenswrapper[4721]: I0128 18:58:37.174966 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xc54k\" (UniqueName: \"kubernetes.io/projected/3607b401-8924-423a-af9f-4d76cbb67a0b-kube-api-access-xc54k\") pod \"nova-api-0\" (UID: \"3607b401-8924-423a-af9f-4d76cbb67a0b\") " pod="openstack/nova-api-0"
Jan 28 18:58:37 crc kubenswrapper[4721]: I0128 18:58:37.175005 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3607b401-8924-423a-af9f-4d76cbb67a0b-logs\") pod \"nova-api-0\" (UID: \"3607b401-8924-423a-af9f-4d76cbb67a0b\") " pod="openstack/nova-api-0"
Jan 28 18:58:37 crc kubenswrapper[4721]: I0128 18:58:37.175643 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3607b401-8924-423a-af9f-4d76cbb67a0b-logs\") pod \"nova-api-0\" (UID: \"3607b401-8924-423a-af9f-4d76cbb67a0b\") " pod="openstack/nova-api-0"
Jan 28 18:58:37 crc kubenswrapper[4721]: I0128 18:58:37.180827 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3607b401-8924-423a-af9f-4d76cbb67a0b-config-data\") pod \"nova-api-0\" (UID: \"3607b401-8924-423a-af9f-4d76cbb67a0b\") " pod="openstack/nova-api-0"
Jan 28 18:58:37 crc kubenswrapper[4721]: I0128 18:58:37.191099 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3607b401-8924-423a-af9f-4d76cbb67a0b-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"3607b401-8924-423a-af9f-4d76cbb67a0b\") " pod="openstack/nova-api-0"
Jan 28 18:58:37 crc kubenswrapper[4721]: I0128 18:58:37.195778 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xc54k\" (UniqueName: \"kubernetes.io/projected/3607b401-8924-423a-af9f-4d76cbb67a0b-kube-api-access-xc54k\") pod \"nova-api-0\" (UID: \"3607b401-8924-423a-af9f-4d76cbb67a0b\") " pod="openstack/nova-api-0"
Jan 28 18:58:37 crc kubenswrapper[4721]: I0128 18:58:37.377347 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Jan 28 18:58:37 crc kubenswrapper[4721]: I0128 18:58:37.548216 4721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a7a754e9-d2f7-4f43-ace9-cdc9fe613ca2" path="/var/lib/kubelet/pods/a7a754e9-d2f7-4f43-ace9-cdc9fe613ca2/volumes"
Jan 28 18:58:37 crc kubenswrapper[4721]: I0128 18:58:37.877142 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Jan 28 18:58:37 crc kubenswrapper[4721]: W0128 18:58:37.884925 4721 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3607b401_8924_423a_af9f_4d76cbb67a0b.slice/crio-8fc1288dafac6269ebd90032242cdaffadb8f84cf09278daeb0617c76ee74c16 WatchSource:0}: Error finding container 8fc1288dafac6269ebd90032242cdaffadb8f84cf09278daeb0617c76ee74c16: Status 404 returned error can't find the container with id 8fc1288dafac6269ebd90032242cdaffadb8f84cf09278daeb0617c76ee74c16
Jan 28 18:58:37 crc kubenswrapper[4721]: I0128 18:58:37.913920 4721 generic.go:334] "Generic (PLEG): container finished" podID="11f3cb78-241f-4c92-8d2e-0bca68c3a7f7" containerID="7d75ddaac91c08bfa0369d3822159d3952e23c3379a9366a637a34ff690ee3a9" exitCode=143
Jan 28 18:58:37 crc kubenswrapper[4721]: I0128 18:58:37.914047 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"11f3cb78-241f-4c92-8d2e-0bca68c3a7f7","Type":"ContainerDied","Data":"7d75ddaac91c08bfa0369d3822159d3952e23c3379a9366a637a34ff690ee3a9"}
Jan 28 18:58:37 crc kubenswrapper[4721]: I0128 18:58:37.916942 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"3607b401-8924-423a-af9f-4d76cbb67a0b","Type":"ContainerStarted","Data":"8fc1288dafac6269ebd90032242cdaffadb8f84cf09278daeb0617c76ee74c16"}
Jan 28 18:58:38 crc kubenswrapper[4721]: I0128 18:58:38.930965 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"3607b401-8924-423a-af9f-4d76cbb67a0b","Type":"ContainerStarted","Data":"8e53db6ea2df45f6cab856eda77131f56223fcc51d3af29e436575c4cdf567ba"}
Jan 28 18:58:38 crc kubenswrapper[4721]: I0128 18:58:38.931343 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"3607b401-8924-423a-af9f-4d76cbb67a0b","Type":"ContainerStarted","Data":"2f79f11d8ab6905ded8a2d156a56904835d55c5c66f3db4615248f8d2f5e771f"}
Jan 28 18:58:38 crc kubenswrapper[4721]: I0128 18:58:38.956221 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.956201674 podStartE2EDuration="2.956201674s" podCreationTimestamp="2026-01-28 18:58:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:58:38.948395275 +0000 UTC m=+1484.673700835" watchObservedRunningTime="2026-01-28 18:58:38.956201674 +0000 UTC m=+1484.681507234"
Jan 28 18:58:42 crc kubenswrapper[4721]: I0128 18:58:42.781601 4721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Jan 28 18:58:42 crc kubenswrapper[4721]: I0128 18:58:42.806726 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/11f3cb78-241f-4c92-8d2e-0bca68c3a7f7-combined-ca-bundle\") pod \"11f3cb78-241f-4c92-8d2e-0bca68c3a7f7\" (UID: \"11f3cb78-241f-4c92-8d2e-0bca68c3a7f7\") "
Jan 28 18:58:42 crc kubenswrapper[4721]: I0128 18:58:42.806789 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/11f3cb78-241f-4c92-8d2e-0bca68c3a7f7-logs\") pod \"11f3cb78-241f-4c92-8d2e-0bca68c3a7f7\" (UID: \"11f3cb78-241f-4c92-8d2e-0bca68c3a7f7\") "
Jan 28 18:58:42 crc kubenswrapper[4721]: I0128 18:58:42.807321 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/11f3cb78-241f-4c92-8d2e-0bca68c3a7f7-logs" (OuterVolumeSpecName: "logs") pod "11f3cb78-241f-4c92-8d2e-0bca68c3a7f7" (UID: "11f3cb78-241f-4c92-8d2e-0bca68c3a7f7"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 28 18:58:42 crc kubenswrapper[4721]: I0128 18:58:42.849390 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/11f3cb78-241f-4c92-8d2e-0bca68c3a7f7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "11f3cb78-241f-4c92-8d2e-0bca68c3a7f7" (UID: "11f3cb78-241f-4c92-8d2e-0bca68c3a7f7"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 28 18:58:42 crc kubenswrapper[4721]: I0128 18:58:42.908790 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g5crl\" (UniqueName: \"kubernetes.io/projected/11f3cb78-241f-4c92-8d2e-0bca68c3a7f7-kube-api-access-g5crl\") pod \"11f3cb78-241f-4c92-8d2e-0bca68c3a7f7\" (UID: \"11f3cb78-241f-4c92-8d2e-0bca68c3a7f7\") "
Jan 28 18:58:42 crc kubenswrapper[4721]: I0128 18:58:42.908907 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/11f3cb78-241f-4c92-8d2e-0bca68c3a7f7-nova-metadata-tls-certs\") pod \"11f3cb78-241f-4c92-8d2e-0bca68c3a7f7\" (UID: \"11f3cb78-241f-4c92-8d2e-0bca68c3a7f7\") "
Jan 28 18:58:42 crc kubenswrapper[4721]: I0128 18:58:42.908994 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/11f3cb78-241f-4c92-8d2e-0bca68c3a7f7-config-data\") pod \"11f3cb78-241f-4c92-8d2e-0bca68c3a7f7\" (UID: \"11f3cb78-241f-4c92-8d2e-0bca68c3a7f7\") "
Jan 28 18:58:42 crc kubenswrapper[4721]: I0128 18:58:42.909733 4721 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/11f3cb78-241f-4c92-8d2e-0bca68c3a7f7-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 28 18:58:42 crc kubenswrapper[4721]: I0128 18:58:42.909755 4721 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/11f3cb78-241f-4c92-8d2e-0bca68c3a7f7-logs\") on node \"crc\" DevicePath \"\""
Jan 28 18:58:42 crc kubenswrapper[4721]: I0128 18:58:42.911999 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/11f3cb78-241f-4c92-8d2e-0bca68c3a7f7-kube-api-access-g5crl" (OuterVolumeSpecName: "kube-api-access-g5crl") pod "11f3cb78-241f-4c92-8d2e-0bca68c3a7f7" (UID: "11f3cb78-241f-4c92-8d2e-0bca68c3a7f7"). InnerVolumeSpecName "kube-api-access-g5crl". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 28 18:58:42 crc kubenswrapper[4721]: I0128 18:58:42.954150 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/11f3cb78-241f-4c92-8d2e-0bca68c3a7f7-config-data" (OuterVolumeSpecName: "config-data") pod "11f3cb78-241f-4c92-8d2e-0bca68c3a7f7" (UID: "11f3cb78-241f-4c92-8d2e-0bca68c3a7f7"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 28 18:58:42 crc kubenswrapper[4721]: I0128 18:58:42.974923 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/11f3cb78-241f-4c92-8d2e-0bca68c3a7f7-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "11f3cb78-241f-4c92-8d2e-0bca68c3a7f7" (UID: "11f3cb78-241f-4c92-8d2e-0bca68c3a7f7"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 28 18:58:42 crc kubenswrapper[4721]: I0128 18:58:42.977979 4721 generic.go:334] "Generic (PLEG): container finished" podID="11f3cb78-241f-4c92-8d2e-0bca68c3a7f7" containerID="44fd2d4518f998b17b59900ae1c655ecd713f901971a38528d4677bd09299582" exitCode=0
Jan 28 18:58:42 crc kubenswrapper[4721]: I0128 18:58:42.978253 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"11f3cb78-241f-4c92-8d2e-0bca68c3a7f7","Type":"ContainerDied","Data":"44fd2d4518f998b17b59900ae1c655ecd713f901971a38528d4677bd09299582"}
Jan 28 18:58:42 crc kubenswrapper[4721]: I0128 18:58:42.978384 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"11f3cb78-241f-4c92-8d2e-0bca68c3a7f7","Type":"ContainerDied","Data":"a6c922533a6b5e7efba95549240d6c0349f04423d876236681bb022267f2adbe"}
Jan 28 18:58:42 crc kubenswrapper[4721]: I0128 18:58:42.978484 4721 scope.go:117] "RemoveContainer" containerID="44fd2d4518f998b17b59900ae1c655ecd713f901971a38528d4677bd09299582"
Jan 28 18:58:42 crc kubenswrapper[4721]: I0128 18:58:42.978970 4721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Jan 28 18:58:43 crc kubenswrapper[4721]: I0128 18:58:43.027992 4721 scope.go:117] "RemoveContainer" containerID="7d75ddaac91c08bfa0369d3822159d3952e23c3379a9366a637a34ff690ee3a9"
Jan 28 18:58:43 crc kubenswrapper[4721]: I0128 18:58:43.033709 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g5crl\" (UniqueName: \"kubernetes.io/projected/11f3cb78-241f-4c92-8d2e-0bca68c3a7f7-kube-api-access-g5crl\") on node \"crc\" DevicePath \"\""
Jan 28 18:58:43 crc kubenswrapper[4721]: I0128 18:58:43.033759 4721 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/11f3cb78-241f-4c92-8d2e-0bca68c3a7f7-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\""
Jan 28 18:58:43 crc kubenswrapper[4721]: I0128 18:58:43.033776 4721 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/11f3cb78-241f-4c92-8d2e-0bca68c3a7f7-config-data\") on node \"crc\" DevicePath \"\""
Jan 28 18:58:43 crc kubenswrapper[4721]: I0128 18:58:43.061761 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"]
Jan 28 18:58:43 crc kubenswrapper[4721]: I0128 18:58:43.082017 4721 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"]
Jan 28 18:58:43 crc kubenswrapper[4721]: I0128 18:58:43.097613 4721 scope.go:117] "RemoveContainer" containerID="44fd2d4518f998b17b59900ae1c655ecd713f901971a38528d4677bd09299582"
Jan 28 18:58:43 crc kubenswrapper[4721]: E0128 18:58:43.098256 4721 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"44fd2d4518f998b17b59900ae1c655ecd713f901971a38528d4677bd09299582\": container with ID starting with 44fd2d4518f998b17b59900ae1c655ecd713f901971a38528d4677bd09299582 not found: ID does not exist" containerID="44fd2d4518f998b17b59900ae1c655ecd713f901971a38528d4677bd09299582"
Jan 28 18:58:43 crc kubenswrapper[4721]: I0128 18:58:43.098315 4721 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"44fd2d4518f998b17b59900ae1c655ecd713f901971a38528d4677bd09299582"} err="failed to get container status \"44fd2d4518f998b17b59900ae1c655ecd713f901971a38528d4677bd09299582\": rpc error: code = NotFound desc = could not find container \"44fd2d4518f998b17b59900ae1c655ecd713f901971a38528d4677bd09299582\": container with ID starting with 44fd2d4518f998b17b59900ae1c655ecd713f901971a38528d4677bd09299582 not found: ID does not exist"
Jan 28 18:58:43 crc kubenswrapper[4721]: I0128 18:58:43.098350 4721 scope.go:117] "RemoveContainer" containerID="7d75ddaac91c08bfa0369d3822159d3952e23c3379a9366a637a34ff690ee3a9"
Jan 28 18:58:43 crc kubenswrapper[4721]: E0128 18:58:43.098645 4721 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7d75ddaac91c08bfa0369d3822159d3952e23c3379a9366a637a34ff690ee3a9\": container with ID starting with 7d75ddaac91c08bfa0369d3822159d3952e23c3379a9366a637a34ff690ee3a9 not found: ID does not exist" containerID="7d75ddaac91c08bfa0369d3822159d3952e23c3379a9366a637a34ff690ee3a9"
Jan 28 18:58:43 crc kubenswrapper[4721]: I0128 18:58:43.098670 4721 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7d75ddaac91c08bfa0369d3822159d3952e23c3379a9366a637a34ff690ee3a9"} err="failed to get container status \"7d75ddaac91c08bfa0369d3822159d3952e23c3379a9366a637a34ff690ee3a9\": rpc error: code = NotFound desc = could not find container \"7d75ddaac91c08bfa0369d3822159d3952e23c3379a9366a637a34ff690ee3a9\": container with ID starting with 7d75ddaac91c08bfa0369d3822159d3952e23c3379a9366a637a34ff690ee3a9 not found: ID does not exist"
Jan 28 18:58:43 crc kubenswrapper[4721]: I0128 18:58:43.103739 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"]
Jan 28 18:58:43 crc kubenswrapper[4721]: E0128 18:58:43.104534 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="11f3cb78-241f-4c92-8d2e-0bca68c3a7f7" containerName="nova-metadata-log"
Jan 28 18:58:43 crc kubenswrapper[4721]: I0128 18:58:43.104563 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="11f3cb78-241f-4c92-8d2e-0bca68c3a7f7" containerName="nova-metadata-log"
Jan 28 18:58:43 crc kubenswrapper[4721]: E0128 18:58:43.104583 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="11f3cb78-241f-4c92-8d2e-0bca68c3a7f7" containerName="nova-metadata-metadata"
Jan 28 18:58:43 crc kubenswrapper[4721]: I0128 18:58:43.104592 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="11f3cb78-241f-4c92-8d2e-0bca68c3a7f7" containerName="nova-metadata-metadata"
Jan 28 18:58:43 crc kubenswrapper[4721]: I0128 18:58:43.104861 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="11f3cb78-241f-4c92-8d2e-0bca68c3a7f7" containerName="nova-metadata-metadata"
Jan 28 18:58:43 crc kubenswrapper[4721]: I0128 18:58:43.104896 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="11f3cb78-241f-4c92-8d2e-0bca68c3a7f7" containerName="nova-metadata-log"
Jan 28 18:58:43 crc kubenswrapper[4721]: I0128 18:58:43.109361 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Jan 28 18:58:43 crc kubenswrapper[4721]: I0128 18:58:43.113254 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data"
Jan 28 18:58:43 crc kubenswrapper[4721]: I0128 18:58:43.113805 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc"
Jan 28 18:58:43 crc kubenswrapper[4721]: I0128 18:58:43.118453 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Jan 28 18:58:43 crc kubenswrapper[4721]: I0128 18:58:43.141659 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/af0e32a0-15f5-49b3-adca-4e9b1040f218-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"af0e32a0-15f5-49b3-adca-4e9b1040f218\") " pod="openstack/nova-metadata-0"
Jan 28 18:58:43 crc kubenswrapper[4721]: I0128 18:58:43.141729 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/af0e32a0-15f5-49b3-adca-4e9b1040f218-logs\") pod \"nova-metadata-0\" (UID: \"af0e32a0-15f5-49b3-adca-4e9b1040f218\") " pod="openstack/nova-metadata-0"
Jan 28 18:58:43 crc kubenswrapper[4721]: I0128 18:58:43.141800 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/af0e32a0-15f5-49b3-adca-4e9b1040f218-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"af0e32a0-15f5-49b3-adca-4e9b1040f218\") " pod="openstack/nova-metadata-0"
Jan 28 18:58:43 crc kubenswrapper[4721]: I0128 18:58:43.142260 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hfs7t\" (UniqueName: \"kubernetes.io/projected/af0e32a0-15f5-49b3-adca-4e9b1040f218-kube-api-access-hfs7t\") pod \"nova-metadata-0\" (UID: \"af0e32a0-15f5-49b3-adca-4e9b1040f218\") " pod="openstack/nova-metadata-0"
Jan 28 18:58:43 crc kubenswrapper[4721]: I0128 18:58:43.142452 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/af0e32a0-15f5-49b3-adca-4e9b1040f218-config-data\") pod \"nova-metadata-0\" (UID: \"af0e32a0-15f5-49b3-adca-4e9b1040f218\") " pod="openstack/nova-metadata-0"
Jan 28 18:58:43 crc kubenswrapper[4721]: I0128 18:58:43.244946 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hfs7t\" (UniqueName: \"kubernetes.io/projected/af0e32a0-15f5-49b3-adca-4e9b1040f218-kube-api-access-hfs7t\") pod \"nova-metadata-0\" (UID: \"af0e32a0-15f5-49b3-adca-4e9b1040f218\") " pod="openstack/nova-metadata-0"
Jan 28 18:58:43 crc kubenswrapper[4721]: I0128 18:58:43.245125 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/af0e32a0-15f5-49b3-adca-4e9b1040f218-config-data\") pod \"nova-metadata-0\" (UID: \"af0e32a0-15f5-49b3-adca-4e9b1040f218\") " pod="openstack/nova-metadata-0"
Jan 28 18:58:43 crc kubenswrapper[4721]: I0128 18:58:43.245250 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/af0e32a0-15f5-49b3-adca-4e9b1040f218-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"af0e32a0-15f5-49b3-adca-4e9b1040f218\") "
pod="openstack/nova-metadata-0" Jan 28 18:58:43 crc kubenswrapper[4721]: I0128 18:58:43.245312 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/af0e32a0-15f5-49b3-adca-4e9b1040f218-logs\") pod \"nova-metadata-0\" (UID: \"af0e32a0-15f5-49b3-adca-4e9b1040f218\") " pod="openstack/nova-metadata-0" Jan 28 18:58:43 crc kubenswrapper[4721]: I0128 18:58:43.245404 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/af0e32a0-15f5-49b3-adca-4e9b1040f218-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"af0e32a0-15f5-49b3-adca-4e9b1040f218\") " pod="openstack/nova-metadata-0" Jan 28 18:58:43 crc kubenswrapper[4721]: I0128 18:58:43.245836 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/af0e32a0-15f5-49b3-adca-4e9b1040f218-logs\") pod \"nova-metadata-0\" (UID: \"af0e32a0-15f5-49b3-adca-4e9b1040f218\") " pod="openstack/nova-metadata-0" Jan 28 18:58:43 crc kubenswrapper[4721]: I0128 18:58:43.249487 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/af0e32a0-15f5-49b3-adca-4e9b1040f218-config-data\") pod \"nova-metadata-0\" (UID: \"af0e32a0-15f5-49b3-adca-4e9b1040f218\") " pod="openstack/nova-metadata-0" Jan 28 18:58:43 crc kubenswrapper[4721]: I0128 18:58:43.250530 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/af0e32a0-15f5-49b3-adca-4e9b1040f218-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"af0e32a0-15f5-49b3-adca-4e9b1040f218\") " pod="openstack/nova-metadata-0" Jan 28 18:58:43 crc kubenswrapper[4721]: I0128 18:58:43.250601 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/af0e32a0-15f5-49b3-adca-4e9b1040f218-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"af0e32a0-15f5-49b3-adca-4e9b1040f218\") " pod="openstack/nova-metadata-0" Jan 28 18:58:43 crc kubenswrapper[4721]: I0128 18:58:43.264346 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hfs7t\" (UniqueName: \"kubernetes.io/projected/af0e32a0-15f5-49b3-adca-4e9b1040f218-kube-api-access-hfs7t\") pod \"nova-metadata-0\" (UID: \"af0e32a0-15f5-49b3-adca-4e9b1040f218\") " pod="openstack/nova-metadata-0" Jan 28 18:58:43 crc kubenswrapper[4721]: I0128 18:58:43.289601 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0" Jan 28 18:58:43 crc kubenswrapper[4721]: I0128 18:58:43.440654 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 28 18:58:43 crc kubenswrapper[4721]: I0128 18:58:43.568236 4721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="11f3cb78-241f-4c92-8d2e-0bca68c3a7f7" path="/var/lib/kubelet/pods/11f3cb78-241f-4c92-8d2e-0bca68c3a7f7/volumes" Jan 28 18:58:44 crc kubenswrapper[4721]: I0128 18:58:43.878582 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 28 18:58:44 crc kubenswrapper[4721]: I0128 18:58:43.998413 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"af0e32a0-15f5-49b3-adca-4e9b1040f218","Type":"ContainerStarted","Data":"7faf36e71d6dc3b7c2ee3e897b9ae0c255195bc9dbc1135c579969cb5d7069e4"} Jan 28 18:58:45 crc kubenswrapper[4721]: I0128 18:58:45.010673 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"af0e32a0-15f5-49b3-adca-4e9b1040f218","Type":"ContainerStarted","Data":"84d46bcdeec49a6e5ed23aba8bb7e988591d8c03b73ef1a0ad6574780941ffe9"} Jan 28 18:58:45 crc kubenswrapper[4721]: I0128 18:58:45.010932 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"af0e32a0-15f5-49b3-adca-4e9b1040f218","Type":"ContainerStarted","Data":"95b1abfecbbc5ed6e01db6c24570526cec5524c8a722fea7d93a9058823bc311"} Jan 28 18:58:45 crc kubenswrapper[4721]: I0128 18:58:45.032419 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.032397803 podStartE2EDuration="2.032397803s" podCreationTimestamp="2026-01-28 18:58:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:58:45.029100408 +0000 UTC m=+1490.754405968" watchObservedRunningTime="2026-01-28 18:58:45.032397803 +0000 UTC m=+1490.757703363" Jan 28 18:58:47 crc kubenswrapper[4721]: I0128 18:58:47.377715 4721 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 28 18:58:47 crc kubenswrapper[4721]: I0128 18:58:47.378220 4721 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 28 18:58:48 crc kubenswrapper[4721]: I0128 18:58:48.441248 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 28 18:58:48 crc kubenswrapper[4721]: I0128 18:58:48.441626 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 28 18:58:48 crc kubenswrapper[4721]: I0128 18:58:48.461494 4721 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="3607b401-8924-423a-af9f-4d76cbb67a0b" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.225:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 28 18:58:48 crc kubenswrapper[4721]: I0128 18:58:48.461914 4721 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="3607b401-8924-423a-af9f-4d76cbb67a0b" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.225:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 28 18:58:53 crc kubenswrapper[4721]: I0128 18:58:53.442133 4721 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 28 18:58:53 crc kubenswrapper[4721]: I0128 18:58:53.442732 4721 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 28 18:58:53 crc kubenswrapper[4721]: I0128 18:58:53.882690 4721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 28 18:58:54 crc kubenswrapper[4721]: I0128 18:58:54.006317 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/228ac5c0-6690-4954-837e-952891b36a1d-combined-ca-bundle\") pod \"228ac5c0-6690-4954-837e-952891b36a1d\" (UID: \"228ac5c0-6690-4954-837e-952891b36a1d\") " Jan 28 18:58:54 crc kubenswrapper[4721]: I0128 18:58:54.006522 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/228ac5c0-6690-4954-837e-952891b36a1d-config-data\") pod \"228ac5c0-6690-4954-837e-952891b36a1d\" (UID: \"228ac5c0-6690-4954-837e-952891b36a1d\") " Jan 28 18:58:54 crc kubenswrapper[4721]: I0128 18:58:54.006550 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hn96b\" (UniqueName: \"kubernetes.io/projected/228ac5c0-6690-4954-837e-952891b36a1d-kube-api-access-hn96b\") pod \"228ac5c0-6690-4954-837e-952891b36a1d\" (UID: \"228ac5c0-6690-4954-837e-952891b36a1d\") " Jan 28 18:58:54 crc kubenswrapper[4721]: I0128 18:58:54.018680 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/228ac5c0-6690-4954-837e-952891b36a1d-kube-api-access-hn96b" (OuterVolumeSpecName: "kube-api-access-hn96b") pod "228ac5c0-6690-4954-837e-952891b36a1d" (UID: "228ac5c0-6690-4954-837e-952891b36a1d"). InnerVolumeSpecName "kube-api-access-hn96b". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:58:54 crc kubenswrapper[4721]: I0128 18:58:54.047017 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/228ac5c0-6690-4954-837e-952891b36a1d-config-data" (OuterVolumeSpecName: "config-data") pod "228ac5c0-6690-4954-837e-952891b36a1d" (UID: "228ac5c0-6690-4954-837e-952891b36a1d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:58:54 crc kubenswrapper[4721]: I0128 18:58:54.047939 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/228ac5c0-6690-4954-837e-952891b36a1d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "228ac5c0-6690-4954-837e-952891b36a1d" (UID: "228ac5c0-6690-4954-837e-952891b36a1d"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:58:54 crc kubenswrapper[4721]: I0128 18:58:54.100098 4721 generic.go:334] "Generic (PLEG): container finished" podID="da88710a-992e-46e0-abe2-8b7c8390f54f" containerID="cee1bd39ce919d92de23bfdc6ec78393295deb10adae7e6898343d527cb44555" exitCode=137 Jan 28 18:58:54 crc kubenswrapper[4721]: I0128 18:58:54.100160 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"da88710a-992e-46e0-abe2-8b7c8390f54f","Type":"ContainerDied","Data":"cee1bd39ce919d92de23bfdc6ec78393295deb10adae7e6898343d527cb44555"} Jan 28 18:58:54 crc kubenswrapper[4721]: I0128 18:58:54.100200 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"da88710a-992e-46e0-abe2-8b7c8390f54f","Type":"ContainerDied","Data":"85e3c4229263c60d3af287f42a6c6210b43f08fecef72dc3e4ed48c5778435a4"} Jan 28 18:58:54 crc kubenswrapper[4721]: I0128 18:58:54.100212 4721 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="85e3c4229263c60d3af287f42a6c6210b43f08fecef72dc3e4ed48c5778435a4" Jan 28 18:58:54 crc kubenswrapper[4721]: I0128 18:58:54.101278 4721 generic.go:334] "Generic (PLEG): container finished" podID="228ac5c0-6690-4954-837e-952891b36a1d" containerID="59782e89c7d2bac01da14d0d61dc4af55574f8789ef5f28d2be51692c1c18438" exitCode=137 Jan 28 18:58:54 crc kubenswrapper[4721]: I0128 18:58:54.101303 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"228ac5c0-6690-4954-837e-952891b36a1d","Type":"ContainerDied","Data":"59782e89c7d2bac01da14d0d61dc4af55574f8789ef5f28d2be51692c1c18438"} Jan 28 18:58:54 crc kubenswrapper[4721]: I0128 18:58:54.101320 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"228ac5c0-6690-4954-837e-952891b36a1d","Type":"ContainerDied","Data":"298449db1feafb16db43b1584f66afd58efbbdfa0120a98ec056c73f592e4da7"} Jan 28 18:58:54 crc kubenswrapper[4721]: I0128 18:58:54.101337 4721 scope.go:117] "RemoveContainer" containerID="59782e89c7d2bac01da14d0d61dc4af55574f8789ef5f28d2be51692c1c18438" Jan 28 18:58:54 crc kubenswrapper[4721]: I0128 18:58:54.101463 4721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 28 18:58:54 crc kubenswrapper[4721]: I0128 18:58:54.110154 4721 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/228ac5c0-6690-4954-837e-952891b36a1d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 18:58:54 crc kubenswrapper[4721]: I0128 18:58:54.110216 4721 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/228ac5c0-6690-4954-837e-952891b36a1d-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 18:58:54 crc kubenswrapper[4721]: I0128 18:58:54.110231 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hn96b\" (UniqueName: \"kubernetes.io/projected/228ac5c0-6690-4954-837e-952891b36a1d-kube-api-access-hn96b\") on node \"crc\" DevicePath \"\"" Jan 28 18:58:54 crc kubenswrapper[4721]: I0128 18:58:54.147344 4721 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 28 18:58:54 crc kubenswrapper[4721]: I0128 18:58:54.156344 4721 scope.go:117] "RemoveContainer" containerID="59782e89c7d2bac01da14d0d61dc4af55574f8789ef5f28d2be51692c1c18438" Jan 28 18:58:54 crc kubenswrapper[4721]: E0128 18:58:54.156900 4721 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"59782e89c7d2bac01da14d0d61dc4af55574f8789ef5f28d2be51692c1c18438\": container with ID starting with 59782e89c7d2bac01da14d0d61dc4af55574f8789ef5f28d2be51692c1c18438 not found: ID does not exist" containerID="59782e89c7d2bac01da14d0d61dc4af55574f8789ef5f28d2be51692c1c18438" Jan 28 18:58:54 crc kubenswrapper[4721]: I0128 18:58:54.156953 4721 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"59782e89c7d2bac01da14d0d61dc4af55574f8789ef5f28d2be51692c1c18438"} err="failed to get container status \"59782e89c7d2bac01da14d0d61dc4af55574f8789ef5f28d2be51692c1c18438\": rpc error: code = NotFound desc = could not find container \"59782e89c7d2bac01da14d0d61dc4af55574f8789ef5f28d2be51692c1c18438\": container with ID starting with 59782e89c7d2bac01da14d0d61dc4af55574f8789ef5f28d2be51692c1c18438 not found: ID does not exist" Jan 28 18:58:54 crc kubenswrapper[4721]: I0128 18:58:54.168066 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 28 18:58:54 crc kubenswrapper[4721]: I0128 18:58:54.196123 4721 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 28 18:58:54 crc kubenswrapper[4721]: I0128 18:58:54.215105 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/da88710a-992e-46e0-abe2-8b7c8390f54f-config-data\") pod \"da88710a-992e-46e0-abe2-8b7c8390f54f\" (UID: \"da88710a-992e-46e0-abe2-8b7c8390f54f\") " Jan 28 18:58:54 crc kubenswrapper[4721]: I0128 18:58:54.215230 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/da88710a-992e-46e0-abe2-8b7c8390f54f-combined-ca-bundle\") pod \"da88710a-992e-46e0-abe2-8b7c8390f54f\" (UID: \"da88710a-992e-46e0-abe2-8b7c8390f54f\") " Jan 28 18:58:54 crc kubenswrapper[4721]: I0128 18:58:54.215595 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n9j58\" (UniqueName: \"kubernetes.io/projected/da88710a-992e-46e0-abe2-8b7c8390f54f-kube-api-access-n9j58\") pod \"da88710a-992e-46e0-abe2-8b7c8390f54f\" (UID: \"da88710a-992e-46e0-abe2-8b7c8390f54f\") " Jan 28 18:58:54 crc kubenswrapper[4721]: I0128 18:58:54.216367 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 28 18:58:54 crc kubenswrapper[4721]: E0128 18:58:54.216976 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="228ac5c0-6690-4954-837e-952891b36a1d" containerName="nova-cell1-novncproxy-novncproxy" Jan 28 18:58:54 crc kubenswrapper[4721]: I0128 18:58:54.217003 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="228ac5c0-6690-4954-837e-952891b36a1d" containerName="nova-cell1-novncproxy-novncproxy" Jan 28 18:58:54 crc kubenswrapper[4721]: E0128 18:58:54.217057 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="da88710a-992e-46e0-abe2-8b7c8390f54f" containerName="nova-scheduler-scheduler" Jan 28 18:58:54 crc kubenswrapper[4721]: I0128 18:58:54.217066 4721 
state_mem.go:107] "Deleted CPUSet assignment" podUID="da88710a-992e-46e0-abe2-8b7c8390f54f" containerName="nova-scheduler-scheduler" Jan 28 18:58:54 crc kubenswrapper[4721]: I0128 18:58:54.218671 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="228ac5c0-6690-4954-837e-952891b36a1d" containerName="nova-cell1-novncproxy-novncproxy" Jan 28 18:58:54 crc kubenswrapper[4721]: I0128 18:58:54.218707 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="da88710a-992e-46e0-abe2-8b7c8390f54f" containerName="nova-scheduler-scheduler" Jan 28 18:58:54 crc kubenswrapper[4721]: I0128 18:58:54.219744 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/da88710a-992e-46e0-abe2-8b7c8390f54f-kube-api-access-n9j58" (OuterVolumeSpecName: "kube-api-access-n9j58") pod "da88710a-992e-46e0-abe2-8b7c8390f54f" (UID: "da88710a-992e-46e0-abe2-8b7c8390f54f"). InnerVolumeSpecName "kube-api-access-n9j58". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:58:54 crc kubenswrapper[4721]: I0128 18:58:54.221125 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 28 18:58:54 crc kubenswrapper[4721]: I0128 18:58:54.225293 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Jan 28 18:58:54 crc kubenswrapper[4721]: I0128 18:58:54.226992 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc" Jan 28 18:58:54 crc kubenswrapper[4721]: I0128 18:58:54.227191 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt" Jan 28 18:58:54 crc kubenswrapper[4721]: I0128 18:58:54.236269 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 28 18:58:54 crc kubenswrapper[4721]: I0128 18:58:54.262521 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/da88710a-992e-46e0-abe2-8b7c8390f54f-config-data" (OuterVolumeSpecName: "config-data") pod "da88710a-992e-46e0-abe2-8b7c8390f54f" (UID: "da88710a-992e-46e0-abe2-8b7c8390f54f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:58:54 crc kubenswrapper[4721]: I0128 18:58:54.263261 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/da88710a-992e-46e0-abe2-8b7c8390f54f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "da88710a-992e-46e0-abe2-8b7c8390f54f" (UID: "da88710a-992e-46e0-abe2-8b7c8390f54f"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:58:54 crc kubenswrapper[4721]: I0128 18:58:54.318576 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/623ce0b7-2228-4d75-a8c3-48a837fccf46-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"623ce0b7-2228-4d75-a8c3-48a837fccf46\") " pod="openstack/nova-cell1-novncproxy-0" Jan 28 18:58:54 crc kubenswrapper[4721]: I0128 18:58:54.318974 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/623ce0b7-2228-4d75-a8c3-48a837fccf46-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"623ce0b7-2228-4d75-a8c3-48a837fccf46\") " pod="openstack/nova-cell1-novncproxy-0" Jan 28 18:58:54 crc kubenswrapper[4721]: I0128 18:58:54.319042 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/623ce0b7-2228-4d75-a8c3-48a837fccf46-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"623ce0b7-2228-4d75-a8c3-48a837fccf46\") " pod="openstack/nova-cell1-novncproxy-0" Jan 28 18:58:54 crc kubenswrapper[4721]: I0128 18:58:54.319117 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/623ce0b7-2228-4d75-a8c3-48a837fccf46-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"623ce0b7-2228-4d75-a8c3-48a837fccf46\") " pod="openstack/nova-cell1-novncproxy-0" Jan 28 18:58:54 crc kubenswrapper[4721]: I0128 18:58:54.319462 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-plj9q\" (UniqueName: \"kubernetes.io/projected/623ce0b7-2228-4d75-a8c3-48a837fccf46-kube-api-access-plj9q\") pod \"nova-cell1-novncproxy-0\" (UID: \"623ce0b7-2228-4d75-a8c3-48a837fccf46\") " pod="openstack/nova-cell1-novncproxy-0" Jan 28 18:58:54 crc kubenswrapper[4721]: I0128 18:58:54.319701 4721 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/da88710a-992e-46e0-abe2-8b7c8390f54f-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 18:58:54 crc kubenswrapper[4721]: I0128 18:58:54.319729 4721 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/da88710a-992e-46e0-abe2-8b7c8390f54f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 18:58:54 crc kubenswrapper[4721]: I0128 18:58:54.319745 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n9j58\" (UniqueName: \"kubernetes.io/projected/da88710a-992e-46e0-abe2-8b7c8390f54f-kube-api-access-n9j58\") on node \"crc\" DevicePath \"\"" Jan 28 18:58:54 crc kubenswrapper[4721]: I0128 18:58:54.421766 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/623ce0b7-2228-4d75-a8c3-48a837fccf46-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"623ce0b7-2228-4d75-a8c3-48a837fccf46\") " pod="openstack/nova-cell1-novncproxy-0" Jan 28 18:58:54 crc kubenswrapper[4721]: I0128 18:58:54.421876 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-plj9q\" (UniqueName: 
\"kubernetes.io/projected/623ce0b7-2228-4d75-a8c3-48a837fccf46-kube-api-access-plj9q\") pod \"nova-cell1-novncproxy-0\" (UID: \"623ce0b7-2228-4d75-a8c3-48a837fccf46\") " pod="openstack/nova-cell1-novncproxy-0" Jan 28 18:58:54 crc kubenswrapper[4721]: I0128 18:58:54.422464 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/623ce0b7-2228-4d75-a8c3-48a837fccf46-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"623ce0b7-2228-4d75-a8c3-48a837fccf46\") " pod="openstack/nova-cell1-novncproxy-0" Jan 28 18:58:54 crc kubenswrapper[4721]: I0128 18:58:54.423014 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/623ce0b7-2228-4d75-a8c3-48a837fccf46-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"623ce0b7-2228-4d75-a8c3-48a837fccf46\") " pod="openstack/nova-cell1-novncproxy-0" Jan 28 18:58:54 crc kubenswrapper[4721]: I0128 18:58:54.423231 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/623ce0b7-2228-4d75-a8c3-48a837fccf46-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"623ce0b7-2228-4d75-a8c3-48a837fccf46\") " pod="openstack/nova-cell1-novncproxy-0" Jan 28 18:58:54 crc kubenswrapper[4721]: I0128 18:58:54.436110 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/623ce0b7-2228-4d75-a8c3-48a837fccf46-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"623ce0b7-2228-4d75-a8c3-48a837fccf46\") " pod="openstack/nova-cell1-novncproxy-0" Jan 28 18:58:54 crc kubenswrapper[4721]: I0128 18:58:54.436186 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/623ce0b7-2228-4d75-a8c3-48a837fccf46-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"623ce0b7-2228-4d75-a8c3-48a837fccf46\") " pod="openstack/nova-cell1-novncproxy-0" Jan 28 18:58:54 crc kubenswrapper[4721]: I0128 18:58:54.436254 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/623ce0b7-2228-4d75-a8c3-48a837fccf46-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"623ce0b7-2228-4d75-a8c3-48a837fccf46\") " pod="openstack/nova-cell1-novncproxy-0" Jan 28 18:58:54 crc kubenswrapper[4721]: I0128 18:58:54.436678 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/623ce0b7-2228-4d75-a8c3-48a837fccf46-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"623ce0b7-2228-4d75-a8c3-48a837fccf46\") " pod="openstack/nova-cell1-novncproxy-0" Jan 28 18:58:54 crc kubenswrapper[4721]: I0128 18:58:54.439070 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-plj9q\" (UniqueName: \"kubernetes.io/projected/623ce0b7-2228-4d75-a8c3-48a837fccf46-kube-api-access-plj9q\") pod \"nova-cell1-novncproxy-0\" (UID: \"623ce0b7-2228-4d75-a8c3-48a837fccf46\") " pod="openstack/nova-cell1-novncproxy-0" Jan 28 18:58:54 crc kubenswrapper[4721]: I0128 18:58:54.455448 4721 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="af0e32a0-15f5-49b3-adca-4e9b1040f218" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.226:8775/\": net/http: 
request canceled (Client.Timeout exceeded while awaiting headers)" Jan 28 18:58:54 crc kubenswrapper[4721]: I0128 18:58:54.455519 4721 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="af0e32a0-15f5-49b3-adca-4e9b1040f218" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.226:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 28 18:58:54 crc kubenswrapper[4721]: I0128 18:58:54.574850 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 28 18:58:55 crc kubenswrapper[4721]: I0128 18:58:55.075379 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 28 18:58:55 crc kubenswrapper[4721]: I0128 18:58:55.111043 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"623ce0b7-2228-4d75-a8c3-48a837fccf46","Type":"ContainerStarted","Data":"b49f8f8ef889301835b390a4a5a535f6d8cbbc386cbf1117ef215bd5b561e6e6"} Jan 28 18:58:55 crc kubenswrapper[4721]: I0128 18:58:55.112761 4721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 28 18:58:55 crc kubenswrapper[4721]: I0128 18:58:55.218553 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 28 18:58:55 crc kubenswrapper[4721]: I0128 18:58:55.236423 4721 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Jan 28 18:58:55 crc kubenswrapper[4721]: I0128 18:58:55.251070 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Jan 28 18:58:55 crc kubenswrapper[4721]: I0128 18:58:55.253253 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 28 18:58:55 crc kubenswrapper[4721]: I0128 18:58:55.256541 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Jan 28 18:58:55 crc kubenswrapper[4721]: I0128 18:58:55.264765 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 28 18:58:55 crc kubenswrapper[4721]: I0128 18:58:55.344292 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/22941a54-ca5a-4905-8d65-c8724f519090-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"22941a54-ca5a-4905-8d65-c8724f519090\") " pod="openstack/nova-scheduler-0" Jan 28 18:58:55 crc kubenswrapper[4721]: I0128 18:58:55.344359 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/22941a54-ca5a-4905-8d65-c8724f519090-config-data\") pod \"nova-scheduler-0\" (UID: \"22941a54-ca5a-4905-8d65-c8724f519090\") " pod="openstack/nova-scheduler-0" Jan 28 18:58:55 crc kubenswrapper[4721]: I0128 18:58:55.344742 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b7wxk\" (UniqueName: \"kubernetes.io/projected/22941a54-ca5a-4905-8d65-c8724f519090-kube-api-access-b7wxk\") pod \"nova-scheduler-0\" (UID: \"22941a54-ca5a-4905-8d65-c8724f519090\") " pod="openstack/nova-scheduler-0" Jan 28 18:58:55 crc kubenswrapper[4721]: I0128 18:58:55.447389 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/22941a54-ca5a-4905-8d65-c8724f519090-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"22941a54-ca5a-4905-8d65-c8724f519090\") " pod="openstack/nova-scheduler-0" Jan 28 18:58:55 crc kubenswrapper[4721]: I0128 18:58:55.447453 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/22941a54-ca5a-4905-8d65-c8724f519090-config-data\") pod \"nova-scheduler-0\" (UID: \"22941a54-ca5a-4905-8d65-c8724f519090\") " pod="openstack/nova-scheduler-0" Jan 28 18:58:55 crc kubenswrapper[4721]: I0128 18:58:55.447500 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b7wxk\" (UniqueName: \"kubernetes.io/projected/22941a54-ca5a-4905-8d65-c8724f519090-kube-api-access-b7wxk\") pod \"nova-scheduler-0\" (UID: \"22941a54-ca5a-4905-8d65-c8724f519090\") " pod="openstack/nova-scheduler-0" Jan 28 18:58:55 crc kubenswrapper[4721]: I0128 18:58:55.452639 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/22941a54-ca5a-4905-8d65-c8724f519090-config-data\") pod \"nova-scheduler-0\" (UID: \"22941a54-ca5a-4905-8d65-c8724f519090\") " pod="openstack/nova-scheduler-0" Jan 28 18:58:55 crc kubenswrapper[4721]: I0128 18:58:55.453399 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/22941a54-ca5a-4905-8d65-c8724f519090-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"22941a54-ca5a-4905-8d65-c8724f519090\") " pod="openstack/nova-scheduler-0" Jan 28 18:58:55 crc kubenswrapper[4721]: I0128 18:58:55.466234 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b7wxk\" (UniqueName: 
\"kubernetes.io/projected/22941a54-ca5a-4905-8d65-c8724f519090-kube-api-access-b7wxk\") pod \"nova-scheduler-0\" (UID: \"22941a54-ca5a-4905-8d65-c8724f519090\") " pod="openstack/nova-scheduler-0" Jan 28 18:58:55 crc kubenswrapper[4721]: I0128 18:58:55.546658 4721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="228ac5c0-6690-4954-837e-952891b36a1d" path="/var/lib/kubelet/pods/228ac5c0-6690-4954-837e-952891b36a1d/volumes" Jan 28 18:58:55 crc kubenswrapper[4721]: I0128 18:58:55.548082 4721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="da88710a-992e-46e0-abe2-8b7c8390f54f" path="/var/lib/kubelet/pods/da88710a-992e-46e0-abe2-8b7c8390f54f/volumes" Jan 28 18:58:55 crc kubenswrapper[4721]: I0128 18:58:55.596830 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 28 18:58:55 crc kubenswrapper[4721]: I0128 18:58:55.715843 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Jan 28 18:58:56 crc kubenswrapper[4721]: I0128 18:58:56.106748 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 28 18:58:56 crc kubenswrapper[4721]: I0128 18:58:56.127470 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"623ce0b7-2228-4d75-a8c3-48a837fccf46","Type":"ContainerStarted","Data":"98761bfd7fd45c4403b5b927010aef3b1d0abcaa0678cb5c496771add1e76616"} Jan 28 18:58:56 crc kubenswrapper[4721]: I0128 18:58:56.130470 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"22941a54-ca5a-4905-8d65-c8724f519090","Type":"ContainerStarted","Data":"63f6a475979d58f8ccd5e7a837e42b6abe64a953339c798e7c8a5c2305337303"} Jan 28 18:58:56 crc kubenswrapper[4721]: I0128 18:58:56.150972 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.15092953 podStartE2EDuration="2.15092953s" podCreationTimestamp="2026-01-28 18:58:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:58:56.145639062 +0000 UTC m=+1501.870944632" watchObservedRunningTime="2026-01-28 18:58:56.15092953 +0000 UTC m=+1501.876235090" Jan 28 18:58:57 crc kubenswrapper[4721]: I0128 18:58:57.163429 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"22941a54-ca5a-4905-8d65-c8724f519090","Type":"ContainerStarted","Data":"6b9b7fa93e87409f1e12b346ccc49c5d577d1ea9b12e593afeafba1a95e005b7"} Jan 28 18:58:57 crc kubenswrapper[4721]: I0128 18:58:57.196065 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.196042528 podStartE2EDuration="2.196042528s" podCreationTimestamp="2026-01-28 18:58:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:58:57.186064402 +0000 UTC m=+1502.911369982" watchObservedRunningTime="2026-01-28 18:58:57.196042528 +0000 UTC m=+1502.921348088" Jan 28 18:58:57 crc kubenswrapper[4721]: I0128 18:58:57.384662 4721 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 28 18:58:57 crc kubenswrapper[4721]: I0128 18:58:57.385066 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 28 
18:58:57 crc kubenswrapper[4721]: I0128 18:58:57.387377 4721 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 28 18:58:57 crc kubenswrapper[4721]: I0128 18:58:57.392456 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 28 18:58:58 crc kubenswrapper[4721]: I0128 18:58:58.173909 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 28 18:58:58 crc kubenswrapper[4721]: I0128 18:58:58.181216 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 28 18:58:58 crc kubenswrapper[4721]: I0128 18:58:58.372318 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5fd9b586ff-mx67n"] Jan 28 18:58:58 crc kubenswrapper[4721]: I0128 18:58:58.375117 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5fd9b586ff-mx67n" Jan 28 18:58:58 crc kubenswrapper[4721]: I0128 18:58:58.399430 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5fd9b586ff-mx67n"] Jan 28 18:58:58 crc kubenswrapper[4721]: I0128 18:58:58.435774 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4bc30432-0868-448c-b124-8b9db2d2a6b2-dns-swift-storage-0\") pod \"dnsmasq-dns-5fd9b586ff-mx67n\" (UID: \"4bc30432-0868-448c-b124-8b9db2d2a6b2\") " pod="openstack/dnsmasq-dns-5fd9b586ff-mx67n" Jan 28 18:58:58 crc kubenswrapper[4721]: I0128 18:58:58.436119 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4bc30432-0868-448c-b124-8b9db2d2a6b2-config\") pod \"dnsmasq-dns-5fd9b586ff-mx67n\" (UID: \"4bc30432-0868-448c-b124-8b9db2d2a6b2\") " pod="openstack/dnsmasq-dns-5fd9b586ff-mx67n" Jan 28 18:58:58 crc kubenswrapper[4721]: I0128 18:58:58.436312 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4bc30432-0868-448c-b124-8b9db2d2a6b2-ovsdbserver-sb\") pod \"dnsmasq-dns-5fd9b586ff-mx67n\" (UID: \"4bc30432-0868-448c-b124-8b9db2d2a6b2\") " pod="openstack/dnsmasq-dns-5fd9b586ff-mx67n" Jan 28 18:58:58 crc kubenswrapper[4721]: I0128 18:58:58.436538 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4bc30432-0868-448c-b124-8b9db2d2a6b2-ovsdbserver-nb\") pod \"dnsmasq-dns-5fd9b586ff-mx67n\" (UID: \"4bc30432-0868-448c-b124-8b9db2d2a6b2\") " pod="openstack/dnsmasq-dns-5fd9b586ff-mx67n" Jan 28 18:58:58 crc kubenswrapper[4721]: I0128 18:58:58.436934 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wldhq\" (UniqueName: \"kubernetes.io/projected/4bc30432-0868-448c-b124-8b9db2d2a6b2-kube-api-access-wldhq\") pod \"dnsmasq-dns-5fd9b586ff-mx67n\" (UID: \"4bc30432-0868-448c-b124-8b9db2d2a6b2\") " pod="openstack/dnsmasq-dns-5fd9b586ff-mx67n" Jan 28 18:58:58 crc kubenswrapper[4721]: I0128 18:58:58.437092 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4bc30432-0868-448c-b124-8b9db2d2a6b2-dns-svc\") pod \"dnsmasq-dns-5fd9b586ff-mx67n\" (UID: \"4bc30432-0868-448c-b124-8b9db2d2a6b2\") " 
pod="openstack/dnsmasq-dns-5fd9b586ff-mx67n" Jan 28 18:58:58 crc kubenswrapper[4721]: I0128 18:58:58.539288 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4bc30432-0868-448c-b124-8b9db2d2a6b2-dns-svc\") pod \"dnsmasq-dns-5fd9b586ff-mx67n\" (UID: \"4bc30432-0868-448c-b124-8b9db2d2a6b2\") " pod="openstack/dnsmasq-dns-5fd9b586ff-mx67n" Jan 28 18:58:58 crc kubenswrapper[4721]: I0128 18:58:58.539378 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4bc30432-0868-448c-b124-8b9db2d2a6b2-dns-swift-storage-0\") pod \"dnsmasq-dns-5fd9b586ff-mx67n\" (UID: \"4bc30432-0868-448c-b124-8b9db2d2a6b2\") " pod="openstack/dnsmasq-dns-5fd9b586ff-mx67n" Jan 28 18:58:58 crc kubenswrapper[4721]: I0128 18:58:58.539441 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4bc30432-0868-448c-b124-8b9db2d2a6b2-config\") pod \"dnsmasq-dns-5fd9b586ff-mx67n\" (UID: \"4bc30432-0868-448c-b124-8b9db2d2a6b2\") " pod="openstack/dnsmasq-dns-5fd9b586ff-mx67n" Jan 28 18:58:58 crc kubenswrapper[4721]: I0128 18:58:58.539477 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4bc30432-0868-448c-b124-8b9db2d2a6b2-ovsdbserver-sb\") pod \"dnsmasq-dns-5fd9b586ff-mx67n\" (UID: \"4bc30432-0868-448c-b124-8b9db2d2a6b2\") " pod="openstack/dnsmasq-dns-5fd9b586ff-mx67n" Jan 28 18:58:58 crc kubenswrapper[4721]: I0128 18:58:58.539532 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4bc30432-0868-448c-b124-8b9db2d2a6b2-ovsdbserver-nb\") pod \"dnsmasq-dns-5fd9b586ff-mx67n\" (UID: \"4bc30432-0868-448c-b124-8b9db2d2a6b2\") " pod="openstack/dnsmasq-dns-5fd9b586ff-mx67n" Jan 28 18:58:58 crc kubenswrapper[4721]: I0128 18:58:58.539633 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wldhq\" (UniqueName: \"kubernetes.io/projected/4bc30432-0868-448c-b124-8b9db2d2a6b2-kube-api-access-wldhq\") pod \"dnsmasq-dns-5fd9b586ff-mx67n\" (UID: \"4bc30432-0868-448c-b124-8b9db2d2a6b2\") " pod="openstack/dnsmasq-dns-5fd9b586ff-mx67n" Jan 28 18:58:58 crc kubenswrapper[4721]: I0128 18:58:58.540985 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4bc30432-0868-448c-b124-8b9db2d2a6b2-dns-svc\") pod \"dnsmasq-dns-5fd9b586ff-mx67n\" (UID: \"4bc30432-0868-448c-b124-8b9db2d2a6b2\") " pod="openstack/dnsmasq-dns-5fd9b586ff-mx67n" Jan 28 18:58:58 crc kubenswrapper[4721]: I0128 18:58:58.541583 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4bc30432-0868-448c-b124-8b9db2d2a6b2-dns-swift-storage-0\") pod \"dnsmasq-dns-5fd9b586ff-mx67n\" (UID: \"4bc30432-0868-448c-b124-8b9db2d2a6b2\") " pod="openstack/dnsmasq-dns-5fd9b586ff-mx67n" Jan 28 18:58:58 crc kubenswrapper[4721]: I0128 18:58:58.542357 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4bc30432-0868-448c-b124-8b9db2d2a6b2-ovsdbserver-nb\") pod \"dnsmasq-dns-5fd9b586ff-mx67n\" (UID: \"4bc30432-0868-448c-b124-8b9db2d2a6b2\") " pod="openstack/dnsmasq-dns-5fd9b586ff-mx67n" Jan 28 18:58:58 crc kubenswrapper[4721]: 
I0128 18:58:58.542464 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4bc30432-0868-448c-b124-8b9db2d2a6b2-config\") pod \"dnsmasq-dns-5fd9b586ff-mx67n\" (UID: \"4bc30432-0868-448c-b124-8b9db2d2a6b2\") " pod="openstack/dnsmasq-dns-5fd9b586ff-mx67n" Jan 28 18:58:58 crc kubenswrapper[4721]: I0128 18:58:58.542546 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4bc30432-0868-448c-b124-8b9db2d2a6b2-ovsdbserver-sb\") pod \"dnsmasq-dns-5fd9b586ff-mx67n\" (UID: \"4bc30432-0868-448c-b124-8b9db2d2a6b2\") " pod="openstack/dnsmasq-dns-5fd9b586ff-mx67n" Jan 28 18:58:58 crc kubenswrapper[4721]: I0128 18:58:58.574292 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wldhq\" (UniqueName: \"kubernetes.io/projected/4bc30432-0868-448c-b124-8b9db2d2a6b2-kube-api-access-wldhq\") pod \"dnsmasq-dns-5fd9b586ff-mx67n\" (UID: \"4bc30432-0868-448c-b124-8b9db2d2a6b2\") " pod="openstack/dnsmasq-dns-5fd9b586ff-mx67n" Jan 28 18:58:58 crc kubenswrapper[4721]: I0128 18:58:58.765119 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5fd9b586ff-mx67n" Jan 28 18:58:59 crc kubenswrapper[4721]: I0128 18:58:59.273510 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5fd9b586ff-mx67n"] Jan 28 18:58:59 crc kubenswrapper[4721]: W0128 18:58:59.281388 4721 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4bc30432_0868_448c_b124_8b9db2d2a6b2.slice/crio-fd7756364455cfa898557a021743f7faa24986d8b67a7daaf0d8af72059547c4 WatchSource:0}: Error finding container fd7756364455cfa898557a021743f7faa24986d8b67a7daaf0d8af72059547c4: Status 404 returned error can't find the container with id fd7756364455cfa898557a021743f7faa24986d8b67a7daaf0d8af72059547c4 Jan 28 18:58:59 crc kubenswrapper[4721]: I0128 18:58:59.575147 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Jan 28 18:58:59 crc kubenswrapper[4721]: I0128 18:58:59.979609 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 28 18:58:59 crc kubenswrapper[4721]: I0128 18:58:59.980051 4721 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="4f529a56-ccec-4eed-9a56-094d3ada74a3" containerName="ceilometer-central-agent" containerID="cri-o://f3b0ab3659ed247871aabb2df13241e794a5c4a42df0dd45504a3775be02575e" gracePeriod=30 Jan 28 18:58:59 crc kubenswrapper[4721]: I0128 18:58:59.980633 4721 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="4f529a56-ccec-4eed-9a56-094d3ada74a3" containerName="proxy-httpd" containerID="cri-o://037b5cbe0690165c05d05903478b57886af8c03439a3c5afaa03370a27e2c7b3" gracePeriod=30 Jan 28 18:58:59 crc kubenswrapper[4721]: I0128 18:58:59.980948 4721 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="4f529a56-ccec-4eed-9a56-094d3ada74a3" containerName="sg-core" containerID="cri-o://9956ea493de3ce82097b98005fe8757edb8ee71f2d62b7f41ea5e4089f77cfb1" gracePeriod=30 Jan 28 18:58:59 crc kubenswrapper[4721]: I0128 18:58:59.981025 4721 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" 
podUID="4f529a56-ccec-4eed-9a56-094d3ada74a3" containerName="ceilometer-notification-agent" containerID="cri-o://b0194738761963470ff9fa51ec111f82fdfec50b9012e36be2c4d645192b8b4f" gracePeriod=30 Jan 28 18:59:00 crc kubenswrapper[4721]: I0128 18:59:00.203787 4721 generic.go:334] "Generic (PLEG): container finished" podID="4bc30432-0868-448c-b124-8b9db2d2a6b2" containerID="c519cdf147973b988142d693d0e7b374342906ea5eab4c1b7dba7fe8a570693f" exitCode=0 Jan 28 18:59:00 crc kubenswrapper[4721]: I0128 18:59:00.204230 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5fd9b586ff-mx67n" event={"ID":"4bc30432-0868-448c-b124-8b9db2d2a6b2","Type":"ContainerDied","Data":"c519cdf147973b988142d693d0e7b374342906ea5eab4c1b7dba7fe8a570693f"} Jan 28 18:59:00 crc kubenswrapper[4721]: I0128 18:59:00.204273 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5fd9b586ff-mx67n" event={"ID":"4bc30432-0868-448c-b124-8b9db2d2a6b2","Type":"ContainerStarted","Data":"fd7756364455cfa898557a021743f7faa24986d8b67a7daaf0d8af72059547c4"} Jan 28 18:59:00 crc kubenswrapper[4721]: I0128 18:59:00.214326 4721 generic.go:334] "Generic (PLEG): container finished" podID="4f529a56-ccec-4eed-9a56-094d3ada74a3" containerID="9956ea493de3ce82097b98005fe8757edb8ee71f2d62b7f41ea5e4089f77cfb1" exitCode=2 Jan 28 18:59:00 crc kubenswrapper[4721]: I0128 18:59:00.215613 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4f529a56-ccec-4eed-9a56-094d3ada74a3","Type":"ContainerDied","Data":"9956ea493de3ce82097b98005fe8757edb8ee71f2d62b7f41ea5e4089f77cfb1"} Jan 28 18:59:00 crc kubenswrapper[4721]: I0128 18:59:00.598041 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Jan 28 18:59:01 crc kubenswrapper[4721]: I0128 18:59:01.225972 4721 patch_prober.go:28] interesting pod/machine-config-daemon-76rx2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 18:59:01 crc kubenswrapper[4721]: I0128 18:59:01.226518 4721 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-76rx2" podUID="6e3427a4-9a03-4a08-bf7f-7a5e96290ad6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 18:59:01 crc kubenswrapper[4721]: I0128 18:59:01.226623 4721 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-76rx2" Jan 28 18:59:01 crc kubenswrapper[4721]: I0128 18:59:01.229331 4721 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"2590a7cb210dbff0e84ad585e4d82733cd80880f3d1297edf670e3d6faf23070"} pod="openshift-machine-config-operator/machine-config-daemon-76rx2" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 28 18:59:01 crc kubenswrapper[4721]: I0128 18:59:01.229420 4721 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-76rx2" podUID="6e3427a4-9a03-4a08-bf7f-7a5e96290ad6" containerName="machine-config-daemon" 
containerID="cri-o://2590a7cb210dbff0e84ad585e4d82733cd80880f3d1297edf670e3d6faf23070" gracePeriod=600 Jan 28 18:59:01 crc kubenswrapper[4721]: I0128 18:59:01.246503 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5fd9b586ff-mx67n" event={"ID":"4bc30432-0868-448c-b124-8b9db2d2a6b2","Type":"ContainerStarted","Data":"64a92fda9552be03fcca0561239e0c782cdd2538b99c6270cae1e5419793eef2"} Jan 28 18:59:01 crc kubenswrapper[4721]: I0128 18:59:01.246626 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5fd9b586ff-mx67n" Jan 28 18:59:01 crc kubenswrapper[4721]: I0128 18:59:01.251816 4721 generic.go:334] "Generic (PLEG): container finished" podID="4f529a56-ccec-4eed-9a56-094d3ada74a3" containerID="037b5cbe0690165c05d05903478b57886af8c03439a3c5afaa03370a27e2c7b3" exitCode=0 Jan 28 18:59:01 crc kubenswrapper[4721]: I0128 18:59:01.251850 4721 generic.go:334] "Generic (PLEG): container finished" podID="4f529a56-ccec-4eed-9a56-094d3ada74a3" containerID="f3b0ab3659ed247871aabb2df13241e794a5c4a42df0dd45504a3775be02575e" exitCode=0 Jan 28 18:59:01 crc kubenswrapper[4721]: I0128 18:59:01.251871 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4f529a56-ccec-4eed-9a56-094d3ada74a3","Type":"ContainerDied","Data":"037b5cbe0690165c05d05903478b57886af8c03439a3c5afaa03370a27e2c7b3"} Jan 28 18:59:01 crc kubenswrapper[4721]: I0128 18:59:01.251922 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4f529a56-ccec-4eed-9a56-094d3ada74a3","Type":"ContainerDied","Data":"f3b0ab3659ed247871aabb2df13241e794a5c4a42df0dd45504a3775be02575e"} Jan 28 18:59:01 crc kubenswrapper[4721]: I0128 18:59:01.273635 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5fd9b586ff-mx67n" podStartSLOduration=3.27361454 podStartE2EDuration="3.27361454s" podCreationTimestamp="2026-01-28 18:58:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:59:01.271518393 +0000 UTC m=+1506.996823953" watchObservedRunningTime="2026-01-28 18:59:01.27361454 +0000 UTC m=+1506.998920100" Jan 28 18:59:01 crc kubenswrapper[4721]: E0128 18:59:01.387348 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-76rx2_openshift-machine-config-operator(6e3427a4-9a03-4a08-bf7f-7a5e96290ad6)\"" pod="openshift-machine-config-operator/machine-config-daemon-76rx2" podUID="6e3427a4-9a03-4a08-bf7f-7a5e96290ad6" Jan 28 18:59:01 crc kubenswrapper[4721]: I0128 18:59:01.896144 4721 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 28 18:59:01 crc kubenswrapper[4721]: I0128 18:59:01.939855 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xdpt6\" (UniqueName: \"kubernetes.io/projected/4f529a56-ccec-4eed-9a56-094d3ada74a3-kube-api-access-xdpt6\") pod \"4f529a56-ccec-4eed-9a56-094d3ada74a3\" (UID: \"4f529a56-ccec-4eed-9a56-094d3ada74a3\") " Jan 28 18:59:01 crc kubenswrapper[4721]: I0128 18:59:01.939961 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4f529a56-ccec-4eed-9a56-094d3ada74a3-run-httpd\") pod \"4f529a56-ccec-4eed-9a56-094d3ada74a3\" (UID: \"4f529a56-ccec-4eed-9a56-094d3ada74a3\") " Jan 28 18:59:01 crc kubenswrapper[4721]: I0128 18:59:01.940046 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4f529a56-ccec-4eed-9a56-094d3ada74a3-combined-ca-bundle\") pod \"4f529a56-ccec-4eed-9a56-094d3ada74a3\" (UID: \"4f529a56-ccec-4eed-9a56-094d3ada74a3\") " Jan 28 18:59:01 crc kubenswrapper[4721]: I0128 18:59:01.940229 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4f529a56-ccec-4eed-9a56-094d3ada74a3-scripts\") pod \"4f529a56-ccec-4eed-9a56-094d3ada74a3\" (UID: \"4f529a56-ccec-4eed-9a56-094d3ada74a3\") " Jan 28 18:59:01 crc kubenswrapper[4721]: I0128 18:59:01.940263 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4f529a56-ccec-4eed-9a56-094d3ada74a3-config-data\") pod \"4f529a56-ccec-4eed-9a56-094d3ada74a3\" (UID: \"4f529a56-ccec-4eed-9a56-094d3ada74a3\") " Jan 28 18:59:01 crc kubenswrapper[4721]: I0128 18:59:01.940301 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4f529a56-ccec-4eed-9a56-094d3ada74a3-sg-core-conf-yaml\") pod \"4f529a56-ccec-4eed-9a56-094d3ada74a3\" (UID: \"4f529a56-ccec-4eed-9a56-094d3ada74a3\") " Jan 28 18:59:01 crc kubenswrapper[4721]: I0128 18:59:01.940338 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4f529a56-ccec-4eed-9a56-094d3ada74a3-log-httpd\") pod \"4f529a56-ccec-4eed-9a56-094d3ada74a3\" (UID: \"4f529a56-ccec-4eed-9a56-094d3ada74a3\") " Jan 28 18:59:01 crc kubenswrapper[4721]: I0128 18:59:01.940416 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4f529a56-ccec-4eed-9a56-094d3ada74a3-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "4f529a56-ccec-4eed-9a56-094d3ada74a3" (UID: "4f529a56-ccec-4eed-9a56-094d3ada74a3"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 18:59:01 crc kubenswrapper[4721]: I0128 18:59:01.940910 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4f529a56-ccec-4eed-9a56-094d3ada74a3-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "4f529a56-ccec-4eed-9a56-094d3ada74a3" (UID: "4f529a56-ccec-4eed-9a56-094d3ada74a3"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 18:59:01 crc kubenswrapper[4721]: I0128 18:59:01.941043 4721 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4f529a56-ccec-4eed-9a56-094d3ada74a3-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 28 18:59:01 crc kubenswrapper[4721]: I0128 18:59:01.950693 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4f529a56-ccec-4eed-9a56-094d3ada74a3-scripts" (OuterVolumeSpecName: "scripts") pod "4f529a56-ccec-4eed-9a56-094d3ada74a3" (UID: "4f529a56-ccec-4eed-9a56-094d3ada74a3"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:59:01 crc kubenswrapper[4721]: I0128 18:59:01.953369 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4f529a56-ccec-4eed-9a56-094d3ada74a3-kube-api-access-xdpt6" (OuterVolumeSpecName: "kube-api-access-xdpt6") pod "4f529a56-ccec-4eed-9a56-094d3ada74a3" (UID: "4f529a56-ccec-4eed-9a56-094d3ada74a3"). InnerVolumeSpecName "kube-api-access-xdpt6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:59:01 crc kubenswrapper[4721]: I0128 18:59:01.996090 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 28 18:59:01 crc kubenswrapper[4721]: I0128 18:59:01.996345 4721 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="3607b401-8924-423a-af9f-4d76cbb67a0b" containerName="nova-api-log" containerID="cri-o://2f79f11d8ab6905ded8a2d156a56904835d55c5c66f3db4615248f8d2f5e771f" gracePeriod=30 Jan 28 18:59:01 crc kubenswrapper[4721]: I0128 18:59:01.996851 4721 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="3607b401-8924-423a-af9f-4d76cbb67a0b" containerName="nova-api-api" containerID="cri-o://8e53db6ea2df45f6cab856eda77131f56223fcc51d3af29e436575c4cdf567ba" gracePeriod=30 Jan 28 18:59:02 crc kubenswrapper[4721]: I0128 18:59:02.019972 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4f529a56-ccec-4eed-9a56-094d3ada74a3-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "4f529a56-ccec-4eed-9a56-094d3ada74a3" (UID: "4f529a56-ccec-4eed-9a56-094d3ada74a3"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:59:02 crc kubenswrapper[4721]: I0128 18:59:02.044142 4721 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4f529a56-ccec-4eed-9a56-094d3ada74a3-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 28 18:59:02 crc kubenswrapper[4721]: I0128 18:59:02.044189 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xdpt6\" (UniqueName: \"kubernetes.io/projected/4f529a56-ccec-4eed-9a56-094d3ada74a3-kube-api-access-xdpt6\") on node \"crc\" DevicePath \"\"" Jan 28 18:59:02 crc kubenswrapper[4721]: I0128 18:59:02.044201 4721 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4f529a56-ccec-4eed-9a56-094d3ada74a3-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 18:59:02 crc kubenswrapper[4721]: I0128 18:59:02.044215 4721 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4f529a56-ccec-4eed-9a56-094d3ada74a3-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 28 18:59:02 crc kubenswrapper[4721]: I0128 18:59:02.095075 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4f529a56-ccec-4eed-9a56-094d3ada74a3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4f529a56-ccec-4eed-9a56-094d3ada74a3" (UID: "4f529a56-ccec-4eed-9a56-094d3ada74a3"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:59:02 crc kubenswrapper[4721]: I0128 18:59:02.146706 4721 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4f529a56-ccec-4eed-9a56-094d3ada74a3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 18:59:02 crc kubenswrapper[4721]: I0128 18:59:02.179437 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4f529a56-ccec-4eed-9a56-094d3ada74a3-config-data" (OuterVolumeSpecName: "config-data") pod "4f529a56-ccec-4eed-9a56-094d3ada74a3" (UID: "4f529a56-ccec-4eed-9a56-094d3ada74a3"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:59:02 crc kubenswrapper[4721]: I0128 18:59:02.249155 4721 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4f529a56-ccec-4eed-9a56-094d3ada74a3-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 18:59:02 crc kubenswrapper[4721]: I0128 18:59:02.267958 4721 generic.go:334] "Generic (PLEG): container finished" podID="6e3427a4-9a03-4a08-bf7f-7a5e96290ad6" containerID="2590a7cb210dbff0e84ad585e4d82733cd80880f3d1297edf670e3d6faf23070" exitCode=0 Jan 28 18:59:02 crc kubenswrapper[4721]: I0128 18:59:02.268034 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-76rx2" event={"ID":"6e3427a4-9a03-4a08-bf7f-7a5e96290ad6","Type":"ContainerDied","Data":"2590a7cb210dbff0e84ad585e4d82733cd80880f3d1297edf670e3d6faf23070"} Jan 28 18:59:02 crc kubenswrapper[4721]: I0128 18:59:02.268071 4721 scope.go:117] "RemoveContainer" containerID="550b2d16893b3820a2b08c43cf1c1d92f4cff5c63dda2753410f76f8e772711f" Jan 28 18:59:02 crc kubenswrapper[4721]: I0128 18:59:02.268844 4721 scope.go:117] "RemoveContainer" containerID="2590a7cb210dbff0e84ad585e4d82733cd80880f3d1297edf670e3d6faf23070" Jan 28 18:59:02 crc kubenswrapper[4721]: E0128 18:59:02.269356 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-76rx2_openshift-machine-config-operator(6e3427a4-9a03-4a08-bf7f-7a5e96290ad6)\"" pod="openshift-machine-config-operator/machine-config-daemon-76rx2" podUID="6e3427a4-9a03-4a08-bf7f-7a5e96290ad6" Jan 28 18:59:02 crc kubenswrapper[4721]: I0128 18:59:02.275017 4721 generic.go:334] "Generic (PLEG): container finished" podID="4f529a56-ccec-4eed-9a56-094d3ada74a3" containerID="b0194738761963470ff9fa51ec111f82fdfec50b9012e36be2c4d645192b8b4f" exitCode=0 Jan 28 18:59:02 crc kubenswrapper[4721]: I0128 18:59:02.275094 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4f529a56-ccec-4eed-9a56-094d3ada74a3","Type":"ContainerDied","Data":"b0194738761963470ff9fa51ec111f82fdfec50b9012e36be2c4d645192b8b4f"} Jan 28 18:59:02 crc kubenswrapper[4721]: I0128 18:59:02.275126 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4f529a56-ccec-4eed-9a56-094d3ada74a3","Type":"ContainerDied","Data":"0e8abc0859bacd8d4380311bca72de563baae8b8cbe87b10144b896384888ab2"} Jan 28 18:59:02 crc kubenswrapper[4721]: I0128 18:59:02.275220 4721 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 28 18:59:02 crc kubenswrapper[4721]: I0128 18:59:02.278612 4721 generic.go:334] "Generic (PLEG): container finished" podID="3607b401-8924-423a-af9f-4d76cbb67a0b" containerID="2f79f11d8ab6905ded8a2d156a56904835d55c5c66f3db4615248f8d2f5e771f" exitCode=143 Jan 28 18:59:02 crc kubenswrapper[4721]: I0128 18:59:02.279955 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"3607b401-8924-423a-af9f-4d76cbb67a0b","Type":"ContainerDied","Data":"2f79f11d8ab6905ded8a2d156a56904835d55c5c66f3db4615248f8d2f5e771f"} Jan 28 18:59:02 crc kubenswrapper[4721]: I0128 18:59:02.305182 4721 scope.go:117] "RemoveContainer" containerID="037b5cbe0690165c05d05903478b57886af8c03439a3c5afaa03370a27e2c7b3" Jan 28 18:59:02 crc kubenswrapper[4721]: I0128 18:59:02.326227 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 28 18:59:02 crc kubenswrapper[4721]: I0128 18:59:02.366093 4721 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 28 18:59:02 crc kubenswrapper[4721]: I0128 18:59:02.370940 4721 scope.go:117] "RemoveContainer" containerID="9956ea493de3ce82097b98005fe8757edb8ee71f2d62b7f41ea5e4089f77cfb1" Jan 28 18:59:02 crc kubenswrapper[4721]: I0128 18:59:02.390250 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 28 18:59:02 crc kubenswrapper[4721]: E0128 18:59:02.390905 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4f529a56-ccec-4eed-9a56-094d3ada74a3" containerName="ceilometer-central-agent" Jan 28 18:59:02 crc kubenswrapper[4721]: I0128 18:59:02.390928 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="4f529a56-ccec-4eed-9a56-094d3ada74a3" containerName="ceilometer-central-agent" Jan 28 18:59:02 crc kubenswrapper[4721]: E0128 18:59:02.390953 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4f529a56-ccec-4eed-9a56-094d3ada74a3" containerName="sg-core" Jan 28 18:59:02 crc kubenswrapper[4721]: I0128 18:59:02.390964 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="4f529a56-ccec-4eed-9a56-094d3ada74a3" containerName="sg-core" Jan 28 18:59:02 crc kubenswrapper[4721]: E0128 18:59:02.390980 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4f529a56-ccec-4eed-9a56-094d3ada74a3" containerName="ceilometer-notification-agent" Jan 28 18:59:02 crc kubenswrapper[4721]: I0128 18:59:02.390987 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="4f529a56-ccec-4eed-9a56-094d3ada74a3" containerName="ceilometer-notification-agent" Jan 28 18:59:02 crc kubenswrapper[4721]: E0128 18:59:02.391006 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4f529a56-ccec-4eed-9a56-094d3ada74a3" containerName="proxy-httpd" Jan 28 18:59:02 crc kubenswrapper[4721]: I0128 18:59:02.391013 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="4f529a56-ccec-4eed-9a56-094d3ada74a3" containerName="proxy-httpd" Jan 28 18:59:02 crc kubenswrapper[4721]: I0128 18:59:02.391288 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="4f529a56-ccec-4eed-9a56-094d3ada74a3" containerName="proxy-httpd" Jan 28 18:59:02 crc kubenswrapper[4721]: I0128 18:59:02.391318 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="4f529a56-ccec-4eed-9a56-094d3ada74a3" containerName="sg-core" Jan 28 18:59:02 crc kubenswrapper[4721]: I0128 18:59:02.391336 4721 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="4f529a56-ccec-4eed-9a56-094d3ada74a3" containerName="ceilometer-notification-agent" Jan 28 18:59:02 crc kubenswrapper[4721]: I0128 18:59:02.391353 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="4f529a56-ccec-4eed-9a56-094d3ada74a3" containerName="ceilometer-central-agent" Jan 28 18:59:02 crc kubenswrapper[4721]: I0128 18:59:02.395429 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 28 18:59:02 crc kubenswrapper[4721]: I0128 18:59:02.399260 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 28 18:59:02 crc kubenswrapper[4721]: I0128 18:59:02.402544 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 28 18:59:02 crc kubenswrapper[4721]: I0128 18:59:02.409253 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 28 18:59:02 crc kubenswrapper[4721]: I0128 18:59:02.433108 4721 scope.go:117] "RemoveContainer" containerID="b0194738761963470ff9fa51ec111f82fdfec50b9012e36be2c4d645192b8b4f" Jan 28 18:59:02 crc kubenswrapper[4721]: I0128 18:59:02.453320 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/571d8c2d-fc94-4db4-ad3c-1e6825b20035-scripts\") pod \"ceilometer-0\" (UID: \"571d8c2d-fc94-4db4-ad3c-1e6825b20035\") " pod="openstack/ceilometer-0" Jan 28 18:59:02 crc kubenswrapper[4721]: I0128 18:59:02.453569 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/571d8c2d-fc94-4db4-ad3c-1e6825b20035-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"571d8c2d-fc94-4db4-ad3c-1e6825b20035\") " pod="openstack/ceilometer-0" Jan 28 18:59:02 crc kubenswrapper[4721]: I0128 18:59:02.453826 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/571d8c2d-fc94-4db4-ad3c-1e6825b20035-run-httpd\") pod \"ceilometer-0\" (UID: \"571d8c2d-fc94-4db4-ad3c-1e6825b20035\") " pod="openstack/ceilometer-0" Jan 28 18:59:02 crc kubenswrapper[4721]: I0128 18:59:02.453949 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/571d8c2d-fc94-4db4-ad3c-1e6825b20035-log-httpd\") pod \"ceilometer-0\" (UID: \"571d8c2d-fc94-4db4-ad3c-1e6825b20035\") " pod="openstack/ceilometer-0" Jan 28 18:59:02 crc kubenswrapper[4721]: I0128 18:59:02.454022 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/571d8c2d-fc94-4db4-ad3c-1e6825b20035-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"571d8c2d-fc94-4db4-ad3c-1e6825b20035\") " pod="openstack/ceilometer-0" Jan 28 18:59:02 crc kubenswrapper[4721]: I0128 18:59:02.454058 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/571d8c2d-fc94-4db4-ad3c-1e6825b20035-config-data\") pod \"ceilometer-0\" (UID: \"571d8c2d-fc94-4db4-ad3c-1e6825b20035\") " pod="openstack/ceilometer-0" Jan 28 18:59:02 crc kubenswrapper[4721]: I0128 18:59:02.454233 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hf6k9\" (UniqueName: 
\"kubernetes.io/projected/571d8c2d-fc94-4db4-ad3c-1e6825b20035-kube-api-access-hf6k9\") pod \"ceilometer-0\" (UID: \"571d8c2d-fc94-4db4-ad3c-1e6825b20035\") " pod="openstack/ceilometer-0" Jan 28 18:59:02 crc kubenswrapper[4721]: I0128 18:59:02.476639 4721 scope.go:117] "RemoveContainer" containerID="f3b0ab3659ed247871aabb2df13241e794a5c4a42df0dd45504a3775be02575e" Jan 28 18:59:02 crc kubenswrapper[4721]: I0128 18:59:02.520242 4721 scope.go:117] "RemoveContainer" containerID="037b5cbe0690165c05d05903478b57886af8c03439a3c5afaa03370a27e2c7b3" Jan 28 18:59:02 crc kubenswrapper[4721]: E0128 18:59:02.520923 4721 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"037b5cbe0690165c05d05903478b57886af8c03439a3c5afaa03370a27e2c7b3\": container with ID starting with 037b5cbe0690165c05d05903478b57886af8c03439a3c5afaa03370a27e2c7b3 not found: ID does not exist" containerID="037b5cbe0690165c05d05903478b57886af8c03439a3c5afaa03370a27e2c7b3" Jan 28 18:59:02 crc kubenswrapper[4721]: I0128 18:59:02.520971 4721 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"037b5cbe0690165c05d05903478b57886af8c03439a3c5afaa03370a27e2c7b3"} err="failed to get container status \"037b5cbe0690165c05d05903478b57886af8c03439a3c5afaa03370a27e2c7b3\": rpc error: code = NotFound desc = could not find container \"037b5cbe0690165c05d05903478b57886af8c03439a3c5afaa03370a27e2c7b3\": container with ID starting with 037b5cbe0690165c05d05903478b57886af8c03439a3c5afaa03370a27e2c7b3 not found: ID does not exist" Jan 28 18:59:02 crc kubenswrapper[4721]: I0128 18:59:02.521004 4721 scope.go:117] "RemoveContainer" containerID="9956ea493de3ce82097b98005fe8757edb8ee71f2d62b7f41ea5e4089f77cfb1" Jan 28 18:59:02 crc kubenswrapper[4721]: E0128 18:59:02.521599 4721 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9956ea493de3ce82097b98005fe8757edb8ee71f2d62b7f41ea5e4089f77cfb1\": container with ID starting with 9956ea493de3ce82097b98005fe8757edb8ee71f2d62b7f41ea5e4089f77cfb1 not found: ID does not exist" containerID="9956ea493de3ce82097b98005fe8757edb8ee71f2d62b7f41ea5e4089f77cfb1" Jan 28 18:59:02 crc kubenswrapper[4721]: I0128 18:59:02.521657 4721 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9956ea493de3ce82097b98005fe8757edb8ee71f2d62b7f41ea5e4089f77cfb1"} err="failed to get container status \"9956ea493de3ce82097b98005fe8757edb8ee71f2d62b7f41ea5e4089f77cfb1\": rpc error: code = NotFound desc = could not find container \"9956ea493de3ce82097b98005fe8757edb8ee71f2d62b7f41ea5e4089f77cfb1\": container with ID starting with 9956ea493de3ce82097b98005fe8757edb8ee71f2d62b7f41ea5e4089f77cfb1 not found: ID does not exist" Jan 28 18:59:02 crc kubenswrapper[4721]: I0128 18:59:02.521687 4721 scope.go:117] "RemoveContainer" containerID="b0194738761963470ff9fa51ec111f82fdfec50b9012e36be2c4d645192b8b4f" Jan 28 18:59:02 crc kubenswrapper[4721]: E0128 18:59:02.523092 4721 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b0194738761963470ff9fa51ec111f82fdfec50b9012e36be2c4d645192b8b4f\": container with ID starting with b0194738761963470ff9fa51ec111f82fdfec50b9012e36be2c4d645192b8b4f not found: ID does not exist" containerID="b0194738761963470ff9fa51ec111f82fdfec50b9012e36be2c4d645192b8b4f" Jan 28 18:59:02 crc kubenswrapper[4721]: I0128 18:59:02.523122 4721 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b0194738761963470ff9fa51ec111f82fdfec50b9012e36be2c4d645192b8b4f"} err="failed to get container status \"b0194738761963470ff9fa51ec111f82fdfec50b9012e36be2c4d645192b8b4f\": rpc error: code = NotFound desc = could not find container \"b0194738761963470ff9fa51ec111f82fdfec50b9012e36be2c4d645192b8b4f\": container with ID starting with b0194738761963470ff9fa51ec111f82fdfec50b9012e36be2c4d645192b8b4f not found: ID does not exist" Jan 28 18:59:02 crc kubenswrapper[4721]: I0128 18:59:02.523141 4721 scope.go:117] "RemoveContainer" containerID="f3b0ab3659ed247871aabb2df13241e794a5c4a42df0dd45504a3775be02575e" Jan 28 18:59:02 crc kubenswrapper[4721]: E0128 18:59:02.523537 4721 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f3b0ab3659ed247871aabb2df13241e794a5c4a42df0dd45504a3775be02575e\": container with ID starting with f3b0ab3659ed247871aabb2df13241e794a5c4a42df0dd45504a3775be02575e not found: ID does not exist" containerID="f3b0ab3659ed247871aabb2df13241e794a5c4a42df0dd45504a3775be02575e" Jan 28 18:59:02 crc kubenswrapper[4721]: I0128 18:59:02.523567 4721 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f3b0ab3659ed247871aabb2df13241e794a5c4a42df0dd45504a3775be02575e"} err="failed to get container status \"f3b0ab3659ed247871aabb2df13241e794a5c4a42df0dd45504a3775be02575e\": rpc error: code = NotFound desc = could not find container \"f3b0ab3659ed247871aabb2df13241e794a5c4a42df0dd45504a3775be02575e\": container with ID starting with f3b0ab3659ed247871aabb2df13241e794a5c4a42df0dd45504a3775be02575e not found: ID does not exist" Jan 28 18:59:02 crc kubenswrapper[4721]: I0128 18:59:02.556254 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/571d8c2d-fc94-4db4-ad3c-1e6825b20035-run-httpd\") pod \"ceilometer-0\" (UID: \"571d8c2d-fc94-4db4-ad3c-1e6825b20035\") " pod="openstack/ceilometer-0" Jan 28 18:59:02 crc kubenswrapper[4721]: I0128 18:59:02.556394 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/571d8c2d-fc94-4db4-ad3c-1e6825b20035-log-httpd\") pod \"ceilometer-0\" (UID: \"571d8c2d-fc94-4db4-ad3c-1e6825b20035\") " pod="openstack/ceilometer-0" Jan 28 18:59:02 crc kubenswrapper[4721]: I0128 18:59:02.556487 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/571d8c2d-fc94-4db4-ad3c-1e6825b20035-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"571d8c2d-fc94-4db4-ad3c-1e6825b20035\") " pod="openstack/ceilometer-0" Jan 28 18:59:02 crc kubenswrapper[4721]: I0128 18:59:02.556526 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/571d8c2d-fc94-4db4-ad3c-1e6825b20035-config-data\") pod \"ceilometer-0\" (UID: \"571d8c2d-fc94-4db4-ad3c-1e6825b20035\") " pod="openstack/ceilometer-0" Jan 28 18:59:02 crc kubenswrapper[4721]: I0128 18:59:02.556602 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hf6k9\" (UniqueName: \"kubernetes.io/projected/571d8c2d-fc94-4db4-ad3c-1e6825b20035-kube-api-access-hf6k9\") pod \"ceilometer-0\" (UID: \"571d8c2d-fc94-4db4-ad3c-1e6825b20035\") " pod="openstack/ceilometer-0" Jan 28 18:59:02 crc 
kubenswrapper[4721]: I0128 18:59:02.556689 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/571d8c2d-fc94-4db4-ad3c-1e6825b20035-scripts\") pod \"ceilometer-0\" (UID: \"571d8c2d-fc94-4db4-ad3c-1e6825b20035\") " pod="openstack/ceilometer-0" Jan 28 18:59:02 crc kubenswrapper[4721]: I0128 18:59:02.556738 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/571d8c2d-fc94-4db4-ad3c-1e6825b20035-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"571d8c2d-fc94-4db4-ad3c-1e6825b20035\") " pod="openstack/ceilometer-0" Jan 28 18:59:02 crc kubenswrapper[4721]: I0128 18:59:02.556889 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/571d8c2d-fc94-4db4-ad3c-1e6825b20035-log-httpd\") pod \"ceilometer-0\" (UID: \"571d8c2d-fc94-4db4-ad3c-1e6825b20035\") " pod="openstack/ceilometer-0" Jan 28 18:59:02 crc kubenswrapper[4721]: I0128 18:59:02.556772 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/571d8c2d-fc94-4db4-ad3c-1e6825b20035-run-httpd\") pod \"ceilometer-0\" (UID: \"571d8c2d-fc94-4db4-ad3c-1e6825b20035\") " pod="openstack/ceilometer-0" Jan 28 18:59:02 crc kubenswrapper[4721]: I0128 18:59:02.561765 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/571d8c2d-fc94-4db4-ad3c-1e6825b20035-config-data\") pod \"ceilometer-0\" (UID: \"571d8c2d-fc94-4db4-ad3c-1e6825b20035\") " pod="openstack/ceilometer-0" Jan 28 18:59:02 crc kubenswrapper[4721]: I0128 18:59:02.562878 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/571d8c2d-fc94-4db4-ad3c-1e6825b20035-scripts\") pod \"ceilometer-0\" (UID: \"571d8c2d-fc94-4db4-ad3c-1e6825b20035\") " pod="openstack/ceilometer-0" Jan 28 18:59:02 crc kubenswrapper[4721]: I0128 18:59:02.563012 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/571d8c2d-fc94-4db4-ad3c-1e6825b20035-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"571d8c2d-fc94-4db4-ad3c-1e6825b20035\") " pod="openstack/ceilometer-0" Jan 28 18:59:02 crc kubenswrapper[4721]: I0128 18:59:02.564050 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/571d8c2d-fc94-4db4-ad3c-1e6825b20035-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"571d8c2d-fc94-4db4-ad3c-1e6825b20035\") " pod="openstack/ceilometer-0" Jan 28 18:59:02 crc kubenswrapper[4721]: I0128 18:59:02.577208 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hf6k9\" (UniqueName: \"kubernetes.io/projected/571d8c2d-fc94-4db4-ad3c-1e6825b20035-kube-api-access-hf6k9\") pod \"ceilometer-0\" (UID: \"571d8c2d-fc94-4db4-ad3c-1e6825b20035\") " pod="openstack/ceilometer-0" Jan 28 18:59:02 crc kubenswrapper[4721]: I0128 18:59:02.722846 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 28 18:59:03 crc kubenswrapper[4721]: W0128 18:59:03.266284 4721 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod571d8c2d_fc94_4db4_ad3c_1e6825b20035.slice/crio-4c4ed9ce4ba3be707ff653f471c10b14978f2c27db347a6adf8a946fe10637c8 WatchSource:0}: Error finding container 4c4ed9ce4ba3be707ff653f471c10b14978f2c27db347a6adf8a946fe10637c8: Status 404 returned error can't find the container with id 4c4ed9ce4ba3be707ff653f471c10b14978f2c27db347a6adf8a946fe10637c8 Jan 28 18:59:03 crc kubenswrapper[4721]: I0128 18:59:03.269206 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 28 18:59:03 crc kubenswrapper[4721]: I0128 18:59:03.302084 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"571d8c2d-fc94-4db4-ad3c-1e6825b20035","Type":"ContainerStarted","Data":"4c4ed9ce4ba3be707ff653f471c10b14978f2c27db347a6adf8a946fe10637c8"} Jan 28 18:59:03 crc kubenswrapper[4721]: I0128 18:59:03.448395 4721 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 28 18:59:03 crc kubenswrapper[4721]: I0128 18:59:03.455045 4721 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 28 18:59:03 crc kubenswrapper[4721]: I0128 18:59:03.455859 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 28 18:59:03 crc kubenswrapper[4721]: I0128 18:59:03.545951 4721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4f529a56-ccec-4eed-9a56-094d3ada74a3" path="/var/lib/kubelet/pods/4f529a56-ccec-4eed-9a56-094d3ada74a3/volumes" Jan 28 18:59:04 crc kubenswrapper[4721]: I0128 18:59:04.320284 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"571d8c2d-fc94-4db4-ad3c-1e6825b20035","Type":"ContainerStarted","Data":"a860f33c0d97db8faa523aa11b2441a795110c2382636af24257765872ada1b6"} Jan 28 18:59:04 crc kubenswrapper[4721]: I0128 18:59:04.326053 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 28 18:59:04 crc kubenswrapper[4721]: I0128 18:59:04.575818 4721 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0" Jan 28 18:59:04 crc kubenswrapper[4721]: I0128 18:59:04.607991 4721 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0" Jan 28 18:59:05 crc kubenswrapper[4721]: I0128 18:59:05.147618 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 28 18:59:05 crc kubenswrapper[4721]: I0128 18:59:05.333899 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"571d8c2d-fc94-4db4-ad3c-1e6825b20035","Type":"ContainerStarted","Data":"8aa0fc17661497244544d2a42cd7a82f8d5aaf8301e286d643e488ceec037f04"} Jan 28 18:59:05 crc kubenswrapper[4721]: I0128 18:59:05.336423 4721 generic.go:334] "Generic (PLEG): container finished" podID="3607b401-8924-423a-af9f-4d76cbb67a0b" containerID="8e53db6ea2df45f6cab856eda77131f56223fcc51d3af29e436575c4cdf567ba" exitCode=0 Jan 28 18:59:05 crc kubenswrapper[4721]: I0128 18:59:05.337844 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" 
event={"ID":"3607b401-8924-423a-af9f-4d76cbb67a0b","Type":"ContainerDied","Data":"8e53db6ea2df45f6cab856eda77131f56223fcc51d3af29e436575c4cdf567ba"} Jan 28 18:59:05 crc kubenswrapper[4721]: I0128 18:59:05.365884 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0" Jan 28 18:59:05 crc kubenswrapper[4721]: I0128 18:59:05.597488 4721 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Jan 28 18:59:05 crc kubenswrapper[4721]: I0128 18:59:05.599265 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cell-mapping-ztjx8"] Jan 28 18:59:05 crc kubenswrapper[4721]: I0128 18:59:05.604258 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-ztjx8" Jan 28 18:59:05 crc kubenswrapper[4721]: I0128 18:59:05.608832 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data" Jan 28 18:59:05 crc kubenswrapper[4721]: I0128 18:59:05.609005 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts" Jan 28 18:59:05 crc kubenswrapper[4721]: I0128 18:59:05.624905 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-ztjx8"] Jan 28 18:59:05 crc kubenswrapper[4721]: I0128 18:59:05.710910 4721 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Jan 28 18:59:05 crc kubenswrapper[4721]: I0128 18:59:05.760913 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8717a4d7-cca2-4bd2-bb79-6a034cd7081c-scripts\") pod \"nova-cell1-cell-mapping-ztjx8\" (UID: \"8717a4d7-cca2-4bd2-bb79-6a034cd7081c\") " pod="openstack/nova-cell1-cell-mapping-ztjx8" Jan 28 18:59:05 crc kubenswrapper[4721]: I0128 18:59:05.761010 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8nnr9\" (UniqueName: \"kubernetes.io/projected/8717a4d7-cca2-4bd2-bb79-6a034cd7081c-kube-api-access-8nnr9\") pod \"nova-cell1-cell-mapping-ztjx8\" (UID: \"8717a4d7-cca2-4bd2-bb79-6a034cd7081c\") " pod="openstack/nova-cell1-cell-mapping-ztjx8" Jan 28 18:59:05 crc kubenswrapper[4721]: I0128 18:59:05.761340 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8717a4d7-cca2-4bd2-bb79-6a034cd7081c-config-data\") pod \"nova-cell1-cell-mapping-ztjx8\" (UID: \"8717a4d7-cca2-4bd2-bb79-6a034cd7081c\") " pod="openstack/nova-cell1-cell-mapping-ztjx8" Jan 28 18:59:05 crc kubenswrapper[4721]: I0128 18:59:05.761444 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8717a4d7-cca2-4bd2-bb79-6a034cd7081c-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-ztjx8\" (UID: \"8717a4d7-cca2-4bd2-bb79-6a034cd7081c\") " pod="openstack/nova-cell1-cell-mapping-ztjx8" Jan 28 18:59:05 crc kubenswrapper[4721]: I0128 18:59:05.864918 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8717a4d7-cca2-4bd2-bb79-6a034cd7081c-scripts\") pod \"nova-cell1-cell-mapping-ztjx8\" (UID: \"8717a4d7-cca2-4bd2-bb79-6a034cd7081c\") " pod="openstack/nova-cell1-cell-mapping-ztjx8" Jan 28 18:59:05 crc 
kubenswrapper[4721]: I0128 18:59:05.864976 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8nnr9\" (UniqueName: \"kubernetes.io/projected/8717a4d7-cca2-4bd2-bb79-6a034cd7081c-kube-api-access-8nnr9\") pod \"nova-cell1-cell-mapping-ztjx8\" (UID: \"8717a4d7-cca2-4bd2-bb79-6a034cd7081c\") " pod="openstack/nova-cell1-cell-mapping-ztjx8" Jan 28 18:59:05 crc kubenswrapper[4721]: I0128 18:59:05.865090 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8717a4d7-cca2-4bd2-bb79-6a034cd7081c-config-data\") pod \"nova-cell1-cell-mapping-ztjx8\" (UID: \"8717a4d7-cca2-4bd2-bb79-6a034cd7081c\") " pod="openstack/nova-cell1-cell-mapping-ztjx8" Jan 28 18:59:05 crc kubenswrapper[4721]: I0128 18:59:05.865134 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8717a4d7-cca2-4bd2-bb79-6a034cd7081c-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-ztjx8\" (UID: \"8717a4d7-cca2-4bd2-bb79-6a034cd7081c\") " pod="openstack/nova-cell1-cell-mapping-ztjx8" Jan 28 18:59:05 crc kubenswrapper[4721]: I0128 18:59:05.872446 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8717a4d7-cca2-4bd2-bb79-6a034cd7081c-scripts\") pod \"nova-cell1-cell-mapping-ztjx8\" (UID: \"8717a4d7-cca2-4bd2-bb79-6a034cd7081c\") " pod="openstack/nova-cell1-cell-mapping-ztjx8" Jan 28 18:59:05 crc kubenswrapper[4721]: I0128 18:59:05.874257 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8717a4d7-cca2-4bd2-bb79-6a034cd7081c-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-ztjx8\" (UID: \"8717a4d7-cca2-4bd2-bb79-6a034cd7081c\") " pod="openstack/nova-cell1-cell-mapping-ztjx8" Jan 28 18:59:05 crc kubenswrapper[4721]: I0128 18:59:05.876733 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8717a4d7-cca2-4bd2-bb79-6a034cd7081c-config-data\") pod \"nova-cell1-cell-mapping-ztjx8\" (UID: \"8717a4d7-cca2-4bd2-bb79-6a034cd7081c\") " pod="openstack/nova-cell1-cell-mapping-ztjx8" Jan 28 18:59:05 crc kubenswrapper[4721]: I0128 18:59:05.881262 4721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 28 18:59:05 crc kubenswrapper[4721]: I0128 18:59:05.892886 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8nnr9\" (UniqueName: \"kubernetes.io/projected/8717a4d7-cca2-4bd2-bb79-6a034cd7081c-kube-api-access-8nnr9\") pod \"nova-cell1-cell-mapping-ztjx8\" (UID: \"8717a4d7-cca2-4bd2-bb79-6a034cd7081c\") " pod="openstack/nova-cell1-cell-mapping-ztjx8" Jan 28 18:59:05 crc kubenswrapper[4721]: I0128 18:59:05.951237 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-ztjx8" Jan 28 18:59:05 crc kubenswrapper[4721]: I0128 18:59:05.974135 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3607b401-8924-423a-af9f-4d76cbb67a0b-config-data\") pod \"3607b401-8924-423a-af9f-4d76cbb67a0b\" (UID: \"3607b401-8924-423a-af9f-4d76cbb67a0b\") " Jan 28 18:59:05 crc kubenswrapper[4721]: I0128 18:59:05.974374 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xc54k\" (UniqueName: \"kubernetes.io/projected/3607b401-8924-423a-af9f-4d76cbb67a0b-kube-api-access-xc54k\") pod \"3607b401-8924-423a-af9f-4d76cbb67a0b\" (UID: \"3607b401-8924-423a-af9f-4d76cbb67a0b\") " Jan 28 18:59:05 crc kubenswrapper[4721]: I0128 18:59:05.974478 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3607b401-8924-423a-af9f-4d76cbb67a0b-logs\") pod \"3607b401-8924-423a-af9f-4d76cbb67a0b\" (UID: \"3607b401-8924-423a-af9f-4d76cbb67a0b\") " Jan 28 18:59:05 crc kubenswrapper[4721]: I0128 18:59:05.974658 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3607b401-8924-423a-af9f-4d76cbb67a0b-combined-ca-bundle\") pod \"3607b401-8924-423a-af9f-4d76cbb67a0b\" (UID: \"3607b401-8924-423a-af9f-4d76cbb67a0b\") " Jan 28 18:59:05 crc kubenswrapper[4721]: I0128 18:59:05.977013 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3607b401-8924-423a-af9f-4d76cbb67a0b-logs" (OuterVolumeSpecName: "logs") pod "3607b401-8924-423a-af9f-4d76cbb67a0b" (UID: "3607b401-8924-423a-af9f-4d76cbb67a0b"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 18:59:05 crc kubenswrapper[4721]: I0128 18:59:05.982801 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3607b401-8924-423a-af9f-4d76cbb67a0b-kube-api-access-xc54k" (OuterVolumeSpecName: "kube-api-access-xc54k") pod "3607b401-8924-423a-af9f-4d76cbb67a0b" (UID: "3607b401-8924-423a-af9f-4d76cbb67a0b"). InnerVolumeSpecName "kube-api-access-xc54k". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:59:06 crc kubenswrapper[4721]: I0128 18:59:06.019487 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3607b401-8924-423a-af9f-4d76cbb67a0b-config-data" (OuterVolumeSpecName: "config-data") pod "3607b401-8924-423a-af9f-4d76cbb67a0b" (UID: "3607b401-8924-423a-af9f-4d76cbb67a0b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:59:06 crc kubenswrapper[4721]: I0128 18:59:06.022301 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3607b401-8924-423a-af9f-4d76cbb67a0b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3607b401-8924-423a-af9f-4d76cbb67a0b" (UID: "3607b401-8924-423a-af9f-4d76cbb67a0b"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:59:06 crc kubenswrapper[4721]: I0128 18:59:06.079219 4721 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3607b401-8924-423a-af9f-4d76cbb67a0b-logs\") on node \"crc\" DevicePath \"\"" Jan 28 18:59:06 crc kubenswrapper[4721]: I0128 18:59:06.079269 4721 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3607b401-8924-423a-af9f-4d76cbb67a0b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 18:59:06 crc kubenswrapper[4721]: I0128 18:59:06.079285 4721 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3607b401-8924-423a-af9f-4d76cbb67a0b-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 18:59:06 crc kubenswrapper[4721]: I0128 18:59:06.079297 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xc54k\" (UniqueName: \"kubernetes.io/projected/3607b401-8924-423a-af9f-4d76cbb67a0b-kube-api-access-xc54k\") on node \"crc\" DevicePath \"\"" Jan 28 18:59:06 crc kubenswrapper[4721]: I0128 18:59:06.348750 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"571d8c2d-fc94-4db4-ad3c-1e6825b20035","Type":"ContainerStarted","Data":"59b5c4496a99332de0c4768a3115fe0f147ad553e05f6454c25bec1a69d59564"} Jan 28 18:59:06 crc kubenswrapper[4721]: I0128 18:59:06.351140 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"3607b401-8924-423a-af9f-4d76cbb67a0b","Type":"ContainerDied","Data":"8fc1288dafac6269ebd90032242cdaffadb8f84cf09278daeb0617c76ee74c16"} Jan 28 18:59:06 crc kubenswrapper[4721]: I0128 18:59:06.351208 4721 scope.go:117] "RemoveContainer" containerID="8e53db6ea2df45f6cab856eda77131f56223fcc51d3af29e436575c4cdf567ba" Jan 28 18:59:06 crc kubenswrapper[4721]: I0128 18:59:06.351251 4721 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 28 18:59:06 crc kubenswrapper[4721]: I0128 18:59:06.422742 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 28 18:59:06 crc kubenswrapper[4721]: I0128 18:59:06.430896 4721 scope.go:117] "RemoveContainer" containerID="2f79f11d8ab6905ded8a2d156a56904835d55c5c66f3db4615248f8d2f5e771f" Jan 28 18:59:06 crc kubenswrapper[4721]: I0128 18:59:06.440625 4721 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 28 18:59:06 crc kubenswrapper[4721]: I0128 18:59:06.466777 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 28 18:59:06 crc kubenswrapper[4721]: E0128 18:59:06.467413 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3607b401-8924-423a-af9f-4d76cbb67a0b" containerName="nova-api-api" Jan 28 18:59:06 crc kubenswrapper[4721]: I0128 18:59:06.467440 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="3607b401-8924-423a-af9f-4d76cbb67a0b" containerName="nova-api-api" Jan 28 18:59:06 crc kubenswrapper[4721]: E0128 18:59:06.467487 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3607b401-8924-423a-af9f-4d76cbb67a0b" containerName="nova-api-log" Jan 28 18:59:06 crc kubenswrapper[4721]: I0128 18:59:06.467496 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="3607b401-8924-423a-af9f-4d76cbb67a0b" containerName="nova-api-log" Jan 28 18:59:06 crc kubenswrapper[4721]: I0128 18:59:06.467758 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="3607b401-8924-423a-af9f-4d76cbb67a0b" containerName="nova-api-api" Jan 28 18:59:06 crc kubenswrapper[4721]: I0128 18:59:06.467795 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="3607b401-8924-423a-af9f-4d76cbb67a0b" containerName="nova-api-log" Jan 28 18:59:06 crc kubenswrapper[4721]: I0128 18:59:06.469605 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 28 18:59:06 crc kubenswrapper[4721]: I0128 18:59:06.481998 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Jan 28 18:59:06 crc kubenswrapper[4721]: I0128 18:59:06.482289 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 28 18:59:06 crc kubenswrapper[4721]: I0128 18:59:06.482442 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Jan 28 18:59:06 crc kubenswrapper[4721]: I0128 18:59:06.489285 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Jan 28 18:59:06 crc kubenswrapper[4721]: I0128 18:59:06.504299 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 28 18:59:06 crc kubenswrapper[4721]: I0128 18:59:06.573262 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-ztjx8"] Jan 28 18:59:06 crc kubenswrapper[4721]: I0128 18:59:06.608853 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aa2690f8-a84d-4f5e-96ee-9bb18524dd6b-config-data\") pod \"nova-api-0\" (UID: \"aa2690f8-a84d-4f5e-96ee-9bb18524dd6b\") " pod="openstack/nova-api-0" Jan 28 18:59:06 crc kubenswrapper[4721]: I0128 18:59:06.608905 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/aa2690f8-a84d-4f5e-96ee-9bb18524dd6b-public-tls-certs\") pod \"nova-api-0\" (UID: \"aa2690f8-a84d-4f5e-96ee-9bb18524dd6b\") " pod="openstack/nova-api-0" Jan 28 18:59:06 crc kubenswrapper[4721]: I0128 18:59:06.608926 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/aa2690f8-a84d-4f5e-96ee-9bb18524dd6b-logs\") pod \"nova-api-0\" (UID: \"aa2690f8-a84d-4f5e-96ee-9bb18524dd6b\") " pod="openstack/nova-api-0" Jan 28 18:59:06 crc kubenswrapper[4721]: I0128 18:59:06.608966 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/aa2690f8-a84d-4f5e-96ee-9bb18524dd6b-internal-tls-certs\") pod \"nova-api-0\" (UID: \"aa2690f8-a84d-4f5e-96ee-9bb18524dd6b\") " pod="openstack/nova-api-0" Jan 28 18:59:06 crc kubenswrapper[4721]: I0128 18:59:06.609004 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4s5dd\" (UniqueName: \"kubernetes.io/projected/aa2690f8-a84d-4f5e-96ee-9bb18524dd6b-kube-api-access-4s5dd\") pod \"nova-api-0\" (UID: \"aa2690f8-a84d-4f5e-96ee-9bb18524dd6b\") " pod="openstack/nova-api-0" Jan 28 18:59:06 crc kubenswrapper[4721]: I0128 18:59:06.609046 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aa2690f8-a84d-4f5e-96ee-9bb18524dd6b-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"aa2690f8-a84d-4f5e-96ee-9bb18524dd6b\") " pod="openstack/nova-api-0" Jan 28 18:59:06 crc kubenswrapper[4721]: I0128 18:59:06.711061 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aa2690f8-a84d-4f5e-96ee-9bb18524dd6b-config-data\") pod \"nova-api-0\" (UID: 
\"aa2690f8-a84d-4f5e-96ee-9bb18524dd6b\") " pod="openstack/nova-api-0" Jan 28 18:59:06 crc kubenswrapper[4721]: I0128 18:59:06.711104 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/aa2690f8-a84d-4f5e-96ee-9bb18524dd6b-public-tls-certs\") pod \"nova-api-0\" (UID: \"aa2690f8-a84d-4f5e-96ee-9bb18524dd6b\") " pod="openstack/nova-api-0" Jan 28 18:59:06 crc kubenswrapper[4721]: I0128 18:59:06.711127 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/aa2690f8-a84d-4f5e-96ee-9bb18524dd6b-logs\") pod \"nova-api-0\" (UID: \"aa2690f8-a84d-4f5e-96ee-9bb18524dd6b\") " pod="openstack/nova-api-0" Jan 28 18:59:06 crc kubenswrapper[4721]: I0128 18:59:06.711196 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/aa2690f8-a84d-4f5e-96ee-9bb18524dd6b-internal-tls-certs\") pod \"nova-api-0\" (UID: \"aa2690f8-a84d-4f5e-96ee-9bb18524dd6b\") " pod="openstack/nova-api-0" Jan 28 18:59:06 crc kubenswrapper[4721]: I0128 18:59:06.711238 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4s5dd\" (UniqueName: \"kubernetes.io/projected/aa2690f8-a84d-4f5e-96ee-9bb18524dd6b-kube-api-access-4s5dd\") pod \"nova-api-0\" (UID: \"aa2690f8-a84d-4f5e-96ee-9bb18524dd6b\") " pod="openstack/nova-api-0" Jan 28 18:59:06 crc kubenswrapper[4721]: I0128 18:59:06.711284 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aa2690f8-a84d-4f5e-96ee-9bb18524dd6b-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"aa2690f8-a84d-4f5e-96ee-9bb18524dd6b\") " pod="openstack/nova-api-0" Jan 28 18:59:06 crc kubenswrapper[4721]: I0128 18:59:06.714548 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/aa2690f8-a84d-4f5e-96ee-9bb18524dd6b-logs\") pod \"nova-api-0\" (UID: \"aa2690f8-a84d-4f5e-96ee-9bb18524dd6b\") " pod="openstack/nova-api-0" Jan 28 18:59:06 crc kubenswrapper[4721]: I0128 18:59:06.716855 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aa2690f8-a84d-4f5e-96ee-9bb18524dd6b-config-data\") pod \"nova-api-0\" (UID: \"aa2690f8-a84d-4f5e-96ee-9bb18524dd6b\") " pod="openstack/nova-api-0" Jan 28 18:59:06 crc kubenswrapper[4721]: I0128 18:59:06.719299 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aa2690f8-a84d-4f5e-96ee-9bb18524dd6b-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"aa2690f8-a84d-4f5e-96ee-9bb18524dd6b\") " pod="openstack/nova-api-0" Jan 28 18:59:06 crc kubenswrapper[4721]: I0128 18:59:06.720654 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/aa2690f8-a84d-4f5e-96ee-9bb18524dd6b-public-tls-certs\") pod \"nova-api-0\" (UID: \"aa2690f8-a84d-4f5e-96ee-9bb18524dd6b\") " pod="openstack/nova-api-0" Jan 28 18:59:06 crc kubenswrapper[4721]: I0128 18:59:06.723823 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/aa2690f8-a84d-4f5e-96ee-9bb18524dd6b-internal-tls-certs\") pod \"nova-api-0\" (UID: \"aa2690f8-a84d-4f5e-96ee-9bb18524dd6b\") " pod="openstack/nova-api-0" Jan 28 
18:59:06 crc kubenswrapper[4721]: I0128 18:59:06.754902 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4s5dd\" (UniqueName: \"kubernetes.io/projected/aa2690f8-a84d-4f5e-96ee-9bb18524dd6b-kube-api-access-4s5dd\") pod \"nova-api-0\" (UID: \"aa2690f8-a84d-4f5e-96ee-9bb18524dd6b\") " pod="openstack/nova-api-0" Jan 28 18:59:06 crc kubenswrapper[4721]: I0128 18:59:06.806908 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 28 18:59:07 crc kubenswrapper[4721]: I0128 18:59:07.373261 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-ztjx8" event={"ID":"8717a4d7-cca2-4bd2-bb79-6a034cd7081c","Type":"ContainerStarted","Data":"8f1014013f8125055f2dbd76ef01cd7678cacb719231e62e40cec25e622c6bee"} Jan 28 18:59:07 crc kubenswrapper[4721]: I0128 18:59:07.373802 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-ztjx8" event={"ID":"8717a4d7-cca2-4bd2-bb79-6a034cd7081c","Type":"ContainerStarted","Data":"eb085710aa462f664acaa61f6efb16d07f1699ca54142cce19ced4f4b758e98f"} Jan 28 18:59:07 crc kubenswrapper[4721]: I0128 18:59:07.400333 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-cell-mapping-ztjx8" podStartSLOduration=2.40030652 podStartE2EDuration="2.40030652s" podCreationTimestamp="2026-01-28 18:59:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:59:07.387893306 +0000 UTC m=+1513.113198866" watchObservedRunningTime="2026-01-28 18:59:07.40030652 +0000 UTC m=+1513.125612080" Jan 28 18:59:07 crc kubenswrapper[4721]: I0128 18:59:07.545564 4721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3607b401-8924-423a-af9f-4d76cbb67a0b" path="/var/lib/kubelet/pods/3607b401-8924-423a-af9f-4d76cbb67a0b/volumes" Jan 28 18:59:07 crc kubenswrapper[4721]: I0128 18:59:07.546386 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 28 18:59:08 crc kubenswrapper[4721]: I0128 18:59:08.384817 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"571d8c2d-fc94-4db4-ad3c-1e6825b20035","Type":"ContainerStarted","Data":"1411bd0370a3989889662c84de7d37a9a137ef1e680f7c2f7ae701b2c0abf929"} Jan 28 18:59:08 crc kubenswrapper[4721]: I0128 18:59:08.385522 4721 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="571d8c2d-fc94-4db4-ad3c-1e6825b20035" containerName="ceilometer-central-agent" containerID="cri-o://a860f33c0d97db8faa523aa11b2441a795110c2382636af24257765872ada1b6" gracePeriod=30 Jan 28 18:59:08 crc kubenswrapper[4721]: I0128 18:59:08.385907 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 28 18:59:08 crc kubenswrapper[4721]: I0128 18:59:08.386365 4721 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="571d8c2d-fc94-4db4-ad3c-1e6825b20035" containerName="proxy-httpd" containerID="cri-o://1411bd0370a3989889662c84de7d37a9a137ef1e680f7c2f7ae701b2c0abf929" gracePeriod=30 Jan 28 18:59:08 crc kubenswrapper[4721]: I0128 18:59:08.386433 4721 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="571d8c2d-fc94-4db4-ad3c-1e6825b20035" containerName="sg-core" 
containerID="cri-o://59b5c4496a99332de0c4768a3115fe0f147ad553e05f6454c25bec1a69d59564" gracePeriod=30 Jan 28 18:59:08 crc kubenswrapper[4721]: I0128 18:59:08.386485 4721 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="571d8c2d-fc94-4db4-ad3c-1e6825b20035" containerName="ceilometer-notification-agent" containerID="cri-o://8aa0fc17661497244544d2a42cd7a82f8d5aaf8301e286d643e488ceec037f04" gracePeriod=30 Jan 28 18:59:08 crc kubenswrapper[4721]: I0128 18:59:08.397971 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"aa2690f8-a84d-4f5e-96ee-9bb18524dd6b","Type":"ContainerStarted","Data":"54894c1675125ab13abf3662dfd7779a0f3bfe64bf9550dbfb76605d9889f836"} Jan 28 18:59:08 crc kubenswrapper[4721]: I0128 18:59:08.398012 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"aa2690f8-a84d-4f5e-96ee-9bb18524dd6b","Type":"ContainerStarted","Data":"8d5f95573a1109dfc59692acd9feff871cb97746a7b06e880b89ed15543a248f"} Jan 28 18:59:08 crc kubenswrapper[4721]: I0128 18:59:08.398024 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"aa2690f8-a84d-4f5e-96ee-9bb18524dd6b","Type":"ContainerStarted","Data":"a0a96c2ca09707b347923fe7ca83813c508cc59588deb1935a56701580d01b15"} Jan 28 18:59:08 crc kubenswrapper[4721]: I0128 18:59:08.414578 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.9858203479999998 podStartE2EDuration="6.414556037s" podCreationTimestamp="2026-01-28 18:59:02 +0000 UTC" firstStartedPulling="2026-01-28 18:59:03.269228028 +0000 UTC m=+1508.994533588" lastFinishedPulling="2026-01-28 18:59:07.697963717 +0000 UTC m=+1513.423269277" observedRunningTime="2026-01-28 18:59:08.411642924 +0000 UTC m=+1514.136948484" watchObservedRunningTime="2026-01-28 18:59:08.414556037 +0000 UTC m=+1514.139861597" Jan 28 18:59:08 crc kubenswrapper[4721]: I0128 18:59:08.458784 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.458755993 podStartE2EDuration="2.458755993s" podCreationTimestamp="2026-01-28 18:59:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:59:08.440940026 +0000 UTC m=+1514.166245596" watchObservedRunningTime="2026-01-28 18:59:08.458755993 +0000 UTC m=+1514.184061553" Jan 28 18:59:08 crc kubenswrapper[4721]: I0128 18:59:08.767473 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5fd9b586ff-mx67n" Jan 28 18:59:08 crc kubenswrapper[4721]: I0128 18:59:08.847298 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78cd565959-9cwzn"] Jan 28 18:59:08 crc kubenswrapper[4721]: I0128 18:59:08.848427 4721 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-78cd565959-9cwzn" podUID="49c5ce5d-28b1-4b34-865e-7452b6512fa5" containerName="dnsmasq-dns" containerID="cri-o://a2ac82e74e2ec28298b95675c7d0747ddfe6755e7a6d80ee6c02a96d121876e0" gracePeriod=10 Jan 28 18:59:09 crc kubenswrapper[4721]: I0128 18:59:09.421151 4721 generic.go:334] "Generic (PLEG): container finished" podID="571d8c2d-fc94-4db4-ad3c-1e6825b20035" containerID="1411bd0370a3989889662c84de7d37a9a137ef1e680f7c2f7ae701b2c0abf929" exitCode=0 Jan 28 18:59:09 crc kubenswrapper[4721]: I0128 18:59:09.422056 4721 
generic.go:334] "Generic (PLEG): container finished" podID="571d8c2d-fc94-4db4-ad3c-1e6825b20035" containerID="59b5c4496a99332de0c4768a3115fe0f147ad553e05f6454c25bec1a69d59564" exitCode=2 Jan 28 18:59:09 crc kubenswrapper[4721]: I0128 18:59:09.422143 4721 generic.go:334] "Generic (PLEG): container finished" podID="571d8c2d-fc94-4db4-ad3c-1e6825b20035" containerID="8aa0fc17661497244544d2a42cd7a82f8d5aaf8301e286d643e488ceec037f04" exitCode=0 Jan 28 18:59:09 crc kubenswrapper[4721]: I0128 18:59:09.421317 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"571d8c2d-fc94-4db4-ad3c-1e6825b20035","Type":"ContainerDied","Data":"1411bd0370a3989889662c84de7d37a9a137ef1e680f7c2f7ae701b2c0abf929"} Jan 28 18:59:09 crc kubenswrapper[4721]: I0128 18:59:09.422338 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"571d8c2d-fc94-4db4-ad3c-1e6825b20035","Type":"ContainerDied","Data":"59b5c4496a99332de0c4768a3115fe0f147ad553e05f6454c25bec1a69d59564"} Jan 28 18:59:09 crc kubenswrapper[4721]: I0128 18:59:09.422429 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"571d8c2d-fc94-4db4-ad3c-1e6825b20035","Type":"ContainerDied","Data":"8aa0fc17661497244544d2a42cd7a82f8d5aaf8301e286d643e488ceec037f04"} Jan 28 18:59:09 crc kubenswrapper[4721]: I0128 18:59:09.426087 4721 generic.go:334] "Generic (PLEG): container finished" podID="49c5ce5d-28b1-4b34-865e-7452b6512fa5" containerID="a2ac82e74e2ec28298b95675c7d0747ddfe6755e7a6d80ee6c02a96d121876e0" exitCode=0 Jan 28 18:59:09 crc kubenswrapper[4721]: I0128 18:59:09.426160 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78cd565959-9cwzn" event={"ID":"49c5ce5d-28b1-4b34-865e-7452b6512fa5","Type":"ContainerDied","Data":"a2ac82e74e2ec28298b95675c7d0747ddfe6755e7a6d80ee6c02a96d121876e0"} Jan 28 18:59:09 crc kubenswrapper[4721]: I0128 18:59:09.426236 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78cd565959-9cwzn" event={"ID":"49c5ce5d-28b1-4b34-865e-7452b6512fa5","Type":"ContainerDied","Data":"aed78d1fb1eb6bcd5560ec1fd826b3f7bf214fd436d25586df291762554013cf"} Jan 28 18:59:09 crc kubenswrapper[4721]: I0128 18:59:09.426268 4721 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="aed78d1fb1eb6bcd5560ec1fd826b3f7bf214fd436d25586df291762554013cf" Jan 28 18:59:09 crc kubenswrapper[4721]: I0128 18:59:09.513505 4721 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78cd565959-9cwzn" Jan 28 18:59:09 crc kubenswrapper[4721]: I0128 18:59:09.589843 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/49c5ce5d-28b1-4b34-865e-7452b6512fa5-dns-swift-storage-0\") pod \"49c5ce5d-28b1-4b34-865e-7452b6512fa5\" (UID: \"49c5ce5d-28b1-4b34-865e-7452b6512fa5\") " Jan 28 18:59:09 crc kubenswrapper[4721]: I0128 18:59:09.589975 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/49c5ce5d-28b1-4b34-865e-7452b6512fa5-ovsdbserver-nb\") pod \"49c5ce5d-28b1-4b34-865e-7452b6512fa5\" (UID: \"49c5ce5d-28b1-4b34-865e-7452b6512fa5\") " Jan 28 18:59:09 crc kubenswrapper[4721]: I0128 18:59:09.590047 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v8m5h\" (UniqueName: \"kubernetes.io/projected/49c5ce5d-28b1-4b34-865e-7452b6512fa5-kube-api-access-v8m5h\") pod \"49c5ce5d-28b1-4b34-865e-7452b6512fa5\" (UID: \"49c5ce5d-28b1-4b34-865e-7452b6512fa5\") " Jan 28 18:59:09 crc kubenswrapper[4721]: I0128 18:59:09.590163 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/49c5ce5d-28b1-4b34-865e-7452b6512fa5-dns-svc\") pod \"49c5ce5d-28b1-4b34-865e-7452b6512fa5\" (UID: \"49c5ce5d-28b1-4b34-865e-7452b6512fa5\") " Jan 28 18:59:09 crc kubenswrapper[4721]: I0128 18:59:09.590280 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/49c5ce5d-28b1-4b34-865e-7452b6512fa5-ovsdbserver-sb\") pod \"49c5ce5d-28b1-4b34-865e-7452b6512fa5\" (UID: \"49c5ce5d-28b1-4b34-865e-7452b6512fa5\") " Jan 28 18:59:09 crc kubenswrapper[4721]: I0128 18:59:09.590491 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/49c5ce5d-28b1-4b34-865e-7452b6512fa5-config\") pod \"49c5ce5d-28b1-4b34-865e-7452b6512fa5\" (UID: \"49c5ce5d-28b1-4b34-865e-7452b6512fa5\") " Jan 28 18:59:09 crc kubenswrapper[4721]: I0128 18:59:09.595917 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c5ce5d-28b1-4b34-865e-7452b6512fa5-kube-api-access-v8m5h" (OuterVolumeSpecName: "kube-api-access-v8m5h") pod "49c5ce5d-28b1-4b34-865e-7452b6512fa5" (UID: "49c5ce5d-28b1-4b34-865e-7452b6512fa5"). InnerVolumeSpecName "kube-api-access-v8m5h". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:59:09 crc kubenswrapper[4721]: I0128 18:59:09.664362 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c5ce5d-28b1-4b34-865e-7452b6512fa5-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "49c5ce5d-28b1-4b34-865e-7452b6512fa5" (UID: "49c5ce5d-28b1-4b34-865e-7452b6512fa5"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:59:09 crc kubenswrapper[4721]: I0128 18:59:09.670763 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c5ce5d-28b1-4b34-865e-7452b6512fa5-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "49c5ce5d-28b1-4b34-865e-7452b6512fa5" (UID: "49c5ce5d-28b1-4b34-865e-7452b6512fa5"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:59:09 crc kubenswrapper[4721]: I0128 18:59:09.677815 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c5ce5d-28b1-4b34-865e-7452b6512fa5-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "49c5ce5d-28b1-4b34-865e-7452b6512fa5" (UID: "49c5ce5d-28b1-4b34-865e-7452b6512fa5"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:59:09 crc kubenswrapper[4721]: I0128 18:59:09.697901 4721 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/49c5ce5d-28b1-4b34-865e-7452b6512fa5-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 28 18:59:09 crc kubenswrapper[4721]: I0128 18:59:09.697943 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v8m5h\" (UniqueName: \"kubernetes.io/projected/49c5ce5d-28b1-4b34-865e-7452b6512fa5-kube-api-access-v8m5h\") on node \"crc\" DevicePath \"\"" Jan 28 18:59:09 crc kubenswrapper[4721]: I0128 18:59:09.697984 4721 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/49c5ce5d-28b1-4b34-865e-7452b6512fa5-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 28 18:59:09 crc kubenswrapper[4721]: I0128 18:59:09.698001 4721 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/49c5ce5d-28b1-4b34-865e-7452b6512fa5-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 28 18:59:09 crc kubenswrapper[4721]: I0128 18:59:09.698816 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c5ce5d-28b1-4b34-865e-7452b6512fa5-config" (OuterVolumeSpecName: "config") pod "49c5ce5d-28b1-4b34-865e-7452b6512fa5" (UID: "49c5ce5d-28b1-4b34-865e-7452b6512fa5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:59:09 crc kubenswrapper[4721]: I0128 18:59:09.725832 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c5ce5d-28b1-4b34-865e-7452b6512fa5-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "49c5ce5d-28b1-4b34-865e-7452b6512fa5" (UID: "49c5ce5d-28b1-4b34-865e-7452b6512fa5"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:59:09 crc kubenswrapper[4721]: I0128 18:59:09.800931 4721 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/49c5ce5d-28b1-4b34-865e-7452b6512fa5-config\") on node \"crc\" DevicePath \"\"" Jan 28 18:59:09 crc kubenswrapper[4721]: I0128 18:59:09.800968 4721 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/49c5ce5d-28b1-4b34-865e-7452b6512fa5-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 28 18:59:10 crc kubenswrapper[4721]: I0128 18:59:10.435260 4721 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78cd565959-9cwzn" Jan 28 18:59:10 crc kubenswrapper[4721]: I0128 18:59:10.471485 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78cd565959-9cwzn"] Jan 28 18:59:10 crc kubenswrapper[4721]: I0128 18:59:10.487390 4721 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-78cd565959-9cwzn"] Jan 28 18:59:11 crc kubenswrapper[4721]: I0128 18:59:11.543406 4721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c5ce5d-28b1-4b34-865e-7452b6512fa5" path="/var/lib/kubelet/pods/49c5ce5d-28b1-4b34-865e-7452b6512fa5/volumes" Jan 28 18:59:12 crc kubenswrapper[4721]: I0128 18:59:12.462645 4721 generic.go:334] "Generic (PLEG): container finished" podID="571d8c2d-fc94-4db4-ad3c-1e6825b20035" containerID="a860f33c0d97db8faa523aa11b2441a795110c2382636af24257765872ada1b6" exitCode=0 Jan 28 18:59:12 crc kubenswrapper[4721]: I0128 18:59:12.462701 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"571d8c2d-fc94-4db4-ad3c-1e6825b20035","Type":"ContainerDied","Data":"a860f33c0d97db8faa523aa11b2441a795110c2382636af24257765872ada1b6"} Jan 28 18:59:12 crc kubenswrapper[4721]: I0128 18:59:12.722901 4721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 28 18:59:12 crc kubenswrapper[4721]: I0128 18:59:12.874824 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/571d8c2d-fc94-4db4-ad3c-1e6825b20035-run-httpd\") pod \"571d8c2d-fc94-4db4-ad3c-1e6825b20035\" (UID: \"571d8c2d-fc94-4db4-ad3c-1e6825b20035\") " Jan 28 18:59:12 crc kubenswrapper[4721]: I0128 18:59:12.875616 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/571d8c2d-fc94-4db4-ad3c-1e6825b20035-combined-ca-bundle\") pod \"571d8c2d-fc94-4db4-ad3c-1e6825b20035\" (UID: \"571d8c2d-fc94-4db4-ad3c-1e6825b20035\") " Jan 28 18:59:12 crc kubenswrapper[4721]: I0128 18:59:12.875671 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/571d8c2d-fc94-4db4-ad3c-1e6825b20035-config-data\") pod \"571d8c2d-fc94-4db4-ad3c-1e6825b20035\" (UID: \"571d8c2d-fc94-4db4-ad3c-1e6825b20035\") " Jan 28 18:59:12 crc kubenswrapper[4721]: I0128 18:59:12.875721 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hf6k9\" (UniqueName: \"kubernetes.io/projected/571d8c2d-fc94-4db4-ad3c-1e6825b20035-kube-api-access-hf6k9\") pod \"571d8c2d-fc94-4db4-ad3c-1e6825b20035\" (UID: \"571d8c2d-fc94-4db4-ad3c-1e6825b20035\") " Jan 28 18:59:12 crc kubenswrapper[4721]: I0128 18:59:12.875757 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/571d8c2d-fc94-4db4-ad3c-1e6825b20035-log-httpd\") pod \"571d8c2d-fc94-4db4-ad3c-1e6825b20035\" (UID: \"571d8c2d-fc94-4db4-ad3c-1e6825b20035\") " Jan 28 18:59:12 crc kubenswrapper[4721]: I0128 18:59:12.875781 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/571d8c2d-fc94-4db4-ad3c-1e6825b20035-sg-core-conf-yaml\") pod \"571d8c2d-fc94-4db4-ad3c-1e6825b20035\" (UID: \"571d8c2d-fc94-4db4-ad3c-1e6825b20035\") " Jan 28 18:59:12 crc kubenswrapper[4721]: I0128 18:59:12.875818 
4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/571d8c2d-fc94-4db4-ad3c-1e6825b20035-scripts\") pod \"571d8c2d-fc94-4db4-ad3c-1e6825b20035\" (UID: \"571d8c2d-fc94-4db4-ad3c-1e6825b20035\") " Jan 28 18:59:12 crc kubenswrapper[4721]: I0128 18:59:12.877030 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/571d8c2d-fc94-4db4-ad3c-1e6825b20035-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "571d8c2d-fc94-4db4-ad3c-1e6825b20035" (UID: "571d8c2d-fc94-4db4-ad3c-1e6825b20035"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 18:59:12 crc kubenswrapper[4721]: I0128 18:59:12.877288 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/571d8c2d-fc94-4db4-ad3c-1e6825b20035-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "571d8c2d-fc94-4db4-ad3c-1e6825b20035" (UID: "571d8c2d-fc94-4db4-ad3c-1e6825b20035"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 18:59:12 crc kubenswrapper[4721]: I0128 18:59:12.878218 4721 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/571d8c2d-fc94-4db4-ad3c-1e6825b20035-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 28 18:59:12 crc kubenswrapper[4721]: I0128 18:59:12.878246 4721 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/571d8c2d-fc94-4db4-ad3c-1e6825b20035-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 28 18:59:12 crc kubenswrapper[4721]: I0128 18:59:12.882730 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/571d8c2d-fc94-4db4-ad3c-1e6825b20035-scripts" (OuterVolumeSpecName: "scripts") pod "571d8c2d-fc94-4db4-ad3c-1e6825b20035" (UID: "571d8c2d-fc94-4db4-ad3c-1e6825b20035"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:59:12 crc kubenswrapper[4721]: I0128 18:59:12.883158 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/571d8c2d-fc94-4db4-ad3c-1e6825b20035-kube-api-access-hf6k9" (OuterVolumeSpecName: "kube-api-access-hf6k9") pod "571d8c2d-fc94-4db4-ad3c-1e6825b20035" (UID: "571d8c2d-fc94-4db4-ad3c-1e6825b20035"). InnerVolumeSpecName "kube-api-access-hf6k9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:59:12 crc kubenswrapper[4721]: I0128 18:59:12.910017 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/571d8c2d-fc94-4db4-ad3c-1e6825b20035-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "571d8c2d-fc94-4db4-ad3c-1e6825b20035" (UID: "571d8c2d-fc94-4db4-ad3c-1e6825b20035"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:59:12 crc kubenswrapper[4721]: I0128 18:59:12.980225 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hf6k9\" (UniqueName: \"kubernetes.io/projected/571d8c2d-fc94-4db4-ad3c-1e6825b20035-kube-api-access-hf6k9\") on node \"crc\" DevicePath \"\"" Jan 28 18:59:12 crc kubenswrapper[4721]: I0128 18:59:12.980259 4721 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/571d8c2d-fc94-4db4-ad3c-1e6825b20035-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 28 18:59:12 crc kubenswrapper[4721]: I0128 18:59:12.980268 4721 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/571d8c2d-fc94-4db4-ad3c-1e6825b20035-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 18:59:12 crc kubenswrapper[4721]: I0128 18:59:12.980475 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/571d8c2d-fc94-4db4-ad3c-1e6825b20035-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "571d8c2d-fc94-4db4-ad3c-1e6825b20035" (UID: "571d8c2d-fc94-4db4-ad3c-1e6825b20035"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:59:13 crc kubenswrapper[4721]: I0128 18:59:13.002418 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/571d8c2d-fc94-4db4-ad3c-1e6825b20035-config-data" (OuterVolumeSpecName: "config-data") pod "571d8c2d-fc94-4db4-ad3c-1e6825b20035" (UID: "571d8c2d-fc94-4db4-ad3c-1e6825b20035"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:59:13 crc kubenswrapper[4721]: I0128 18:59:13.081965 4721 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/571d8c2d-fc94-4db4-ad3c-1e6825b20035-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 18:59:13 crc kubenswrapper[4721]: I0128 18:59:13.081998 4721 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/571d8c2d-fc94-4db4-ad3c-1e6825b20035-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 18:59:13 crc kubenswrapper[4721]: I0128 18:59:13.486802 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"571d8c2d-fc94-4db4-ad3c-1e6825b20035","Type":"ContainerDied","Data":"4c4ed9ce4ba3be707ff653f471c10b14978f2c27db347a6adf8a946fe10637c8"} Jan 28 18:59:13 crc kubenswrapper[4721]: I0128 18:59:13.486896 4721 scope.go:117] "RemoveContainer" containerID="1411bd0370a3989889662c84de7d37a9a137ef1e680f7c2f7ae701b2c0abf929" Jan 28 18:59:13 crc kubenswrapper[4721]: I0128 18:59:13.487186 4721 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 28 18:59:13 crc kubenswrapper[4721]: I0128 18:59:13.522445 4721 scope.go:117] "RemoveContainer" containerID="59b5c4496a99332de0c4768a3115fe0f147ad553e05f6454c25bec1a69d59564" Jan 28 18:59:13 crc kubenswrapper[4721]: I0128 18:59:13.553164 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 28 18:59:13 crc kubenswrapper[4721]: I0128 18:59:13.553236 4721 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 28 18:59:13 crc kubenswrapper[4721]: I0128 18:59:13.565887 4721 scope.go:117] "RemoveContainer" containerID="8aa0fc17661497244544d2a42cd7a82f8d5aaf8301e286d643e488ceec037f04" Jan 28 18:59:13 crc kubenswrapper[4721]: I0128 18:59:13.569761 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 28 18:59:13 crc kubenswrapper[4721]: E0128 18:59:13.570424 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="49c5ce5d-28b1-4b34-865e-7452b6512fa5" containerName="dnsmasq-dns" Jan 28 18:59:13 crc kubenswrapper[4721]: I0128 18:59:13.570449 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="49c5ce5d-28b1-4b34-865e-7452b6512fa5" containerName="dnsmasq-dns" Jan 28 18:59:13 crc kubenswrapper[4721]: E0128 18:59:13.570470 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="571d8c2d-fc94-4db4-ad3c-1e6825b20035" containerName="sg-core" Jan 28 18:59:13 crc kubenswrapper[4721]: I0128 18:59:13.570479 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="571d8c2d-fc94-4db4-ad3c-1e6825b20035" containerName="sg-core" Jan 28 18:59:13 crc kubenswrapper[4721]: E0128 18:59:13.570517 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="49c5ce5d-28b1-4b34-865e-7452b6512fa5" containerName="init" Jan 28 18:59:13 crc kubenswrapper[4721]: I0128 18:59:13.570530 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="49c5ce5d-28b1-4b34-865e-7452b6512fa5" containerName="init" Jan 28 18:59:13 crc kubenswrapper[4721]: E0128 18:59:13.570543 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="571d8c2d-fc94-4db4-ad3c-1e6825b20035" containerName="ceilometer-notification-agent" Jan 28 18:59:13 crc kubenswrapper[4721]: I0128 18:59:13.570551 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="571d8c2d-fc94-4db4-ad3c-1e6825b20035" containerName="ceilometer-notification-agent" Jan 28 18:59:13 crc kubenswrapper[4721]: E0128 18:59:13.570562 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="571d8c2d-fc94-4db4-ad3c-1e6825b20035" containerName="ceilometer-central-agent" Jan 28 18:59:13 crc kubenswrapper[4721]: I0128 18:59:13.570570 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="571d8c2d-fc94-4db4-ad3c-1e6825b20035" containerName="ceilometer-central-agent" Jan 28 18:59:13 crc kubenswrapper[4721]: E0128 18:59:13.570585 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="571d8c2d-fc94-4db4-ad3c-1e6825b20035" containerName="proxy-httpd" Jan 28 18:59:13 crc kubenswrapper[4721]: I0128 18:59:13.570592 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="571d8c2d-fc94-4db4-ad3c-1e6825b20035" containerName="proxy-httpd" Jan 28 18:59:13 crc kubenswrapper[4721]: I0128 18:59:13.570888 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="571d8c2d-fc94-4db4-ad3c-1e6825b20035" containerName="sg-core" Jan 28 18:59:13 crc kubenswrapper[4721]: I0128 18:59:13.570907 4721 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="49c5ce5d-28b1-4b34-865e-7452b6512fa5" containerName="dnsmasq-dns" Jan 28 18:59:13 crc kubenswrapper[4721]: I0128 18:59:13.570926 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="571d8c2d-fc94-4db4-ad3c-1e6825b20035" containerName="ceilometer-central-agent" Jan 28 18:59:13 crc kubenswrapper[4721]: I0128 18:59:13.570939 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="571d8c2d-fc94-4db4-ad3c-1e6825b20035" containerName="ceilometer-notification-agent" Jan 28 18:59:13 crc kubenswrapper[4721]: I0128 18:59:13.570948 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="571d8c2d-fc94-4db4-ad3c-1e6825b20035" containerName="proxy-httpd" Jan 28 18:59:13 crc kubenswrapper[4721]: I0128 18:59:13.574331 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 28 18:59:13 crc kubenswrapper[4721]: I0128 18:59:13.582818 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 28 18:59:13 crc kubenswrapper[4721]: I0128 18:59:13.583632 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 28 18:59:13 crc kubenswrapper[4721]: I0128 18:59:13.583906 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 28 18:59:13 crc kubenswrapper[4721]: I0128 18:59:13.614576 4721 scope.go:117] "RemoveContainer" containerID="a860f33c0d97db8faa523aa11b2441a795110c2382636af24257765872ada1b6" Jan 28 18:59:13 crc kubenswrapper[4721]: I0128 18:59:13.700549 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7a85c957-db30-4931-adbd-be40eec18aa0-log-httpd\") pod \"ceilometer-0\" (UID: \"7a85c957-db30-4931-adbd-be40eec18aa0\") " pod="openstack/ceilometer-0" Jan 28 18:59:13 crc kubenswrapper[4721]: I0128 18:59:13.700635 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7a85c957-db30-4931-adbd-be40eec18aa0-config-data\") pod \"ceilometer-0\" (UID: \"7a85c957-db30-4931-adbd-be40eec18aa0\") " pod="openstack/ceilometer-0" Jan 28 18:59:13 crc kubenswrapper[4721]: I0128 18:59:13.700664 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7a85c957-db30-4931-adbd-be40eec18aa0-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"7a85c957-db30-4931-adbd-be40eec18aa0\") " pod="openstack/ceilometer-0" Jan 28 18:59:13 crc kubenswrapper[4721]: I0128 18:59:13.700749 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4srrq\" (UniqueName: \"kubernetes.io/projected/7a85c957-db30-4931-adbd-be40eec18aa0-kube-api-access-4srrq\") pod \"ceilometer-0\" (UID: \"7a85c957-db30-4931-adbd-be40eec18aa0\") " pod="openstack/ceilometer-0" Jan 28 18:59:13 crc kubenswrapper[4721]: I0128 18:59:13.700776 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7a85c957-db30-4931-adbd-be40eec18aa0-scripts\") pod \"ceilometer-0\" (UID: \"7a85c957-db30-4931-adbd-be40eec18aa0\") " pod="openstack/ceilometer-0" Jan 28 18:59:13 crc kubenswrapper[4721]: I0128 18:59:13.700830 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7a85c957-db30-4931-adbd-be40eec18aa0-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"7a85c957-db30-4931-adbd-be40eec18aa0\") " pod="openstack/ceilometer-0" Jan 28 18:59:13 crc kubenswrapper[4721]: I0128 18:59:13.700903 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7a85c957-db30-4931-adbd-be40eec18aa0-run-httpd\") pod \"ceilometer-0\" (UID: \"7a85c957-db30-4931-adbd-be40eec18aa0\") " pod="openstack/ceilometer-0" Jan 28 18:59:13 crc kubenswrapper[4721]: I0128 18:59:13.803396 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7a85c957-db30-4931-adbd-be40eec18aa0-log-httpd\") pod \"ceilometer-0\" (UID: \"7a85c957-db30-4931-adbd-be40eec18aa0\") " pod="openstack/ceilometer-0" Jan 28 18:59:13 crc kubenswrapper[4721]: I0128 18:59:13.803483 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7a85c957-db30-4931-adbd-be40eec18aa0-config-data\") pod \"ceilometer-0\" (UID: \"7a85c957-db30-4931-adbd-be40eec18aa0\") " pod="openstack/ceilometer-0" Jan 28 18:59:13 crc kubenswrapper[4721]: I0128 18:59:13.803522 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7a85c957-db30-4931-adbd-be40eec18aa0-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"7a85c957-db30-4931-adbd-be40eec18aa0\") " pod="openstack/ceilometer-0" Jan 28 18:59:13 crc kubenswrapper[4721]: I0128 18:59:13.803607 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4srrq\" (UniqueName: \"kubernetes.io/projected/7a85c957-db30-4931-adbd-be40eec18aa0-kube-api-access-4srrq\") pod \"ceilometer-0\" (UID: \"7a85c957-db30-4931-adbd-be40eec18aa0\") " pod="openstack/ceilometer-0" Jan 28 18:59:13 crc kubenswrapper[4721]: I0128 18:59:13.803645 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7a85c957-db30-4931-adbd-be40eec18aa0-scripts\") pod \"ceilometer-0\" (UID: \"7a85c957-db30-4931-adbd-be40eec18aa0\") " pod="openstack/ceilometer-0" Jan 28 18:59:13 crc kubenswrapper[4721]: I0128 18:59:13.803692 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7a85c957-db30-4931-adbd-be40eec18aa0-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"7a85c957-db30-4931-adbd-be40eec18aa0\") " pod="openstack/ceilometer-0" Jan 28 18:59:13 crc kubenswrapper[4721]: I0128 18:59:13.803748 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7a85c957-db30-4931-adbd-be40eec18aa0-run-httpd\") pod \"ceilometer-0\" (UID: \"7a85c957-db30-4931-adbd-be40eec18aa0\") " pod="openstack/ceilometer-0" Jan 28 18:59:13 crc kubenswrapper[4721]: I0128 18:59:13.804548 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7a85c957-db30-4931-adbd-be40eec18aa0-run-httpd\") pod \"ceilometer-0\" (UID: \"7a85c957-db30-4931-adbd-be40eec18aa0\") " pod="openstack/ceilometer-0" Jan 28 18:59:13 crc kubenswrapper[4721]: I0128 18:59:13.804615 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" 
(UniqueName: \"kubernetes.io/empty-dir/7a85c957-db30-4931-adbd-be40eec18aa0-log-httpd\") pod \"ceilometer-0\" (UID: \"7a85c957-db30-4931-adbd-be40eec18aa0\") " pod="openstack/ceilometer-0" Jan 28 18:59:13 crc kubenswrapper[4721]: I0128 18:59:13.810066 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7a85c957-db30-4931-adbd-be40eec18aa0-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"7a85c957-db30-4931-adbd-be40eec18aa0\") " pod="openstack/ceilometer-0" Jan 28 18:59:13 crc kubenswrapper[4721]: I0128 18:59:13.810722 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7a85c957-db30-4931-adbd-be40eec18aa0-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"7a85c957-db30-4931-adbd-be40eec18aa0\") " pod="openstack/ceilometer-0" Jan 28 18:59:13 crc kubenswrapper[4721]: I0128 18:59:13.811087 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7a85c957-db30-4931-adbd-be40eec18aa0-config-data\") pod \"ceilometer-0\" (UID: \"7a85c957-db30-4931-adbd-be40eec18aa0\") " pod="openstack/ceilometer-0" Jan 28 18:59:13 crc kubenswrapper[4721]: I0128 18:59:13.811324 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7a85c957-db30-4931-adbd-be40eec18aa0-scripts\") pod \"ceilometer-0\" (UID: \"7a85c957-db30-4931-adbd-be40eec18aa0\") " pod="openstack/ceilometer-0" Jan 28 18:59:13 crc kubenswrapper[4721]: I0128 18:59:13.823217 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4srrq\" (UniqueName: \"kubernetes.io/projected/7a85c957-db30-4931-adbd-be40eec18aa0-kube-api-access-4srrq\") pod \"ceilometer-0\" (UID: \"7a85c957-db30-4931-adbd-be40eec18aa0\") " pod="openstack/ceilometer-0" Jan 28 18:59:13 crc kubenswrapper[4721]: I0128 18:59:13.904506 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 28 18:59:14 crc kubenswrapper[4721]: I0128 18:59:14.436312 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 28 18:59:14 crc kubenswrapper[4721]: W0128 18:59:14.436417 4721 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7a85c957_db30_4931_adbd_be40eec18aa0.slice/crio-bc44da487f4b09935da09c64685cd9930aba59b0d3f7dce6eb6467bbb41d5166 WatchSource:0}: Error finding container bc44da487f4b09935da09c64685cd9930aba59b0d3f7dce6eb6467bbb41d5166: Status 404 returned error can't find the container with id bc44da487f4b09935da09c64685cd9930aba59b0d3f7dce6eb6467bbb41d5166 Jan 28 18:59:14 crc kubenswrapper[4721]: I0128 18:59:14.508563 4721 generic.go:334] "Generic (PLEG): container finished" podID="8717a4d7-cca2-4bd2-bb79-6a034cd7081c" containerID="8f1014013f8125055f2dbd76ef01cd7678cacb719231e62e40cec25e622c6bee" exitCode=0 Jan 28 18:59:14 crc kubenswrapper[4721]: I0128 18:59:14.508710 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-ztjx8" event={"ID":"8717a4d7-cca2-4bd2-bb79-6a034cd7081c","Type":"ContainerDied","Data":"8f1014013f8125055f2dbd76ef01cd7678cacb719231e62e40cec25e622c6bee"} Jan 28 18:59:14 crc kubenswrapper[4721]: I0128 18:59:14.512892 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7a85c957-db30-4931-adbd-be40eec18aa0","Type":"ContainerStarted","Data":"bc44da487f4b09935da09c64685cd9930aba59b0d3f7dce6eb6467bbb41d5166"} Jan 28 18:59:15 crc kubenswrapper[4721]: I0128 18:59:15.564306 4721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="571d8c2d-fc94-4db4-ad3c-1e6825b20035" path="/var/lib/kubelet/pods/571d8c2d-fc94-4db4-ad3c-1e6825b20035/volumes" Jan 28 18:59:16 crc kubenswrapper[4721]: I0128 18:59:16.231159 4721 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-ztjx8" Jan 28 18:59:16 crc kubenswrapper[4721]: I0128 18:59:16.276950 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8717a4d7-cca2-4bd2-bb79-6a034cd7081c-combined-ca-bundle\") pod \"8717a4d7-cca2-4bd2-bb79-6a034cd7081c\" (UID: \"8717a4d7-cca2-4bd2-bb79-6a034cd7081c\") " Jan 28 18:59:16 crc kubenswrapper[4721]: I0128 18:59:16.277205 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8nnr9\" (UniqueName: \"kubernetes.io/projected/8717a4d7-cca2-4bd2-bb79-6a034cd7081c-kube-api-access-8nnr9\") pod \"8717a4d7-cca2-4bd2-bb79-6a034cd7081c\" (UID: \"8717a4d7-cca2-4bd2-bb79-6a034cd7081c\") " Jan 28 18:59:16 crc kubenswrapper[4721]: I0128 18:59:16.277240 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8717a4d7-cca2-4bd2-bb79-6a034cd7081c-scripts\") pod \"8717a4d7-cca2-4bd2-bb79-6a034cd7081c\" (UID: \"8717a4d7-cca2-4bd2-bb79-6a034cd7081c\") " Jan 28 18:59:16 crc kubenswrapper[4721]: I0128 18:59:16.277428 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8717a4d7-cca2-4bd2-bb79-6a034cd7081c-config-data\") pod \"8717a4d7-cca2-4bd2-bb79-6a034cd7081c\" (UID: \"8717a4d7-cca2-4bd2-bb79-6a034cd7081c\") " Jan 28 18:59:16 crc kubenswrapper[4721]: I0128 18:59:16.286032 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8717a4d7-cca2-4bd2-bb79-6a034cd7081c-kube-api-access-8nnr9" (OuterVolumeSpecName: "kube-api-access-8nnr9") pod "8717a4d7-cca2-4bd2-bb79-6a034cd7081c" (UID: "8717a4d7-cca2-4bd2-bb79-6a034cd7081c"). InnerVolumeSpecName "kube-api-access-8nnr9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:59:16 crc kubenswrapper[4721]: I0128 18:59:16.290600 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8717a4d7-cca2-4bd2-bb79-6a034cd7081c-scripts" (OuterVolumeSpecName: "scripts") pod "8717a4d7-cca2-4bd2-bb79-6a034cd7081c" (UID: "8717a4d7-cca2-4bd2-bb79-6a034cd7081c"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:59:16 crc kubenswrapper[4721]: I0128 18:59:16.328401 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8717a4d7-cca2-4bd2-bb79-6a034cd7081c-config-data" (OuterVolumeSpecName: "config-data") pod "8717a4d7-cca2-4bd2-bb79-6a034cd7081c" (UID: "8717a4d7-cca2-4bd2-bb79-6a034cd7081c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:59:16 crc kubenswrapper[4721]: I0128 18:59:16.339585 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8717a4d7-cca2-4bd2-bb79-6a034cd7081c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8717a4d7-cca2-4bd2-bb79-6a034cd7081c" (UID: "8717a4d7-cca2-4bd2-bb79-6a034cd7081c"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:59:16 crc kubenswrapper[4721]: I0128 18:59:16.380058 4721 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8717a4d7-cca2-4bd2-bb79-6a034cd7081c-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 18:59:16 crc kubenswrapper[4721]: I0128 18:59:16.380103 4721 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8717a4d7-cca2-4bd2-bb79-6a034cd7081c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 18:59:16 crc kubenswrapper[4721]: I0128 18:59:16.380122 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8nnr9\" (UniqueName: \"kubernetes.io/projected/8717a4d7-cca2-4bd2-bb79-6a034cd7081c-kube-api-access-8nnr9\") on node \"crc\" DevicePath \"\"" Jan 28 18:59:16 crc kubenswrapper[4721]: I0128 18:59:16.380133 4721 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8717a4d7-cca2-4bd2-bb79-6a034cd7081c-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 18:59:16 crc kubenswrapper[4721]: I0128 18:59:16.529053 4721 scope.go:117] "RemoveContainer" containerID="2590a7cb210dbff0e84ad585e4d82733cd80880f3d1297edf670e3d6faf23070" Jan 28 18:59:16 crc kubenswrapper[4721]: E0128 18:59:16.529898 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-76rx2_openshift-machine-config-operator(6e3427a4-9a03-4a08-bf7f-7a5e96290ad6)\"" pod="openshift-machine-config-operator/machine-config-daemon-76rx2" podUID="6e3427a4-9a03-4a08-bf7f-7a5e96290ad6" Jan 28 18:59:16 crc kubenswrapper[4721]: I0128 18:59:16.549234 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-ztjx8" event={"ID":"8717a4d7-cca2-4bd2-bb79-6a034cd7081c","Type":"ContainerDied","Data":"eb085710aa462f664acaa61f6efb16d07f1699ca54142cce19ced4f4b758e98f"} Jan 28 18:59:16 crc kubenswrapper[4721]: I0128 18:59:16.549289 4721 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="eb085710aa462f664acaa61f6efb16d07f1699ca54142cce19ced4f4b758e98f" Jan 28 18:59:16 crc kubenswrapper[4721]: I0128 18:59:16.549305 4721 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-ztjx8" Jan 28 18:59:16 crc kubenswrapper[4721]: I0128 18:59:16.558574 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7a85c957-db30-4931-adbd-be40eec18aa0","Type":"ContainerStarted","Data":"5721e0a2d55f23956363dee5913b3901b40197fca0335437f04e35db6c990e1d"} Jan 28 18:59:16 crc kubenswrapper[4721]: I0128 18:59:16.558633 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7a85c957-db30-4931-adbd-be40eec18aa0","Type":"ContainerStarted","Data":"8ed6531b022df69a0868311fdc6c3c71674d486bc6e7c3d077366e5cfe2ee3ad"} Jan 28 18:59:16 crc kubenswrapper[4721]: I0128 18:59:16.736383 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 28 18:59:16 crc kubenswrapper[4721]: I0128 18:59:16.736735 4721 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="aa2690f8-a84d-4f5e-96ee-9bb18524dd6b" containerName="nova-api-log" containerID="cri-o://8d5f95573a1109dfc59692acd9feff871cb97746a7b06e880b89ed15543a248f" gracePeriod=30 Jan 28 18:59:16 crc kubenswrapper[4721]: I0128 18:59:16.736821 4721 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="aa2690f8-a84d-4f5e-96ee-9bb18524dd6b" containerName="nova-api-api" containerID="cri-o://54894c1675125ab13abf3662dfd7779a0f3bfe64bf9550dbfb76605d9889f836" gracePeriod=30 Jan 28 18:59:16 crc kubenswrapper[4721]: I0128 18:59:16.763885 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 28 18:59:16 crc kubenswrapper[4721]: I0128 18:59:16.764402 4721 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="22941a54-ca5a-4905-8d65-c8724f519090" containerName="nova-scheduler-scheduler" containerID="cri-o://6b9b7fa93e87409f1e12b346ccc49c5d577d1ea9b12e593afeafba1a95e005b7" gracePeriod=30 Jan 28 18:59:16 crc kubenswrapper[4721]: I0128 18:59:16.786377 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 28 18:59:16 crc kubenswrapper[4721]: I0128 18:59:16.786694 4721 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="af0e32a0-15f5-49b3-adca-4e9b1040f218" containerName="nova-metadata-log" containerID="cri-o://95b1abfecbbc5ed6e01db6c24570526cec5524c8a722fea7d93a9058823bc311" gracePeriod=30 Jan 28 18:59:16 crc kubenswrapper[4721]: I0128 18:59:16.786893 4721 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="af0e32a0-15f5-49b3-adca-4e9b1040f218" containerName="nova-metadata-metadata" containerID="cri-o://84d46bcdeec49a6e5ed23aba8bb7e988591d8c03b73ef1a0ad6574780941ffe9" gracePeriod=30 Jan 28 18:59:17 crc kubenswrapper[4721]: I0128 18:59:17.579915 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7a85c957-db30-4931-adbd-be40eec18aa0","Type":"ContainerStarted","Data":"bde69917b32e2f7dbd397efba96edf70037e35380468885551dd55e5ca1a1b1a"} Jan 28 18:59:17 crc kubenswrapper[4721]: I0128 18:59:17.587914 4721 generic.go:334] "Generic (PLEG): container finished" podID="aa2690f8-a84d-4f5e-96ee-9bb18524dd6b" containerID="54894c1675125ab13abf3662dfd7779a0f3bfe64bf9550dbfb76605d9889f836" exitCode=0 Jan 28 18:59:17 crc kubenswrapper[4721]: I0128 18:59:17.587959 4721 generic.go:334] "Generic (PLEG): container finished" 
podID="aa2690f8-a84d-4f5e-96ee-9bb18524dd6b" containerID="8d5f95573a1109dfc59692acd9feff871cb97746a7b06e880b89ed15543a248f" exitCode=143 Jan 28 18:59:17 crc kubenswrapper[4721]: I0128 18:59:17.588053 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"aa2690f8-a84d-4f5e-96ee-9bb18524dd6b","Type":"ContainerDied","Data":"54894c1675125ab13abf3662dfd7779a0f3bfe64bf9550dbfb76605d9889f836"} Jan 28 18:59:17 crc kubenswrapper[4721]: I0128 18:59:17.588092 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"aa2690f8-a84d-4f5e-96ee-9bb18524dd6b","Type":"ContainerDied","Data":"8d5f95573a1109dfc59692acd9feff871cb97746a7b06e880b89ed15543a248f"} Jan 28 18:59:17 crc kubenswrapper[4721]: I0128 18:59:17.596562 4721 generic.go:334] "Generic (PLEG): container finished" podID="af0e32a0-15f5-49b3-adca-4e9b1040f218" containerID="95b1abfecbbc5ed6e01db6c24570526cec5524c8a722fea7d93a9058823bc311" exitCode=143 Jan 28 18:59:17 crc kubenswrapper[4721]: I0128 18:59:17.596626 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"af0e32a0-15f5-49b3-adca-4e9b1040f218","Type":"ContainerDied","Data":"95b1abfecbbc5ed6e01db6c24570526cec5524c8a722fea7d93a9058823bc311"} Jan 28 18:59:17 crc kubenswrapper[4721]: I0128 18:59:17.697553 4721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 28 18:59:17 crc kubenswrapper[4721]: I0128 18:59:17.820783 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/aa2690f8-a84d-4f5e-96ee-9bb18524dd6b-logs\") pod \"aa2690f8-a84d-4f5e-96ee-9bb18524dd6b\" (UID: \"aa2690f8-a84d-4f5e-96ee-9bb18524dd6b\") " Jan 28 18:59:17 crc kubenswrapper[4721]: I0128 18:59:17.820951 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4s5dd\" (UniqueName: \"kubernetes.io/projected/aa2690f8-a84d-4f5e-96ee-9bb18524dd6b-kube-api-access-4s5dd\") pod \"aa2690f8-a84d-4f5e-96ee-9bb18524dd6b\" (UID: \"aa2690f8-a84d-4f5e-96ee-9bb18524dd6b\") " Jan 28 18:59:17 crc kubenswrapper[4721]: I0128 18:59:17.821001 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/aa2690f8-a84d-4f5e-96ee-9bb18524dd6b-public-tls-certs\") pod \"aa2690f8-a84d-4f5e-96ee-9bb18524dd6b\" (UID: \"aa2690f8-a84d-4f5e-96ee-9bb18524dd6b\") " Jan 28 18:59:17 crc kubenswrapper[4721]: I0128 18:59:17.821302 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/aa2690f8-a84d-4f5e-96ee-9bb18524dd6b-logs" (OuterVolumeSpecName: "logs") pod "aa2690f8-a84d-4f5e-96ee-9bb18524dd6b" (UID: "aa2690f8-a84d-4f5e-96ee-9bb18524dd6b"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 18:59:17 crc kubenswrapper[4721]: I0128 18:59:17.822105 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/aa2690f8-a84d-4f5e-96ee-9bb18524dd6b-internal-tls-certs\") pod \"aa2690f8-a84d-4f5e-96ee-9bb18524dd6b\" (UID: \"aa2690f8-a84d-4f5e-96ee-9bb18524dd6b\") " Jan 28 18:59:17 crc kubenswrapper[4721]: I0128 18:59:17.822270 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aa2690f8-a84d-4f5e-96ee-9bb18524dd6b-config-data\") pod \"aa2690f8-a84d-4f5e-96ee-9bb18524dd6b\" (UID: \"aa2690f8-a84d-4f5e-96ee-9bb18524dd6b\") " Jan 28 18:59:17 crc kubenswrapper[4721]: I0128 18:59:17.822301 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aa2690f8-a84d-4f5e-96ee-9bb18524dd6b-combined-ca-bundle\") pod \"aa2690f8-a84d-4f5e-96ee-9bb18524dd6b\" (UID: \"aa2690f8-a84d-4f5e-96ee-9bb18524dd6b\") " Jan 28 18:59:17 crc kubenswrapper[4721]: I0128 18:59:17.822980 4721 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/aa2690f8-a84d-4f5e-96ee-9bb18524dd6b-logs\") on node \"crc\" DevicePath \"\"" Jan 28 18:59:17 crc kubenswrapper[4721]: I0128 18:59:17.832370 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aa2690f8-a84d-4f5e-96ee-9bb18524dd6b-kube-api-access-4s5dd" (OuterVolumeSpecName: "kube-api-access-4s5dd") pod "aa2690f8-a84d-4f5e-96ee-9bb18524dd6b" (UID: "aa2690f8-a84d-4f5e-96ee-9bb18524dd6b"). InnerVolumeSpecName "kube-api-access-4s5dd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:59:17 crc kubenswrapper[4721]: I0128 18:59:17.862903 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aa2690f8-a84d-4f5e-96ee-9bb18524dd6b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "aa2690f8-a84d-4f5e-96ee-9bb18524dd6b" (UID: "aa2690f8-a84d-4f5e-96ee-9bb18524dd6b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:59:17 crc kubenswrapper[4721]: I0128 18:59:17.870709 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aa2690f8-a84d-4f5e-96ee-9bb18524dd6b-config-data" (OuterVolumeSpecName: "config-data") pod "aa2690f8-a84d-4f5e-96ee-9bb18524dd6b" (UID: "aa2690f8-a84d-4f5e-96ee-9bb18524dd6b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:59:17 crc kubenswrapper[4721]: I0128 18:59:17.889394 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aa2690f8-a84d-4f5e-96ee-9bb18524dd6b-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "aa2690f8-a84d-4f5e-96ee-9bb18524dd6b" (UID: "aa2690f8-a84d-4f5e-96ee-9bb18524dd6b"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:59:17 crc kubenswrapper[4721]: I0128 18:59:17.896275 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aa2690f8-a84d-4f5e-96ee-9bb18524dd6b-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "aa2690f8-a84d-4f5e-96ee-9bb18524dd6b" (UID: "aa2690f8-a84d-4f5e-96ee-9bb18524dd6b"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:59:17 crc kubenswrapper[4721]: I0128 18:59:17.925275 4721 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aa2690f8-a84d-4f5e-96ee-9bb18524dd6b-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 18:59:17 crc kubenswrapper[4721]: I0128 18:59:17.925322 4721 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aa2690f8-a84d-4f5e-96ee-9bb18524dd6b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 18:59:17 crc kubenswrapper[4721]: I0128 18:59:17.925340 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4s5dd\" (UniqueName: \"kubernetes.io/projected/aa2690f8-a84d-4f5e-96ee-9bb18524dd6b-kube-api-access-4s5dd\") on node \"crc\" DevicePath \"\"" Jan 28 18:59:17 crc kubenswrapper[4721]: I0128 18:59:17.925352 4721 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/aa2690f8-a84d-4f5e-96ee-9bb18524dd6b-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 28 18:59:17 crc kubenswrapper[4721]: I0128 18:59:17.925363 4721 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/aa2690f8-a84d-4f5e-96ee-9bb18524dd6b-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 28 18:59:18 crc kubenswrapper[4721]: I0128 18:59:18.632966 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"aa2690f8-a84d-4f5e-96ee-9bb18524dd6b","Type":"ContainerDied","Data":"a0a96c2ca09707b347923fe7ca83813c508cc59588deb1935a56701580d01b15"} Jan 28 18:59:18 crc kubenswrapper[4721]: I0128 18:59:18.633393 4721 scope.go:117] "RemoveContainer" containerID="54894c1675125ab13abf3662dfd7779a0f3bfe64bf9550dbfb76605d9889f836" Jan 28 18:59:18 crc kubenswrapper[4721]: I0128 18:59:18.633634 4721 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 28 18:59:18 crc kubenswrapper[4721]: I0128 18:59:18.638520 4721 generic.go:334] "Generic (PLEG): container finished" podID="22941a54-ca5a-4905-8d65-c8724f519090" containerID="6b9b7fa93e87409f1e12b346ccc49c5d577d1ea9b12e593afeafba1a95e005b7" exitCode=0 Jan 28 18:59:18 crc kubenswrapper[4721]: I0128 18:59:18.638562 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"22941a54-ca5a-4905-8d65-c8724f519090","Type":"ContainerDied","Data":"6b9b7fa93e87409f1e12b346ccc49c5d577d1ea9b12e593afeafba1a95e005b7"} Jan 28 18:59:18 crc kubenswrapper[4721]: I0128 18:59:18.664739 4721 scope.go:117] "RemoveContainer" containerID="8d5f95573a1109dfc59692acd9feff871cb97746a7b06e880b89ed15543a248f" Jan 28 18:59:18 crc kubenswrapper[4721]: I0128 18:59:18.753150 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 28 18:59:18 crc kubenswrapper[4721]: I0128 18:59:18.794347 4721 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 28 18:59:18 crc kubenswrapper[4721]: I0128 18:59:18.816145 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 28 18:59:18 crc kubenswrapper[4721]: E0128 18:59:18.816740 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aa2690f8-a84d-4f5e-96ee-9bb18524dd6b" containerName="nova-api-api" Jan 28 18:59:18 crc kubenswrapper[4721]: I0128 18:59:18.816766 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="aa2690f8-a84d-4f5e-96ee-9bb18524dd6b" containerName="nova-api-api" Jan 28 18:59:18 crc kubenswrapper[4721]: E0128 18:59:18.816802 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aa2690f8-a84d-4f5e-96ee-9bb18524dd6b" containerName="nova-api-log" Jan 28 18:59:18 crc kubenswrapper[4721]: I0128 18:59:18.816811 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="aa2690f8-a84d-4f5e-96ee-9bb18524dd6b" containerName="nova-api-log" Jan 28 18:59:18 crc kubenswrapper[4721]: E0128 18:59:18.816836 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8717a4d7-cca2-4bd2-bb79-6a034cd7081c" containerName="nova-manage" Jan 28 18:59:18 crc kubenswrapper[4721]: I0128 18:59:18.816845 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="8717a4d7-cca2-4bd2-bb79-6a034cd7081c" containerName="nova-manage" Jan 28 18:59:18 crc kubenswrapper[4721]: I0128 18:59:18.817118 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="aa2690f8-a84d-4f5e-96ee-9bb18524dd6b" containerName="nova-api-api" Jan 28 18:59:18 crc kubenswrapper[4721]: I0128 18:59:18.817160 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="aa2690f8-a84d-4f5e-96ee-9bb18524dd6b" containerName="nova-api-log" Jan 28 18:59:18 crc kubenswrapper[4721]: I0128 18:59:18.817193 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="8717a4d7-cca2-4bd2-bb79-6a034cd7081c" containerName="nova-manage" Jan 28 18:59:18 crc kubenswrapper[4721]: I0128 18:59:18.818771 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 28 18:59:18 crc kubenswrapper[4721]: I0128 18:59:18.826738 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 28 18:59:18 crc kubenswrapper[4721]: I0128 18:59:18.826738 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Jan 28 18:59:18 crc kubenswrapper[4721]: I0128 18:59:18.826784 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Jan 28 18:59:18 crc kubenswrapper[4721]: I0128 18:59:18.833699 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 28 18:59:18 crc kubenswrapper[4721]: I0128 18:59:18.954149 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4898ad56-ee48-4c94-846a-cb0c2af32da7-public-tls-certs\") pod \"nova-api-0\" (UID: \"4898ad56-ee48-4c94-846a-cb0c2af32da7\") " pod="openstack/nova-api-0" Jan 28 18:59:18 crc kubenswrapper[4721]: I0128 18:59:18.954389 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4898ad56-ee48-4c94-846a-cb0c2af32da7-logs\") pod \"nova-api-0\" (UID: \"4898ad56-ee48-4c94-846a-cb0c2af32da7\") " pod="openstack/nova-api-0" Jan 28 18:59:18 crc kubenswrapper[4721]: I0128 18:59:18.954474 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4898ad56-ee48-4c94-846a-cb0c2af32da7-internal-tls-certs\") pod \"nova-api-0\" (UID: \"4898ad56-ee48-4c94-846a-cb0c2af32da7\") " pod="openstack/nova-api-0" Jan 28 18:59:18 crc kubenswrapper[4721]: I0128 18:59:18.954729 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p7k5j\" (UniqueName: \"kubernetes.io/projected/4898ad56-ee48-4c94-846a-cb0c2af32da7-kube-api-access-p7k5j\") pod \"nova-api-0\" (UID: \"4898ad56-ee48-4c94-846a-cb0c2af32da7\") " pod="openstack/nova-api-0" Jan 28 18:59:18 crc kubenswrapper[4721]: I0128 18:59:18.955229 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4898ad56-ee48-4c94-846a-cb0c2af32da7-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"4898ad56-ee48-4c94-846a-cb0c2af32da7\") " pod="openstack/nova-api-0" Jan 28 18:59:18 crc kubenswrapper[4721]: I0128 18:59:18.955577 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4898ad56-ee48-4c94-846a-cb0c2af32da7-config-data\") pod \"nova-api-0\" (UID: \"4898ad56-ee48-4c94-846a-cb0c2af32da7\") " pod="openstack/nova-api-0" Jan 28 18:59:18 crc kubenswrapper[4721]: I0128 18:59:18.961663 4721 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 28 18:59:19 crc kubenswrapper[4721]: I0128 18:59:19.058888 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/22941a54-ca5a-4905-8d65-c8724f519090-combined-ca-bundle\") pod \"22941a54-ca5a-4905-8d65-c8724f519090\" (UID: \"22941a54-ca5a-4905-8d65-c8724f519090\") " Jan 28 18:59:19 crc kubenswrapper[4721]: I0128 18:59:19.059128 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b7wxk\" (UniqueName: \"kubernetes.io/projected/22941a54-ca5a-4905-8d65-c8724f519090-kube-api-access-b7wxk\") pod \"22941a54-ca5a-4905-8d65-c8724f519090\" (UID: \"22941a54-ca5a-4905-8d65-c8724f519090\") " Jan 28 18:59:19 crc kubenswrapper[4721]: I0128 18:59:19.059207 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/22941a54-ca5a-4905-8d65-c8724f519090-config-data\") pod \"22941a54-ca5a-4905-8d65-c8724f519090\" (UID: \"22941a54-ca5a-4905-8d65-c8724f519090\") " Jan 28 18:59:19 crc kubenswrapper[4721]: I0128 18:59:19.059760 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p7k5j\" (UniqueName: \"kubernetes.io/projected/4898ad56-ee48-4c94-846a-cb0c2af32da7-kube-api-access-p7k5j\") pod \"nova-api-0\" (UID: \"4898ad56-ee48-4c94-846a-cb0c2af32da7\") " pod="openstack/nova-api-0" Jan 28 18:59:19 crc kubenswrapper[4721]: I0128 18:59:19.059852 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4898ad56-ee48-4c94-846a-cb0c2af32da7-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"4898ad56-ee48-4c94-846a-cb0c2af32da7\") " pod="openstack/nova-api-0" Jan 28 18:59:19 crc kubenswrapper[4721]: I0128 18:59:19.059905 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4898ad56-ee48-4c94-846a-cb0c2af32da7-config-data\") pod \"nova-api-0\" (UID: \"4898ad56-ee48-4c94-846a-cb0c2af32da7\") " pod="openstack/nova-api-0" Jan 28 18:59:19 crc kubenswrapper[4721]: I0128 18:59:19.059972 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4898ad56-ee48-4c94-846a-cb0c2af32da7-public-tls-certs\") pod \"nova-api-0\" (UID: \"4898ad56-ee48-4c94-846a-cb0c2af32da7\") " pod="openstack/nova-api-0" Jan 28 18:59:19 crc kubenswrapper[4721]: I0128 18:59:19.060012 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4898ad56-ee48-4c94-846a-cb0c2af32da7-logs\") pod \"nova-api-0\" (UID: \"4898ad56-ee48-4c94-846a-cb0c2af32da7\") " pod="openstack/nova-api-0" Jan 28 18:59:19 crc kubenswrapper[4721]: I0128 18:59:19.060040 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4898ad56-ee48-4c94-846a-cb0c2af32da7-internal-tls-certs\") pod \"nova-api-0\" (UID: \"4898ad56-ee48-4c94-846a-cb0c2af32da7\") " pod="openstack/nova-api-0" Jan 28 18:59:19 crc kubenswrapper[4721]: I0128 18:59:19.068900 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4898ad56-ee48-4c94-846a-cb0c2af32da7-logs\") pod \"nova-api-0\" (UID: \"4898ad56-ee48-4c94-846a-cb0c2af32da7\") " 
pod="openstack/nova-api-0" Jan 28 18:59:19 crc kubenswrapper[4721]: I0128 18:59:19.076810 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4898ad56-ee48-4c94-846a-cb0c2af32da7-config-data\") pod \"nova-api-0\" (UID: \"4898ad56-ee48-4c94-846a-cb0c2af32da7\") " pod="openstack/nova-api-0" Jan 28 18:59:19 crc kubenswrapper[4721]: I0128 18:59:19.077890 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4898ad56-ee48-4c94-846a-cb0c2af32da7-internal-tls-certs\") pod \"nova-api-0\" (UID: \"4898ad56-ee48-4c94-846a-cb0c2af32da7\") " pod="openstack/nova-api-0" Jan 28 18:59:19 crc kubenswrapper[4721]: I0128 18:59:19.083894 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4898ad56-ee48-4c94-846a-cb0c2af32da7-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"4898ad56-ee48-4c94-846a-cb0c2af32da7\") " pod="openstack/nova-api-0" Jan 28 18:59:19 crc kubenswrapper[4721]: I0128 18:59:19.102906 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4898ad56-ee48-4c94-846a-cb0c2af32da7-public-tls-certs\") pod \"nova-api-0\" (UID: \"4898ad56-ee48-4c94-846a-cb0c2af32da7\") " pod="openstack/nova-api-0" Jan 28 18:59:19 crc kubenswrapper[4721]: I0128 18:59:19.103355 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22941a54-ca5a-4905-8d65-c8724f519090-kube-api-access-b7wxk" (OuterVolumeSpecName: "kube-api-access-b7wxk") pod "22941a54-ca5a-4905-8d65-c8724f519090" (UID: "22941a54-ca5a-4905-8d65-c8724f519090"). InnerVolumeSpecName "kube-api-access-b7wxk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:59:19 crc kubenswrapper[4721]: I0128 18:59:19.114362 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p7k5j\" (UniqueName: \"kubernetes.io/projected/4898ad56-ee48-4c94-846a-cb0c2af32da7-kube-api-access-p7k5j\") pod \"nova-api-0\" (UID: \"4898ad56-ee48-4c94-846a-cb0c2af32da7\") " pod="openstack/nova-api-0" Jan 28 18:59:19 crc kubenswrapper[4721]: I0128 18:59:19.161852 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b7wxk\" (UniqueName: \"kubernetes.io/projected/22941a54-ca5a-4905-8d65-c8724f519090-kube-api-access-b7wxk\") on node \"crc\" DevicePath \"\"" Jan 28 18:59:19 crc kubenswrapper[4721]: I0128 18:59:19.170333 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22941a54-ca5a-4905-8d65-c8724f519090-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "22941a54-ca5a-4905-8d65-c8724f519090" (UID: "22941a54-ca5a-4905-8d65-c8724f519090"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:59:19 crc kubenswrapper[4721]: I0128 18:59:19.271315 4721 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/22941a54-ca5a-4905-8d65-c8724f519090-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 18:59:19 crc kubenswrapper[4721]: I0128 18:59:19.271543 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22941a54-ca5a-4905-8d65-c8724f519090-config-data" (OuterVolumeSpecName: "config-data") pod "22941a54-ca5a-4905-8d65-c8724f519090" (UID: "22941a54-ca5a-4905-8d65-c8724f519090"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:59:19 crc kubenswrapper[4721]: I0128 18:59:19.290046 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 28 18:59:19 crc kubenswrapper[4721]: I0128 18:59:19.382442 4721 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/22941a54-ca5a-4905-8d65-c8724f519090-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 18:59:19 crc kubenswrapper[4721]: I0128 18:59:19.549663 4721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aa2690f8-a84d-4f5e-96ee-9bb18524dd6b" path="/var/lib/kubelet/pods/aa2690f8-a84d-4f5e-96ee-9bb18524dd6b/volumes" Jan 28 18:59:19 crc kubenswrapper[4721]: I0128 18:59:19.665074 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"22941a54-ca5a-4905-8d65-c8724f519090","Type":"ContainerDied","Data":"63f6a475979d58f8ccd5e7a837e42b6abe64a953339c798e7c8a5c2305337303"} Jan 28 18:59:19 crc kubenswrapper[4721]: I0128 18:59:19.665130 4721 scope.go:117] "RemoveContainer" containerID="6b9b7fa93e87409f1e12b346ccc49c5d577d1ea9b12e593afeafba1a95e005b7" Jan 28 18:59:19 crc kubenswrapper[4721]: I0128 18:59:19.665293 4721 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 28 18:59:19 crc kubenswrapper[4721]: I0128 18:59:19.670207 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7a85c957-db30-4931-adbd-be40eec18aa0","Type":"ContainerStarted","Data":"9a7b48d3cd85e5e87b98ece688184911e9a093ed2d847030885791674719a4b5"} Jan 28 18:59:19 crc kubenswrapper[4721]: I0128 18:59:19.670513 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 28 18:59:19 crc kubenswrapper[4721]: I0128 18:59:19.697287 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.379961832 podStartE2EDuration="6.697264167s" podCreationTimestamp="2026-01-28 18:59:13 +0000 UTC" firstStartedPulling="2026-01-28 18:59:14.439285895 +0000 UTC m=+1520.164591455" lastFinishedPulling="2026-01-28 18:59:18.75658823 +0000 UTC m=+1524.481893790" observedRunningTime="2026-01-28 18:59:19.693299051 +0000 UTC m=+1525.418604621" watchObservedRunningTime="2026-01-28 18:59:19.697264167 +0000 UTC m=+1525.422569727" Jan 28 18:59:19 crc kubenswrapper[4721]: I0128 18:59:19.739859 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 28 18:59:19 crc kubenswrapper[4721]: I0128 18:59:19.754467 4721 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Jan 28 18:59:19 crc kubenswrapper[4721]: I0128 18:59:19.766131 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Jan 28 18:59:19 crc kubenswrapper[4721]: E0128 18:59:19.766729 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="22941a54-ca5a-4905-8d65-c8724f519090" containerName="nova-scheduler-scheduler" Jan 28 18:59:19 crc kubenswrapper[4721]: I0128 18:59:19.766747 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="22941a54-ca5a-4905-8d65-c8724f519090" containerName="nova-scheduler-scheduler" Jan 28 18:59:19 crc kubenswrapper[4721]: I0128 18:59:19.766943 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="22941a54-ca5a-4905-8d65-c8724f519090" containerName="nova-scheduler-scheduler" Jan 28 18:59:19 crc kubenswrapper[4721]: I0128 18:59:19.767996 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 28 18:59:19 crc kubenswrapper[4721]: I0128 18:59:19.771560 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Jan 28 18:59:19 crc kubenswrapper[4721]: I0128 18:59:19.776185 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 28 18:59:19 crc kubenswrapper[4721]: I0128 18:59:19.799470 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ac328e3e-730d-4617-bf12-8ad6a4c5e9bf-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"ac328e3e-730d-4617-bf12-8ad6a4c5e9bf\") " pod="openstack/nova-scheduler-0" Jan 28 18:59:19 crc kubenswrapper[4721]: I0128 18:59:19.799727 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-md5xb\" (UniqueName: \"kubernetes.io/projected/ac328e3e-730d-4617-bf12-8ad6a4c5e9bf-kube-api-access-md5xb\") pod \"nova-scheduler-0\" (UID: \"ac328e3e-730d-4617-bf12-8ad6a4c5e9bf\") " pod="openstack/nova-scheduler-0" Jan 28 18:59:19 crc kubenswrapper[4721]: I0128 18:59:19.800998 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ac328e3e-730d-4617-bf12-8ad6a4c5e9bf-config-data\") pod \"nova-scheduler-0\" (UID: \"ac328e3e-730d-4617-bf12-8ad6a4c5e9bf\") " pod="openstack/nova-scheduler-0" Jan 28 18:59:19 crc kubenswrapper[4721]: W0128 18:59:19.890089 4721 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4898ad56_ee48_4c94_846a_cb0c2af32da7.slice/crio-fac348f11b9c6cdcb27876798eda786e28c56a9a87ff6155199c003b4154dc14 WatchSource:0}: Error finding container fac348f11b9c6cdcb27876798eda786e28c56a9a87ff6155199c003b4154dc14: Status 404 returned error can't find the container with id fac348f11b9c6cdcb27876798eda786e28c56a9a87ff6155199c003b4154dc14 Jan 28 18:59:19 crc kubenswrapper[4721]: I0128 18:59:19.896067 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 28 18:59:19 crc kubenswrapper[4721]: I0128 18:59:19.903624 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ac328e3e-730d-4617-bf12-8ad6a4c5e9bf-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"ac328e3e-730d-4617-bf12-8ad6a4c5e9bf\") " pod="openstack/nova-scheduler-0" Jan 28 18:59:19 crc kubenswrapper[4721]: I0128 18:59:19.903770 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-md5xb\" (UniqueName: \"kubernetes.io/projected/ac328e3e-730d-4617-bf12-8ad6a4c5e9bf-kube-api-access-md5xb\") pod \"nova-scheduler-0\" (UID: \"ac328e3e-730d-4617-bf12-8ad6a4c5e9bf\") " pod="openstack/nova-scheduler-0" Jan 28 18:59:19 crc kubenswrapper[4721]: I0128 18:59:19.903870 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ac328e3e-730d-4617-bf12-8ad6a4c5e9bf-config-data\") pod \"nova-scheduler-0\" (UID: \"ac328e3e-730d-4617-bf12-8ad6a4c5e9bf\") " pod="openstack/nova-scheduler-0" Jan 28 18:59:19 crc kubenswrapper[4721]: I0128 18:59:19.909955 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/ac328e3e-730d-4617-bf12-8ad6a4c5e9bf-config-data\") pod \"nova-scheduler-0\" (UID: \"ac328e3e-730d-4617-bf12-8ad6a4c5e9bf\") " pod="openstack/nova-scheduler-0" Jan 28 18:59:19 crc kubenswrapper[4721]: I0128 18:59:19.910372 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ac328e3e-730d-4617-bf12-8ad6a4c5e9bf-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"ac328e3e-730d-4617-bf12-8ad6a4c5e9bf\") " pod="openstack/nova-scheduler-0" Jan 28 18:59:19 crc kubenswrapper[4721]: I0128 18:59:19.924627 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-md5xb\" (UniqueName: \"kubernetes.io/projected/ac328e3e-730d-4617-bf12-8ad6a4c5e9bf-kube-api-access-md5xb\") pod \"nova-scheduler-0\" (UID: \"ac328e3e-730d-4617-bf12-8ad6a4c5e9bf\") " pod="openstack/nova-scheduler-0" Jan 28 18:59:19 crc kubenswrapper[4721]: I0128 18:59:19.927660 4721 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="af0e32a0-15f5-49b3-adca-4e9b1040f218" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.226:8775/\": read tcp 10.217.0.2:35580->10.217.0.226:8775: read: connection reset by peer" Jan 28 18:59:19 crc kubenswrapper[4721]: I0128 18:59:19.927839 4721 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="af0e32a0-15f5-49b3-adca-4e9b1040f218" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.226:8775/\": read tcp 10.217.0.2:35594->10.217.0.226:8775: read: connection reset by peer" Jan 28 18:59:20 crc kubenswrapper[4721]: I0128 18:59:20.094751 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 28 18:59:20 crc kubenswrapper[4721]: I0128 18:59:20.459914 4721 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 28 18:59:20 crc kubenswrapper[4721]: I0128 18:59:20.519845 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hfs7t\" (UniqueName: \"kubernetes.io/projected/af0e32a0-15f5-49b3-adca-4e9b1040f218-kube-api-access-hfs7t\") pod \"af0e32a0-15f5-49b3-adca-4e9b1040f218\" (UID: \"af0e32a0-15f5-49b3-adca-4e9b1040f218\") " Jan 28 18:59:20 crc kubenswrapper[4721]: I0128 18:59:20.519957 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/af0e32a0-15f5-49b3-adca-4e9b1040f218-config-data\") pod \"af0e32a0-15f5-49b3-adca-4e9b1040f218\" (UID: \"af0e32a0-15f5-49b3-adca-4e9b1040f218\") " Jan 28 18:59:20 crc kubenswrapper[4721]: I0128 18:59:20.520053 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/af0e32a0-15f5-49b3-adca-4e9b1040f218-nova-metadata-tls-certs\") pod \"af0e32a0-15f5-49b3-adca-4e9b1040f218\" (UID: \"af0e32a0-15f5-49b3-adca-4e9b1040f218\") " Jan 28 18:59:20 crc kubenswrapper[4721]: I0128 18:59:20.520239 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/af0e32a0-15f5-49b3-adca-4e9b1040f218-logs\") pod \"af0e32a0-15f5-49b3-adca-4e9b1040f218\" (UID: \"af0e32a0-15f5-49b3-adca-4e9b1040f218\") " Jan 28 18:59:20 crc kubenswrapper[4721]: I0128 18:59:20.520359 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/af0e32a0-15f5-49b3-adca-4e9b1040f218-combined-ca-bundle\") pod \"af0e32a0-15f5-49b3-adca-4e9b1040f218\" (UID: \"af0e32a0-15f5-49b3-adca-4e9b1040f218\") " Jan 28 18:59:20 crc kubenswrapper[4721]: I0128 18:59:20.521163 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/af0e32a0-15f5-49b3-adca-4e9b1040f218-logs" (OuterVolumeSpecName: "logs") pod "af0e32a0-15f5-49b3-adca-4e9b1040f218" (UID: "af0e32a0-15f5-49b3-adca-4e9b1040f218"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 18:59:20 crc kubenswrapper[4721]: I0128 18:59:20.539381 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/af0e32a0-15f5-49b3-adca-4e9b1040f218-kube-api-access-hfs7t" (OuterVolumeSpecName: "kube-api-access-hfs7t") pod "af0e32a0-15f5-49b3-adca-4e9b1040f218" (UID: "af0e32a0-15f5-49b3-adca-4e9b1040f218"). InnerVolumeSpecName "kube-api-access-hfs7t". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:59:20 crc kubenswrapper[4721]: I0128 18:59:20.624904 4721 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/af0e32a0-15f5-49b3-adca-4e9b1040f218-logs\") on node \"crc\" DevicePath \"\"" Jan 28 18:59:20 crc kubenswrapper[4721]: I0128 18:59:20.624940 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hfs7t\" (UniqueName: \"kubernetes.io/projected/af0e32a0-15f5-49b3-adca-4e9b1040f218-kube-api-access-hfs7t\") on node \"crc\" DevicePath \"\"" Jan 28 18:59:20 crc kubenswrapper[4721]: I0128 18:59:20.679751 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/af0e32a0-15f5-49b3-adca-4e9b1040f218-config-data" (OuterVolumeSpecName: "config-data") pod "af0e32a0-15f5-49b3-adca-4e9b1040f218" (UID: "af0e32a0-15f5-49b3-adca-4e9b1040f218"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:59:20 crc kubenswrapper[4721]: I0128 18:59:20.684351 4721 generic.go:334] "Generic (PLEG): container finished" podID="af0e32a0-15f5-49b3-adca-4e9b1040f218" containerID="84d46bcdeec49a6e5ed23aba8bb7e988591d8c03b73ef1a0ad6574780941ffe9" exitCode=0 Jan 28 18:59:20 crc kubenswrapper[4721]: I0128 18:59:20.684803 4721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 28 18:59:20 crc kubenswrapper[4721]: I0128 18:59:20.684891 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"af0e32a0-15f5-49b3-adca-4e9b1040f218","Type":"ContainerDied","Data":"84d46bcdeec49a6e5ed23aba8bb7e988591d8c03b73ef1a0ad6574780941ffe9"} Jan 28 18:59:20 crc kubenswrapper[4721]: I0128 18:59:20.685063 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"af0e32a0-15f5-49b3-adca-4e9b1040f218","Type":"ContainerDied","Data":"7faf36e71d6dc3b7c2ee3e897b9ae0c255195bc9dbc1135c579969cb5d7069e4"} Jan 28 18:59:20 crc kubenswrapper[4721]: I0128 18:59:20.685115 4721 scope.go:117] "RemoveContainer" containerID="84d46bcdeec49a6e5ed23aba8bb7e988591d8c03b73ef1a0ad6574780941ffe9" Jan 28 18:59:20 crc kubenswrapper[4721]: I0128 18:59:20.687546 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"4898ad56-ee48-4c94-846a-cb0c2af32da7","Type":"ContainerStarted","Data":"3f3738275a849255c5963692699c862866787bd9226ac340251b8beb3b91d9c2"} Jan 28 18:59:20 crc kubenswrapper[4721]: I0128 18:59:20.687577 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"4898ad56-ee48-4c94-846a-cb0c2af32da7","Type":"ContainerStarted","Data":"fac348f11b9c6cdcb27876798eda786e28c56a9a87ff6155199c003b4154dc14"} Jan 28 18:59:20 crc kubenswrapper[4721]: I0128 18:59:20.697591 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/af0e32a0-15f5-49b3-adca-4e9b1040f218-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "af0e32a0-15f5-49b3-adca-4e9b1040f218" (UID: "af0e32a0-15f5-49b3-adca-4e9b1040f218"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:59:20 crc kubenswrapper[4721]: I0128 18:59:20.726747 4721 scope.go:117] "RemoveContainer" containerID="95b1abfecbbc5ed6e01db6c24570526cec5524c8a722fea7d93a9058823bc311" Jan 28 18:59:20 crc kubenswrapper[4721]: I0128 18:59:20.727672 4721 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/af0e32a0-15f5-49b3-adca-4e9b1040f218-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 18:59:20 crc kubenswrapper[4721]: I0128 18:59:20.727705 4721 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/af0e32a0-15f5-49b3-adca-4e9b1040f218-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 18:59:20 crc kubenswrapper[4721]: I0128 18:59:20.730852 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/af0e32a0-15f5-49b3-adca-4e9b1040f218-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "af0e32a0-15f5-49b3-adca-4e9b1040f218" (UID: "af0e32a0-15f5-49b3-adca-4e9b1040f218"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:59:20 crc kubenswrapper[4721]: I0128 18:59:20.782698 4721 scope.go:117] "RemoveContainer" containerID="84d46bcdeec49a6e5ed23aba8bb7e988591d8c03b73ef1a0ad6574780941ffe9" Jan 28 18:59:20 crc kubenswrapper[4721]: E0128 18:59:20.787554 4721 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"84d46bcdeec49a6e5ed23aba8bb7e988591d8c03b73ef1a0ad6574780941ffe9\": container with ID starting with 84d46bcdeec49a6e5ed23aba8bb7e988591d8c03b73ef1a0ad6574780941ffe9 not found: ID does not exist" containerID="84d46bcdeec49a6e5ed23aba8bb7e988591d8c03b73ef1a0ad6574780941ffe9" Jan 28 18:59:20 crc kubenswrapper[4721]: I0128 18:59:20.787602 4721 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"84d46bcdeec49a6e5ed23aba8bb7e988591d8c03b73ef1a0ad6574780941ffe9"} err="failed to get container status \"84d46bcdeec49a6e5ed23aba8bb7e988591d8c03b73ef1a0ad6574780941ffe9\": rpc error: code = NotFound desc = could not find container \"84d46bcdeec49a6e5ed23aba8bb7e988591d8c03b73ef1a0ad6574780941ffe9\": container with ID starting with 84d46bcdeec49a6e5ed23aba8bb7e988591d8c03b73ef1a0ad6574780941ffe9 not found: ID does not exist" Jan 28 18:59:20 crc kubenswrapper[4721]: I0128 18:59:20.787636 4721 scope.go:117] "RemoveContainer" containerID="95b1abfecbbc5ed6e01db6c24570526cec5524c8a722fea7d93a9058823bc311" Jan 28 18:59:20 crc kubenswrapper[4721]: E0128 18:59:20.788764 4721 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"95b1abfecbbc5ed6e01db6c24570526cec5524c8a722fea7d93a9058823bc311\": container with ID starting with 95b1abfecbbc5ed6e01db6c24570526cec5524c8a722fea7d93a9058823bc311 not found: ID does not exist" containerID="95b1abfecbbc5ed6e01db6c24570526cec5524c8a722fea7d93a9058823bc311" Jan 28 18:59:20 crc kubenswrapper[4721]: I0128 18:59:20.788787 4721 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"95b1abfecbbc5ed6e01db6c24570526cec5524c8a722fea7d93a9058823bc311"} err="failed to get container status \"95b1abfecbbc5ed6e01db6c24570526cec5524c8a722fea7d93a9058823bc311\": rpc error: code = NotFound desc = could not find container 
\"95b1abfecbbc5ed6e01db6c24570526cec5524c8a722fea7d93a9058823bc311\": container with ID starting with 95b1abfecbbc5ed6e01db6c24570526cec5524c8a722fea7d93a9058823bc311 not found: ID does not exist" Jan 28 18:59:20 crc kubenswrapper[4721]: I0128 18:59:20.831123 4721 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/af0e32a0-15f5-49b3-adca-4e9b1040f218-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 28 18:59:20 crc kubenswrapper[4721]: I0128 18:59:20.850753 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 28 18:59:20 crc kubenswrapper[4721]: W0128 18:59:20.856377 4721 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podac328e3e_730d_4617_bf12_8ad6a4c5e9bf.slice/crio-558d7b42925a246951565d4338939701964e82eaca3265c2722928832aa0b31a WatchSource:0}: Error finding container 558d7b42925a246951565d4338939701964e82eaca3265c2722928832aa0b31a: Status 404 returned error can't find the container with id 558d7b42925a246951565d4338939701964e82eaca3265c2722928832aa0b31a Jan 28 18:59:21 crc kubenswrapper[4721]: I0128 18:59:21.033477 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 28 18:59:21 crc kubenswrapper[4721]: I0128 18:59:21.054145 4721 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Jan 28 18:59:21 crc kubenswrapper[4721]: I0128 18:59:21.080049 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 28 18:59:21 crc kubenswrapper[4721]: E0128 18:59:21.081409 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="af0e32a0-15f5-49b3-adca-4e9b1040f218" containerName="nova-metadata-log" Jan 28 18:59:21 crc kubenswrapper[4721]: I0128 18:59:21.081487 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="af0e32a0-15f5-49b3-adca-4e9b1040f218" containerName="nova-metadata-log" Jan 28 18:59:21 crc kubenswrapper[4721]: E0128 18:59:21.081581 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="af0e32a0-15f5-49b3-adca-4e9b1040f218" containerName="nova-metadata-metadata" Jan 28 18:59:21 crc kubenswrapper[4721]: I0128 18:59:21.081635 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="af0e32a0-15f5-49b3-adca-4e9b1040f218" containerName="nova-metadata-metadata" Jan 28 18:59:21 crc kubenswrapper[4721]: I0128 18:59:21.082260 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="af0e32a0-15f5-49b3-adca-4e9b1040f218" containerName="nova-metadata-metadata" Jan 28 18:59:21 crc kubenswrapper[4721]: I0128 18:59:21.082352 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="af0e32a0-15f5-49b3-adca-4e9b1040f218" containerName="nova-metadata-log" Jan 28 18:59:21 crc kubenswrapper[4721]: I0128 18:59:21.084456 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 28 18:59:21 crc kubenswrapper[4721]: I0128 18:59:21.094465 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 28 18:59:21 crc kubenswrapper[4721]: I0128 18:59:21.094604 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Jan 28 18:59:21 crc kubenswrapper[4721]: I0128 18:59:21.101830 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 28 18:59:21 crc kubenswrapper[4721]: I0128 18:59:21.162450 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/f5877169-6d6b-4a83-a58d-b885ede23ffb-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"f5877169-6d6b-4a83-a58d-b885ede23ffb\") " pod="openstack/nova-metadata-0" Jan 28 18:59:21 crc kubenswrapper[4721]: I0128 18:59:21.163144 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f5877169-6d6b-4a83-a58d-b885ede23ffb-config-data\") pod \"nova-metadata-0\" (UID: \"f5877169-6d6b-4a83-a58d-b885ede23ffb\") " pod="openstack/nova-metadata-0" Jan 28 18:59:21 crc kubenswrapper[4721]: I0128 18:59:21.167621 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f5877169-6d6b-4a83-a58d-b885ede23ffb-logs\") pod \"nova-metadata-0\" (UID: \"f5877169-6d6b-4a83-a58d-b885ede23ffb\") " pod="openstack/nova-metadata-0" Jan 28 18:59:21 crc kubenswrapper[4721]: I0128 18:59:21.167918 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8zhws\" (UniqueName: \"kubernetes.io/projected/f5877169-6d6b-4a83-a58d-b885ede23ffb-kube-api-access-8zhws\") pod \"nova-metadata-0\" (UID: \"f5877169-6d6b-4a83-a58d-b885ede23ffb\") " pod="openstack/nova-metadata-0" Jan 28 18:59:21 crc kubenswrapper[4721]: I0128 18:59:21.168083 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f5877169-6d6b-4a83-a58d-b885ede23ffb-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"f5877169-6d6b-4a83-a58d-b885ede23ffb\") " pod="openstack/nova-metadata-0" Jan 28 18:59:21 crc kubenswrapper[4721]: I0128 18:59:21.270403 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f5877169-6d6b-4a83-a58d-b885ede23ffb-logs\") pod \"nova-metadata-0\" (UID: \"f5877169-6d6b-4a83-a58d-b885ede23ffb\") " pod="openstack/nova-metadata-0" Jan 28 18:59:21 crc kubenswrapper[4721]: I0128 18:59:21.270539 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8zhws\" (UniqueName: \"kubernetes.io/projected/f5877169-6d6b-4a83-a58d-b885ede23ffb-kube-api-access-8zhws\") pod \"nova-metadata-0\" (UID: \"f5877169-6d6b-4a83-a58d-b885ede23ffb\") " pod="openstack/nova-metadata-0" Jan 28 18:59:21 crc kubenswrapper[4721]: I0128 18:59:21.270620 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f5877169-6d6b-4a83-a58d-b885ede23ffb-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"f5877169-6d6b-4a83-a58d-b885ede23ffb\") " 
pod="openstack/nova-metadata-0" Jan 28 18:59:21 crc kubenswrapper[4721]: I0128 18:59:21.270687 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/f5877169-6d6b-4a83-a58d-b885ede23ffb-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"f5877169-6d6b-4a83-a58d-b885ede23ffb\") " pod="openstack/nova-metadata-0" Jan 28 18:59:21 crc kubenswrapper[4721]: I0128 18:59:21.270778 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f5877169-6d6b-4a83-a58d-b885ede23ffb-config-data\") pod \"nova-metadata-0\" (UID: \"f5877169-6d6b-4a83-a58d-b885ede23ffb\") " pod="openstack/nova-metadata-0" Jan 28 18:59:21 crc kubenswrapper[4721]: I0128 18:59:21.271701 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f5877169-6d6b-4a83-a58d-b885ede23ffb-logs\") pod \"nova-metadata-0\" (UID: \"f5877169-6d6b-4a83-a58d-b885ede23ffb\") " pod="openstack/nova-metadata-0" Jan 28 18:59:21 crc kubenswrapper[4721]: I0128 18:59:21.275538 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f5877169-6d6b-4a83-a58d-b885ede23ffb-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"f5877169-6d6b-4a83-a58d-b885ede23ffb\") " pod="openstack/nova-metadata-0" Jan 28 18:59:21 crc kubenswrapper[4721]: I0128 18:59:21.278756 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/f5877169-6d6b-4a83-a58d-b885ede23ffb-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"f5877169-6d6b-4a83-a58d-b885ede23ffb\") " pod="openstack/nova-metadata-0" Jan 28 18:59:21 crc kubenswrapper[4721]: I0128 18:59:21.291497 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f5877169-6d6b-4a83-a58d-b885ede23ffb-config-data\") pod \"nova-metadata-0\" (UID: \"f5877169-6d6b-4a83-a58d-b885ede23ffb\") " pod="openstack/nova-metadata-0" Jan 28 18:59:21 crc kubenswrapper[4721]: I0128 18:59:21.291646 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8zhws\" (UniqueName: \"kubernetes.io/projected/f5877169-6d6b-4a83-a58d-b885ede23ffb-kube-api-access-8zhws\") pod \"nova-metadata-0\" (UID: \"f5877169-6d6b-4a83-a58d-b885ede23ffb\") " pod="openstack/nova-metadata-0" Jan 28 18:59:21 crc kubenswrapper[4721]: I0128 18:59:21.502282 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 28 18:59:21 crc kubenswrapper[4721]: I0128 18:59:21.554374 4721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22941a54-ca5a-4905-8d65-c8724f519090" path="/var/lib/kubelet/pods/22941a54-ca5a-4905-8d65-c8724f519090/volumes" Jan 28 18:59:21 crc kubenswrapper[4721]: I0128 18:59:21.555265 4721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="af0e32a0-15f5-49b3-adca-4e9b1040f218" path="/var/lib/kubelet/pods/af0e32a0-15f5-49b3-adca-4e9b1040f218/volumes" Jan 28 18:59:21 crc kubenswrapper[4721]: I0128 18:59:21.748122 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"4898ad56-ee48-4c94-846a-cb0c2af32da7","Type":"ContainerStarted","Data":"4b6249ed5bacbe922d4f645915fa91d42abd43ccb58d2ca2d6770e4624f8d12b"} Jan 28 18:59:21 crc kubenswrapper[4721]: I0128 18:59:21.760497 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"ac328e3e-730d-4617-bf12-8ad6a4c5e9bf","Type":"ContainerStarted","Data":"f5eae69145eec4ead88ed9eb8570a33786e204913c9f1198ec3588f7964004e2"} Jan 28 18:59:21 crc kubenswrapper[4721]: I0128 18:59:21.760554 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"ac328e3e-730d-4617-bf12-8ad6a4c5e9bf","Type":"ContainerStarted","Data":"558d7b42925a246951565d4338939701964e82eaca3265c2722928832aa0b31a"} Jan 28 18:59:21 crc kubenswrapper[4721]: I0128 18:59:21.785925 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=3.785904533 podStartE2EDuration="3.785904533s" podCreationTimestamp="2026-01-28 18:59:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:59:21.777279789 +0000 UTC m=+1527.502585349" watchObservedRunningTime="2026-01-28 18:59:21.785904533 +0000 UTC m=+1527.511210083" Jan 28 18:59:21 crc kubenswrapper[4721]: I0128 18:59:21.800845 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.800819887 podStartE2EDuration="2.800819887s" podCreationTimestamp="2026-01-28 18:59:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:59:21.7958916 +0000 UTC m=+1527.521197160" watchObservedRunningTime="2026-01-28 18:59:21.800819887 +0000 UTC m=+1527.526125447" Jan 28 18:59:22 crc kubenswrapper[4721]: W0128 18:59:22.024017 4721 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf5877169_6d6b_4a83_a58d_b885ede23ffb.slice/crio-e6499931f44ca1965fbf2d9ab08e1c24dd9ba61fbd00f55a570a23d354b995ef WatchSource:0}: Error finding container e6499931f44ca1965fbf2d9ab08e1c24dd9ba61fbd00f55a570a23d354b995ef: Status 404 returned error can't find the container with id e6499931f44ca1965fbf2d9ab08e1c24dd9ba61fbd00f55a570a23d354b995ef Jan 28 18:59:22 crc kubenswrapper[4721]: I0128 18:59:22.024579 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 28 18:59:22 crc kubenswrapper[4721]: I0128 18:59:22.775727 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" 
event={"ID":"f5877169-6d6b-4a83-a58d-b885ede23ffb","Type":"ContainerStarted","Data":"8be1b62bbab29797150c4f52c3c336664b68e071f6d1d5a101736e1c70a2333d"} Jan 28 18:59:22 crc kubenswrapper[4721]: I0128 18:59:22.775998 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"f5877169-6d6b-4a83-a58d-b885ede23ffb","Type":"ContainerStarted","Data":"31e777b21a011608f7567ffc7bb036ddd621c3a23629a95b7e1df391d19b7f68"} Jan 28 18:59:22 crc kubenswrapper[4721]: I0128 18:59:22.776008 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"f5877169-6d6b-4a83-a58d-b885ede23ffb","Type":"ContainerStarted","Data":"e6499931f44ca1965fbf2d9ab08e1c24dd9ba61fbd00f55a570a23d354b995ef"} Jan 28 18:59:22 crc kubenswrapper[4721]: I0128 18:59:22.814131 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=1.814106233 podStartE2EDuration="1.814106233s" podCreationTimestamp="2026-01-28 18:59:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:59:22.794868801 +0000 UTC m=+1528.520174361" watchObservedRunningTime="2026-01-28 18:59:22.814106233 +0000 UTC m=+1528.539411793" Jan 28 18:59:25 crc kubenswrapper[4721]: I0128 18:59:25.095350 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Jan 28 18:59:26 crc kubenswrapper[4721]: I0128 18:59:26.504386 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 28 18:59:26 crc kubenswrapper[4721]: I0128 18:59:26.504736 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 28 18:59:27 crc kubenswrapper[4721]: I0128 18:59:27.528963 4721 scope.go:117] "RemoveContainer" containerID="2590a7cb210dbff0e84ad585e4d82733cd80880f3d1297edf670e3d6faf23070" Jan 28 18:59:27 crc kubenswrapper[4721]: E0128 18:59:27.529461 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-76rx2_openshift-machine-config-operator(6e3427a4-9a03-4a08-bf7f-7a5e96290ad6)\"" pod="openshift-machine-config-operator/machine-config-daemon-76rx2" podUID="6e3427a4-9a03-4a08-bf7f-7a5e96290ad6" Jan 28 18:59:29 crc kubenswrapper[4721]: I0128 18:59:29.291590 4721 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 28 18:59:29 crc kubenswrapper[4721]: I0128 18:59:29.291645 4721 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 28 18:59:30 crc kubenswrapper[4721]: I0128 18:59:30.095634 4721 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Jan 28 18:59:30 crc kubenswrapper[4721]: I0128 18:59:30.137940 4721 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Jan 28 18:59:30 crc kubenswrapper[4721]: I0128 18:59:30.306398 4721 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="4898ad56-ee48-4c94-846a-cb0c2af32da7" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.234:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 28 18:59:30 crc 
kubenswrapper[4721]: I0128 18:59:30.306391 4721 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="4898ad56-ee48-4c94-846a-cb0c2af32da7" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.234:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 28 18:59:30 crc kubenswrapper[4721]: I0128 18:59:30.900555 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Jan 28 18:59:31 crc kubenswrapper[4721]: I0128 18:59:31.503641 4721 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 28 18:59:31 crc kubenswrapper[4721]: I0128 18:59:31.503682 4721 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 28 18:59:32 crc kubenswrapper[4721]: I0128 18:59:32.546460 4721 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="f5877169-6d6b-4a83-a58d-b885ede23ffb" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.236:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 28 18:59:32 crc kubenswrapper[4721]: I0128 18:59:32.546476 4721 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="f5877169-6d6b-4a83-a58d-b885ede23ffb" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.236:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 28 18:59:39 crc kubenswrapper[4721]: I0128 18:59:39.298827 4721 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 28 18:59:39 crc kubenswrapper[4721]: I0128 18:59:39.299772 4721 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 28 18:59:39 crc kubenswrapper[4721]: I0128 18:59:39.301393 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 28 18:59:39 crc kubenswrapper[4721]: I0128 18:59:39.301489 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 28 18:59:39 crc kubenswrapper[4721]: I0128 18:59:39.314202 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 28 18:59:39 crc kubenswrapper[4721]: I0128 18:59:39.314280 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 28 18:59:39 crc kubenswrapper[4721]: I0128 18:59:39.538987 4721 scope.go:117] "RemoveContainer" containerID="2590a7cb210dbff0e84ad585e4d82733cd80880f3d1297edf670e3d6faf23070" Jan 28 18:59:39 crc kubenswrapper[4721]: E0128 18:59:39.540018 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-76rx2_openshift-machine-config-operator(6e3427a4-9a03-4a08-bf7f-7a5e96290ad6)\"" pod="openshift-machine-config-operator/machine-config-daemon-76rx2" podUID="6e3427a4-9a03-4a08-bf7f-7a5e96290ad6" Jan 28 18:59:41 crc kubenswrapper[4721]: I0128 18:59:41.510665 4721 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 28 18:59:41 crc kubenswrapper[4721]: I0128 18:59:41.513342 4721 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="started" pod="openstack/nova-metadata-0" Jan 28 18:59:41 crc kubenswrapper[4721]: I0128 18:59:41.517428 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 28 18:59:41 crc kubenswrapper[4721]: I0128 18:59:41.987037 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 28 18:59:43 crc kubenswrapper[4721]: I0128 18:59:43.911422 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Jan 28 18:59:47 crc kubenswrapper[4721]: I0128 18:59:47.765838 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 28 18:59:47 crc kubenswrapper[4721]: I0128 18:59:47.766599 4721 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="5e16ae9a-515f-4c11-a048-84aedad18b0a" containerName="kube-state-metrics" containerID="cri-o://07b630721084084b0f3264478c598ab08923b8a2ea289aed886aa6302d705158" gracePeriod=30 Jan 28 18:59:47 crc kubenswrapper[4721]: I0128 18:59:47.860999 4721 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/kube-state-metrics-0" podUID="5e16ae9a-515f-4c11-a048-84aedad18b0a" containerName="kube-state-metrics" probeResult="failure" output="Get \"http://10.217.0.113:8081/readyz\": dial tcp 10.217.0.113:8081: connect: connection refused" Jan 28 18:59:48 crc kubenswrapper[4721]: I0128 18:59:48.044490 4721 generic.go:334] "Generic (PLEG): container finished" podID="5e16ae9a-515f-4c11-a048-84aedad18b0a" containerID="07b630721084084b0f3264478c598ab08923b8a2ea289aed886aa6302d705158" exitCode=2 Jan 28 18:59:48 crc kubenswrapper[4721]: I0128 18:59:48.044548 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"5e16ae9a-515f-4c11-a048-84aedad18b0a","Type":"ContainerDied","Data":"07b630721084084b0f3264478c598ab08923b8a2ea289aed886aa6302d705158"} Jan 28 18:59:48 crc kubenswrapper[4721]: I0128 18:59:48.431431 4721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 28 18:59:48 crc kubenswrapper[4721]: I0128 18:59:48.593014 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mtrqv\" (UniqueName: \"kubernetes.io/projected/5e16ae9a-515f-4c11-a048-84aedad18b0a-kube-api-access-mtrqv\") pod \"5e16ae9a-515f-4c11-a048-84aedad18b0a\" (UID: \"5e16ae9a-515f-4c11-a048-84aedad18b0a\") " Jan 28 18:59:48 crc kubenswrapper[4721]: I0128 18:59:48.607869 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5e16ae9a-515f-4c11-a048-84aedad18b0a-kube-api-access-mtrqv" (OuterVolumeSpecName: "kube-api-access-mtrqv") pod "5e16ae9a-515f-4c11-a048-84aedad18b0a" (UID: "5e16ae9a-515f-4c11-a048-84aedad18b0a"). InnerVolumeSpecName "kube-api-access-mtrqv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:59:48 crc kubenswrapper[4721]: I0128 18:59:48.697011 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mtrqv\" (UniqueName: \"kubernetes.io/projected/5e16ae9a-515f-4c11-a048-84aedad18b0a-kube-api-access-mtrqv\") on node \"crc\" DevicePath \"\"" Jan 28 18:59:49 crc kubenswrapper[4721]: I0128 18:59:49.056533 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"5e16ae9a-515f-4c11-a048-84aedad18b0a","Type":"ContainerDied","Data":"c71c75f04702394c98f8ebe01f6610d83b3246ad4240b89d15b62af57d867c9b"} Jan 28 18:59:49 crc kubenswrapper[4721]: I0128 18:59:49.056584 4721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 28 18:59:49 crc kubenswrapper[4721]: I0128 18:59:49.056605 4721 scope.go:117] "RemoveContainer" containerID="07b630721084084b0f3264478c598ab08923b8a2ea289aed886aa6302d705158" Jan 28 18:59:49 crc kubenswrapper[4721]: I0128 18:59:49.127534 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 28 18:59:49 crc kubenswrapper[4721]: I0128 18:59:49.151238 4721 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 28 18:59:49 crc kubenswrapper[4721]: I0128 18:59:49.163228 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Jan 28 18:59:49 crc kubenswrapper[4721]: E0128 18:59:49.163817 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5e16ae9a-515f-4c11-a048-84aedad18b0a" containerName="kube-state-metrics" Jan 28 18:59:49 crc kubenswrapper[4721]: I0128 18:59:49.163840 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="5e16ae9a-515f-4c11-a048-84aedad18b0a" containerName="kube-state-metrics" Jan 28 18:59:49 crc kubenswrapper[4721]: I0128 18:59:49.164118 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="5e16ae9a-515f-4c11-a048-84aedad18b0a" containerName="kube-state-metrics" Jan 28 18:59:49 crc kubenswrapper[4721]: I0128 18:59:49.165401 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 28 18:59:49 crc kubenswrapper[4721]: I0128 18:59:49.168996 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-kube-state-metrics-svc" Jan 28 18:59:49 crc kubenswrapper[4721]: I0128 18:59:49.169071 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"kube-state-metrics-tls-config" Jan 28 18:59:49 crc kubenswrapper[4721]: I0128 18:59:49.174059 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 28 18:59:49 crc kubenswrapper[4721]: I0128 18:59:49.312109 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/7cb3ca8e-a112-4fa7-a165-f987728ac08f-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"7cb3ca8e-a112-4fa7-a165-f987728ac08f\") " pod="openstack/kube-state-metrics-0" Jan 28 18:59:49 crc kubenswrapper[4721]: I0128 18:59:49.312191 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/7cb3ca8e-a112-4fa7-a165-f987728ac08f-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"7cb3ca8e-a112-4fa7-a165-f987728ac08f\") " pod="openstack/kube-state-metrics-0" Jan 28 18:59:49 crc kubenswrapper[4721]: I0128 18:59:49.312330 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7cb3ca8e-a112-4fa7-a165-f987728ac08f-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"7cb3ca8e-a112-4fa7-a165-f987728ac08f\") " pod="openstack/kube-state-metrics-0" Jan 28 18:59:49 crc kubenswrapper[4721]: I0128 18:59:49.312444 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s84nc\" (UniqueName: \"kubernetes.io/projected/7cb3ca8e-a112-4fa7-a165-f987728ac08f-kube-api-access-s84nc\") pod \"kube-state-metrics-0\" (UID: \"7cb3ca8e-a112-4fa7-a165-f987728ac08f\") " pod="openstack/kube-state-metrics-0" Jan 28 18:59:49 crc kubenswrapper[4721]: I0128 18:59:49.415336 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7cb3ca8e-a112-4fa7-a165-f987728ac08f-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"7cb3ca8e-a112-4fa7-a165-f987728ac08f\") " pod="openstack/kube-state-metrics-0" Jan 28 18:59:49 crc kubenswrapper[4721]: I0128 18:59:49.415851 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s84nc\" (UniqueName: \"kubernetes.io/projected/7cb3ca8e-a112-4fa7-a165-f987728ac08f-kube-api-access-s84nc\") pod \"kube-state-metrics-0\" (UID: \"7cb3ca8e-a112-4fa7-a165-f987728ac08f\") " pod="openstack/kube-state-metrics-0" Jan 28 18:59:49 crc kubenswrapper[4721]: I0128 18:59:49.416219 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/7cb3ca8e-a112-4fa7-a165-f987728ac08f-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"7cb3ca8e-a112-4fa7-a165-f987728ac08f\") " pod="openstack/kube-state-metrics-0" Jan 28 18:59:49 crc kubenswrapper[4721]: I0128 18:59:49.416277 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-config\" 
(UniqueName: \"kubernetes.io/secret/7cb3ca8e-a112-4fa7-a165-f987728ac08f-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"7cb3ca8e-a112-4fa7-a165-f987728ac08f\") " pod="openstack/kube-state-metrics-0" Jan 28 18:59:49 crc kubenswrapper[4721]: I0128 18:59:49.420699 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/7cb3ca8e-a112-4fa7-a165-f987728ac08f-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"7cb3ca8e-a112-4fa7-a165-f987728ac08f\") " pod="openstack/kube-state-metrics-0" Jan 28 18:59:49 crc kubenswrapper[4721]: I0128 18:59:49.421025 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7cb3ca8e-a112-4fa7-a165-f987728ac08f-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"7cb3ca8e-a112-4fa7-a165-f987728ac08f\") " pod="openstack/kube-state-metrics-0" Jan 28 18:59:49 crc kubenswrapper[4721]: I0128 18:59:49.421382 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/7cb3ca8e-a112-4fa7-a165-f987728ac08f-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"7cb3ca8e-a112-4fa7-a165-f987728ac08f\") " pod="openstack/kube-state-metrics-0" Jan 28 18:59:49 crc kubenswrapper[4721]: I0128 18:59:49.438662 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s84nc\" (UniqueName: \"kubernetes.io/projected/7cb3ca8e-a112-4fa7-a165-f987728ac08f-kube-api-access-s84nc\") pod \"kube-state-metrics-0\" (UID: \"7cb3ca8e-a112-4fa7-a165-f987728ac08f\") " pod="openstack/kube-state-metrics-0" Jan 28 18:59:49 crc kubenswrapper[4721]: I0128 18:59:49.487213 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 28 18:59:49 crc kubenswrapper[4721]: I0128 18:59:49.540891 4721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5e16ae9a-515f-4c11-a048-84aedad18b0a" path="/var/lib/kubelet/pods/5e16ae9a-515f-4c11-a048-84aedad18b0a/volumes" Jan 28 18:59:49 crc kubenswrapper[4721]: I0128 18:59:49.789150 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 28 18:59:49 crc kubenswrapper[4721]: I0128 18:59:49.789892 4721 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="7a85c957-db30-4931-adbd-be40eec18aa0" containerName="ceilometer-central-agent" containerID="cri-o://8ed6531b022df69a0868311fdc6c3c71674d486bc6e7c3d077366e5cfe2ee3ad" gracePeriod=30 Jan 28 18:59:49 crc kubenswrapper[4721]: I0128 18:59:49.789997 4721 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="7a85c957-db30-4931-adbd-be40eec18aa0" containerName="sg-core" containerID="cri-o://bde69917b32e2f7dbd397efba96edf70037e35380468885551dd55e5ca1a1b1a" gracePeriod=30 Jan 28 18:59:49 crc kubenswrapper[4721]: I0128 18:59:49.790065 4721 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="7a85c957-db30-4931-adbd-be40eec18aa0" containerName="ceilometer-notification-agent" containerID="cri-o://5721e0a2d55f23956363dee5913b3901b40197fca0335437f04e35db6c990e1d" gracePeriod=30 Jan 28 18:59:49 crc kubenswrapper[4721]: I0128 18:59:49.790343 4721 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="7a85c957-db30-4931-adbd-be40eec18aa0" containerName="proxy-httpd" containerID="cri-o://9a7b48d3cd85e5e87b98ece688184911e9a093ed2d847030885791674719a4b5" gracePeriod=30 Jan 28 18:59:50 crc kubenswrapper[4721]: I0128 18:59:50.002047 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 28 18:59:50 crc kubenswrapper[4721]: I0128 18:59:50.072021 4721 generic.go:334] "Generic (PLEG): container finished" podID="7a85c957-db30-4931-adbd-be40eec18aa0" containerID="9a7b48d3cd85e5e87b98ece688184911e9a093ed2d847030885791674719a4b5" exitCode=0 Jan 28 18:59:50 crc kubenswrapper[4721]: I0128 18:59:50.072071 4721 generic.go:334] "Generic (PLEG): container finished" podID="7a85c957-db30-4931-adbd-be40eec18aa0" containerID="bde69917b32e2f7dbd397efba96edf70037e35380468885551dd55e5ca1a1b1a" exitCode=2 Jan 28 18:59:50 crc kubenswrapper[4721]: I0128 18:59:50.072134 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7a85c957-db30-4931-adbd-be40eec18aa0","Type":"ContainerDied","Data":"9a7b48d3cd85e5e87b98ece688184911e9a093ed2d847030885791674719a4b5"} Jan 28 18:59:50 crc kubenswrapper[4721]: I0128 18:59:50.072190 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7a85c957-db30-4931-adbd-be40eec18aa0","Type":"ContainerDied","Data":"bde69917b32e2f7dbd397efba96edf70037e35380468885551dd55e5ca1a1b1a"} Jan 28 18:59:50 crc kubenswrapper[4721]: I0128 18:59:50.076052 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"7cb3ca8e-a112-4fa7-a165-f987728ac08f","Type":"ContainerStarted","Data":"97199c34dc1e9fa9c64e0e9942b605433ac2ab7c2dc97b38c5ff0655f493b1cb"} Jan 28 18:59:51 crc kubenswrapper[4721]: I0128 18:59:51.090434 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/kube-state-metrics-0" event={"ID":"7cb3ca8e-a112-4fa7-a165-f987728ac08f","Type":"ContainerStarted","Data":"087220204c0da37bb133760098a48825b06fbe0960d1c0aa90e578f6c68366fa"} Jan 28 18:59:51 crc kubenswrapper[4721]: I0128 18:59:51.091068 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Jan 28 18:59:51 crc kubenswrapper[4721]: I0128 18:59:51.093935 4721 generic.go:334] "Generic (PLEG): container finished" podID="7a85c957-db30-4931-adbd-be40eec18aa0" containerID="8ed6531b022df69a0868311fdc6c3c71674d486bc6e7c3d077366e5cfe2ee3ad" exitCode=0 Jan 28 18:59:51 crc kubenswrapper[4721]: I0128 18:59:51.093983 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7a85c957-db30-4931-adbd-be40eec18aa0","Type":"ContainerDied","Data":"8ed6531b022df69a0868311fdc6c3c71674d486bc6e7c3d077366e5cfe2ee3ad"} Jan 28 18:59:51 crc kubenswrapper[4721]: I0128 18:59:51.109221 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=1.578721416 podStartE2EDuration="2.109150745s" podCreationTimestamp="2026-01-28 18:59:49 +0000 UTC" firstStartedPulling="2026-01-28 18:59:50.004917867 +0000 UTC m=+1555.730223427" lastFinishedPulling="2026-01-28 18:59:50.535347196 +0000 UTC m=+1556.260652756" observedRunningTime="2026-01-28 18:59:51.107508903 +0000 UTC m=+1556.832814463" watchObservedRunningTime="2026-01-28 18:59:51.109150745 +0000 UTC m=+1556.834456305" Jan 28 18:59:54 crc kubenswrapper[4721]: I0128 18:59:54.529084 4721 scope.go:117] "RemoveContainer" containerID="2590a7cb210dbff0e84ad585e4d82733cd80880f3d1297edf670e3d6faf23070" Jan 28 18:59:54 crc kubenswrapper[4721]: E0128 18:59:54.529818 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-76rx2_openshift-machine-config-operator(6e3427a4-9a03-4a08-bf7f-7a5e96290ad6)\"" pod="openshift-machine-config-operator/machine-config-daemon-76rx2" podUID="6e3427a4-9a03-4a08-bf7f-7a5e96290ad6" Jan 28 18:59:55 crc kubenswrapper[4721]: I0128 18:59:55.159836 4721 generic.go:334] "Generic (PLEG): container finished" podID="7a85c957-db30-4931-adbd-be40eec18aa0" containerID="5721e0a2d55f23956363dee5913b3901b40197fca0335437f04e35db6c990e1d" exitCode=0 Jan 28 18:59:55 crc kubenswrapper[4721]: I0128 18:59:55.159913 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7a85c957-db30-4931-adbd-be40eec18aa0","Type":"ContainerDied","Data":"5721e0a2d55f23956363dee5913b3901b40197fca0335437f04e35db6c990e1d"} Jan 28 18:59:55 crc kubenswrapper[4721]: I0128 18:59:55.315447 4721 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 28 18:59:55 crc kubenswrapper[4721]: I0128 18:59:55.492030 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7a85c957-db30-4931-adbd-be40eec18aa0-run-httpd\") pod \"7a85c957-db30-4931-adbd-be40eec18aa0\" (UID: \"7a85c957-db30-4931-adbd-be40eec18aa0\") " Jan 28 18:59:55 crc kubenswrapper[4721]: I0128 18:59:55.492403 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7a85c957-db30-4931-adbd-be40eec18aa0-scripts\") pod \"7a85c957-db30-4931-adbd-be40eec18aa0\" (UID: \"7a85c957-db30-4931-adbd-be40eec18aa0\") " Jan 28 18:59:55 crc kubenswrapper[4721]: I0128 18:59:55.492446 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7a85c957-db30-4931-adbd-be40eec18aa0-sg-core-conf-yaml\") pod \"7a85c957-db30-4931-adbd-be40eec18aa0\" (UID: \"7a85c957-db30-4931-adbd-be40eec18aa0\") " Jan 28 18:59:55 crc kubenswrapper[4721]: I0128 18:59:55.492503 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7a85c957-db30-4931-adbd-be40eec18aa0-log-httpd\") pod \"7a85c957-db30-4931-adbd-be40eec18aa0\" (UID: \"7a85c957-db30-4931-adbd-be40eec18aa0\") " Jan 28 18:59:55 crc kubenswrapper[4721]: I0128 18:59:55.492630 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4srrq\" (UniqueName: \"kubernetes.io/projected/7a85c957-db30-4931-adbd-be40eec18aa0-kube-api-access-4srrq\") pod \"7a85c957-db30-4931-adbd-be40eec18aa0\" (UID: \"7a85c957-db30-4931-adbd-be40eec18aa0\") " Jan 28 18:59:55 crc kubenswrapper[4721]: I0128 18:59:55.492642 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7a85c957-db30-4931-adbd-be40eec18aa0-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "7a85c957-db30-4931-adbd-be40eec18aa0" (UID: "7a85c957-db30-4931-adbd-be40eec18aa0"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 18:59:55 crc kubenswrapper[4721]: I0128 18:59:55.492699 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7a85c957-db30-4931-adbd-be40eec18aa0-config-data\") pod \"7a85c957-db30-4931-adbd-be40eec18aa0\" (UID: \"7a85c957-db30-4931-adbd-be40eec18aa0\") " Jan 28 18:59:55 crc kubenswrapper[4721]: I0128 18:59:55.492863 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7a85c957-db30-4931-adbd-be40eec18aa0-combined-ca-bundle\") pod \"7a85c957-db30-4931-adbd-be40eec18aa0\" (UID: \"7a85c957-db30-4931-adbd-be40eec18aa0\") " Jan 28 18:59:55 crc kubenswrapper[4721]: I0128 18:59:55.492993 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7a85c957-db30-4931-adbd-be40eec18aa0-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "7a85c957-db30-4931-adbd-be40eec18aa0" (UID: "7a85c957-db30-4931-adbd-be40eec18aa0"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 18:59:55 crc kubenswrapper[4721]: I0128 18:59:55.493876 4721 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7a85c957-db30-4931-adbd-be40eec18aa0-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 28 18:59:55 crc kubenswrapper[4721]: I0128 18:59:55.493903 4721 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7a85c957-db30-4931-adbd-be40eec18aa0-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 28 18:59:55 crc kubenswrapper[4721]: I0128 18:59:55.498508 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7a85c957-db30-4931-adbd-be40eec18aa0-kube-api-access-4srrq" (OuterVolumeSpecName: "kube-api-access-4srrq") pod "7a85c957-db30-4931-adbd-be40eec18aa0" (UID: "7a85c957-db30-4931-adbd-be40eec18aa0"). InnerVolumeSpecName "kube-api-access-4srrq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:59:55 crc kubenswrapper[4721]: I0128 18:59:55.499743 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7a85c957-db30-4931-adbd-be40eec18aa0-scripts" (OuterVolumeSpecName: "scripts") pod "7a85c957-db30-4931-adbd-be40eec18aa0" (UID: "7a85c957-db30-4931-adbd-be40eec18aa0"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:59:55 crc kubenswrapper[4721]: I0128 18:59:55.526375 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7a85c957-db30-4931-adbd-be40eec18aa0-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "7a85c957-db30-4931-adbd-be40eec18aa0" (UID: "7a85c957-db30-4931-adbd-be40eec18aa0"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:59:55 crc kubenswrapper[4721]: I0128 18:59:55.603119 4721 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7a85c957-db30-4931-adbd-be40eec18aa0-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 18:59:55 crc kubenswrapper[4721]: I0128 18:59:55.603157 4721 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7a85c957-db30-4931-adbd-be40eec18aa0-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 28 18:59:55 crc kubenswrapper[4721]: I0128 18:59:55.603182 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4srrq\" (UniqueName: \"kubernetes.io/projected/7a85c957-db30-4931-adbd-be40eec18aa0-kube-api-access-4srrq\") on node \"crc\" DevicePath \"\"" Jan 28 18:59:55 crc kubenswrapper[4721]: I0128 18:59:55.647551 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7a85c957-db30-4931-adbd-be40eec18aa0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7a85c957-db30-4931-adbd-be40eec18aa0" (UID: "7a85c957-db30-4931-adbd-be40eec18aa0"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:59:55 crc kubenswrapper[4721]: I0128 18:59:55.670515 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7a85c957-db30-4931-adbd-be40eec18aa0-config-data" (OuterVolumeSpecName: "config-data") pod "7a85c957-db30-4931-adbd-be40eec18aa0" (UID: "7a85c957-db30-4931-adbd-be40eec18aa0"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:59:55 crc kubenswrapper[4721]: I0128 18:59:55.705727 4721 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7a85c957-db30-4931-adbd-be40eec18aa0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 18:59:55 crc kubenswrapper[4721]: I0128 18:59:55.705780 4721 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7a85c957-db30-4931-adbd-be40eec18aa0-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 18:59:56 crc kubenswrapper[4721]: I0128 18:59:56.121998 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cloudkitty-db-sync-qbnjm"] Jan 28 18:59:56 crc kubenswrapper[4721]: I0128 18:59:56.132915 4721 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cloudkitty-db-sync-qbnjm"] Jan 28 18:59:56 crc kubenswrapper[4721]: I0128 18:59:56.173845 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7a85c957-db30-4931-adbd-be40eec18aa0","Type":"ContainerDied","Data":"bc44da487f4b09935da09c64685cd9930aba59b0d3f7dce6eb6467bbb41d5166"} Jan 28 18:59:56 crc kubenswrapper[4721]: I0128 18:59:56.173923 4721 scope.go:117] "RemoveContainer" containerID="9a7b48d3cd85e5e87b98ece688184911e9a093ed2d847030885791674719a4b5" Jan 28 18:59:56 crc kubenswrapper[4721]: I0128 18:59:56.174348 4721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 28 18:59:56 crc kubenswrapper[4721]: I0128 18:59:56.203221 4721 scope.go:117] "RemoveContainer" containerID="bde69917b32e2f7dbd397efba96edf70037e35380468885551dd55e5ca1a1b1a" Jan 28 18:59:56 crc kubenswrapper[4721]: I0128 18:59:56.232954 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 28 18:59:56 crc kubenswrapper[4721]: I0128 18:59:56.244650 4721 scope.go:117] "RemoveContainer" containerID="5721e0a2d55f23956363dee5913b3901b40197fca0335437f04e35db6c990e1d" Jan 28 18:59:56 crc kubenswrapper[4721]: I0128 18:59:56.253978 4721 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 28 18:59:56 crc kubenswrapper[4721]: I0128 18:59:56.278786 4721 scope.go:117] "RemoveContainer" containerID="8ed6531b022df69a0868311fdc6c3c71674d486bc6e7c3d077366e5cfe2ee3ad" Jan 28 18:59:56 crc kubenswrapper[4721]: I0128 18:59:56.295261 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cloudkitty-db-sync-knwlk"] Jan 28 18:59:56 crc kubenswrapper[4721]: E0128 18:59:56.295846 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7a85c957-db30-4931-adbd-be40eec18aa0" containerName="ceilometer-notification-agent" Jan 28 18:59:56 crc kubenswrapper[4721]: I0128 18:59:56.295866 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="7a85c957-db30-4931-adbd-be40eec18aa0" containerName="ceilometer-notification-agent" Jan 28 18:59:56 crc kubenswrapper[4721]: E0128 18:59:56.295894 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7a85c957-db30-4931-adbd-be40eec18aa0" containerName="ceilometer-central-agent" Jan 28 18:59:56 crc kubenswrapper[4721]: I0128 18:59:56.295903 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="7a85c957-db30-4931-adbd-be40eec18aa0" containerName="ceilometer-central-agent" Jan 28 18:59:56 crc kubenswrapper[4721]: E0128 18:59:56.295919 4721 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="7a85c957-db30-4931-adbd-be40eec18aa0" containerName="proxy-httpd" Jan 28 18:59:56 crc kubenswrapper[4721]: I0128 18:59:56.295926 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="7a85c957-db30-4931-adbd-be40eec18aa0" containerName="proxy-httpd" Jan 28 18:59:56 crc kubenswrapper[4721]: E0128 18:59:56.295944 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7a85c957-db30-4931-adbd-be40eec18aa0" containerName="sg-core" Jan 28 18:59:56 crc kubenswrapper[4721]: I0128 18:59:56.295950 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="7a85c957-db30-4931-adbd-be40eec18aa0" containerName="sg-core" Jan 28 18:59:56 crc kubenswrapper[4721]: I0128 18:59:56.296164 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="7a85c957-db30-4931-adbd-be40eec18aa0" containerName="ceilometer-central-agent" Jan 28 18:59:56 crc kubenswrapper[4721]: I0128 18:59:56.296209 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="7a85c957-db30-4931-adbd-be40eec18aa0" containerName="proxy-httpd" Jan 28 18:59:56 crc kubenswrapper[4721]: I0128 18:59:56.296220 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="7a85c957-db30-4931-adbd-be40eec18aa0" containerName="ceilometer-notification-agent" Jan 28 18:59:56 crc kubenswrapper[4721]: I0128 18:59:56.296234 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="7a85c957-db30-4931-adbd-be40eec18aa0" containerName="sg-core" Jan 28 18:59:56 crc kubenswrapper[4721]: I0128 18:59:56.297089 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-db-sync-knwlk" Jan 28 18:59:56 crc kubenswrapper[4721]: I0128 18:59:56.299612 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Jan 28 18:59:56 crc kubenswrapper[4721]: I0128 18:59:56.323630 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6dda9049-3b48-4939-93cc-542bf5badc4d-config-data\") pod \"cloudkitty-db-sync-knwlk\" (UID: \"6dda9049-3b48-4939-93cc-542bf5badc4d\") " pod="openstack/cloudkitty-db-sync-knwlk" Jan 28 18:59:56 crc kubenswrapper[4721]: I0128 18:59:56.323714 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-42pgm\" (UniqueName: \"kubernetes.io/projected/6dda9049-3b48-4939-93cc-542bf5badc4d-kube-api-access-42pgm\") pod \"cloudkitty-db-sync-knwlk\" (UID: \"6dda9049-3b48-4939-93cc-542bf5badc4d\") " pod="openstack/cloudkitty-db-sync-knwlk" Jan 28 18:59:56 crc kubenswrapper[4721]: I0128 18:59:56.323773 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/projected/6dda9049-3b48-4939-93cc-542bf5badc4d-certs\") pod \"cloudkitty-db-sync-knwlk\" (UID: \"6dda9049-3b48-4939-93cc-542bf5badc4d\") " pod="openstack/cloudkitty-db-sync-knwlk" Jan 28 18:59:56 crc kubenswrapper[4721]: I0128 18:59:56.323805 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6dda9049-3b48-4939-93cc-542bf5badc4d-combined-ca-bundle\") pod \"cloudkitty-db-sync-knwlk\" (UID: \"6dda9049-3b48-4939-93cc-542bf5badc4d\") " pod="openstack/cloudkitty-db-sync-knwlk" Jan 28 18:59:56 crc kubenswrapper[4721]: I0128 18:59:56.323903 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6dda9049-3b48-4939-93cc-542bf5badc4d-scripts\") pod \"cloudkitty-db-sync-knwlk\" (UID: \"6dda9049-3b48-4939-93cc-542bf5badc4d\") " pod="openstack/cloudkitty-db-sync-knwlk" Jan 28 18:59:56 crc kubenswrapper[4721]: I0128 18:59:56.325050 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 28 18:59:56 crc kubenswrapper[4721]: I0128 18:59:56.336693 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 28 18:59:56 crc kubenswrapper[4721]: I0128 18:59:56.341324 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 28 18:59:56 crc kubenswrapper[4721]: I0128 18:59:56.341833 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Jan 28 18:59:56 crc kubenswrapper[4721]: I0128 18:59:56.342331 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 28 18:59:56 crc kubenswrapper[4721]: I0128 18:59:56.346291 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-db-sync-knwlk"] Jan 28 18:59:56 crc kubenswrapper[4721]: I0128 18:59:56.362063 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 28 18:59:56 crc kubenswrapper[4721]: I0128 18:59:56.425720 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/026c3758-a794-4177-9412-8af411eeba01-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"026c3758-a794-4177-9412-8af411eeba01\") " pod="openstack/ceilometer-0" Jan 28 18:59:56 crc kubenswrapper[4721]: I0128 18:59:56.426278 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/projected/6dda9049-3b48-4939-93cc-542bf5badc4d-certs\") pod \"cloudkitty-db-sync-knwlk\" (UID: \"6dda9049-3b48-4939-93cc-542bf5badc4d\") " pod="openstack/cloudkitty-db-sync-knwlk" Jan 28 18:59:56 crc kubenswrapper[4721]: I0128 18:59:56.426396 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/026c3758-a794-4177-9412-8af411eeba01-scripts\") pod \"ceilometer-0\" (UID: \"026c3758-a794-4177-9412-8af411eeba01\") " pod="openstack/ceilometer-0" Jan 28 18:59:56 crc kubenswrapper[4721]: I0128 18:59:56.426492 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6dda9049-3b48-4939-93cc-542bf5badc4d-combined-ca-bundle\") pod \"cloudkitty-db-sync-knwlk\" (UID: \"6dda9049-3b48-4939-93cc-542bf5badc4d\") " pod="openstack/cloudkitty-db-sync-knwlk" Jan 28 18:59:56 crc kubenswrapper[4721]: I0128 18:59:56.426649 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2jzjb\" (UniqueName: \"kubernetes.io/projected/026c3758-a794-4177-9412-8af411eeba01-kube-api-access-2jzjb\") pod \"ceilometer-0\" (UID: \"026c3758-a794-4177-9412-8af411eeba01\") " pod="openstack/ceilometer-0" Jan 28 18:59:56 crc kubenswrapper[4721]: I0128 18:59:56.426729 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/026c3758-a794-4177-9412-8af411eeba01-log-httpd\") pod \"ceilometer-0\" (UID: 
\"026c3758-a794-4177-9412-8af411eeba01\") " pod="openstack/ceilometer-0" Jan 28 18:59:56 crc kubenswrapper[4721]: I0128 18:59:56.426797 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/026c3758-a794-4177-9412-8af411eeba01-run-httpd\") pod \"ceilometer-0\" (UID: \"026c3758-a794-4177-9412-8af411eeba01\") " pod="openstack/ceilometer-0" Jan 28 18:59:56 crc kubenswrapper[4721]: I0128 18:59:56.426866 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/026c3758-a794-4177-9412-8af411eeba01-config-data\") pod \"ceilometer-0\" (UID: \"026c3758-a794-4177-9412-8af411eeba01\") " pod="openstack/ceilometer-0" Jan 28 18:59:56 crc kubenswrapper[4721]: I0128 18:59:56.426932 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6dda9049-3b48-4939-93cc-542bf5badc4d-scripts\") pod \"cloudkitty-db-sync-knwlk\" (UID: \"6dda9049-3b48-4939-93cc-542bf5badc4d\") " pod="openstack/cloudkitty-db-sync-knwlk" Jan 28 18:59:56 crc kubenswrapper[4721]: I0128 18:59:56.427084 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/026c3758-a794-4177-9412-8af411eeba01-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"026c3758-a794-4177-9412-8af411eeba01\") " pod="openstack/ceilometer-0" Jan 28 18:59:56 crc kubenswrapper[4721]: I0128 18:59:56.427617 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6dda9049-3b48-4939-93cc-542bf5badc4d-config-data\") pod \"cloudkitty-db-sync-knwlk\" (UID: \"6dda9049-3b48-4939-93cc-542bf5badc4d\") " pod="openstack/cloudkitty-db-sync-knwlk" Jan 28 18:59:56 crc kubenswrapper[4721]: I0128 18:59:56.427718 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/026c3758-a794-4177-9412-8af411eeba01-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"026c3758-a794-4177-9412-8af411eeba01\") " pod="openstack/ceilometer-0" Jan 28 18:59:56 crc kubenswrapper[4721]: I0128 18:59:56.427823 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-42pgm\" (UniqueName: \"kubernetes.io/projected/6dda9049-3b48-4939-93cc-542bf5badc4d-kube-api-access-42pgm\") pod \"cloudkitty-db-sync-knwlk\" (UID: \"6dda9049-3b48-4939-93cc-542bf5badc4d\") " pod="openstack/cloudkitty-db-sync-knwlk" Jan 28 18:59:56 crc kubenswrapper[4721]: I0128 18:59:56.432163 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/projected/6dda9049-3b48-4939-93cc-542bf5badc4d-certs\") pod \"cloudkitty-db-sync-knwlk\" (UID: \"6dda9049-3b48-4939-93cc-542bf5badc4d\") " pod="openstack/cloudkitty-db-sync-knwlk" Jan 28 18:59:56 crc kubenswrapper[4721]: I0128 18:59:56.433269 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6dda9049-3b48-4939-93cc-542bf5badc4d-scripts\") pod \"cloudkitty-db-sync-knwlk\" (UID: \"6dda9049-3b48-4939-93cc-542bf5badc4d\") " pod="openstack/cloudkitty-db-sync-knwlk" Jan 28 18:59:56 crc kubenswrapper[4721]: I0128 18:59:56.433942 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/6dda9049-3b48-4939-93cc-542bf5badc4d-config-data\") pod \"cloudkitty-db-sync-knwlk\" (UID: \"6dda9049-3b48-4939-93cc-542bf5badc4d\") " pod="openstack/cloudkitty-db-sync-knwlk" Jan 28 18:59:56 crc kubenswrapper[4721]: I0128 18:59:56.435491 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6dda9049-3b48-4939-93cc-542bf5badc4d-combined-ca-bundle\") pod \"cloudkitty-db-sync-knwlk\" (UID: \"6dda9049-3b48-4939-93cc-542bf5badc4d\") " pod="openstack/cloudkitty-db-sync-knwlk" Jan 28 18:59:56 crc kubenswrapper[4721]: I0128 18:59:56.444901 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-42pgm\" (UniqueName: \"kubernetes.io/projected/6dda9049-3b48-4939-93cc-542bf5badc4d-kube-api-access-42pgm\") pod \"cloudkitty-db-sync-knwlk\" (UID: \"6dda9049-3b48-4939-93cc-542bf5badc4d\") " pod="openstack/cloudkitty-db-sync-knwlk" Jan 28 18:59:56 crc kubenswrapper[4721]: I0128 18:59:56.530034 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/026c3758-a794-4177-9412-8af411eeba01-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"026c3758-a794-4177-9412-8af411eeba01\") " pod="openstack/ceilometer-0" Jan 28 18:59:56 crc kubenswrapper[4721]: I0128 18:59:56.530083 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/026c3758-a794-4177-9412-8af411eeba01-scripts\") pod \"ceilometer-0\" (UID: \"026c3758-a794-4177-9412-8af411eeba01\") " pod="openstack/ceilometer-0" Jan 28 18:59:56 crc kubenswrapper[4721]: I0128 18:59:56.530196 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2jzjb\" (UniqueName: \"kubernetes.io/projected/026c3758-a794-4177-9412-8af411eeba01-kube-api-access-2jzjb\") pod \"ceilometer-0\" (UID: \"026c3758-a794-4177-9412-8af411eeba01\") " pod="openstack/ceilometer-0" Jan 28 18:59:56 crc kubenswrapper[4721]: I0128 18:59:56.530226 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/026c3758-a794-4177-9412-8af411eeba01-log-httpd\") pod \"ceilometer-0\" (UID: \"026c3758-a794-4177-9412-8af411eeba01\") " pod="openstack/ceilometer-0" Jan 28 18:59:56 crc kubenswrapper[4721]: I0128 18:59:56.530247 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/026c3758-a794-4177-9412-8af411eeba01-run-httpd\") pod \"ceilometer-0\" (UID: \"026c3758-a794-4177-9412-8af411eeba01\") " pod="openstack/ceilometer-0" Jan 28 18:59:56 crc kubenswrapper[4721]: I0128 18:59:56.530269 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/026c3758-a794-4177-9412-8af411eeba01-config-data\") pod \"ceilometer-0\" (UID: \"026c3758-a794-4177-9412-8af411eeba01\") " pod="openstack/ceilometer-0" Jan 28 18:59:56 crc kubenswrapper[4721]: I0128 18:59:56.530342 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/026c3758-a794-4177-9412-8af411eeba01-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"026c3758-a794-4177-9412-8af411eeba01\") " pod="openstack/ceilometer-0" Jan 28 18:59:56 crc kubenswrapper[4721]: I0128 18:59:56.530384 4721 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/026c3758-a794-4177-9412-8af411eeba01-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"026c3758-a794-4177-9412-8af411eeba01\") " pod="openstack/ceilometer-0" Jan 28 18:59:56 crc kubenswrapper[4721]: I0128 18:59:56.530971 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/026c3758-a794-4177-9412-8af411eeba01-log-httpd\") pod \"ceilometer-0\" (UID: \"026c3758-a794-4177-9412-8af411eeba01\") " pod="openstack/ceilometer-0" Jan 28 18:59:56 crc kubenswrapper[4721]: I0128 18:59:56.531134 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/026c3758-a794-4177-9412-8af411eeba01-run-httpd\") pod \"ceilometer-0\" (UID: \"026c3758-a794-4177-9412-8af411eeba01\") " pod="openstack/ceilometer-0" Jan 28 18:59:56 crc kubenswrapper[4721]: I0128 18:59:56.534381 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/026c3758-a794-4177-9412-8af411eeba01-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"026c3758-a794-4177-9412-8af411eeba01\") " pod="openstack/ceilometer-0" Jan 28 18:59:56 crc kubenswrapper[4721]: I0128 18:59:56.534810 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/026c3758-a794-4177-9412-8af411eeba01-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"026c3758-a794-4177-9412-8af411eeba01\") " pod="openstack/ceilometer-0" Jan 28 18:59:56 crc kubenswrapper[4721]: I0128 18:59:56.534812 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/026c3758-a794-4177-9412-8af411eeba01-scripts\") pod \"ceilometer-0\" (UID: \"026c3758-a794-4177-9412-8af411eeba01\") " pod="openstack/ceilometer-0" Jan 28 18:59:56 crc kubenswrapper[4721]: I0128 18:59:56.535429 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/026c3758-a794-4177-9412-8af411eeba01-config-data\") pod \"ceilometer-0\" (UID: \"026c3758-a794-4177-9412-8af411eeba01\") " pod="openstack/ceilometer-0" Jan 28 18:59:56 crc kubenswrapper[4721]: I0128 18:59:56.536117 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/026c3758-a794-4177-9412-8af411eeba01-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"026c3758-a794-4177-9412-8af411eeba01\") " pod="openstack/ceilometer-0" Jan 28 18:59:56 crc kubenswrapper[4721]: I0128 18:59:56.548799 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2jzjb\" (UniqueName: \"kubernetes.io/projected/026c3758-a794-4177-9412-8af411eeba01-kube-api-access-2jzjb\") pod \"ceilometer-0\" (UID: \"026c3758-a794-4177-9412-8af411eeba01\") " pod="openstack/ceilometer-0" Jan 28 18:59:56 crc kubenswrapper[4721]: I0128 18:59:56.630974 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-db-sync-knwlk" Jan 28 18:59:56 crc kubenswrapper[4721]: I0128 18:59:56.666281 4721 util.go:30] "No sandbox for pod can be found. 
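
The mount-side records above mirror the teardown earlier: VerifyControllerAttachedVolume, then "MountVolume started", then "MountVolume.SetUp succeeded" for each volume of the recreated ceilometer-0 and cloudkitty-db-sync-knwlk pods. Underneath is a desired-state vs. actual-state reconciler; the following toy loop shows the shape (an illustration, not reconciler_common.go):

package main

import "fmt"

// reconcile drives the actual set of mounted volumes toward the desired set,
// emitting lines shaped like the kubelet's.
func reconcile(desired, actual map[string]bool) {
	for v := range desired {
		if !actual[v] {
			fmt.Printf("operationExecutor.MountVolume started for volume %q\n", v)
			actual[v] = true // stands in for MountVolume.SetUp succeeding
		}
	}
	for v := range actual {
		if !desired[v] {
			fmt.Printf("operationExecutor.UnmountVolume started for volume %q\n", v)
			delete(actual, v) // stands in for TearDown + "Volume detached"
		}
	}
}

func main() {
	// Volume names taken from the new ceilometer-0 (UID 026c3758-...) above.
	desired := map[string]bool{
		"scripts": true, "config-data": true, "combined-ca-bundle": true,
		"sg-core-conf-yaml": true, "ceilometer-tls-certs": true,
		"run-httpd": true, "log-httpd": true, "kube-api-access-2jzjb": true,
	}
	reconcile(desired, map[string]bool{})
}
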
Need to start a new one" pod="openstack/ceilometer-0" Jan 28 18:59:57 crc kubenswrapper[4721]: I0128 18:59:57.126804 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-db-sync-knwlk"] Jan 28 18:59:57 crc kubenswrapper[4721]: W0128 18:59:57.134326 4721 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6dda9049_3b48_4939_93cc_542bf5badc4d.slice/crio-0126e35d51ede2f0297626f957e023a06f8d80857386a60938b4eb6dfb0bfc38 WatchSource:0}: Error finding container 0126e35d51ede2f0297626f957e023a06f8d80857386a60938b4eb6dfb0bfc38: Status 404 returned error can't find the container with id 0126e35d51ede2f0297626f957e023a06f8d80857386a60938b4eb6dfb0bfc38 Jan 28 18:59:57 crc kubenswrapper[4721]: I0128 18:59:57.197602 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-db-sync-knwlk" event={"ID":"6dda9049-3b48-4939-93cc-542bf5badc4d","Type":"ContainerStarted","Data":"0126e35d51ede2f0297626f957e023a06f8d80857386a60938b4eb6dfb0bfc38"} Jan 28 18:59:57 crc kubenswrapper[4721]: I0128 18:59:57.292918 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 28 18:59:57 crc kubenswrapper[4721]: I0128 18:59:57.542435 4721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6d4d13db-d2ce-4194-841a-c50b85a2887c" path="/var/lib/kubelet/pods/6d4d13db-d2ce-4194-841a-c50b85a2887c/volumes" Jan 28 18:59:57 crc kubenswrapper[4721]: I0128 18:59:57.543075 4721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7a85c957-db30-4931-adbd-be40eec18aa0" path="/var/lib/kubelet/pods/7a85c957-db30-4931-adbd-be40eec18aa0/volumes" Jan 28 18:59:58 crc kubenswrapper[4721]: I0128 18:59:58.240368 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"026c3758-a794-4177-9412-8af411eeba01","Type":"ContainerStarted","Data":"9ce6923df7b62816846cfdfc549cc961ba572b05b70d631f497490bb20b1b9a7"} Jan 28 18:59:58 crc kubenswrapper[4721]: I0128 18:59:58.245074 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 28 18:59:59 crc kubenswrapper[4721]: I0128 18:59:59.258962 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 28 18:59:59 crc kubenswrapper[4721]: I0128 18:59:59.273592 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"026c3758-a794-4177-9412-8af411eeba01","Type":"ContainerStarted","Data":"edd90f25b0bb54ea10a0f978a515d4838a23b7f05a5bb3ee32e4d06a5d87ccc4"} Jan 28 18:59:59 crc kubenswrapper[4721]: I0128 18:59:59.589316 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-j86j7"] Jan 28 18:59:59 crc kubenswrapper[4721]: I0128 18:59:59.591571 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-j86j7" Jan 28 18:59:59 crc kubenswrapper[4721]: I0128 18:59:59.603979 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-j86j7"] Jan 28 18:59:59 crc kubenswrapper[4721]: I0128 18:59:59.617268 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/71e5f7b3-cd41-40d9-ab6c-e90cff64e601-catalog-content\") pod \"certified-operators-j86j7\" (UID: \"71e5f7b3-cd41-40d9-ab6c-e90cff64e601\") " pod="openshift-marketplace/certified-operators-j86j7" Jan 28 18:59:59 crc kubenswrapper[4721]: I0128 18:59:59.617411 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qxml4\" (UniqueName: \"kubernetes.io/projected/71e5f7b3-cd41-40d9-ab6c-e90cff64e601-kube-api-access-qxml4\") pod \"certified-operators-j86j7\" (UID: \"71e5f7b3-cd41-40d9-ab6c-e90cff64e601\") " pod="openshift-marketplace/certified-operators-j86j7" Jan 28 18:59:59 crc kubenswrapper[4721]: I0128 18:59:59.617468 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/71e5f7b3-cd41-40d9-ab6c-e90cff64e601-utilities\") pod \"certified-operators-j86j7\" (UID: \"71e5f7b3-cd41-40d9-ab6c-e90cff64e601\") " pod="openshift-marketplace/certified-operators-j86j7" Jan 28 18:59:59 crc kubenswrapper[4721]: I0128 18:59:59.719153 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/71e5f7b3-cd41-40d9-ab6c-e90cff64e601-utilities\") pod \"certified-operators-j86j7\" (UID: \"71e5f7b3-cd41-40d9-ab6c-e90cff64e601\") " pod="openshift-marketplace/certified-operators-j86j7" Jan 28 18:59:59 crc kubenswrapper[4721]: I0128 18:59:59.719316 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/71e5f7b3-cd41-40d9-ab6c-e90cff64e601-catalog-content\") pod \"certified-operators-j86j7\" (UID: \"71e5f7b3-cd41-40d9-ab6c-e90cff64e601\") " pod="openshift-marketplace/certified-operators-j86j7" Jan 28 18:59:59 crc kubenswrapper[4721]: I0128 18:59:59.719386 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qxml4\" (UniqueName: \"kubernetes.io/projected/71e5f7b3-cd41-40d9-ab6c-e90cff64e601-kube-api-access-qxml4\") pod \"certified-operators-j86j7\" (UID: \"71e5f7b3-cd41-40d9-ab6c-e90cff64e601\") " pod="openshift-marketplace/certified-operators-j86j7" Jan 28 18:59:59 crc kubenswrapper[4721]: I0128 18:59:59.719798 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/71e5f7b3-cd41-40d9-ab6c-e90cff64e601-catalog-content\") pod \"certified-operators-j86j7\" (UID: \"71e5f7b3-cd41-40d9-ab6c-e90cff64e601\") " pod="openshift-marketplace/certified-operators-j86j7" Jan 28 18:59:59 crc kubenswrapper[4721]: I0128 18:59:59.720051 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/71e5f7b3-cd41-40d9-ab6c-e90cff64e601-utilities\") pod \"certified-operators-j86j7\" (UID: \"71e5f7b3-cd41-40d9-ab6c-e90cff64e601\") " pod="openshift-marketplace/certified-operators-j86j7" Jan 28 18:59:59 crc kubenswrapper[4721]: I0128 18:59:59.741357 4721 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-qxml4\" (UniqueName: \"kubernetes.io/projected/71e5f7b3-cd41-40d9-ab6c-e90cff64e601-kube-api-access-qxml4\") pod \"certified-operators-j86j7\" (UID: \"71e5f7b3-cd41-40d9-ab6c-e90cff64e601\") " pod="openshift-marketplace/certified-operators-j86j7" Jan 28 18:59:59 crc kubenswrapper[4721]: I0128 18:59:59.763320 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Jan 28 18:59:59 crc kubenswrapper[4721]: I0128 18:59:59.916850 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-j86j7" Jan 28 19:00:00 crc kubenswrapper[4721]: I0128 19:00:00.178882 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29493780-8zrf7"] Jan 28 19:00:00 crc kubenswrapper[4721]: I0128 19:00:00.181099 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29493780-8zrf7" Jan 28 19:00:00 crc kubenswrapper[4721]: I0128 19:00:00.188732 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 28 19:00:00 crc kubenswrapper[4721]: I0128 19:00:00.189246 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 28 19:00:00 crc kubenswrapper[4721]: I0128 19:00:00.197797 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29493780-8zrf7"] Jan 28 19:00:00 crc kubenswrapper[4721]: I0128 19:00:00.243740 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/70dadca2-0f02-42fd-be5f-0af5dec85996-config-volume\") pod \"collect-profiles-29493780-8zrf7\" (UID: \"70dadca2-0f02-42fd-be5f-0af5dec85996\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493780-8zrf7" Jan 28 19:00:00 crc kubenswrapper[4721]: I0128 19:00:00.243874 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/70dadca2-0f02-42fd-be5f-0af5dec85996-secret-volume\") pod \"collect-profiles-29493780-8zrf7\" (UID: \"70dadca2-0f02-42fd-be5f-0af5dec85996\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493780-8zrf7" Jan 28 19:00:00 crc kubenswrapper[4721]: I0128 19:00:00.243930 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lp42r\" (UniqueName: \"kubernetes.io/projected/70dadca2-0f02-42fd-be5f-0af5dec85996-kube-api-access-lp42r\") pod \"collect-profiles-29493780-8zrf7\" (UID: \"70dadca2-0f02-42fd-be5f-0af5dec85996\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493780-8zrf7" Jan 28 19:00:00 crc kubenswrapper[4721]: I0128 19:00:00.346141 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lp42r\" (UniqueName: \"kubernetes.io/projected/70dadca2-0f02-42fd-be5f-0af5dec85996-kube-api-access-lp42r\") pod \"collect-profiles-29493780-8zrf7\" (UID: \"70dadca2-0f02-42fd-be5f-0af5dec85996\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493780-8zrf7" Jan 28 19:00:00 crc kubenswrapper[4721]: I0128 19:00:00.346309 4721 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/70dadca2-0f02-42fd-be5f-0af5dec85996-config-volume\") pod \"collect-profiles-29493780-8zrf7\" (UID: \"70dadca2-0f02-42fd-be5f-0af5dec85996\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493780-8zrf7" Jan 28 19:00:00 crc kubenswrapper[4721]: I0128 19:00:00.346468 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/70dadca2-0f02-42fd-be5f-0af5dec85996-secret-volume\") pod \"collect-profiles-29493780-8zrf7\" (UID: \"70dadca2-0f02-42fd-be5f-0af5dec85996\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493780-8zrf7" Jan 28 19:00:00 crc kubenswrapper[4721]: I0128 19:00:00.347468 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/70dadca2-0f02-42fd-be5f-0af5dec85996-config-volume\") pod \"collect-profiles-29493780-8zrf7\" (UID: \"70dadca2-0f02-42fd-be5f-0af5dec85996\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493780-8zrf7" Jan 28 19:00:00 crc kubenswrapper[4721]: I0128 19:00:00.356835 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/70dadca2-0f02-42fd-be5f-0af5dec85996-secret-volume\") pod \"collect-profiles-29493780-8zrf7\" (UID: \"70dadca2-0f02-42fd-be5f-0af5dec85996\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493780-8zrf7" Jan 28 19:00:00 crc kubenswrapper[4721]: I0128 19:00:00.381944 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lp42r\" (UniqueName: \"kubernetes.io/projected/70dadca2-0f02-42fd-be5f-0af5dec85996-kube-api-access-lp42r\") pod \"collect-profiles-29493780-8zrf7\" (UID: \"70dadca2-0f02-42fd-be5f-0af5dec85996\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493780-8zrf7" Jan 28 19:00:00 crc kubenswrapper[4721]: I0128 19:00:00.543455 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29493780-8zrf7" Jan 28 19:00:00 crc kubenswrapper[4721]: I0128 19:00:00.885850 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-j86j7"] Jan 28 19:00:00 crc kubenswrapper[4721]: W0128 19:00:00.902980 4721 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod71e5f7b3_cd41_40d9_ab6c_e90cff64e601.slice/crio-2c0f16b852949ea8ad5532d585821f02743e8df05db99c04fe94b6e6091339ae WatchSource:0}: Error finding container 2c0f16b852949ea8ad5532d585821f02743e8df05db99c04fe94b6e6091339ae: Status 404 returned error can't find the container with id 2c0f16b852949ea8ad5532d585821f02743e8df05db99c04fe94b6e6091339ae Jan 28 19:00:01 crc kubenswrapper[4721]: I0128 19:00:01.329586 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-j86j7" event={"ID":"71e5f7b3-cd41-40d9-ab6c-e90cff64e601","Type":"ContainerStarted","Data":"5421fc3563c619f39c83425982441fd28a7037746b9b96e0bff43b3fcc9edd07"} Jan 28 19:00:01 crc kubenswrapper[4721]: I0128 19:00:01.329995 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-j86j7" event={"ID":"71e5f7b3-cd41-40d9-ab6c-e90cff64e601","Type":"ContainerStarted","Data":"2c0f16b852949ea8ad5532d585821f02743e8df05db99c04fe94b6e6091339ae"} Jan 28 19:00:01 crc kubenswrapper[4721]: I0128 19:00:01.334577 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"026c3758-a794-4177-9412-8af411eeba01","Type":"ContainerStarted","Data":"081493259a167422ec90175454310eea963dca1a797b8a67f2ab4e25e5a5ae9f"} Jan 28 19:00:01 crc kubenswrapper[4721]: W0128 19:00:01.451376 4721 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod70dadca2_0f02_42fd_be5f_0af5dec85996.slice/crio-1448173c73e7fd69da10142fc073a113db46334aca00156c7ec1482cb157a3bf WatchSource:0}: Error finding container 1448173c73e7fd69da10142fc073a113db46334aca00156c7ec1482cb157a3bf: Status 404 returned error can't find the container with id 1448173c73e7fd69da10142fc073a113db46334aca00156c7ec1482cb157a3bf Jan 28 19:00:01 crc kubenswrapper[4721]: I0128 19:00:01.470538 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29493780-8zrf7"] Jan 28 19:00:02 crc kubenswrapper[4721]: I0128 19:00:02.359076 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29493780-8zrf7" event={"ID":"70dadca2-0f02-42fd-be5f-0af5dec85996","Type":"ContainerStarted","Data":"2d549265a5e25e919a442b2597285571a2872abce9e354c926d45f6f8864973d"} Jan 28 19:00:02 crc kubenswrapper[4721]: I0128 19:00:02.359662 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29493780-8zrf7" event={"ID":"70dadca2-0f02-42fd-be5f-0af5dec85996","Type":"ContainerStarted","Data":"1448173c73e7fd69da10142fc073a113db46334aca00156c7ec1482cb157a3bf"} Jan 28 19:00:02 crc kubenswrapper[4721]: I0128 19:00:02.368411 4721 generic.go:334] "Generic (PLEG): container finished" podID="71e5f7b3-cd41-40d9-ab6c-e90cff64e601" containerID="5421fc3563c619f39c83425982441fd28a7037746b9b96e0bff43b3fcc9edd07" exitCode=0 Jan 28 19:00:02 crc kubenswrapper[4721]: I0128 19:00:02.368489 4721 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/certified-operators-j86j7" event={"ID":"71e5f7b3-cd41-40d9-ab6c-e90cff64e601","Type":"ContainerDied","Data":"5421fc3563c619f39c83425982441fd28a7037746b9b96e0bff43b3fcc9edd07"} Jan 28 19:00:02 crc kubenswrapper[4721]: I0128 19:00:02.381387 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"026c3758-a794-4177-9412-8af411eeba01","Type":"ContainerStarted","Data":"3fa65a97224ff0da376aec4fd1c26f6f0cb401fc05e7ba5ccf92d907c8bf5d18"} Jan 28 19:00:02 crc kubenswrapper[4721]: I0128 19:00:02.401183 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29493780-8zrf7" podStartSLOduration=2.40113996 podStartE2EDuration="2.40113996s" podCreationTimestamp="2026-01-28 19:00:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 19:00:02.389466339 +0000 UTC m=+1568.114771899" watchObservedRunningTime="2026-01-28 19:00:02.40113996 +0000 UTC m=+1568.126445520" Jan 28 19:00:03 crc kubenswrapper[4721]: I0128 19:00:03.044626 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 28 19:00:03 crc kubenswrapper[4721]: I0128 19:00:03.395498 4721 generic.go:334] "Generic (PLEG): container finished" podID="70dadca2-0f02-42fd-be5f-0af5dec85996" containerID="2d549265a5e25e919a442b2597285571a2872abce9e354c926d45f6f8864973d" exitCode=0 Jan 28 19:00:03 crc kubenswrapper[4721]: I0128 19:00:03.395859 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29493780-8zrf7" event={"ID":"70dadca2-0f02-42fd-be5f-0af5dec85996","Type":"ContainerDied","Data":"2d549265a5e25e919a442b2597285571a2872abce9e354c926d45f6f8864973d"} Jan 28 19:00:04 crc kubenswrapper[4721]: I0128 19:00:04.622892 4721 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-0" podUID="ec1e1de9-b144-4c34-bb14-4c0382670f45" containerName="rabbitmq" containerID="cri-o://00a91f1b683af04ea057b66dac35f2c915e65c757dfb533f455f370c35f0e79a" gracePeriod=604794 Jan 28 19:00:04 crc kubenswrapper[4721]: I0128 19:00:04.656064 4721 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-cell1-server-0" podUID="dc56a986-671d-4f17-8386-939d0fd9394a" containerName="rabbitmq" containerID="cri-o://8a6691f251788d9eb7d62515df6e29d4135993aa59e6dedb624092f45bfc4fed" gracePeriod=604795 Jan 28 19:00:05 crc kubenswrapper[4721]: I0128 19:00:05.901188 4721 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29493780-8zrf7" Jan 28 19:00:06 crc kubenswrapper[4721]: I0128 19:00:06.060771 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/70dadca2-0f02-42fd-be5f-0af5dec85996-secret-volume\") pod \"70dadca2-0f02-42fd-be5f-0af5dec85996\" (UID: \"70dadca2-0f02-42fd-be5f-0af5dec85996\") " Jan 28 19:00:06 crc kubenswrapper[4721]: I0128 19:00:06.061084 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/70dadca2-0f02-42fd-be5f-0af5dec85996-config-volume\") pod \"70dadca2-0f02-42fd-be5f-0af5dec85996\" (UID: \"70dadca2-0f02-42fd-be5f-0af5dec85996\") " Jan 28 19:00:06 crc kubenswrapper[4721]: I0128 19:00:06.061243 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lp42r\" (UniqueName: \"kubernetes.io/projected/70dadca2-0f02-42fd-be5f-0af5dec85996-kube-api-access-lp42r\") pod \"70dadca2-0f02-42fd-be5f-0af5dec85996\" (UID: \"70dadca2-0f02-42fd-be5f-0af5dec85996\") " Jan 28 19:00:06 crc kubenswrapper[4721]: I0128 19:00:06.061850 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/70dadca2-0f02-42fd-be5f-0af5dec85996-config-volume" (OuterVolumeSpecName: "config-volume") pod "70dadca2-0f02-42fd-be5f-0af5dec85996" (UID: "70dadca2-0f02-42fd-be5f-0af5dec85996"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 19:00:06 crc kubenswrapper[4721]: I0128 19:00:06.062525 4721 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/70dadca2-0f02-42fd-be5f-0af5dec85996-config-volume\") on node \"crc\" DevicePath \"\"" Jan 28 19:00:06 crc kubenswrapper[4721]: I0128 19:00:06.068856 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/70dadca2-0f02-42fd-be5f-0af5dec85996-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "70dadca2-0f02-42fd-be5f-0af5dec85996" (UID: "70dadca2-0f02-42fd-be5f-0af5dec85996"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 19:00:06 crc kubenswrapper[4721]: I0128 19:00:06.069057 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/70dadca2-0f02-42fd-be5f-0af5dec85996-kube-api-access-lp42r" (OuterVolumeSpecName: "kube-api-access-lp42r") pod "70dadca2-0f02-42fd-be5f-0af5dec85996" (UID: "70dadca2-0f02-42fd-be5f-0af5dec85996"). InnerVolumeSpecName "kube-api-access-lp42r". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 19:00:06 crc kubenswrapper[4721]: I0128 19:00:06.164611 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lp42r\" (UniqueName: \"kubernetes.io/projected/70dadca2-0f02-42fd-be5f-0af5dec85996-kube-api-access-lp42r\") on node \"crc\" DevicePath \"\"" Jan 28 19:00:06 crc kubenswrapper[4721]: I0128 19:00:06.164663 4721 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/70dadca2-0f02-42fd-be5f-0af5dec85996-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 28 19:00:06 crc kubenswrapper[4721]: I0128 19:00:06.433476 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29493780-8zrf7" event={"ID":"70dadca2-0f02-42fd-be5f-0af5dec85996","Type":"ContainerDied","Data":"1448173c73e7fd69da10142fc073a113db46334aca00156c7ec1482cb157a3bf"} Jan 28 19:00:06 crc kubenswrapper[4721]: I0128 19:00:06.433823 4721 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1448173c73e7fd69da10142fc073a113db46334aca00156c7ec1482cb157a3bf" Jan 28 19:00:06 crc kubenswrapper[4721]: I0128 19:00:06.433550 4721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29493780-8zrf7" Jan 28 19:00:09 crc kubenswrapper[4721]: I0128 19:00:09.533368 4721 scope.go:117] "RemoveContainer" containerID="2590a7cb210dbff0e84ad585e4d82733cd80880f3d1297edf670e3d6faf23070" Jan 28 19:00:09 crc kubenswrapper[4721]: E0128 19:00:09.535353 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-76rx2_openshift-machine-config-operator(6e3427a4-9a03-4a08-bf7f-7a5e96290ad6)\"" pod="openshift-machine-config-operator/machine-config-daemon-76rx2" podUID="6e3427a4-9a03-4a08-bf7f-7a5e96290ad6" Jan 28 19:00:10 crc kubenswrapper[4721]: I0128 19:00:10.492916 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"026c3758-a794-4177-9412-8af411eeba01","Type":"ContainerStarted","Data":"a0fceffb251cff73d060baddd2bd37631b84fa83d14b79e6a8821fedca99bde1"} Jan 28 19:00:10 crc kubenswrapper[4721]: I0128 19:00:10.493140 4721 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="026c3758-a794-4177-9412-8af411eeba01" containerName="ceilometer-central-agent" containerID="cri-o://edd90f25b0bb54ea10a0f978a515d4838a23b7f05a5bb3ee32e4d06a5d87ccc4" gracePeriod=30 Jan 28 19:00:10 crc kubenswrapper[4721]: I0128 19:00:10.493590 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 28 19:00:10 crc kubenswrapper[4721]: I0128 19:00:10.493235 4721 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="026c3758-a794-4177-9412-8af411eeba01" containerName="proxy-httpd" containerID="cri-o://a0fceffb251cff73d060baddd2bd37631b84fa83d14b79e6a8821fedca99bde1" gracePeriod=30 Jan 28 19:00:10 crc kubenswrapper[4721]: I0128 19:00:10.493193 4721 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="026c3758-a794-4177-9412-8af411eeba01" containerName="sg-core" containerID="cri-o://3fa65a97224ff0da376aec4fd1c26f6f0cb401fc05e7ba5ccf92d907c8bf5d18" gracePeriod=30 Jan 
28 19:00:10 crc kubenswrapper[4721]: I0128 19:00:10.493207 4721 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="026c3758-a794-4177-9412-8af411eeba01" containerName="ceilometer-notification-agent" containerID="cri-o://081493259a167422ec90175454310eea963dca1a797b8a67f2ab4e25e5a5ae9f" gracePeriod=30 Jan 28 19:00:10 crc kubenswrapper[4721]: I0128 19:00:10.499131 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-db-sync-knwlk" event={"ID":"6dda9049-3b48-4939-93cc-542bf5badc4d","Type":"ContainerStarted","Data":"29c2b733a8c5cae8d48116aa58b128fe3cd775423db7b04b86a93edeb156faa6"} Jan 28 19:00:10 crc kubenswrapper[4721]: I0128 19:00:10.506379 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-j86j7" event={"ID":"71e5f7b3-cd41-40d9-ab6c-e90cff64e601","Type":"ContainerStarted","Data":"5b4b11168436d060da26b67c42623f3e80b73abdfdca1743e2f279ef9d6f2233"} Jan 28 19:00:10 crc kubenswrapper[4721]: I0128 19:00:10.547864 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.6625068990000003 podStartE2EDuration="14.547829594s" podCreationTimestamp="2026-01-28 18:59:56 +0000 UTC" firstStartedPulling="2026-01-28 18:59:57.306121571 +0000 UTC m=+1563.031427131" lastFinishedPulling="2026-01-28 19:00:09.191444266 +0000 UTC m=+1574.916749826" observedRunningTime="2026-01-28 19:00:10.528091936 +0000 UTC m=+1576.253397506" watchObservedRunningTime="2026-01-28 19:00:10.547829594 +0000 UTC m=+1576.273135154" Jan 28 19:00:10 crc kubenswrapper[4721]: I0128 19:00:10.560383 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cloudkitty-db-sync-knwlk" podStartSLOduration=2.493230856 podStartE2EDuration="14.560357193s" podCreationTimestamp="2026-01-28 18:59:56 +0000 UTC" firstStartedPulling="2026-01-28 18:59:57.137031423 +0000 UTC m=+1562.862336983" lastFinishedPulling="2026-01-28 19:00:09.20415776 +0000 UTC m=+1574.929463320" observedRunningTime="2026-01-28 19:00:10.549138806 +0000 UTC m=+1576.274444366" watchObservedRunningTime="2026-01-28 19:00:10.560357193 +0000 UTC m=+1576.285662773" Jan 28 19:00:10 crc kubenswrapper[4721]: I0128 19:00:10.583008 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-k6s42"] Jan 28 19:00:10 crc kubenswrapper[4721]: E0128 19:00:10.583742 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="70dadca2-0f02-42fd-be5f-0af5dec85996" containerName="collect-profiles" Jan 28 19:00:10 crc kubenswrapper[4721]: I0128 19:00:10.583767 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="70dadca2-0f02-42fd-be5f-0af5dec85996" containerName="collect-profiles" Jan 28 19:00:10 crc kubenswrapper[4721]: I0128 19:00:10.584017 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="70dadca2-0f02-42fd-be5f-0af5dec85996" containerName="collect-profiles" Jan 28 19:00:10 crc kubenswrapper[4721]: I0128 19:00:10.585928 4721 util.go:30] "No sandbox for pod can be found. 
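
The two "Observed pod startup duration" entries above decompose as podStartE2EDuration = observedRunningTime − podCreationTimestamp, and podStartSLOduration = E2E minus the image-pull window (lastFinishedPulling − firstStartedPulling). Recomputing ceilometer-0's numbers from the logged timestamps as a worked check (Go's Duration is integer nanoseconds, so the results are exact):

package main

import (
	"fmt"
	"time"
)

func main() {
	parse := func(s string) time.Time {
		// time.Parse accepts fractional seconds in the input even though the
		// layout omits them.
		t, err := time.Parse("2006-01-02 15:04:05 -0700 MST", s)
		if err != nil {
			panic(err)
		}
		return t
	}
	created := parse("2026-01-28 18:59:56 +0000 UTC") // podCreationTimestamp
	firstPull := parse("2026-01-28 18:59:57.306121571 +0000 UTC")
	lastPull := parse("2026-01-28 19:00:09.191444266 +0000 UTC")
	observed := parse("2026-01-28 19:00:10.547829594 +0000 UTC")

	e2e := observed.Sub(created)         // 14.547829594s = podStartE2EDuration
	slo := e2e - lastPull.Sub(firstPull) // 2.662506899s = podStartSLOduration
	fmt.Println(e2e, slo)
}

The earlier collect-profiles-29493780-8zrf7 entry shows the degenerate case: its pull timestamps are the zero value ("0001-01-01"), so no pull window is subtracted and podStartSLOduration equals podStartE2EDuration.
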
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-k6s42" Jan 28 19:00:10 crc kubenswrapper[4721]: I0128 19:00:10.620605 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-k6s42"] Jan 28 19:00:10 crc kubenswrapper[4721]: I0128 19:00:10.693798 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-trmbp\" (UniqueName: \"kubernetes.io/projected/bccc5c45-eb45-452f-8f40-9e83893bf636-kube-api-access-trmbp\") pod \"redhat-marketplace-k6s42\" (UID: \"bccc5c45-eb45-452f-8f40-9e83893bf636\") " pod="openshift-marketplace/redhat-marketplace-k6s42" Jan 28 19:00:10 crc kubenswrapper[4721]: I0128 19:00:10.694953 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bccc5c45-eb45-452f-8f40-9e83893bf636-utilities\") pod \"redhat-marketplace-k6s42\" (UID: \"bccc5c45-eb45-452f-8f40-9e83893bf636\") " pod="openshift-marketplace/redhat-marketplace-k6s42" Jan 28 19:00:10 crc kubenswrapper[4721]: I0128 19:00:10.695777 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bccc5c45-eb45-452f-8f40-9e83893bf636-catalog-content\") pod \"redhat-marketplace-k6s42\" (UID: \"bccc5c45-eb45-452f-8f40-9e83893bf636\") " pod="openshift-marketplace/redhat-marketplace-k6s42" Jan 28 19:00:10 crc kubenswrapper[4721]: I0128 19:00:10.799280 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bccc5c45-eb45-452f-8f40-9e83893bf636-utilities\") pod \"redhat-marketplace-k6s42\" (UID: \"bccc5c45-eb45-452f-8f40-9e83893bf636\") " pod="openshift-marketplace/redhat-marketplace-k6s42" Jan 28 19:00:10 crc kubenswrapper[4721]: I0128 19:00:10.799440 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bccc5c45-eb45-452f-8f40-9e83893bf636-catalog-content\") pod \"redhat-marketplace-k6s42\" (UID: \"bccc5c45-eb45-452f-8f40-9e83893bf636\") " pod="openshift-marketplace/redhat-marketplace-k6s42" Jan 28 19:00:10 crc kubenswrapper[4721]: I0128 19:00:10.799537 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-trmbp\" (UniqueName: \"kubernetes.io/projected/bccc5c45-eb45-452f-8f40-9e83893bf636-kube-api-access-trmbp\") pod \"redhat-marketplace-k6s42\" (UID: \"bccc5c45-eb45-452f-8f40-9e83893bf636\") " pod="openshift-marketplace/redhat-marketplace-k6s42" Jan 28 19:00:10 crc kubenswrapper[4721]: I0128 19:00:10.799843 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bccc5c45-eb45-452f-8f40-9e83893bf636-utilities\") pod \"redhat-marketplace-k6s42\" (UID: \"bccc5c45-eb45-452f-8f40-9e83893bf636\") " pod="openshift-marketplace/redhat-marketplace-k6s42" Jan 28 19:00:10 crc kubenswrapper[4721]: I0128 19:00:10.799963 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bccc5c45-eb45-452f-8f40-9e83893bf636-catalog-content\") pod \"redhat-marketplace-k6s42\" (UID: \"bccc5c45-eb45-452f-8f40-9e83893bf636\") " pod="openshift-marketplace/redhat-marketplace-k6s42" Jan 28 19:00:10 crc kubenswrapper[4721]: I0128 19:00:10.828108 4721 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-trmbp\" (UniqueName: \"kubernetes.io/projected/bccc5c45-eb45-452f-8f40-9e83893bf636-kube-api-access-trmbp\") pod \"redhat-marketplace-k6s42\" (UID: \"bccc5c45-eb45-452f-8f40-9e83893bf636\") " pod="openshift-marketplace/redhat-marketplace-k6s42" Jan 28 19:00:10 crc kubenswrapper[4721]: I0128 19:00:10.986925 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-k6s42" Jan 28 19:00:11 crc kubenswrapper[4721]: I0128 19:00:11.435956 4721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 28 19:00:11 crc kubenswrapper[4721]: I0128 19:00:11.517964 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/dc56a986-671d-4f17-8386-939d0fd9394a-rabbitmq-tls\") pod \"dc56a986-671d-4f17-8386-939d0fd9394a\" (UID: \"dc56a986-671d-4f17-8386-939d0fd9394a\") " Jan 28 19:00:11 crc kubenswrapper[4721]: I0128 19:00:11.518018 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/dc56a986-671d-4f17-8386-939d0fd9394a-pod-info\") pod \"dc56a986-671d-4f17-8386-939d0fd9394a\" (UID: \"dc56a986-671d-4f17-8386-939d0fd9394a\") " Jan 28 19:00:11 crc kubenswrapper[4721]: I0128 19:00:11.518194 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/dc56a986-671d-4f17-8386-939d0fd9394a-rabbitmq-erlang-cookie\") pod \"dc56a986-671d-4f17-8386-939d0fd9394a\" (UID: \"dc56a986-671d-4f17-8386-939d0fd9394a\") " Jan 28 19:00:11 crc kubenswrapper[4721]: I0128 19:00:11.518217 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/dc56a986-671d-4f17-8386-939d0fd9394a-server-conf\") pod \"dc56a986-671d-4f17-8386-939d0fd9394a\" (UID: \"dc56a986-671d-4f17-8386-939d0fd9394a\") " Jan 28 19:00:11 crc kubenswrapper[4721]: I0128 19:00:11.518241 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/dc56a986-671d-4f17-8386-939d0fd9394a-plugins-conf\") pod \"dc56a986-671d-4f17-8386-939d0fd9394a\" (UID: \"dc56a986-671d-4f17-8386-939d0fd9394a\") " Jan 28 19:00:11 crc kubenswrapper[4721]: I0128 19:00:11.518316 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/dc56a986-671d-4f17-8386-939d0fd9394a-erlang-cookie-secret\") pod \"dc56a986-671d-4f17-8386-939d0fd9394a\" (UID: \"dc56a986-671d-4f17-8386-939d0fd9394a\") " Jan 28 19:00:11 crc kubenswrapper[4721]: I0128 19:00:11.526766 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dc56a986-671d-4f17-8386-939d0fd9394a-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "dc56a986-671d-4f17-8386-939d0fd9394a" (UID: "dc56a986-671d-4f17-8386-939d0fd9394a"). InnerVolumeSpecName "plugins-conf". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 19:00:11 crc kubenswrapper[4721]: I0128 19:00:11.544908 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-29174ae9-29d2-42fb-b1ad-7ccc07b143ff\") pod \"dc56a986-671d-4f17-8386-939d0fd9394a\" (UID: \"dc56a986-671d-4f17-8386-939d0fd9394a\") " Jan 28 19:00:11 crc kubenswrapper[4721]: I0128 19:00:11.545097 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/dc56a986-671d-4f17-8386-939d0fd9394a-rabbitmq-confd\") pod \"dc56a986-671d-4f17-8386-939d0fd9394a\" (UID: \"dc56a986-671d-4f17-8386-939d0fd9394a\") " Jan 28 19:00:11 crc kubenswrapper[4721]: I0128 19:00:11.545262 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/dc56a986-671d-4f17-8386-939d0fd9394a-config-data\") pod \"dc56a986-671d-4f17-8386-939d0fd9394a\" (UID: \"dc56a986-671d-4f17-8386-939d0fd9394a\") " Jan 28 19:00:11 crc kubenswrapper[4721]: I0128 19:00:11.545286 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vsv5k\" (UniqueName: \"kubernetes.io/projected/dc56a986-671d-4f17-8386-939d0fd9394a-kube-api-access-vsv5k\") pod \"dc56a986-671d-4f17-8386-939d0fd9394a\" (UID: \"dc56a986-671d-4f17-8386-939d0fd9394a\") " Jan 28 19:00:11 crc kubenswrapper[4721]: I0128 19:00:11.545348 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/dc56a986-671d-4f17-8386-939d0fd9394a-rabbitmq-plugins\") pod \"dc56a986-671d-4f17-8386-939d0fd9394a\" (UID: \"dc56a986-671d-4f17-8386-939d0fd9394a\") " Jan 28 19:00:11 crc kubenswrapper[4721]: I0128 19:00:11.547999 4721 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/dc56a986-671d-4f17-8386-939d0fd9394a-plugins-conf\") on node \"crc\" DevicePath \"\"" Jan 28 19:00:11 crc kubenswrapper[4721]: I0128 19:00:11.553877 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dc56a986-671d-4f17-8386-939d0fd9394a-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "dc56a986-671d-4f17-8386-939d0fd9394a" (UID: "dc56a986-671d-4f17-8386-939d0fd9394a"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 19:00:11 crc kubenswrapper[4721]: I0128 19:00:11.557616 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dc56a986-671d-4f17-8386-939d0fd9394a-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "dc56a986-671d-4f17-8386-939d0fd9394a" (UID: "dc56a986-671d-4f17-8386-939d0fd9394a"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 19:00:11 crc kubenswrapper[4721]: I0128 19:00:11.560679 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dc56a986-671d-4f17-8386-939d0fd9394a-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "dc56a986-671d-4f17-8386-939d0fd9394a" (UID: "dc56a986-671d-4f17-8386-939d0fd9394a"). InnerVolumeSpecName "erlang-cookie-secret". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 19:00:11 crc kubenswrapper[4721]: I0128 19:00:11.566747 4721 generic.go:334] "Generic (PLEG): container finished" podID="026c3758-a794-4177-9412-8af411eeba01" containerID="a0fceffb251cff73d060baddd2bd37631b84fa83d14b79e6a8821fedca99bde1" exitCode=0 Jan 28 19:00:11 crc kubenswrapper[4721]: I0128 19:00:11.566791 4721 generic.go:334] "Generic (PLEG): container finished" podID="026c3758-a794-4177-9412-8af411eeba01" containerID="3fa65a97224ff0da376aec4fd1c26f6f0cb401fc05e7ba5ccf92d907c8bf5d18" exitCode=2 Jan 28 19:00:11 crc kubenswrapper[4721]: I0128 19:00:11.566799 4721 generic.go:334] "Generic (PLEG): container finished" podID="026c3758-a794-4177-9412-8af411eeba01" containerID="081493259a167422ec90175454310eea963dca1a797b8a67f2ab4e25e5a5ae9f" exitCode=0 Jan 28 19:00:11 crc kubenswrapper[4721]: I0128 19:00:11.567645 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/dc56a986-671d-4f17-8386-939d0fd9394a-pod-info" (OuterVolumeSpecName: "pod-info") pod "dc56a986-671d-4f17-8386-939d0fd9394a" (UID: "dc56a986-671d-4f17-8386-939d0fd9394a"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Jan 28 19:00:11 crc kubenswrapper[4721]: I0128 19:00:11.585930 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dc56a986-671d-4f17-8386-939d0fd9394a-kube-api-access-vsv5k" (OuterVolumeSpecName: "kube-api-access-vsv5k") pod "dc56a986-671d-4f17-8386-939d0fd9394a" (UID: "dc56a986-671d-4f17-8386-939d0fd9394a"). InnerVolumeSpecName "kube-api-access-vsv5k". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 19:00:11 crc kubenswrapper[4721]: I0128 19:00:11.587760 4721 generic.go:334] "Generic (PLEG): container finished" podID="ec1e1de9-b144-4c34-bb14-4c0382670f45" containerID="00a91f1b683af04ea057b66dac35f2c915e65c757dfb533f455f370c35f0e79a" exitCode=0 Jan 28 19:00:11 crc kubenswrapper[4721]: I0128 19:00:11.600260 4721 generic.go:334] "Generic (PLEG): container finished" podID="dc56a986-671d-4f17-8386-939d0fd9394a" containerID="8a6691f251788d9eb7d62515df6e29d4135993aa59e6dedb624092f45bfc4fed" exitCode=0 Jan 28 19:00:11 crc kubenswrapper[4721]: I0128 19:00:11.600526 4721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 28 19:00:11 crc kubenswrapper[4721]: I0128 19:00:11.618151 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dc56a986-671d-4f17-8386-939d0fd9394a-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "dc56a986-671d-4f17-8386-939d0fd9394a" (UID: "dc56a986-671d-4f17-8386-939d0fd9394a"). InnerVolumeSpecName "rabbitmq-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 19:00:11 crc kubenswrapper[4721]: I0128 19:00:11.667232 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vsv5k\" (UniqueName: \"kubernetes.io/projected/dc56a986-671d-4f17-8386-939d0fd9394a-kube-api-access-vsv5k\") on node \"crc\" DevicePath \"\"" Jan 28 19:00:11 crc kubenswrapper[4721]: I0128 19:00:11.667845 4721 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/dc56a986-671d-4f17-8386-939d0fd9394a-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Jan 28 19:00:11 crc kubenswrapper[4721]: I0128 19:00:11.667866 4721 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/dc56a986-671d-4f17-8386-939d0fd9394a-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Jan 28 19:00:11 crc kubenswrapper[4721]: I0128 19:00:11.667899 4721 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/dc56a986-671d-4f17-8386-939d0fd9394a-pod-info\") on node \"crc\" DevicePath \"\"" Jan 28 19:00:11 crc kubenswrapper[4721]: I0128 19:00:11.667915 4721 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/dc56a986-671d-4f17-8386-939d0fd9394a-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Jan 28 19:00:11 crc kubenswrapper[4721]: I0128 19:00:11.667930 4721 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/dc56a986-671d-4f17-8386-939d0fd9394a-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Jan 28 19:00:11 crc kubenswrapper[4721]: E0128 19:00:11.680724 4721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-29174ae9-29d2-42fb-b1ad-7ccc07b143ff podName:dc56a986-671d-4f17-8386-939d0fd9394a nodeName:}" failed. No retries permitted until 2026-01-28 19:00:12.180682782 +0000 UTC m=+1577.905988342 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "persistence" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-29174ae9-29d2-42fb-b1ad-7ccc07b143ff") pod "dc56a986-671d-4f17-8386-939d0fd9394a" (UID: "dc56a986-671d-4f17-8386-939d0fd9394a") : kubernetes.io/csi: Unmounter.TearDownAt failed: rpc error: code = Unknown desc = check target path: could not get consistent content of /proc/mounts after 3 attempts Jan 28 19:00:11 crc kubenswrapper[4721]: I0128 19:00:11.682821 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dc56a986-671d-4f17-8386-939d0fd9394a-config-data" (OuterVolumeSpecName: "config-data") pod "dc56a986-671d-4f17-8386-939d0fd9394a" (UID: "dc56a986-671d-4f17-8386-939d0fd9394a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 19:00:11 crc kubenswrapper[4721]: I0128 19:00:11.752541 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dc56a986-671d-4f17-8386-939d0fd9394a-server-conf" (OuterVolumeSpecName: "server-conf") pod "dc56a986-671d-4f17-8386-939d0fd9394a" (UID: "dc56a986-671d-4f17-8386-939d0fd9394a"). InnerVolumeSpecName "server-conf". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 19:00:11 crc kubenswrapper[4721]: I0128 19:00:11.787861 4721 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/dc56a986-671d-4f17-8386-939d0fd9394a-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 19:00:11 crc kubenswrapper[4721]: I0128 19:00:11.788342 4721 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/dc56a986-671d-4f17-8386-939d0fd9394a-server-conf\") on node \"crc\" DevicePath \"\"" Jan 28 19:00:11 crc kubenswrapper[4721]: I0128 19:00:11.797356 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"026c3758-a794-4177-9412-8af411eeba01","Type":"ContainerDied","Data":"a0fceffb251cff73d060baddd2bd37631b84fa83d14b79e6a8821fedca99bde1"} Jan 28 19:00:11 crc kubenswrapper[4721]: I0128 19:00:11.797405 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-k6s42"] Jan 28 19:00:11 crc kubenswrapper[4721]: I0128 19:00:11.797479 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"026c3758-a794-4177-9412-8af411eeba01","Type":"ContainerDied","Data":"3fa65a97224ff0da376aec4fd1c26f6f0cb401fc05e7ba5ccf92d907c8bf5d18"} Jan 28 19:00:11 crc kubenswrapper[4721]: I0128 19:00:11.797492 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"026c3758-a794-4177-9412-8af411eeba01","Type":"ContainerDied","Data":"081493259a167422ec90175454310eea963dca1a797b8a67f2ab4e25e5a5ae9f"} Jan 28 19:00:11 crc kubenswrapper[4721]: I0128 19:00:11.797505 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"ec1e1de9-b144-4c34-bb14-4c0382670f45","Type":"ContainerDied","Data":"00a91f1b683af04ea057b66dac35f2c915e65c757dfb533f455f370c35f0e79a"} Jan 28 19:00:11 crc kubenswrapper[4721]: I0128 19:00:11.797530 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"dc56a986-671d-4f17-8386-939d0fd9394a","Type":"ContainerDied","Data":"8a6691f251788d9eb7d62515df6e29d4135993aa59e6dedb624092f45bfc4fed"} Jan 28 19:00:11 crc kubenswrapper[4721]: I0128 19:00:11.797548 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"dc56a986-671d-4f17-8386-939d0fd9394a","Type":"ContainerDied","Data":"0c2415fd5efcbdc5cb723cd10869129c473c035c0b3f611f610b61446aaa3855"} Jan 28 19:00:11 crc kubenswrapper[4721]: I0128 19:00:11.797571 4721 scope.go:117] "RemoveContainer" containerID="8a6691f251788d9eb7d62515df6e29d4135993aa59e6dedb624092f45bfc4fed" Jan 28 19:00:11 crc kubenswrapper[4721]: I0128 19:00:11.899811 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dc56a986-671d-4f17-8386-939d0fd9394a-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "dc56a986-671d-4f17-8386-939d0fd9394a" (UID: "dc56a986-671d-4f17-8386-939d0fd9394a"). InnerVolumeSpecName "rabbitmq-confd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 19:00:11 crc kubenswrapper[4721]: I0128 19:00:11.930981 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-dbb88bf8c-jgvr4"] Jan 28 19:00:11 crc kubenswrapper[4721]: E0128 19:00:11.931916 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dc56a986-671d-4f17-8386-939d0fd9394a" containerName="setup-container" Jan 28 19:00:11 crc kubenswrapper[4721]: I0128 19:00:11.932025 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="dc56a986-671d-4f17-8386-939d0fd9394a" containerName="setup-container" Jan 28 19:00:11 crc kubenswrapper[4721]: E0128 19:00:11.932100 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dc56a986-671d-4f17-8386-939d0fd9394a" containerName="rabbitmq" Jan 28 19:00:11 crc kubenswrapper[4721]: I0128 19:00:11.932300 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="dc56a986-671d-4f17-8386-939d0fd9394a" containerName="rabbitmq" Jan 28 19:00:11 crc kubenswrapper[4721]: I0128 19:00:11.932679 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="dc56a986-671d-4f17-8386-939d0fd9394a" containerName="rabbitmq" Jan 28 19:00:11 crc kubenswrapper[4721]: I0128 19:00:11.934389 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-dbb88bf8c-jgvr4" Jan 28 19:00:11 crc kubenswrapper[4721]: I0128 19:00:11.937235 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-edpm-ipam" Jan 28 19:00:11 crc kubenswrapper[4721]: I0128 19:00:11.942650 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-dbb88bf8c-jgvr4"] Jan 28 19:00:12 crc kubenswrapper[4721]: I0128 19:00:12.007259 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/48a806c6-cce7-47d8-83c7-dae682f2e80f-openstack-edpm-ipam\") pod \"dnsmasq-dns-dbb88bf8c-jgvr4\" (UID: \"48a806c6-cce7-47d8-83c7-dae682f2e80f\") " pod="openstack/dnsmasq-dns-dbb88bf8c-jgvr4" Jan 28 19:00:12 crc kubenswrapper[4721]: I0128 19:00:12.007445 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/48a806c6-cce7-47d8-83c7-dae682f2e80f-ovsdbserver-nb\") pod \"dnsmasq-dns-dbb88bf8c-jgvr4\" (UID: \"48a806c6-cce7-47d8-83c7-dae682f2e80f\") " pod="openstack/dnsmasq-dns-dbb88bf8c-jgvr4" Jan 28 19:00:12 crc kubenswrapper[4721]: I0128 19:00:12.007492 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/48a806c6-cce7-47d8-83c7-dae682f2e80f-ovsdbserver-sb\") pod \"dnsmasq-dns-dbb88bf8c-jgvr4\" (UID: \"48a806c6-cce7-47d8-83c7-dae682f2e80f\") " pod="openstack/dnsmasq-dns-dbb88bf8c-jgvr4" Jan 28 19:00:12 crc kubenswrapper[4721]: I0128 19:00:12.007820 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s5pxc\" (UniqueName: \"kubernetes.io/projected/48a806c6-cce7-47d8-83c7-dae682f2e80f-kube-api-access-s5pxc\") pod \"dnsmasq-dns-dbb88bf8c-jgvr4\" (UID: \"48a806c6-cce7-47d8-83c7-dae682f2e80f\") " pod="openstack/dnsmasq-dns-dbb88bf8c-jgvr4" Jan 28 19:00:12 crc kubenswrapper[4721]: I0128 19:00:12.007845 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/48a806c6-cce7-47d8-83c7-dae682f2e80f-dns-svc\") pod \"dnsmasq-dns-dbb88bf8c-jgvr4\" (UID: \"48a806c6-cce7-47d8-83c7-dae682f2e80f\") " pod="openstack/dnsmasq-dns-dbb88bf8c-jgvr4" Jan 28 19:00:12 crc kubenswrapper[4721]: I0128 19:00:12.007865 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/48a806c6-cce7-47d8-83c7-dae682f2e80f-dns-swift-storage-0\") pod \"dnsmasq-dns-dbb88bf8c-jgvr4\" (UID: \"48a806c6-cce7-47d8-83c7-dae682f2e80f\") " pod="openstack/dnsmasq-dns-dbb88bf8c-jgvr4" Jan 28 19:00:12 crc kubenswrapper[4721]: I0128 19:00:12.007997 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/48a806c6-cce7-47d8-83c7-dae682f2e80f-config\") pod \"dnsmasq-dns-dbb88bf8c-jgvr4\" (UID: \"48a806c6-cce7-47d8-83c7-dae682f2e80f\") " pod="openstack/dnsmasq-dns-dbb88bf8c-jgvr4" Jan 28 19:00:12 crc kubenswrapper[4721]: I0128 19:00:12.008092 4721 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/dc56a986-671d-4f17-8386-939d0fd9394a-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Jan 28 19:00:12 crc kubenswrapper[4721]: I0128 19:00:12.010070 4721 scope.go:117] "RemoveContainer" containerID="f7340b42defbd0e6762eaa0362961d4ee3d0113dc7766d3deb6846418878cdd1" Jan 28 19:00:12 crc kubenswrapper[4721]: I0128 19:00:12.036095 4721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 28 19:00:12 crc kubenswrapper[4721]: I0128 19:00:12.110387 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/ec1e1de9-b144-4c34-bb14-4c0382670f45-rabbitmq-tls\") pod \"ec1e1de9-b144-4c34-bb14-4c0382670f45\" (UID: \"ec1e1de9-b144-4c34-bb14-4c0382670f45\") " Jan 28 19:00:12 crc kubenswrapper[4721]: I0128 19:00:12.110509 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/ec1e1de9-b144-4c34-bb14-4c0382670f45-plugins-conf\") pod \"ec1e1de9-b144-4c34-bb14-4c0382670f45\" (UID: \"ec1e1de9-b144-4c34-bb14-4c0382670f45\") " Jan 28 19:00:12 crc kubenswrapper[4721]: I0128 19:00:12.113331 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f3e4de0b-10d4-4505-954f-7ef612e624a1\") pod \"ec1e1de9-b144-4c34-bb14-4c0382670f45\" (UID: \"ec1e1de9-b144-4c34-bb14-4c0382670f45\") " Jan 28 19:00:12 crc kubenswrapper[4721]: I0128 19:00:12.113407 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/ec1e1de9-b144-4c34-bb14-4c0382670f45-server-conf\") pod \"ec1e1de9-b144-4c34-bb14-4c0382670f45\" (UID: \"ec1e1de9-b144-4c34-bb14-4c0382670f45\") " Jan 28 19:00:12 crc kubenswrapper[4721]: I0128 19:00:12.113545 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/ec1e1de9-b144-4c34-bb14-4c0382670f45-rabbitmq-confd\") pod \"ec1e1de9-b144-4c34-bb14-4c0382670f45\" (UID: \"ec1e1de9-b144-4c34-bb14-4c0382670f45\") " Jan 28 19:00:12 crc kubenswrapper[4721]: I0128 19:00:12.113655 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/ec1e1de9-b144-4c34-bb14-4c0382670f45-rabbitmq-plugins\") pod \"ec1e1de9-b144-4c34-bb14-4c0382670f45\" (UID: \"ec1e1de9-b144-4c34-bb14-4c0382670f45\") " Jan 28 19:00:12 crc kubenswrapper[4721]: I0128 19:00:12.113721 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/ec1e1de9-b144-4c34-bb14-4c0382670f45-pod-info\") pod \"ec1e1de9-b144-4c34-bb14-4c0382670f45\" (UID: \"ec1e1de9-b144-4c34-bb14-4c0382670f45\") " Jan 28 19:00:12 crc kubenswrapper[4721]: I0128 19:00:12.113759 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/ec1e1de9-b144-4c34-bb14-4c0382670f45-erlang-cookie-secret\") pod \"ec1e1de9-b144-4c34-bb14-4c0382670f45\" (UID: \"ec1e1de9-b144-4c34-bb14-4c0382670f45\") " Jan 28 19:00:12 crc kubenswrapper[4721]: I0128 19:00:12.113875 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ec1e1de9-b144-4c34-bb14-4c0382670f45-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "ec1e1de9-b144-4c34-bb14-4c0382670f45" (UID: "ec1e1de9-b144-4c34-bb14-4c0382670f45"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 19:00:12 crc kubenswrapper[4721]: I0128 19:00:12.114179 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dk8vx\" (UniqueName: \"kubernetes.io/projected/ec1e1de9-b144-4c34-bb14-4c0382670f45-kube-api-access-dk8vx\") pod \"ec1e1de9-b144-4c34-bb14-4c0382670f45\" (UID: \"ec1e1de9-b144-4c34-bb14-4c0382670f45\") " Jan 28 19:00:12 crc kubenswrapper[4721]: I0128 19:00:12.114221 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/ec1e1de9-b144-4c34-bb14-4c0382670f45-rabbitmq-erlang-cookie\") pod \"ec1e1de9-b144-4c34-bb14-4c0382670f45\" (UID: \"ec1e1de9-b144-4c34-bb14-4c0382670f45\") " Jan 28 19:00:12 crc kubenswrapper[4721]: I0128 19:00:12.114361 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ec1e1de9-b144-4c34-bb14-4c0382670f45-config-data\") pod \"ec1e1de9-b144-4c34-bb14-4c0382670f45\" (UID: \"ec1e1de9-b144-4c34-bb14-4c0382670f45\") " Jan 28 19:00:12 crc kubenswrapper[4721]: I0128 19:00:12.114721 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/48a806c6-cce7-47d8-83c7-dae682f2e80f-ovsdbserver-nb\") pod \"dnsmasq-dns-dbb88bf8c-jgvr4\" (UID: \"48a806c6-cce7-47d8-83c7-dae682f2e80f\") " pod="openstack/dnsmasq-dns-dbb88bf8c-jgvr4" Jan 28 19:00:12 crc kubenswrapper[4721]: I0128 19:00:12.114889 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/48a806c6-cce7-47d8-83c7-dae682f2e80f-ovsdbserver-sb\") pod \"dnsmasq-dns-dbb88bf8c-jgvr4\" (UID: \"48a806c6-cce7-47d8-83c7-dae682f2e80f\") " pod="openstack/dnsmasq-dns-dbb88bf8c-jgvr4" Jan 28 19:00:12 crc kubenswrapper[4721]: I0128 19:00:12.115151 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s5pxc\" (UniqueName: \"kubernetes.io/projected/48a806c6-cce7-47d8-83c7-dae682f2e80f-kube-api-access-s5pxc\") pod \"dnsmasq-dns-dbb88bf8c-jgvr4\" (UID: 
\"48a806c6-cce7-47d8-83c7-dae682f2e80f\") " pod="openstack/dnsmasq-dns-dbb88bf8c-jgvr4" Jan 28 19:00:12 crc kubenswrapper[4721]: I0128 19:00:12.115200 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/48a806c6-cce7-47d8-83c7-dae682f2e80f-dns-svc\") pod \"dnsmasq-dns-dbb88bf8c-jgvr4\" (UID: \"48a806c6-cce7-47d8-83c7-dae682f2e80f\") " pod="openstack/dnsmasq-dns-dbb88bf8c-jgvr4" Jan 28 19:00:12 crc kubenswrapper[4721]: I0128 19:00:12.115269 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/48a806c6-cce7-47d8-83c7-dae682f2e80f-dns-swift-storage-0\") pod \"dnsmasq-dns-dbb88bf8c-jgvr4\" (UID: \"48a806c6-cce7-47d8-83c7-dae682f2e80f\") " pod="openstack/dnsmasq-dns-dbb88bf8c-jgvr4" Jan 28 19:00:12 crc kubenswrapper[4721]: I0128 19:00:12.115342 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/48a806c6-cce7-47d8-83c7-dae682f2e80f-config\") pod \"dnsmasq-dns-dbb88bf8c-jgvr4\" (UID: \"48a806c6-cce7-47d8-83c7-dae682f2e80f\") " pod="openstack/dnsmasq-dns-dbb88bf8c-jgvr4" Jan 28 19:00:12 crc kubenswrapper[4721]: I0128 19:00:12.115796 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/48a806c6-cce7-47d8-83c7-dae682f2e80f-openstack-edpm-ipam\") pod \"dnsmasq-dns-dbb88bf8c-jgvr4\" (UID: \"48a806c6-cce7-47d8-83c7-dae682f2e80f\") " pod="openstack/dnsmasq-dns-dbb88bf8c-jgvr4" Jan 28 19:00:12 crc kubenswrapper[4721]: I0128 19:00:12.115898 4721 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/ec1e1de9-b144-4c34-bb14-4c0382670f45-plugins-conf\") on node \"crc\" DevicePath \"\"" Jan 28 19:00:12 crc kubenswrapper[4721]: I0128 19:00:12.116569 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/48a806c6-cce7-47d8-83c7-dae682f2e80f-openstack-edpm-ipam\") pod \"dnsmasq-dns-dbb88bf8c-jgvr4\" (UID: \"48a806c6-cce7-47d8-83c7-dae682f2e80f\") " pod="openstack/dnsmasq-dns-dbb88bf8c-jgvr4" Jan 28 19:00:12 crc kubenswrapper[4721]: I0128 19:00:12.117376 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ec1e1de9-b144-4c34-bb14-4c0382670f45-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "ec1e1de9-b144-4c34-bb14-4c0382670f45" (UID: "ec1e1de9-b144-4c34-bb14-4c0382670f45"). InnerVolumeSpecName "rabbitmq-plugins". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 19:00:12 crc kubenswrapper[4721]: I0128 19:00:12.117546 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/48a806c6-cce7-47d8-83c7-dae682f2e80f-ovsdbserver-nb\") pod \"dnsmasq-dns-dbb88bf8c-jgvr4\" (UID: \"48a806c6-cce7-47d8-83c7-dae682f2e80f\") " pod="openstack/dnsmasq-dns-dbb88bf8c-jgvr4" Jan 28 19:00:12 crc kubenswrapper[4721]: I0128 19:00:12.117989 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/48a806c6-cce7-47d8-83c7-dae682f2e80f-ovsdbserver-sb\") pod \"dnsmasq-dns-dbb88bf8c-jgvr4\" (UID: \"48a806c6-cce7-47d8-83c7-dae682f2e80f\") " pod="openstack/dnsmasq-dns-dbb88bf8c-jgvr4" Jan 28 19:00:12 crc kubenswrapper[4721]: I0128 19:00:12.118961 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ec1e1de9-b144-4c34-bb14-4c0382670f45-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "ec1e1de9-b144-4c34-bb14-4c0382670f45" (UID: "ec1e1de9-b144-4c34-bb14-4c0382670f45"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 19:00:12 crc kubenswrapper[4721]: I0128 19:00:12.119801 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/48a806c6-cce7-47d8-83c7-dae682f2e80f-dns-svc\") pod \"dnsmasq-dns-dbb88bf8c-jgvr4\" (UID: \"48a806c6-cce7-47d8-83c7-dae682f2e80f\") " pod="openstack/dnsmasq-dns-dbb88bf8c-jgvr4" Jan 28 19:00:12 crc kubenswrapper[4721]: I0128 19:00:12.120505 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/48a806c6-cce7-47d8-83c7-dae682f2e80f-dns-swift-storage-0\") pod \"dnsmasq-dns-dbb88bf8c-jgvr4\" (UID: \"48a806c6-cce7-47d8-83c7-dae682f2e80f\") " pod="openstack/dnsmasq-dns-dbb88bf8c-jgvr4" Jan 28 19:00:12 crc kubenswrapper[4721]: I0128 19:00:12.121043 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/48a806c6-cce7-47d8-83c7-dae682f2e80f-config\") pod \"dnsmasq-dns-dbb88bf8c-jgvr4\" (UID: \"48a806c6-cce7-47d8-83c7-dae682f2e80f\") " pod="openstack/dnsmasq-dns-dbb88bf8c-jgvr4" Jan 28 19:00:12 crc kubenswrapper[4721]: I0128 19:00:12.121191 4721 scope.go:117] "RemoveContainer" containerID="8a6691f251788d9eb7d62515df6e29d4135993aa59e6dedb624092f45bfc4fed" Jan 28 19:00:12 crc kubenswrapper[4721]: I0128 19:00:12.122780 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ec1e1de9-b144-4c34-bb14-4c0382670f45-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "ec1e1de9-b144-4c34-bb14-4c0382670f45" (UID: "ec1e1de9-b144-4c34-bb14-4c0382670f45"). InnerVolumeSpecName "rabbitmq-erlang-cookie". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 19:00:12 crc kubenswrapper[4721]: E0128 19:00:12.122878 4721 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8a6691f251788d9eb7d62515df6e29d4135993aa59e6dedb624092f45bfc4fed\": container with ID starting with 8a6691f251788d9eb7d62515df6e29d4135993aa59e6dedb624092f45bfc4fed not found: ID does not exist" containerID="8a6691f251788d9eb7d62515df6e29d4135993aa59e6dedb624092f45bfc4fed" Jan 28 19:00:12 crc kubenswrapper[4721]: I0128 19:00:12.122910 4721 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8a6691f251788d9eb7d62515df6e29d4135993aa59e6dedb624092f45bfc4fed"} err="failed to get container status \"8a6691f251788d9eb7d62515df6e29d4135993aa59e6dedb624092f45bfc4fed\": rpc error: code = NotFound desc = could not find container \"8a6691f251788d9eb7d62515df6e29d4135993aa59e6dedb624092f45bfc4fed\": container with ID starting with 8a6691f251788d9eb7d62515df6e29d4135993aa59e6dedb624092f45bfc4fed not found: ID does not exist" Jan 28 19:00:12 crc kubenswrapper[4721]: I0128 19:00:12.122936 4721 scope.go:117] "RemoveContainer" containerID="f7340b42defbd0e6762eaa0362961d4ee3d0113dc7766d3deb6846418878cdd1" Jan 28 19:00:12 crc kubenswrapper[4721]: E0128 19:00:12.124991 4721 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f7340b42defbd0e6762eaa0362961d4ee3d0113dc7766d3deb6846418878cdd1\": container with ID starting with f7340b42defbd0e6762eaa0362961d4ee3d0113dc7766d3deb6846418878cdd1 not found: ID does not exist" containerID="f7340b42defbd0e6762eaa0362961d4ee3d0113dc7766d3deb6846418878cdd1" Jan 28 19:00:12 crc kubenswrapper[4721]: I0128 19:00:12.125070 4721 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f7340b42defbd0e6762eaa0362961d4ee3d0113dc7766d3deb6846418878cdd1"} err="failed to get container status \"f7340b42defbd0e6762eaa0362961d4ee3d0113dc7766d3deb6846418878cdd1\": rpc error: code = NotFound desc = could not find container \"f7340b42defbd0e6762eaa0362961d4ee3d0113dc7766d3deb6846418878cdd1\": container with ID starting with f7340b42defbd0e6762eaa0362961d4ee3d0113dc7766d3deb6846418878cdd1 not found: ID does not exist" Jan 28 19:00:12 crc kubenswrapper[4721]: I0128 19:00:12.127679 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ec1e1de9-b144-4c34-bb14-4c0382670f45-kube-api-access-dk8vx" (OuterVolumeSpecName: "kube-api-access-dk8vx") pod "ec1e1de9-b144-4c34-bb14-4c0382670f45" (UID: "ec1e1de9-b144-4c34-bb14-4c0382670f45"). InnerVolumeSpecName "kube-api-access-dk8vx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 19:00:12 crc kubenswrapper[4721]: I0128 19:00:12.133603 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ec1e1de9-b144-4c34-bb14-4c0382670f45-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "ec1e1de9-b144-4c34-bb14-4c0382670f45" (UID: "ec1e1de9-b144-4c34-bb14-4c0382670f45"). InnerVolumeSpecName "erlang-cookie-secret". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 19:00:12 crc kubenswrapper[4721]: I0128 19:00:12.138061 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/ec1e1de9-b144-4c34-bb14-4c0382670f45-pod-info" (OuterVolumeSpecName: "pod-info") pod "ec1e1de9-b144-4c34-bb14-4c0382670f45" (UID: "ec1e1de9-b144-4c34-bb14-4c0382670f45"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Jan 28 19:00:12 crc kubenswrapper[4721]: I0128 19:00:12.147630 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s5pxc\" (UniqueName: \"kubernetes.io/projected/48a806c6-cce7-47d8-83c7-dae682f2e80f-kube-api-access-s5pxc\") pod \"dnsmasq-dns-dbb88bf8c-jgvr4\" (UID: \"48a806c6-cce7-47d8-83c7-dae682f2e80f\") " pod="openstack/dnsmasq-dns-dbb88bf8c-jgvr4" Jan 28 19:00:12 crc kubenswrapper[4721]: I0128 19:00:12.188951 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f3e4de0b-10d4-4505-954f-7ef612e624a1" (OuterVolumeSpecName: "persistence") pod "ec1e1de9-b144-4c34-bb14-4c0382670f45" (UID: "ec1e1de9-b144-4c34-bb14-4c0382670f45"). InnerVolumeSpecName "pvc-f3e4de0b-10d4-4505-954f-7ef612e624a1". PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 28 19:00:12 crc kubenswrapper[4721]: I0128 19:00:12.194532 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ec1e1de9-b144-4c34-bb14-4c0382670f45-config-data" (OuterVolumeSpecName: "config-data") pod "ec1e1de9-b144-4c34-bb14-4c0382670f45" (UID: "ec1e1de9-b144-4c34-bb14-4c0382670f45"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 19:00:12 crc kubenswrapper[4721]: I0128 19:00:12.217403 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-29174ae9-29d2-42fb-b1ad-7ccc07b143ff\") pod \"dc56a986-671d-4f17-8386-939d0fd9394a\" (UID: \"dc56a986-671d-4f17-8386-939d0fd9394a\") " Jan 28 19:00:12 crc kubenswrapper[4721]: I0128 19:00:12.217881 4721 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/ec1e1de9-b144-4c34-bb14-4c0382670f45-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Jan 28 19:00:12 crc kubenswrapper[4721]: I0128 19:00:12.217897 4721 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/ec1e1de9-b144-4c34-bb14-4c0382670f45-pod-info\") on node \"crc\" DevicePath \"\"" Jan 28 19:00:12 crc kubenswrapper[4721]: I0128 19:00:12.217907 4721 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/ec1e1de9-b144-4c34-bb14-4c0382670f45-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Jan 28 19:00:12 crc kubenswrapper[4721]: I0128 19:00:12.217917 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dk8vx\" (UniqueName: \"kubernetes.io/projected/ec1e1de9-b144-4c34-bb14-4c0382670f45-kube-api-access-dk8vx\") on node \"crc\" DevicePath \"\"" Jan 28 19:00:12 crc kubenswrapper[4721]: I0128 19:00:12.218020 4721 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/ec1e1de9-b144-4c34-bb14-4c0382670f45-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Jan 28 19:00:12 crc 
kubenswrapper[4721]: I0128 19:00:12.218031 4721 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ec1e1de9-b144-4c34-bb14-4c0382670f45-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 19:00:12 crc kubenswrapper[4721]: I0128 19:00:12.218074 4721 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/ec1e1de9-b144-4c34-bb14-4c0382670f45-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Jan 28 19:00:12 crc kubenswrapper[4721]: I0128 19:00:12.218110 4721 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-f3e4de0b-10d4-4505-954f-7ef612e624a1\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f3e4de0b-10d4-4505-954f-7ef612e624a1\") on node \"crc\" " Jan 28 19:00:12 crc kubenswrapper[4721]: I0128 19:00:12.257885 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ec1e1de9-b144-4c34-bb14-4c0382670f45-server-conf" (OuterVolumeSpecName: "server-conf") pod "ec1e1de9-b144-4c34-bb14-4c0382670f45" (UID: "ec1e1de9-b144-4c34-bb14-4c0382670f45"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 19:00:12 crc kubenswrapper[4721]: I0128 19:00:12.273970 4721 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Jan 28 19:00:12 crc kubenswrapper[4721]: I0128 19:00:12.274379 4721 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-f3e4de0b-10d4-4505-954f-7ef612e624a1" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f3e4de0b-10d4-4505-954f-7ef612e624a1") on node "crc" Jan 28 19:00:12 crc kubenswrapper[4721]: I0128 19:00:12.274444 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-29174ae9-29d2-42fb-b1ad-7ccc07b143ff" (OuterVolumeSpecName: "persistence") pod "dc56a986-671d-4f17-8386-939d0fd9394a" (UID: "dc56a986-671d-4f17-8386-939d0fd9394a"). InnerVolumeSpecName "pvc-29174ae9-29d2-42fb-b1ad-7ccc07b143ff". PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 28 19:00:12 crc kubenswrapper[4721]: I0128 19:00:12.309850 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-dbb88bf8c-jgvr4" Jan 28 19:00:12 crc kubenswrapper[4721]: I0128 19:00:12.322232 4721 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-29174ae9-29d2-42fb-b1ad-7ccc07b143ff\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-29174ae9-29d2-42fb-b1ad-7ccc07b143ff\") on node \"crc\" " Jan 28 19:00:12 crc kubenswrapper[4721]: I0128 19:00:12.322273 4721 reconciler_common.go:293] "Volume detached for volume \"pvc-f3e4de0b-10d4-4505-954f-7ef612e624a1\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f3e4de0b-10d4-4505-954f-7ef612e624a1\") on node \"crc\" DevicePath \"\"" Jan 28 19:00:12 crc kubenswrapper[4721]: I0128 19:00:12.322288 4721 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/ec1e1de9-b144-4c34-bb14-4c0382670f45-server-conf\") on node \"crc\" DevicePath \"\"" Jan 28 19:00:12 crc kubenswrapper[4721]: I0128 19:00:12.334314 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ec1e1de9-b144-4c34-bb14-4c0382670f45-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "ec1e1de9-b144-4c34-bb14-4c0382670f45" (UID: "ec1e1de9-b144-4c34-bb14-4c0382670f45"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 19:00:12 crc kubenswrapper[4721]: I0128 19:00:12.346780 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 28 19:00:12 crc kubenswrapper[4721]: I0128 19:00:12.370544 4721 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 28 19:00:12 crc kubenswrapper[4721]: I0128 19:00:12.390078 4721 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Jan 28 19:00:12 crc kubenswrapper[4721]: I0128 19:00:12.390282 4721 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-29174ae9-29d2-42fb-b1ad-7ccc07b143ff" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-29174ae9-29d2-42fb-b1ad-7ccc07b143ff") on node "crc" Jan 28 19:00:12 crc kubenswrapper[4721]: I0128 19:00:12.394663 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 28 19:00:12 crc kubenswrapper[4721]: E0128 19:00:12.395145 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ec1e1de9-b144-4c34-bb14-4c0382670f45" containerName="rabbitmq" Jan 28 19:00:12 crc kubenswrapper[4721]: I0128 19:00:12.395163 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="ec1e1de9-b144-4c34-bb14-4c0382670f45" containerName="rabbitmq" Jan 28 19:00:12 crc kubenswrapper[4721]: E0128 19:00:12.395351 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ec1e1de9-b144-4c34-bb14-4c0382670f45" containerName="setup-container" Jan 28 19:00:12 crc kubenswrapper[4721]: I0128 19:00:12.395359 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="ec1e1de9-b144-4c34-bb14-4c0382670f45" containerName="setup-container" Jan 28 19:00:12 crc kubenswrapper[4721]: I0128 19:00:12.395617 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="ec1e1de9-b144-4c34-bb14-4c0382670f45" containerName="rabbitmq" Jan 28 19:00:12 crc kubenswrapper[4721]: I0128 19:00:12.410771 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 28 19:00:12 crc kubenswrapper[4721]: I0128 19:00:12.416676 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Jan 28 19:00:12 crc kubenswrapper[4721]: I0128 19:00:12.422836 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Jan 28 19:00:12 crc kubenswrapper[4721]: I0128 19:00:12.423092 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Jan 28 19:00:12 crc kubenswrapper[4721]: I0128 19:00:12.423289 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-qmjxb" Jan 28 19:00:12 crc kubenswrapper[4721]: I0128 19:00:12.423455 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Jan 28 19:00:12 crc kubenswrapper[4721]: I0128 19:00:12.423914 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Jan 28 19:00:12 crc kubenswrapper[4721]: I0128 19:00:12.424229 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Jan 28 19:00:12 crc kubenswrapper[4721]: I0128 19:00:12.426138 4721 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/ec1e1de9-b144-4c34-bb14-4c0382670f45-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Jan 28 19:00:12 crc kubenswrapper[4721]: I0128 19:00:12.426189 4721 reconciler_common.go:293] "Volume detached for volume \"pvc-29174ae9-29d2-42fb-b1ad-7ccc07b143ff\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-29174ae9-29d2-42fb-b1ad-7ccc07b143ff\") on node \"crc\" DevicePath \"\"" Jan 28 19:00:12 crc kubenswrapper[4721]: I0128 19:00:12.438702 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 28 19:00:12 crc kubenswrapper[4721]: I0128 19:00:12.533645 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a493b27e-e634-4b09-ae05-2a69c5ad0d68-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"a493b27e-e634-4b09-ae05-2a69c5ad0d68\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 19:00:12 crc kubenswrapper[4721]: I0128 19:00:12.533697 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/a493b27e-e634-4b09-ae05-2a69c5ad0d68-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"a493b27e-e634-4b09-ae05-2a69c5ad0d68\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 19:00:12 crc kubenswrapper[4721]: I0128 19:00:12.533770 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/a493b27e-e634-4b09-ae05-2a69c5ad0d68-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"a493b27e-e634-4b09-ae05-2a69c5ad0d68\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 19:00:12 crc kubenswrapper[4721]: I0128 19:00:12.533870 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/a493b27e-e634-4b09-ae05-2a69c5ad0d68-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: 
\"a493b27e-e634-4b09-ae05-2a69c5ad0d68\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 19:00:12 crc kubenswrapper[4721]: I0128 19:00:12.533975 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/a493b27e-e634-4b09-ae05-2a69c5ad0d68-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"a493b27e-e634-4b09-ae05-2a69c5ad0d68\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 19:00:12 crc kubenswrapper[4721]: I0128 19:00:12.534013 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/a493b27e-e634-4b09-ae05-2a69c5ad0d68-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"a493b27e-e634-4b09-ae05-2a69c5ad0d68\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 19:00:12 crc kubenswrapper[4721]: I0128 19:00:12.534115 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/a493b27e-e634-4b09-ae05-2a69c5ad0d68-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"a493b27e-e634-4b09-ae05-2a69c5ad0d68\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 19:00:12 crc kubenswrapper[4721]: I0128 19:00:12.534139 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-29174ae9-29d2-42fb-b1ad-7ccc07b143ff\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-29174ae9-29d2-42fb-b1ad-7ccc07b143ff\") pod \"rabbitmq-cell1-server-0\" (UID: \"a493b27e-e634-4b09-ae05-2a69c5ad0d68\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 19:00:12 crc kubenswrapper[4721]: I0128 19:00:12.534219 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9mdwj\" (UniqueName: \"kubernetes.io/projected/a493b27e-e634-4b09-ae05-2a69c5ad0d68-kube-api-access-9mdwj\") pod \"rabbitmq-cell1-server-0\" (UID: \"a493b27e-e634-4b09-ae05-2a69c5ad0d68\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 19:00:12 crc kubenswrapper[4721]: I0128 19:00:12.534245 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/a493b27e-e634-4b09-ae05-2a69c5ad0d68-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"a493b27e-e634-4b09-ae05-2a69c5ad0d68\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 19:00:12 crc kubenswrapper[4721]: I0128 19:00:12.534300 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/a493b27e-e634-4b09-ae05-2a69c5ad0d68-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"a493b27e-e634-4b09-ae05-2a69c5ad0d68\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 19:00:12 crc kubenswrapper[4721]: I0128 19:00:12.617850 4721 generic.go:334] "Generic (PLEG): container finished" podID="bccc5c45-eb45-452f-8f40-9e83893bf636" containerID="3d084618c849f26355eec03ef03f82e7da14e8fa892d4b2052eb377828fd9b2b" exitCode=0 Jan 28 19:00:12 crc kubenswrapper[4721]: I0128 19:00:12.618136 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-k6s42" event={"ID":"bccc5c45-eb45-452f-8f40-9e83893bf636","Type":"ContainerDied","Data":"3d084618c849f26355eec03ef03f82e7da14e8fa892d4b2052eb377828fd9b2b"} Jan 28 19:00:12 crc kubenswrapper[4721]: I0128 
19:00:12.618160 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-k6s42" event={"ID":"bccc5c45-eb45-452f-8f40-9e83893bf636","Type":"ContainerStarted","Data":"44bf25580943b3acfeba90fd50acc7d1b1cf5e5fb605ff1b141fe10bf18a8927"}
Jan 28 19:00:12 crc kubenswrapper[4721]: I0128 19:00:12.622279 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"ec1e1de9-b144-4c34-bb14-4c0382670f45","Type":"ContainerDied","Data":"e545f4e4f58ca348dd389b0f6a5e72f9d095e41aa4325456d5ee98c37276de6a"}
Jan 28 19:00:12 crc kubenswrapper[4721]: I0128 19:00:12.622297 4721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0"
Jan 28 19:00:12 crc kubenswrapper[4721]: I0128 19:00:12.622342 4721 scope.go:117] "RemoveContainer" containerID="00a91f1b683af04ea057b66dac35f2c915e65c757dfb533f455f370c35f0e79a"
Jan 28 19:00:12 crc kubenswrapper[4721]: I0128 19:00:12.625729 4721 generic.go:334] "Generic (PLEG): container finished" podID="71e5f7b3-cd41-40d9-ab6c-e90cff64e601" containerID="5b4b11168436d060da26b67c42623f3e80b73abdfdca1743e2f279ef9d6f2233" exitCode=0
Jan 28 19:00:12 crc kubenswrapper[4721]: I0128 19:00:12.625769 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-j86j7" event={"ID":"71e5f7b3-cd41-40d9-ab6c-e90cff64e601","Type":"ContainerDied","Data":"5b4b11168436d060da26b67c42623f3e80b73abdfdca1743e2f279ef9d6f2233"}
Jan 28 19:00:12 crc kubenswrapper[4721]: I0128 19:00:12.638892 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/a493b27e-e634-4b09-ae05-2a69c5ad0d68-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"a493b27e-e634-4b09-ae05-2a69c5ad0d68\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 28 19:00:12 crc kubenswrapper[4721]: I0128 19:00:12.638948 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/a493b27e-e634-4b09-ae05-2a69c5ad0d68-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"a493b27e-e634-4b09-ae05-2a69c5ad0d68\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 28 19:00:12 crc kubenswrapper[4721]: I0128 19:00:12.639019 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/a493b27e-e634-4b09-ae05-2a69c5ad0d68-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"a493b27e-e634-4b09-ae05-2a69c5ad0d68\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 28 19:00:12 crc kubenswrapper[4721]: I0128 19:00:12.639051 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-29174ae9-29d2-42fb-b1ad-7ccc07b143ff\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-29174ae9-29d2-42fb-b1ad-7ccc07b143ff\") pod \"rabbitmq-cell1-server-0\" (UID: \"a493b27e-e634-4b09-ae05-2a69c5ad0d68\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 28 19:00:12 crc kubenswrapper[4721]: I0128 19:00:12.639105 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9mdwj\" (UniqueName: \"kubernetes.io/projected/a493b27e-e634-4b09-ae05-2a69c5ad0d68-kube-api-access-9mdwj\") pod \"rabbitmq-cell1-server-0\" (UID: \"a493b27e-e634-4b09-ae05-2a69c5ad0d68\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 28 19:00:12 crc kubenswrapper[4721]: I0128 19:00:12.639145 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/a493b27e-e634-4b09-ae05-2a69c5ad0d68-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"a493b27e-e634-4b09-ae05-2a69c5ad0d68\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 28 19:00:12 crc kubenswrapper[4721]: I0128 19:00:12.639220 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/a493b27e-e634-4b09-ae05-2a69c5ad0d68-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"a493b27e-e634-4b09-ae05-2a69c5ad0d68\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 28 19:00:12 crc kubenswrapper[4721]: I0128 19:00:12.639367 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a493b27e-e634-4b09-ae05-2a69c5ad0d68-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"a493b27e-e634-4b09-ae05-2a69c5ad0d68\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 28 19:00:12 crc kubenswrapper[4721]: I0128 19:00:12.639393 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/a493b27e-e634-4b09-ae05-2a69c5ad0d68-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"a493b27e-e634-4b09-ae05-2a69c5ad0d68\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 28 19:00:12 crc kubenswrapper[4721]: I0128 19:00:12.639446 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/a493b27e-e634-4b09-ae05-2a69c5ad0d68-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"a493b27e-e634-4b09-ae05-2a69c5ad0d68\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 28 19:00:12 crc kubenswrapper[4721]: I0128 19:00:12.639529 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/a493b27e-e634-4b09-ae05-2a69c5ad0d68-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"a493b27e-e634-4b09-ae05-2a69c5ad0d68\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 28 19:00:12 crc kubenswrapper[4721]: I0128 19:00:12.640193 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/a493b27e-e634-4b09-ae05-2a69c5ad0d68-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"a493b27e-e634-4b09-ae05-2a69c5ad0d68\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 28 19:00:12 crc kubenswrapper[4721]: I0128 19:00:12.641759 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a493b27e-e634-4b09-ae05-2a69c5ad0d68-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"a493b27e-e634-4b09-ae05-2a69c5ad0d68\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 28 19:00:12 crc kubenswrapper[4721]: I0128 19:00:12.642583 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/a493b27e-e634-4b09-ae05-2a69c5ad0d68-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"a493b27e-e634-4b09-ae05-2a69c5ad0d68\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 28 19:00:12 crc kubenswrapper[4721]: I0128 19:00:12.643210 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/a493b27e-e634-4b09-ae05-2a69c5ad0d68-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"a493b27e-e634-4b09-ae05-2a69c5ad0d68\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 28 19:00:12 crc kubenswrapper[4721]: I0128 19:00:12.644203 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/a493b27e-e634-4b09-ae05-2a69c5ad0d68-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"a493b27e-e634-4b09-ae05-2a69c5ad0d68\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 28 19:00:12 crc kubenswrapper[4721]: I0128 19:00:12.645693 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/a493b27e-e634-4b09-ae05-2a69c5ad0d68-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"a493b27e-e634-4b09-ae05-2a69c5ad0d68\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 28 19:00:12 crc kubenswrapper[4721]: I0128 19:00:12.645704 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/a493b27e-e634-4b09-ae05-2a69c5ad0d68-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"a493b27e-e634-4b09-ae05-2a69c5ad0d68\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 28 19:00:12 crc kubenswrapper[4721]: I0128 19:00:12.646551 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/a493b27e-e634-4b09-ae05-2a69c5ad0d68-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"a493b27e-e634-4b09-ae05-2a69c5ad0d68\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 28 19:00:12 crc kubenswrapper[4721]: I0128 19:00:12.647278 4721 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Jan 28 19:00:12 crc kubenswrapper[4721]: I0128 19:00:12.647300 4721 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-29174ae9-29d2-42fb-b1ad-7ccc07b143ff\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-29174ae9-29d2-42fb-b1ad-7ccc07b143ff\") pod \"rabbitmq-cell1-server-0\" (UID: \"a493b27e-e634-4b09-ae05-2a69c5ad0d68\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/5bc89d46b155de3097f77aee48e0273231559873bb6737e5f04966de38376c61/globalmount\"" pod="openstack/rabbitmq-cell1-server-0"
Jan 28 19:00:12 crc kubenswrapper[4721]: I0128 19:00:12.648897 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/a493b27e-e634-4b09-ae05-2a69c5ad0d68-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"a493b27e-e634-4b09-ae05-2a69c5ad0d68\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 28 19:00:12 crc kubenswrapper[4721]: I0128 19:00:12.670323 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9mdwj\" (UniqueName: \"kubernetes.io/projected/a493b27e-e634-4b09-ae05-2a69c5ad0d68-kube-api-access-9mdwj\") pod \"rabbitmq-cell1-server-0\" (UID: \"a493b27e-e634-4b09-ae05-2a69c5ad0d68\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 28 19:00:12 crc kubenswrapper[4721]: I0128 19:00:12.706761 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-29174ae9-29d2-42fb-b1ad-7ccc07b143ff\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-29174ae9-29d2-42fb-b1ad-7ccc07b143ff\") pod \"rabbitmq-cell1-server-0\" (UID: \"a493b27e-e634-4b09-ae05-2a69c5ad0d68\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 28 19:00:12 crc kubenswrapper[4721]: I0128 19:00:12.737196 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0"
Jan 28 19:00:12 crc kubenswrapper[4721]: I0128 19:00:12.776305 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"]
Jan 28 19:00:12 crc kubenswrapper[4721]: I0128 19:00:12.783521 4721 scope.go:117] "RemoveContainer" containerID="1744104dd2c6db657749ff29714a2574a58c6368538f7d3e645044ef7a0b215d"
Jan 28 19:00:12 crc kubenswrapper[4721]: I0128 19:00:12.791940 4721 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-0"]
Jan 28 19:00:12 crc kubenswrapper[4721]: I0128 19:00:12.832348 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"]
Jan 28 19:00:12 crc kubenswrapper[4721]: I0128 19:00:12.834701 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0"
Jan 28 19:00:12 crc kubenswrapper[4721]: I0128 19:00:12.839655 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data"
Jan 28 19:00:12 crc kubenswrapper[4721]: I0128 19:00:12.839927 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf"
Jan 28 19:00:12 crc kubenswrapper[4721]: I0128 19:00:12.840620 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf"
Jan 28 19:00:12 crc kubenswrapper[4721]: I0128 19:00:12.840747 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-ppn4t"
Jan 28 19:00:12 crc kubenswrapper[4721]: I0128 19:00:12.840856 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user"
Jan 28 19:00:12 crc kubenswrapper[4721]: I0128 19:00:12.841025 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc"
Jan 28 19:00:12 crc kubenswrapper[4721]: I0128 19:00:12.841139 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie"
Jan 28 19:00:12 crc kubenswrapper[4721]: I0128 19:00:12.863736 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"]
Jan 28 19:00:12 crc kubenswrapper[4721]: I0128 19:00:12.929043 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-dbb88bf8c-jgvr4"]
Jan 28 19:00:12 crc kubenswrapper[4721]: W0128 19:00:12.939154 4721 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod48a806c6_cce7_47d8_83c7_dae682f2e80f.slice/crio-bc0e20f60e62ce58d6f535d73aafff3b16eec0d1191ffd65148e6b3109299418 WatchSource:0}: Error finding container bc0e20f60e62ce58d6f535d73aafff3b16eec0d1191ffd65148e6b3109299418: Status 404 returned error can't find the container with id bc0e20f60e62ce58d6f535d73aafff3b16eec0d1191ffd65148e6b3109299418
Jan 28 19:00:12 crc kubenswrapper[4721]: I0128 19:00:12.948508 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/88f1129c-54fc-423a-993d-560aecdd75eb-server-conf\") pod \"rabbitmq-server-0\" (UID: \"88f1129c-54fc-423a-993d-560aecdd75eb\") " pod="openstack/rabbitmq-server-0"
Jan 28 19:00:12 crc kubenswrapper[4721]: I0128 19:00:12.948850 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/88f1129c-54fc-423a-993d-560aecdd75eb-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"88f1129c-54fc-423a-993d-560aecdd75eb\") " pod="openstack/rabbitmq-server-0"
Jan 28 19:00:12 crc kubenswrapper[4721]: I0128 19:00:12.948956 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/88f1129c-54fc-423a-993d-560aecdd75eb-pod-info\") pod \"rabbitmq-server-0\" (UID: \"88f1129c-54fc-423a-993d-560aecdd75eb\") " pod="openstack/rabbitmq-server-0"
Jan 28 19:00:12 crc kubenswrapper[4721]: I0128 19:00:12.949159 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/88f1129c-54fc-423a-993d-560aecdd75eb-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"88f1129c-54fc-423a-993d-560aecdd75eb\") " pod="openstack/rabbitmq-server-0"
Jan 28 19:00:12 crc kubenswrapper[4721]: I0128 19:00:12.949445 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l8h54\" (UniqueName: \"kubernetes.io/projected/88f1129c-54fc-423a-993d-560aecdd75eb-kube-api-access-l8h54\") pod \"rabbitmq-server-0\" (UID: \"88f1129c-54fc-423a-993d-560aecdd75eb\") " pod="openstack/rabbitmq-server-0"
Jan 28 19:00:12 crc kubenswrapper[4721]: I0128 19:00:12.949592 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/88f1129c-54fc-423a-993d-560aecdd75eb-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"88f1129c-54fc-423a-993d-560aecdd75eb\") " pod="openstack/rabbitmq-server-0"
Jan 28 19:00:12 crc kubenswrapper[4721]: I0128 19:00:12.949738 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/88f1129c-54fc-423a-993d-560aecdd75eb-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"88f1129c-54fc-423a-993d-560aecdd75eb\") " pod="openstack/rabbitmq-server-0"
Jan 28 19:00:12 crc kubenswrapper[4721]: I0128 19:00:12.949946 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-f3e4de0b-10d4-4505-954f-7ef612e624a1\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f3e4de0b-10d4-4505-954f-7ef612e624a1\") pod \"rabbitmq-server-0\" (UID: \"88f1129c-54fc-423a-993d-560aecdd75eb\") " pod="openstack/rabbitmq-server-0"
Jan 28 19:00:12 crc kubenswrapper[4721]: I0128 19:00:12.950079 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/88f1129c-54fc-423a-993d-560aecdd75eb-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"88f1129c-54fc-423a-993d-560aecdd75eb\") " pod="openstack/rabbitmq-server-0"
Jan 28 19:00:12 crc kubenswrapper[4721]: I0128 19:00:12.950218 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/88f1129c-54fc-423a-993d-560aecdd75eb-config-data\") pod \"rabbitmq-server-0\" (UID: \"88f1129c-54fc-423a-993d-560aecdd75eb\") " pod="openstack/rabbitmq-server-0"
Jan 28 19:00:12 crc kubenswrapper[4721]: I0128 19:00:12.950413 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/88f1129c-54fc-423a-993d-560aecdd75eb-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"88f1129c-54fc-423a-993d-560aecdd75eb\") " pod="openstack/rabbitmq-server-0"
Jan 28 19:00:13 crc kubenswrapper[4721]: I0128 19:00:13.054694 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-f3e4de0b-10d4-4505-954f-7ef612e624a1\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f3e4de0b-10d4-4505-954f-7ef612e624a1\") pod \"rabbitmq-server-0\" (UID: \"88f1129c-54fc-423a-993d-560aecdd75eb\") " pod="openstack/rabbitmq-server-0"
Jan 28 19:00:13 crc kubenswrapper[4721]: I0128 19:00:13.054815 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/88f1129c-54fc-423a-993d-560aecdd75eb-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"88f1129c-54fc-423a-993d-560aecdd75eb\") " pod="openstack/rabbitmq-server-0"
Jan 28 19:00:13 crc kubenswrapper[4721]: I0128 19:00:13.054852 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/88f1129c-54fc-423a-993d-560aecdd75eb-config-data\") pod \"rabbitmq-server-0\" (UID: \"88f1129c-54fc-423a-993d-560aecdd75eb\") " pod="openstack/rabbitmq-server-0"
Jan 28 19:00:13 crc kubenswrapper[4721]: I0128 19:00:13.054919 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/88f1129c-54fc-423a-993d-560aecdd75eb-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"88f1129c-54fc-423a-993d-560aecdd75eb\") " pod="openstack/rabbitmq-server-0"
Jan 28 19:00:13 crc kubenswrapper[4721]: I0128 19:00:13.054998 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/88f1129c-54fc-423a-993d-560aecdd75eb-server-conf\") pod \"rabbitmq-server-0\" (UID: \"88f1129c-54fc-423a-993d-560aecdd75eb\") " pod="openstack/rabbitmq-server-0"
Jan 28 19:00:13 crc kubenswrapper[4721]: I0128 19:00:13.055022 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/88f1129c-54fc-423a-993d-560aecdd75eb-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"88f1129c-54fc-423a-993d-560aecdd75eb\") " pod="openstack/rabbitmq-server-0"
Jan 28 19:00:13 crc kubenswrapper[4721]: I0128 19:00:13.055043 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/88f1129c-54fc-423a-993d-560aecdd75eb-pod-info\") pod \"rabbitmq-server-0\" (UID: \"88f1129c-54fc-423a-993d-560aecdd75eb\") " pod="openstack/rabbitmq-server-0"
Jan 28 19:00:13 crc kubenswrapper[4721]: I0128 19:00:13.055095 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/88f1129c-54fc-423a-993d-560aecdd75eb-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"88f1129c-54fc-423a-993d-560aecdd75eb\") " pod="openstack/rabbitmq-server-0"
Jan 28 19:00:13 crc kubenswrapper[4721]: I0128 19:00:13.055226 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l8h54\" (UniqueName: \"kubernetes.io/projected/88f1129c-54fc-423a-993d-560aecdd75eb-kube-api-access-l8h54\") pod \"rabbitmq-server-0\" (UID: \"88f1129c-54fc-423a-993d-560aecdd75eb\") " pod="openstack/rabbitmq-server-0"
Jan 28 19:00:13 crc kubenswrapper[4721]: I0128 19:00:13.055273 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/88f1129c-54fc-423a-993d-560aecdd75eb-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"88f1129c-54fc-423a-993d-560aecdd75eb\") " pod="openstack/rabbitmq-server-0"
Jan 28 19:00:13 crc kubenswrapper[4721]: I0128 19:00:13.055328 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/88f1129c-54fc-423a-993d-560aecdd75eb-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"88f1129c-54fc-423a-993d-560aecdd75eb\") " pod="openstack/rabbitmq-server-0"
Jan 28 19:00:13 crc kubenswrapper[4721]: I0128 19:00:13.056335 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/88f1129c-54fc-423a-993d-560aecdd75eb-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"88f1129c-54fc-423a-993d-560aecdd75eb\") " pod="openstack/rabbitmq-server-0"
Jan 28 19:00:13 crc kubenswrapper[4721]: I0128 19:00:13.057958 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/88f1129c-54fc-423a-993d-560aecdd75eb-config-data\") pod \"rabbitmq-server-0\" (UID: \"88f1129c-54fc-423a-993d-560aecdd75eb\") " pod="openstack/rabbitmq-server-0"
Jan 28 19:00:13 crc kubenswrapper[4721]: I0128 19:00:13.059236 4721 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Jan 28 19:00:13 crc kubenswrapper[4721]: I0128 19:00:13.059272 4721 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-f3e4de0b-10d4-4505-954f-7ef612e624a1\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f3e4de0b-10d4-4505-954f-7ef612e624a1\") pod \"rabbitmq-server-0\" (UID: \"88f1129c-54fc-423a-993d-560aecdd75eb\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/47cccac3c6f853ec6e999145e5e217a1590d8accec8418bbeb0b34e74219920b/globalmount\"" pod="openstack/rabbitmq-server-0"
Jan 28 19:00:13 crc kubenswrapper[4721]: I0128 19:00:13.059349 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/88f1129c-54fc-423a-993d-560aecdd75eb-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"88f1129c-54fc-423a-993d-560aecdd75eb\") " pod="openstack/rabbitmq-server-0"
Jan 28 19:00:13 crc kubenswrapper[4721]: I0128 19:00:13.060943 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/88f1129c-54fc-423a-993d-560aecdd75eb-pod-info\") pod \"rabbitmq-server-0\" (UID: \"88f1129c-54fc-423a-993d-560aecdd75eb\") " pod="openstack/rabbitmq-server-0"
Jan 28 19:00:13 crc kubenswrapper[4721]: I0128 19:00:13.062287 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/88f1129c-54fc-423a-993d-560aecdd75eb-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"88f1129c-54fc-423a-993d-560aecdd75eb\") " pod="openstack/rabbitmq-server-0"
Jan 28 19:00:13 crc kubenswrapper[4721]: I0128 19:00:13.062684 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/88f1129c-54fc-423a-993d-560aecdd75eb-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"88f1129c-54fc-423a-993d-560aecdd75eb\") " pod="openstack/rabbitmq-server-0"
Jan 28 19:00:13 crc kubenswrapper[4721]: I0128 19:00:13.063718 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/88f1129c-54fc-423a-993d-560aecdd75eb-server-conf\") pod \"rabbitmq-server-0\" (UID: \"88f1129c-54fc-423a-993d-560aecdd75eb\") " pod="openstack/rabbitmq-server-0"
Jan 28 19:00:13 crc kubenswrapper[4721]: I0128 19:00:13.067390 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/88f1129c-54fc-423a-993d-560aecdd75eb-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"88f1129c-54fc-423a-993d-560aecdd75eb\") " pod="openstack/rabbitmq-server-0"
Jan 28 19:00:13 crc kubenswrapper[4721]: I0128 19:00:13.078488 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/88f1129c-54fc-423a-993d-560aecdd75eb-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"88f1129c-54fc-423a-993d-560aecdd75eb\") " pod="openstack/rabbitmq-server-0"
Jan 28 19:00:13 crc kubenswrapper[4721]: I0128 19:00:13.084607 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l8h54\" (UniqueName: \"kubernetes.io/projected/88f1129c-54fc-423a-993d-560aecdd75eb-kube-api-access-l8h54\") pod \"rabbitmq-server-0\" (UID: \"88f1129c-54fc-423a-993d-560aecdd75eb\") " pod="openstack/rabbitmq-server-0"
Jan 28 19:00:13 crc kubenswrapper[4721]: I0128 19:00:13.153006 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-f3e4de0b-10d4-4505-954f-7ef612e624a1\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f3e4de0b-10d4-4505-954f-7ef612e624a1\") pod \"rabbitmq-server-0\" (UID: \"88f1129c-54fc-423a-993d-560aecdd75eb\") " pod="openstack/rabbitmq-server-0"
Jan 28 19:00:13 crc kubenswrapper[4721]: I0128 19:00:13.279749 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0"
Jan 28 19:00:13 crc kubenswrapper[4721]: I0128 19:00:13.430862 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Jan 28 19:00:13 crc kubenswrapper[4721]: W0128 19:00:13.436217 4721 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda493b27e_e634_4b09_ae05_2a69c5ad0d68.slice/crio-f95b378f49bd1845b8c46819bfb1680316ed7e1bdc4773eae1c3f3a9251f9b9f WatchSource:0}: Error finding container f95b378f49bd1845b8c46819bfb1680316ed7e1bdc4773eae1c3f3a9251f9b9f: Status 404 returned error can't find the container with id f95b378f49bd1845b8c46819bfb1680316ed7e1bdc4773eae1c3f3a9251f9b9f
Jan 28 19:00:13 crc kubenswrapper[4721]: I0128 19:00:13.556238 4721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dc56a986-671d-4f17-8386-939d0fd9394a" path="/var/lib/kubelet/pods/dc56a986-671d-4f17-8386-939d0fd9394a/volumes"
Jan 28 19:00:13 crc kubenswrapper[4721]: I0128 19:00:13.557543 4721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ec1e1de9-b144-4c34-bb14-4c0382670f45" path="/var/lib/kubelet/pods/ec1e1de9-b144-4c34-bb14-4c0382670f45/volumes"
Jan 28 19:00:13 crc kubenswrapper[4721]: I0128 19:00:13.648997 4721 generic.go:334] "Generic (PLEG): container finished" podID="6dda9049-3b48-4939-93cc-542bf5badc4d" containerID="29c2b733a8c5cae8d48116aa58b128fe3cd775423db7b04b86a93edeb156faa6" exitCode=0
Jan 28 19:00:13 crc kubenswrapper[4721]: I0128 19:00:13.649097 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-db-sync-knwlk" event={"ID":"6dda9049-3b48-4939-93cc-542bf5badc4d","Type":"ContainerDied","Data":"29c2b733a8c5cae8d48116aa58b128fe3cd775423db7b04b86a93edeb156faa6"}
Jan 28 19:00:13 crc kubenswrapper[4721]: I0128 19:00:13.677010 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-j86j7" event={"ID":"71e5f7b3-cd41-40d9-ab6c-e90cff64e601","Type":"ContainerStarted","Data":"27cb43ac0d681f772672abee6a94028c2e378153ffea1ae5267f6796019f10e9"}
Jan 28 19:00:13 crc kubenswrapper[4721]: I0128 19:00:13.707349 4721 generic.go:334] "Generic (PLEG): container finished" podID="48a806c6-cce7-47d8-83c7-dae682f2e80f" containerID="209a01bab6e1375cbb4a601e9c378f65ae92485ee60fd6392617d94cfdc5884c" exitCode=0
Jan 28 19:00:13 crc kubenswrapper[4721]: I0128 19:00:13.707424 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-dbb88bf8c-jgvr4" event={"ID":"48a806c6-cce7-47d8-83c7-dae682f2e80f","Type":"ContainerDied","Data":"209a01bab6e1375cbb4a601e9c378f65ae92485ee60fd6392617d94cfdc5884c"}
Jan 28 19:00:13 crc kubenswrapper[4721]: I0128 19:00:13.707453 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-dbb88bf8c-jgvr4" event={"ID":"48a806c6-cce7-47d8-83c7-dae682f2e80f","Type":"ContainerStarted","Data":"bc0e20f60e62ce58d6f535d73aafff3b16eec0d1191ffd65148e6b3109299418"}
Jan 28 19:00:13 crc kubenswrapper[4721]: I0128 19:00:13.732261 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-j86j7" podStartSLOduration=7.315696289 podStartE2EDuration="14.732211919s" podCreationTimestamp="2026-01-28 18:59:59 +0000 UTC" firstStartedPulling="2026-01-28 19:00:05.764770356 +0000 UTC m=+1571.490075916" lastFinishedPulling="2026-01-28 19:00:13.181285986 +0000 UTC m=+1578.906591546" observedRunningTime="2026-01-28 19:00:13.714993521 +0000 UTC m=+1579.440299091" watchObservedRunningTime="2026-01-28 19:00:13.732211919 +0000 UTC m=+1579.457517479"
Jan 28 19:00:13 crc kubenswrapper[4721]: I0128 19:00:13.735451 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"a493b27e-e634-4b09-ae05-2a69c5ad0d68","Type":"ContainerStarted","Data":"f95b378f49bd1845b8c46819bfb1680316ed7e1bdc4773eae1c3f3a9251f9b9f"}
Jan 28 19:00:13 crc kubenswrapper[4721]: I0128 19:00:13.870668 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"]
Jan 28 19:00:14 crc kubenswrapper[4721]: I0128 19:00:14.713790 4721 scope.go:117] "RemoveContainer" containerID="c03ee72a8b18b82c44802b661ca7fc04b3039f3c3a94468e3e07d22479fd07b1"
Jan 28 19:00:14 crc kubenswrapper[4721]: I0128 19:00:14.774776 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-dbb88bf8c-jgvr4" event={"ID":"48a806c6-cce7-47d8-83c7-dae682f2e80f","Type":"ContainerStarted","Data":"69a243a95d079f02e0924ca367946cfe8bc30a0d6d1f983bb396edcef089d29c"}
Jan 28 19:00:14 crc kubenswrapper[4721]: I0128 19:00:14.775011 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-dbb88bf8c-jgvr4"
Jan 28 19:00:14 crc kubenswrapper[4721]: I0128 19:00:14.789787 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"88f1129c-54fc-423a-993d-560aecdd75eb","Type":"ContainerStarted","Data":"a0ddc6d0910125ee7d6fe2dc025f62b1d6e01034ed94f46a907da61996f64171"}
Jan 28 19:00:14 crc kubenswrapper[4721]: I0128 19:00:14.810578 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-k6s42" event={"ID":"bccc5c45-eb45-452f-8f40-9e83893bf636","Type":"ContainerStarted","Data":"f3cbe18a44fbdfa1cf56d18f36e22e86129ba378c3ee8f87edde19e1582bda62"}
Jan 28 19:00:14 crc kubenswrapper[4721]: I0128 19:00:14.814984 4721 scope.go:117] "RemoveContainer" containerID="5e135d6440af40c8c0b7212a6d5dccd74d2442655a3bdd266d811e697bb4d9b1"
Jan 28 19:00:14 crc kubenswrapper[4721]: I0128 19:00:14.817598 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-dbb88bf8c-jgvr4" podStartSLOduration=3.817581347 podStartE2EDuration="3.817581347s" podCreationTimestamp="2026-01-28 19:00:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 19:00:14.798653325 +0000 UTC m=+1580.523958905" watchObservedRunningTime="2026-01-28 19:00:14.817581347 +0000 UTC m=+1580.542886907"
Jan 28 19:00:15 crc kubenswrapper[4721]: I0128 19:00:15.843536 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-db-sync-knwlk" event={"ID":"6dda9049-3b48-4939-93cc-542bf5badc4d","Type":"ContainerDied","Data":"0126e35d51ede2f0297626f957e023a06f8d80857386a60938b4eb6dfb0bfc38"}
Jan 28 19:00:15 crc kubenswrapper[4721]: I0128 19:00:15.844123 4721 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0126e35d51ede2f0297626f957e023a06f8d80857386a60938b4eb6dfb0bfc38"
Jan 28 19:00:15 crc kubenswrapper[4721]: I0128 19:00:15.878813 4721 scope.go:117] "RemoveContainer" containerID="1d538e393ca91bbeb837e75b6debff2f56a274b53b10f9112872486504abcbb6"
Jan 28 19:00:15 crc kubenswrapper[4721]: I0128 19:00:15.899346 4721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-db-sync-knwlk"
Jan 28 19:00:16 crc kubenswrapper[4721]: I0128 19:00:16.013782 4721 scope.go:117] "RemoveContainer" containerID="1e81e6a440865f58a8a00ad6f945396888eecb4ac069b57fdc0548d00edf0fdf"
Jan 28 19:00:16 crc kubenswrapper[4721]: I0128 19:00:16.038523 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6dda9049-3b48-4939-93cc-542bf5badc4d-scripts\") pod \"6dda9049-3b48-4939-93cc-542bf5badc4d\" (UID: \"6dda9049-3b48-4939-93cc-542bf5badc4d\") "
Jan 28 19:00:16 crc kubenswrapper[4721]: I0128 19:00:16.038583 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-42pgm\" (UniqueName: \"kubernetes.io/projected/6dda9049-3b48-4939-93cc-542bf5badc4d-kube-api-access-42pgm\") pod \"6dda9049-3b48-4939-93cc-542bf5badc4d\" (UID: \"6dda9049-3b48-4939-93cc-542bf5badc4d\") "
Jan 28 19:00:16 crc kubenswrapper[4721]: I0128 19:00:16.038825 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/projected/6dda9049-3b48-4939-93cc-542bf5badc4d-certs\") pod \"6dda9049-3b48-4939-93cc-542bf5badc4d\" (UID: \"6dda9049-3b48-4939-93cc-542bf5badc4d\") "
Jan 28 19:00:16 crc kubenswrapper[4721]: I0128 19:00:16.038883 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6dda9049-3b48-4939-93cc-542bf5badc4d-combined-ca-bundle\") pod \"6dda9049-3b48-4939-93cc-542bf5badc4d\" (UID: \"6dda9049-3b48-4939-93cc-542bf5badc4d\") "
Jan 28 19:00:16 crc kubenswrapper[4721]: I0128 19:00:16.038931 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6dda9049-3b48-4939-93cc-542bf5badc4d-config-data\") pod \"6dda9049-3b48-4939-93cc-542bf5badc4d\" (UID: \"6dda9049-3b48-4939-93cc-542bf5badc4d\") "
Jan 28 19:00:16 crc kubenswrapper[4721]: I0128 19:00:16.108399 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6dda9049-3b48-4939-93cc-542bf5badc4d-scripts" (OuterVolumeSpecName: "scripts") pod "6dda9049-3b48-4939-93cc-542bf5badc4d" (UID: "6dda9049-3b48-4939-93cc-542bf5badc4d"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 28 19:00:16 crc kubenswrapper[4721]: I0128 19:00:16.108564 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6dda9049-3b48-4939-93cc-542bf5badc4d-kube-api-access-42pgm" (OuterVolumeSpecName: "kube-api-access-42pgm") pod "6dda9049-3b48-4939-93cc-542bf5badc4d" (UID: "6dda9049-3b48-4939-93cc-542bf5badc4d"). InnerVolumeSpecName "kube-api-access-42pgm". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 28 19:00:16 crc kubenswrapper[4721]: I0128 19:00:16.120486 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6dda9049-3b48-4939-93cc-542bf5badc4d-certs" (OuterVolumeSpecName: "certs") pod "6dda9049-3b48-4939-93cc-542bf5badc4d" (UID: "6dda9049-3b48-4939-93cc-542bf5badc4d"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 28 19:00:16 crc kubenswrapper[4721]: E0128 19:00:16.132483 4721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6dda9049-3b48-4939-93cc-542bf5badc4d-config-data podName:6dda9049-3b48-4939-93cc-542bf5badc4d nodeName:}" failed. No retries permitted until 2026-01-28 19:00:16.632445234 +0000 UTC m=+1582.357750794 (durationBeforeRetry 500ms). Error: error cleaning subPath mounts for volume "config-data" (UniqueName: "kubernetes.io/secret/6dda9049-3b48-4939-93cc-542bf5badc4d-config-data") pod "6dda9049-3b48-4939-93cc-542bf5badc4d" (UID: "6dda9049-3b48-4939-93cc-542bf5badc4d") : error deleting /var/lib/kubelet/pods/6dda9049-3b48-4939-93cc-542bf5badc4d/volume-subpaths: remove /var/lib/kubelet/pods/6dda9049-3b48-4939-93cc-542bf5badc4d/volume-subpaths: no such file or directory
Jan 28 19:00:16 crc kubenswrapper[4721]: I0128 19:00:16.133140 4721 scope.go:117] "RemoveContainer" containerID="1e3b225685548a877b08564af4b407871263c0c268b69d5776a7e29398768945"
Jan 28 19:00:16 crc kubenswrapper[4721]: I0128 19:00:16.137239 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6dda9049-3b48-4939-93cc-542bf5badc4d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6dda9049-3b48-4939-93cc-542bf5badc4d" (UID: "6dda9049-3b48-4939-93cc-542bf5badc4d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 28 19:00:16 crc kubenswrapper[4721]: I0128 19:00:16.141674 4721 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6dda9049-3b48-4939-93cc-542bf5badc4d-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 28 19:00:16 crc kubenswrapper[4721]: I0128 19:00:16.141712 4721 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6dda9049-3b48-4939-93cc-542bf5badc4d-scripts\") on node \"crc\" DevicePath \"\""
Jan 28 19:00:16 crc kubenswrapper[4721]: I0128 19:00:16.141723 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-42pgm\" (UniqueName: \"kubernetes.io/projected/6dda9049-3b48-4939-93cc-542bf5badc4d-kube-api-access-42pgm\") on node \"crc\" DevicePath \"\""
Jan 28 19:00:16 crc kubenswrapper[4721]: I0128 19:00:16.141734 4721 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/projected/6dda9049-3b48-4939-93cc-542bf5badc4d-certs\") on node \"crc\" DevicePath \"\""
Jan 28 19:00:16 crc kubenswrapper[4721]: I0128 19:00:16.652147 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6dda9049-3b48-4939-93cc-542bf5badc4d-config-data\") pod \"6dda9049-3b48-4939-93cc-542bf5badc4d\" (UID: \"6dda9049-3b48-4939-93cc-542bf5badc4d\") "
Jan 28 19:00:16 crc kubenswrapper[4721]: I0128 19:00:16.660356 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6dda9049-3b48-4939-93cc-542bf5badc4d-config-data" (OuterVolumeSpecName: "config-data") pod "6dda9049-3b48-4939-93cc-542bf5badc4d" (UID: "6dda9049-3b48-4939-93cc-542bf5badc4d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 28 19:00:16 crc kubenswrapper[4721]: I0128 19:00:16.757980 4721 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6dda9049-3b48-4939-93cc-542bf5badc4d-config-data\") on node \"crc\" DevicePath \"\""
Jan 28 19:00:16 crc kubenswrapper[4721]: I0128 19:00:16.860690 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"a493b27e-e634-4b09-ae05-2a69c5ad0d68","Type":"ContainerStarted","Data":"5bb8b16913fae619aa4c67b8f79f8e2acfba14af834cde80a4947bf0e9b8b398"}
Jan 28 19:00:16 crc kubenswrapper[4721]: I0128 19:00:16.865098 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"88f1129c-54fc-423a-993d-560aecdd75eb","Type":"ContainerStarted","Data":"4f892c8855c5c43cfd71e18a26303a0e5dc6bb57ccd7326172b9108ac9c15cb3"}
Jan 28 19:00:16 crc kubenswrapper[4721]: I0128 19:00:16.868216 4721 generic.go:334] "Generic (PLEG): container finished" podID="bccc5c45-eb45-452f-8f40-9e83893bf636" containerID="f3cbe18a44fbdfa1cf56d18f36e22e86129ba378c3ee8f87edde19e1582bda62" exitCode=0
Jan 28 19:00:16 crc kubenswrapper[4721]: I0128 19:00:16.868283 4721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-db-sync-knwlk"
Jan 28 19:00:16 crc kubenswrapper[4721]: I0128 19:00:16.868723 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-k6s42" event={"ID":"bccc5c45-eb45-452f-8f40-9e83893bf636","Type":"ContainerDied","Data":"f3cbe18a44fbdfa1cf56d18f36e22e86129ba378c3ee8f87edde19e1582bda62"}
Jan 28 19:00:17 crc kubenswrapper[4721]: I0128 19:00:17.113792 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cloudkitty-storageinit-xpxnz"]
Jan 28 19:00:17 crc kubenswrapper[4721]: I0128 19:00:17.124950 4721 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cloudkitty-storageinit-xpxnz"]
Jan 28 19:00:17 crc kubenswrapper[4721]: I0128 19:00:17.201771 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cloudkitty-storageinit-pwsl7"]
Jan 28 19:00:17 crc kubenswrapper[4721]: E0128 19:00:17.202274 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6dda9049-3b48-4939-93cc-542bf5badc4d" containerName="cloudkitty-db-sync"
Jan 28 19:00:17 crc kubenswrapper[4721]: I0128 19:00:17.202292 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="6dda9049-3b48-4939-93cc-542bf5badc4d" containerName="cloudkitty-db-sync"
Jan 28 19:00:17 crc kubenswrapper[4721]: I0128 19:00:17.202522 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="6dda9049-3b48-4939-93cc-542bf5badc4d" containerName="cloudkitty-db-sync"
Jan 28 19:00:17 crc kubenswrapper[4721]: I0128 19:00:17.203315 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-storageinit-pwsl7"
Jan 28 19:00:17 crc kubenswrapper[4721]: I0128 19:00:17.205530 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret"
Jan 28 19:00:17 crc kubenswrapper[4721]: I0128 19:00:17.211986 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-storageinit-pwsl7"]
Jan 28 19:00:17 crc kubenswrapper[4721]: I0128 19:00:17.390226 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/projected/f7fc7453-7e1b-4e3f-bac3-f045c7b6a1c4-certs\") pod \"cloudkitty-storageinit-pwsl7\" (UID: \"f7fc7453-7e1b-4e3f-bac3-f045c7b6a1c4\") " pod="openstack/cloudkitty-storageinit-pwsl7"
Jan 28 19:00:17 crc kubenswrapper[4721]: I0128 19:00:17.390624 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f7fc7453-7e1b-4e3f-bac3-f045c7b6a1c4-combined-ca-bundle\") pod \"cloudkitty-storageinit-pwsl7\" (UID: \"f7fc7453-7e1b-4e3f-bac3-f045c7b6a1c4\") " pod="openstack/cloudkitty-storageinit-pwsl7"
Jan 28 19:00:17 crc kubenswrapper[4721]: I0128 19:00:17.390652 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wm5ll\" (UniqueName: \"kubernetes.io/projected/f7fc7453-7e1b-4e3f-bac3-f045c7b6a1c4-kube-api-access-wm5ll\") pod \"cloudkitty-storageinit-pwsl7\" (UID: \"f7fc7453-7e1b-4e3f-bac3-f045c7b6a1c4\") " pod="openstack/cloudkitty-storageinit-pwsl7"
Jan 28 19:00:17 crc kubenswrapper[4721]: I0128 19:00:17.390682 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f7fc7453-7e1b-4e3f-bac3-f045c7b6a1c4-config-data\") pod \"cloudkitty-storageinit-pwsl7\" (UID: \"f7fc7453-7e1b-4e3f-bac3-f045c7b6a1c4\") " pod="openstack/cloudkitty-storageinit-pwsl7"
Jan 28 19:00:17 crc kubenswrapper[4721]: I0128 19:00:17.390843 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f7fc7453-7e1b-4e3f-bac3-f045c7b6a1c4-scripts\") pod \"cloudkitty-storageinit-pwsl7\" (UID: \"f7fc7453-7e1b-4e3f-bac3-f045c7b6a1c4\") " pod="openstack/cloudkitty-storageinit-pwsl7"
Jan 28 19:00:17 crc kubenswrapper[4721]: I0128 19:00:17.493329 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f7fc7453-7e1b-4e3f-bac3-f045c7b6a1c4-combined-ca-bundle\") pod \"cloudkitty-storageinit-pwsl7\" (UID: \"f7fc7453-7e1b-4e3f-bac3-f045c7b6a1c4\") " pod="openstack/cloudkitty-storageinit-pwsl7"
Jan 28 19:00:17 crc kubenswrapper[4721]: I0128 19:00:17.493379 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wm5ll\" (UniqueName: \"kubernetes.io/projected/f7fc7453-7e1b-4e3f-bac3-f045c7b6a1c4-kube-api-access-wm5ll\") pod \"cloudkitty-storageinit-pwsl7\" (UID: \"f7fc7453-7e1b-4e3f-bac3-f045c7b6a1c4\") " pod="openstack/cloudkitty-storageinit-pwsl7"
Jan 28 19:00:17 crc kubenswrapper[4721]: I0128 19:00:17.493414 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f7fc7453-7e1b-4e3f-bac3-f045c7b6a1c4-config-data\") pod \"cloudkitty-storageinit-pwsl7\" (UID: \"f7fc7453-7e1b-4e3f-bac3-f045c7b6a1c4\") " pod="openstack/cloudkitty-storageinit-pwsl7"
Jan 28 19:00:17 crc kubenswrapper[4721]: I0128 19:00:17.493523 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f7fc7453-7e1b-4e3f-bac3-f045c7b6a1c4-scripts\") pod \"cloudkitty-storageinit-pwsl7\" (UID: \"f7fc7453-7e1b-4e3f-bac3-f045c7b6a1c4\") " pod="openstack/cloudkitty-storageinit-pwsl7"
Jan 28 19:00:17 crc kubenswrapper[4721]: I0128 19:00:17.493620 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/projected/f7fc7453-7e1b-4e3f-bac3-f045c7b6a1c4-certs\") pod \"cloudkitty-storageinit-pwsl7\" (UID: \"f7fc7453-7e1b-4e3f-bac3-f045c7b6a1c4\") " pod="openstack/cloudkitty-storageinit-pwsl7"
Jan 28 19:00:17 crc kubenswrapper[4721]: I0128 19:00:17.504604 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f7fc7453-7e1b-4e3f-bac3-f045c7b6a1c4-scripts\") pod \"cloudkitty-storageinit-pwsl7\" (UID: \"f7fc7453-7e1b-4e3f-bac3-f045c7b6a1c4\") " pod="openstack/cloudkitty-storageinit-pwsl7"
Jan 28 19:00:17 crc kubenswrapper[4721]: I0128 19:00:17.504708 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f7fc7453-7e1b-4e3f-bac3-f045c7b6a1c4-combined-ca-bundle\") pod \"cloudkitty-storageinit-pwsl7\" (UID: \"f7fc7453-7e1b-4e3f-bac3-f045c7b6a1c4\") " pod="openstack/cloudkitty-storageinit-pwsl7"
Jan 28 19:00:17 crc kubenswrapper[4721]: I0128 19:00:17.504803 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/projected/f7fc7453-7e1b-4e3f-bac3-f045c7b6a1c4-certs\") pod \"cloudkitty-storageinit-pwsl7\" (UID: \"f7fc7453-7e1b-4e3f-bac3-f045c7b6a1c4\") " pod="openstack/cloudkitty-storageinit-pwsl7"
Jan 28 19:00:17 crc kubenswrapper[4721]: I0128 19:00:17.505283 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f7fc7453-7e1b-4e3f-bac3-f045c7b6a1c4-config-data\") pod \"cloudkitty-storageinit-pwsl7\" (UID: \"f7fc7453-7e1b-4e3f-bac3-f045c7b6a1c4\") " pod="openstack/cloudkitty-storageinit-pwsl7"
Jan 28 19:00:17 crc kubenswrapper[4721]: I0128 19:00:17.517267 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wm5ll\" (UniqueName: \"kubernetes.io/projected/f7fc7453-7e1b-4e3f-bac3-f045c7b6a1c4-kube-api-access-wm5ll\") pod \"cloudkitty-storageinit-pwsl7\" (UID: \"f7fc7453-7e1b-4e3f-bac3-f045c7b6a1c4\") " pod="openstack/cloudkitty-storageinit-pwsl7"
Jan 28 19:00:17 crc kubenswrapper[4721]: I0128 19:00:17.523038 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-storageinit-pwsl7"
Jan 28 19:00:17 crc kubenswrapper[4721]: I0128 19:00:17.542779 4721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="429d95dc-53bf-4577-bd4a-3bd60e502895" path="/var/lib/kubelet/pods/429d95dc-53bf-4577-bd4a-3bd60e502895/volumes"
Jan 28 19:00:18 crc kubenswrapper[4721]: W0128 19:00:18.027915 4721 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf7fc7453_7e1b_4e3f_bac3_f045c7b6a1c4.slice/crio-dcfb3f6a895e9d43ff073b858a6cf7434db2d6ab98e68ed6ef80e98350f2acde WatchSource:0}: Error finding container dcfb3f6a895e9d43ff073b858a6cf7434db2d6ab98e68ed6ef80e98350f2acde: Status 404 returned error can't find the container with id dcfb3f6a895e9d43ff073b858a6cf7434db2d6ab98e68ed6ef80e98350f2acde
Jan 28 19:00:18 crc kubenswrapper[4721]: I0128 19:00:18.036628 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-storageinit-pwsl7"]
Jan 28 19:00:18 crc kubenswrapper[4721]: I0128 19:00:18.896600 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-k6s42" event={"ID":"bccc5c45-eb45-452f-8f40-9e83893bf636","Type":"ContainerStarted","Data":"010076448a9766f398d64f43fd203960265fd0710f916b581e2d662adcd3cbee"}
Jan 28 19:00:18 crc kubenswrapper[4721]: I0128 19:00:18.898450 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-storageinit-pwsl7" event={"ID":"f7fc7453-7e1b-4e3f-bac3-f045c7b6a1c4","Type":"ContainerStarted","Data":"c9347359b05c7170adeef3caaebd6a81cc6189a67ee6aae1b082059a009b3697"}
Jan 28 19:00:18 crc kubenswrapper[4721]: I0128 19:00:18.898491 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-storageinit-pwsl7" event={"ID":"f7fc7453-7e1b-4e3f-bac3-f045c7b6a1c4","Type":"ContainerStarted","Data":"dcfb3f6a895e9d43ff073b858a6cf7434db2d6ab98e68ed6ef80e98350f2acde"}
Jan 28 19:00:18 crc kubenswrapper[4721]: I0128 19:00:18.920065 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-k6s42" podStartSLOduration=3.845415409 podStartE2EDuration="8.920044s" podCreationTimestamp="2026-01-28 19:00:10 +0000 UTC" firstStartedPulling="2026-01-28 19:00:12.620534853 +0000 UTC m=+1578.345840413" lastFinishedPulling="2026-01-28 19:00:17.695163444 +0000 UTC m=+1583.420469004" observedRunningTime="2026-01-28 19:00:18.91344733 +0000 UTC m=+1584.638752900" watchObservedRunningTime="2026-01-28 19:00:18.920044 +0000 UTC m=+1584.645349560"
Jan 28 19:00:18 crc kubenswrapper[4721]: I0128 19:00:18.937588 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cloudkitty-storageinit-pwsl7" podStartSLOduration=1.937561306 podStartE2EDuration="1.937561306s" podCreationTimestamp="2026-01-28 19:00:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 19:00:18.929157359 +0000 UTC m=+1584.654462909" watchObservedRunningTime="2026-01-28 19:00:18.937561306 +0000 UTC m=+1584.662866876"
Jan 28 19:00:19 crc kubenswrapper[4721]: I0128 19:00:19.917646 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-j86j7"
Jan 28 19:00:19 crc kubenswrapper[4721]: I0128 19:00:19.918255 4721 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-j86j7"
Jan 28 19:00:19 crc kubenswrapper[4721]: I0128 19:00:19.981154 4721 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-j86j7"
Jan 28 19:00:20 crc kubenswrapper[4721]: I0128 19:00:20.923073 4721 generic.go:334] "Generic (PLEG): container finished" podID="f7fc7453-7e1b-4e3f-bac3-f045c7b6a1c4" containerID="c9347359b05c7170adeef3caaebd6a81cc6189a67ee6aae1b082059a009b3697" exitCode=0
Jan 28 19:00:20 crc kubenswrapper[4721]: I0128 19:00:20.923162 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-storageinit-pwsl7" event={"ID":"f7fc7453-7e1b-4e3f-bac3-f045c7b6a1c4","Type":"ContainerDied","Data":"c9347359b05c7170adeef3caaebd6a81cc6189a67ee6aae1b082059a009b3697"}
Jan 28 19:00:20 crc kubenswrapper[4721]: I0128 19:00:20.979285 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-j86j7"
Jan 28 19:00:20 crc kubenswrapper[4721]: I0128 19:00:20.987743 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-k6s42"
Jan 28 19:00:20 crc kubenswrapper[4721]: I0128 19:00:20.987784 4721 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-k6s42"
Jan 28 19:00:21 crc kubenswrapper[4721]: I0128 19:00:21.057071 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-j86j7"]
Jan 28 19:00:21 crc kubenswrapper[4721]: I0128 19:00:21.063375 4721 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-k6s42"
Jan 28 19:00:21 crc kubenswrapper[4721]: I0128 19:00:21.530695 4721 scope.go:117] "RemoveContainer" containerID="2590a7cb210dbff0e84ad585e4d82733cd80880f3d1297edf670e3d6faf23070"
Jan 28 19:00:21 crc kubenswrapper[4721]: E0128 19:00:21.531452 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-76rx2_openshift-machine-config-operator(6e3427a4-9a03-4a08-bf7f-7a5e96290ad6)\"" pod="openshift-machine-config-operator/machine-config-daemon-76rx2" podUID="6e3427a4-9a03-4a08-bf7f-7a5e96290ad6"
Jan 28 19:00:22 crc kubenswrapper[4721]: I0128 19:00:22.312406 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-dbb88bf8c-jgvr4"
Jan 28 19:00:22 crc kubenswrapper[4721]: I0128 19:00:22.393026 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5fd9b586ff-mx67n"]
Jan 28 19:00:22 crc kubenswrapper[4721]: I0128 19:00:22.393280 4721 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5fd9b586ff-mx67n" podUID="4bc30432-0868-448c-b124-8b9db2d2a6b2" containerName="dnsmasq-dns" containerID="cri-o://64a92fda9552be03fcca0561239e0c782cdd2538b99c6270cae1e5419793eef2" gracePeriod=10
Jan 28 19:00:22 crc kubenswrapper[4721]: I0128 19:00:22.431066 4721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-storageinit-pwsl7"
Jan 28 19:00:22 crc kubenswrapper[4721]: I0128 19:00:22.513583 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/projected/f7fc7453-7e1b-4e3f-bac3-f045c7b6a1c4-certs\") pod \"f7fc7453-7e1b-4e3f-bac3-f045c7b6a1c4\" (UID: \"f7fc7453-7e1b-4e3f-bac3-f045c7b6a1c4\") "
Jan 28 19:00:22 crc kubenswrapper[4721]: I0128 19:00:22.513782 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f7fc7453-7e1b-4e3f-bac3-f045c7b6a1c4-combined-ca-bundle\") pod \"f7fc7453-7e1b-4e3f-bac3-f045c7b6a1c4\" (UID: \"f7fc7453-7e1b-4e3f-bac3-f045c7b6a1c4\") "
Jan 28 19:00:22 crc kubenswrapper[4721]: I0128 19:00:22.513853 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f7fc7453-7e1b-4e3f-bac3-f045c7b6a1c4-config-data\") pod \"f7fc7453-7e1b-4e3f-bac3-f045c7b6a1c4\" (UID: \"f7fc7453-7e1b-4e3f-bac3-f045c7b6a1c4\") "
Jan 28 19:00:22 crc kubenswrapper[4721]: I0128 19:00:22.513943 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f7fc7453-7e1b-4e3f-bac3-f045c7b6a1c4-scripts\") pod \"f7fc7453-7e1b-4e3f-bac3-f045c7b6a1c4\" (UID: \"f7fc7453-7e1b-4e3f-bac3-f045c7b6a1c4\") "
Jan 28 19:00:22 crc kubenswrapper[4721]: I0128 19:00:22.514062 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wm5ll\" (UniqueName: \"kubernetes.io/projected/f7fc7453-7e1b-4e3f-bac3-f045c7b6a1c4-kube-api-access-wm5ll\") pod \"f7fc7453-7e1b-4e3f-bac3-f045c7b6a1c4\" (UID: \"f7fc7453-7e1b-4e3f-bac3-f045c7b6a1c4\") "
Jan 28 19:00:22 crc kubenswrapper[4721]: I0128 19:00:22.523156 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f7fc7453-7e1b-4e3f-bac3-f045c7b6a1c4-scripts" (OuterVolumeSpecName: "scripts") pod "f7fc7453-7e1b-4e3f-bac3-f045c7b6a1c4" (UID: "f7fc7453-7e1b-4e3f-bac3-f045c7b6a1c4"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 28 19:00:22 crc kubenswrapper[4721]: I0128 19:00:22.523324 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f7fc7453-7e1b-4e3f-bac3-f045c7b6a1c4-kube-api-access-wm5ll" (OuterVolumeSpecName: "kube-api-access-wm5ll") pod "f7fc7453-7e1b-4e3f-bac3-f045c7b6a1c4" (UID: "f7fc7453-7e1b-4e3f-bac3-f045c7b6a1c4"). InnerVolumeSpecName "kube-api-access-wm5ll". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 28 19:00:22 crc kubenswrapper[4721]: I0128 19:00:22.544400 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f7fc7453-7e1b-4e3f-bac3-f045c7b6a1c4-certs" (OuterVolumeSpecName: "certs") pod "f7fc7453-7e1b-4e3f-bac3-f045c7b6a1c4" (UID: "f7fc7453-7e1b-4e3f-bac3-f045c7b6a1c4"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 28 19:00:22 crc kubenswrapper[4721]: I0128 19:00:22.555385 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f7fc7453-7e1b-4e3f-bac3-f045c7b6a1c4-config-data" (OuterVolumeSpecName: "config-data") pod "f7fc7453-7e1b-4e3f-bac3-f045c7b6a1c4" (UID: "f7fc7453-7e1b-4e3f-bac3-f045c7b6a1c4"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 28 19:00:22 crc kubenswrapper[4721]: I0128 19:00:22.560456 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f7fc7453-7e1b-4e3f-bac3-f045c7b6a1c4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f7fc7453-7e1b-4e3f-bac3-f045c7b6a1c4" (UID: "f7fc7453-7e1b-4e3f-bac3-f045c7b6a1c4"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 28 19:00:22 crc kubenswrapper[4721]: I0128 19:00:22.617021 4721 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f7fc7453-7e1b-4e3f-bac3-f045c7b6a1c4-config-data\") on node \"crc\" DevicePath \"\""
Jan 28 19:00:22 crc kubenswrapper[4721]: I0128 19:00:22.617308 4721 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f7fc7453-7e1b-4e3f-bac3-f045c7b6a1c4-scripts\") on node \"crc\" DevicePath \"\""
Jan 28 19:00:22 crc kubenswrapper[4721]: I0128 19:00:22.617382 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wm5ll\" (UniqueName: \"kubernetes.io/projected/f7fc7453-7e1b-4e3f-bac3-f045c7b6a1c4-kube-api-access-wm5ll\") on node \"crc\" DevicePath \"\""
Jan 28 19:00:22 crc kubenswrapper[4721]: I0128 19:00:22.617455 4721 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/projected/f7fc7453-7e1b-4e3f-bac3-f045c7b6a1c4-certs\") on node \"crc\" DevicePath \"\""
Jan 28 19:00:22 crc kubenswrapper[4721]: I0128 19:00:22.617532 4721 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f7fc7453-7e1b-4e3f-bac3-f045c7b6a1c4-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 28 19:00:22 crc kubenswrapper[4721]: I0128 19:00:22.689230 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-85f64749dc-zg2ch"]
Jan 28 19:00:22 crc kubenswrapper[4721]: E0128 19:00:22.689818 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f7fc7453-7e1b-4e3f-bac3-f045c7b6a1c4" containerName="cloudkitty-storageinit"
Jan 28 19:00:22 crc kubenswrapper[4721]: I0128 19:00:22.689846 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7fc7453-7e1b-4e3f-bac3-f045c7b6a1c4" containerName="cloudkitty-storageinit"
Jan 28 19:00:22 crc kubenswrapper[4721]: I0128 19:00:22.690095 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="f7fc7453-7e1b-4e3f-bac3-f045c7b6a1c4" containerName="cloudkitty-storageinit"
Jan 28 19:00:22 crc kubenswrapper[4721]: I0128 19:00:22.691772 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-85f64749dc-zg2ch"
Jan 28 19:00:22 crc kubenswrapper[4721]: I0128 19:00:22.719896 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-85f64749dc-zg2ch"]
Jan 28 19:00:22 crc kubenswrapper[4721]: I0128 19:00:22.824289 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a2de6f20-e053-456e-860d-c85c1ae57874-dns-swift-storage-0\") pod \"dnsmasq-dns-85f64749dc-zg2ch\" (UID: \"a2de6f20-e053-456e-860d-c85c1ae57874\") " pod="openstack/dnsmasq-dns-85f64749dc-zg2ch"
Jan 28 19:00:22 crc kubenswrapper[4721]: I0128 19:00:22.824389 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a2de6f20-e053-456e-860d-c85c1ae57874-ovsdbserver-sb\") pod \"dnsmasq-dns-85f64749dc-zg2ch\" (UID: \"a2de6f20-e053-456e-860d-c85c1ae57874\") " pod="openstack/dnsmasq-dns-85f64749dc-zg2ch"
Jan 28 19:00:22 crc kubenswrapper[4721]: I0128 19:00:22.824487 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a2de6f20-e053-456e-860d-c85c1ae57874-dns-svc\") pod \"dnsmasq-dns-85f64749dc-zg2ch\" (UID: \"a2de6f20-e053-456e-860d-c85c1ae57874\") " pod="openstack/dnsmasq-dns-85f64749dc-zg2ch"
Jan 28 19:00:22 crc kubenswrapper[4721]: I0128 19:00:22.824516 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/a2de6f20-e053-456e-860d-c85c1ae57874-openstack-edpm-ipam\") pod \"dnsmasq-dns-85f64749dc-zg2ch\" (UID: \"a2de6f20-e053-456e-860d-c85c1ae57874\") " pod="openstack/dnsmasq-dns-85f64749dc-zg2ch"
Jan 28 19:00:22 crc kubenswrapper[4721]: I0128 19:00:22.824583 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a2de6f20-e053-456e-860d-c85c1ae57874-config\") pod \"dnsmasq-dns-85f64749dc-zg2ch\" (UID: \"a2de6f20-e053-456e-860d-c85c1ae57874\") " pod="openstack/dnsmasq-dns-85f64749dc-zg2ch"
Jan 28 19:00:22 crc kubenswrapper[4721]: I0128 19:00:22.824810 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a2de6f20-e053-456e-860d-c85c1ae57874-ovsdbserver-nb\") pod \"dnsmasq-dns-85f64749dc-zg2ch\" (UID: \"a2de6f20-e053-456e-860d-c85c1ae57874\") " pod="openstack/dnsmasq-dns-85f64749dc-zg2ch"
Jan 28 19:00:22 crc kubenswrapper[4721]: I0128 19:00:22.824864 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fp67z\" (UniqueName: \"kubernetes.io/projected/a2de6f20-e053-456e-860d-c85c1ae57874-kube-api-access-fp67z\") pod \"dnsmasq-dns-85f64749dc-zg2ch\" (UID: \"a2de6f20-e053-456e-860d-c85c1ae57874\") " pod="openstack/dnsmasq-dns-85f64749dc-zg2ch"
Jan 28 19:00:22 crc kubenswrapper[4721]: I0128 19:00:22.931005 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/a2de6f20-e053-456e-860d-c85c1ae57874-openstack-edpm-ipam\") pod \"dnsmasq-dns-85f64749dc-zg2ch\" (UID: \"a2de6f20-e053-456e-860d-c85c1ae57874\") " pod="openstack/dnsmasq-dns-85f64749dc-zg2ch"
Jan 28 19:00:22 crc kubenswrapper[4721]: I0128 19:00:22.931066 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a2de6f20-e053-456e-860d-c85c1ae57874-dns-svc\") pod \"dnsmasq-dns-85f64749dc-zg2ch\" (UID: \"a2de6f20-e053-456e-860d-c85c1ae57874\") " pod="openstack/dnsmasq-dns-85f64749dc-zg2ch"
Jan 28 19:00:22 crc kubenswrapper[4721]: I0128 19:00:22.931127 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a2de6f20-e053-456e-860d-c85c1ae57874-config\") pod \"dnsmasq-dns-85f64749dc-zg2ch\" (UID: \"a2de6f20-e053-456e-860d-c85c1ae57874\") " pod="openstack/dnsmasq-dns-85f64749dc-zg2ch"
Jan 28 19:00:22 crc kubenswrapper[4721]: I0128 19:00:22.931305 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a2de6f20-e053-456e-860d-c85c1ae57874-ovsdbserver-nb\") pod \"dnsmasq-dns-85f64749dc-zg2ch\" (UID: \"a2de6f20-e053-456e-860d-c85c1ae57874\") " pod="openstack/dnsmasq-dns-85f64749dc-zg2ch"
Jan 28 19:00:22 crc kubenswrapper[4721]: I0128 19:00:22.931343 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fp67z\" (UniqueName: \"kubernetes.io/projected/a2de6f20-e053-456e-860d-c85c1ae57874-kube-api-access-fp67z\") pod \"dnsmasq-dns-85f64749dc-zg2ch\" (UID: \"a2de6f20-e053-456e-860d-c85c1ae57874\") " pod="openstack/dnsmasq-dns-85f64749dc-zg2ch"
Jan 28 19:00:22 crc kubenswrapper[4721]: I0128 19:00:22.931376 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a2de6f20-e053-456e-860d-c85c1ae57874-dns-swift-storage-0\") pod \"dnsmasq-dns-85f64749dc-zg2ch\" (UID: \"a2de6f20-e053-456e-860d-c85c1ae57874\") " pod="openstack/dnsmasq-dns-85f64749dc-zg2ch"
Jan 28 19:00:22 crc kubenswrapper[4721]: I0128 19:00:22.931413 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a2de6f20-e053-456e-860d-c85c1ae57874-ovsdbserver-sb\") pod \"dnsmasq-dns-85f64749dc-zg2ch\" (UID: \"a2de6f20-e053-456e-860d-c85c1ae57874\") " pod="openstack/dnsmasq-dns-85f64749dc-zg2ch"
Jan 28 19:00:22 crc kubenswrapper[4721]: I0128 19:00:22.932505 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a2de6f20-e053-456e-860d-c85c1ae57874-dns-svc\") pod \"dnsmasq-dns-85f64749dc-zg2ch\" (UID: \"a2de6f20-e053-456e-860d-c85c1ae57874\") " pod="openstack/dnsmasq-dns-85f64749dc-zg2ch"
Jan 28 19:00:22 crc kubenswrapper[4721]: I0128 19:00:22.933348 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/a2de6f20-e053-456e-860d-c85c1ae57874-openstack-edpm-ipam\") pod \"dnsmasq-dns-85f64749dc-zg2ch\" (UID: \"a2de6f20-e053-456e-860d-c85c1ae57874\") " pod="openstack/dnsmasq-dns-85f64749dc-zg2ch"
Jan 28 19:00:22 crc kubenswrapper[4721]: I0128 19:00:22.933643 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a2de6f20-e053-456e-860d-c85c1ae57874-ovsdbserver-sb\") pod \"dnsmasq-dns-85f64749dc-zg2ch\" (UID: \"a2de6f20-e053-456e-860d-c85c1ae57874\") " pod="openstack/dnsmasq-dns-85f64749dc-zg2ch"
Jan 28 19:00:22 crc kubenswrapper[4721]: I0128 19:00:22.933983 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a2de6f20-e053-456e-860d-c85c1ae57874-config\") pod \"dnsmasq-dns-85f64749dc-zg2ch\" (UID: \"a2de6f20-e053-456e-860d-c85c1ae57874\") " pod="openstack/dnsmasq-dns-85f64749dc-zg2ch"
Jan 28 19:00:22 crc kubenswrapper[4721]: I0128 19:00:22.934011 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a2de6f20-e053-456e-860d-c85c1ae57874-ovsdbserver-nb\") pod \"dnsmasq-dns-85f64749dc-zg2ch\" (UID: \"a2de6f20-e053-456e-860d-c85c1ae57874\") " pod="openstack/dnsmasq-dns-85f64749dc-zg2ch"
Jan 28 19:00:22 crc kubenswrapper[4721]: I0128 19:00:22.937284 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a2de6f20-e053-456e-860d-c85c1ae57874-dns-swift-storage-0\") pod \"dnsmasq-dns-85f64749dc-zg2ch\" (UID: \"a2de6f20-e053-456e-860d-c85c1ae57874\") " pod="openstack/dnsmasq-dns-85f64749dc-zg2ch"
Jan 28 19:00:22 crc kubenswrapper[4721]: I0128 19:00:22.961581 4721 generic.go:334] "Generic (PLEG): container finished" podID="4bc30432-0868-448c-b124-8b9db2d2a6b2" containerID="64a92fda9552be03fcca0561239e0c782cdd2538b99c6270cae1e5419793eef2" exitCode=0
Jan 28 19:00:22 crc kubenswrapper[4721]: I0128 19:00:22.961666 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5fd9b586ff-mx67n" event={"ID":"4bc30432-0868-448c-b124-8b9db2d2a6b2","Type":"ContainerDied","Data":"64a92fda9552be03fcca0561239e0c782cdd2538b99c6270cae1e5419793eef2"}
Jan 28 19:00:22 crc kubenswrapper[4721]: I0128 19:00:22.961701 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5fd9b586ff-mx67n" event={"ID":"4bc30432-0868-448c-b124-8b9db2d2a6b2","Type":"ContainerDied","Data":"fd7756364455cfa898557a021743f7faa24986d8b67a7daaf0d8af72059547c4"}
Jan 28 19:00:22 crc kubenswrapper[4721]: I0128 19:00:22.961716 4721 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fd7756364455cfa898557a021743f7faa24986d8b67a7daaf0d8af72059547c4"
Jan 28 19:00:22 crc kubenswrapper[4721]: I0128 19:00:22.963063 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fp67z\" (UniqueName: \"kubernetes.io/projected/a2de6f20-e053-456e-860d-c85c1ae57874-kube-api-access-fp67z\") pod \"dnsmasq-dns-85f64749dc-zg2ch\" (UID: \"a2de6f20-e053-456e-860d-c85c1ae57874\") " pod="openstack/dnsmasq-dns-85f64749dc-zg2ch"
Jan 28 19:00:22 crc kubenswrapper[4721]: I0128 19:00:22.968695 4721 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-j86j7" podUID="71e5f7b3-cd41-40d9-ab6c-e90cff64e601" containerName="registry-server" containerID="cri-o://27cb43ac0d681f772672abee6a94028c2e378153ffea1ae5267f6796019f10e9" gracePeriod=2
Jan 28 19:00:22 crc kubenswrapper[4721]: I0128 19:00:22.969039 4721 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="openstack/cloudkitty-storageinit-pwsl7" Jan 28 19:00:22 crc kubenswrapper[4721]: I0128 19:00:22.969548 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-storageinit-pwsl7" event={"ID":"f7fc7453-7e1b-4e3f-bac3-f045c7b6a1c4","Type":"ContainerDied","Data":"dcfb3f6a895e9d43ff073b858a6cf7434db2d6ab98e68ed6ef80e98350f2acde"} Jan 28 19:00:22 crc kubenswrapper[4721]: I0128 19:00:22.969590 4721 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dcfb3f6a895e9d43ff073b858a6cf7434db2d6ab98e68ed6ef80e98350f2acde" Jan 28 19:00:22 crc kubenswrapper[4721]: I0128 19:00:22.999890 4721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5fd9b586ff-mx67n" Jan 28 19:00:23 crc kubenswrapper[4721]: I0128 19:00:23.030660 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-85f64749dc-zg2ch" Jan 28 19:00:23 crc kubenswrapper[4721]: I0128 19:00:23.140334 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wldhq\" (UniqueName: \"kubernetes.io/projected/4bc30432-0868-448c-b124-8b9db2d2a6b2-kube-api-access-wldhq\") pod \"4bc30432-0868-448c-b124-8b9db2d2a6b2\" (UID: \"4bc30432-0868-448c-b124-8b9db2d2a6b2\") " Jan 28 19:00:23 crc kubenswrapper[4721]: I0128 19:00:23.140489 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4bc30432-0868-448c-b124-8b9db2d2a6b2-dns-svc\") pod \"4bc30432-0868-448c-b124-8b9db2d2a6b2\" (UID: \"4bc30432-0868-448c-b124-8b9db2d2a6b2\") " Jan 28 19:00:23 crc kubenswrapper[4721]: I0128 19:00:23.140540 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4bc30432-0868-448c-b124-8b9db2d2a6b2-ovsdbserver-sb\") pod \"4bc30432-0868-448c-b124-8b9db2d2a6b2\" (UID: \"4bc30432-0868-448c-b124-8b9db2d2a6b2\") " Jan 28 19:00:23 crc kubenswrapper[4721]: I0128 19:00:23.145653 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4bc30432-0868-448c-b124-8b9db2d2a6b2-ovsdbserver-nb\") pod \"4bc30432-0868-448c-b124-8b9db2d2a6b2\" (UID: \"4bc30432-0868-448c-b124-8b9db2d2a6b2\") " Jan 28 19:00:23 crc kubenswrapper[4721]: I0128 19:00:23.145706 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4bc30432-0868-448c-b124-8b9db2d2a6b2-config\") pod \"4bc30432-0868-448c-b124-8b9db2d2a6b2\" (UID: \"4bc30432-0868-448c-b124-8b9db2d2a6b2\") " Jan 28 19:00:23 crc kubenswrapper[4721]: I0128 19:00:23.145867 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4bc30432-0868-448c-b124-8b9db2d2a6b2-dns-swift-storage-0\") pod \"4bc30432-0868-448c-b124-8b9db2d2a6b2\" (UID: \"4bc30432-0868-448c-b124-8b9db2d2a6b2\") " Jan 28 19:00:23 crc kubenswrapper[4721]: I0128 19:00:23.152874 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bc30432-0868-448c-b124-8b9db2d2a6b2-kube-api-access-wldhq" (OuterVolumeSpecName: "kube-api-access-wldhq") pod "4bc30432-0868-448c-b124-8b9db2d2a6b2" (UID: "4bc30432-0868-448c-b124-8b9db2d2a6b2"). InnerVolumeSpecName "kube-api-access-wldhq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 19:00:23 crc kubenswrapper[4721]: I0128 19:00:23.220913 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cloudkitty-proc-0"] Jan 28 19:00:23 crc kubenswrapper[4721]: I0128 19:00:23.221320 4721 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cloudkitty-proc-0" podUID="8f4cfc8a-e4d7-4579-b2cd-303abce60b03" containerName="cloudkitty-proc" containerID="cri-o://1f663c893d71ca662dd01efe362379a0bc78bb39e83a2b263110dc5c2ac0cbc9" gracePeriod=30 Jan 28 19:00:23 crc kubenswrapper[4721]: I0128 19:00:23.248249 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cloudkitty-api-0"] Jan 28 19:00:23 crc kubenswrapper[4721]: I0128 19:00:23.248813 4721 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cloudkitty-api-0" podUID="73d83a88-618a-4208-aaa8-e209c0d34b1d" containerName="cloudkitty-api-log" containerID="cri-o://df25b2d57cc0161105ae6bcc96fc2e8c0455ecf5c000f5b78c47a2ffc805591e" gracePeriod=30 Jan 28 19:00:23 crc kubenswrapper[4721]: I0128 19:00:23.249139 4721 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cloudkitty-api-0" podUID="73d83a88-618a-4208-aaa8-e209c0d34b1d" containerName="cloudkitty-api" containerID="cri-o://2740c8bb07a2969fd701089a09fd1fe230cd0d121f6859930d10da8f91fe65c5" gracePeriod=30 Jan 28 19:00:23 crc kubenswrapper[4721]: I0128 19:00:23.250302 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wldhq\" (UniqueName: \"kubernetes.io/projected/4bc30432-0868-448c-b124-8b9db2d2a6b2-kube-api-access-wldhq\") on node \"crc\" DevicePath \"\"" Jan 28 19:00:23 crc kubenswrapper[4721]: I0128 19:00:23.250566 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bc30432-0868-448c-b124-8b9db2d2a6b2-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "4bc30432-0868-448c-b124-8b9db2d2a6b2" (UID: "4bc30432-0868-448c-b124-8b9db2d2a6b2"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 19:00:23 crc kubenswrapper[4721]: I0128 19:00:23.260940 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bc30432-0868-448c-b124-8b9db2d2a6b2-config" (OuterVolumeSpecName: "config") pod "4bc30432-0868-448c-b124-8b9db2d2a6b2" (UID: "4bc30432-0868-448c-b124-8b9db2d2a6b2"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 19:00:23 crc kubenswrapper[4721]: I0128 19:00:23.266495 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bc30432-0868-448c-b124-8b9db2d2a6b2-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "4bc30432-0868-448c-b124-8b9db2d2a6b2" (UID: "4bc30432-0868-448c-b124-8b9db2d2a6b2"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 19:00:23 crc kubenswrapper[4721]: I0128 19:00:23.301480 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bc30432-0868-448c-b124-8b9db2d2a6b2-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "4bc30432-0868-448c-b124-8b9db2d2a6b2" (UID: "4bc30432-0868-448c-b124-8b9db2d2a6b2"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 19:00:23 crc kubenswrapper[4721]: I0128 19:00:23.328440 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bc30432-0868-448c-b124-8b9db2d2a6b2-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "4bc30432-0868-448c-b124-8b9db2d2a6b2" (UID: "4bc30432-0868-448c-b124-8b9db2d2a6b2"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 19:00:23 crc kubenswrapper[4721]: I0128 19:00:23.368609 4721 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4bc30432-0868-448c-b124-8b9db2d2a6b2-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 28 19:00:23 crc kubenswrapper[4721]: I0128 19:00:23.368660 4721 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4bc30432-0868-448c-b124-8b9db2d2a6b2-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 28 19:00:23 crc kubenswrapper[4721]: I0128 19:00:23.368672 4721 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4bc30432-0868-448c-b124-8b9db2d2a6b2-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 28 19:00:23 crc kubenswrapper[4721]: I0128 19:00:23.368682 4721 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4bc30432-0868-448c-b124-8b9db2d2a6b2-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 28 19:00:23 crc kubenswrapper[4721]: I0128 19:00:23.368691 4721 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4bc30432-0868-448c-b124-8b9db2d2a6b2-config\") on node \"crc\" DevicePath \"\"" Jan 28 19:00:23 crc kubenswrapper[4721]: I0128 19:00:23.791780 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-85f64749dc-zg2ch"] Jan 28 19:00:23 crc kubenswrapper[4721]: I0128 19:00:23.877091 4721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-j86j7" Jan 28 19:00:23 crc kubenswrapper[4721]: I0128 19:00:23.993108 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/71e5f7b3-cd41-40d9-ab6c-e90cff64e601-utilities\") pod \"71e5f7b3-cd41-40d9-ab6c-e90cff64e601\" (UID: \"71e5f7b3-cd41-40d9-ab6c-e90cff64e601\") " Jan 28 19:00:23 crc kubenswrapper[4721]: I0128 19:00:23.993419 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/71e5f7b3-cd41-40d9-ab6c-e90cff64e601-catalog-content\") pod \"71e5f7b3-cd41-40d9-ab6c-e90cff64e601\" (UID: \"71e5f7b3-cd41-40d9-ab6c-e90cff64e601\") " Jan 28 19:00:23 crc kubenswrapper[4721]: I0128 19:00:23.993510 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qxml4\" (UniqueName: \"kubernetes.io/projected/71e5f7b3-cd41-40d9-ab6c-e90cff64e601-kube-api-access-qxml4\") pod \"71e5f7b3-cd41-40d9-ab6c-e90cff64e601\" (UID: \"71e5f7b3-cd41-40d9-ab6c-e90cff64e601\") " Jan 28 19:00:24 crc kubenswrapper[4721]: I0128 19:00:24.003637 4721 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-j86j7" Jan 28 19:00:24 crc kubenswrapper[4721]: I0128 19:00:24.003856 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-j86j7" event={"ID":"71e5f7b3-cd41-40d9-ab6c-e90cff64e601","Type":"ContainerDied","Data":"27cb43ac0d681f772672abee6a94028c2e378153ffea1ae5267f6796019f10e9"} Jan 28 19:00:24 crc kubenswrapper[4721]: I0128 19:00:24.004514 4721 scope.go:117] "RemoveContainer" containerID="27cb43ac0d681f772672abee6a94028c2e378153ffea1ae5267f6796019f10e9" Jan 28 19:00:24 crc kubenswrapper[4721]: I0128 19:00:24.006258 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/71e5f7b3-cd41-40d9-ab6c-e90cff64e601-kube-api-access-qxml4" (OuterVolumeSpecName: "kube-api-access-qxml4") pod "71e5f7b3-cd41-40d9-ab6c-e90cff64e601" (UID: "71e5f7b3-cd41-40d9-ab6c-e90cff64e601"). InnerVolumeSpecName "kube-api-access-qxml4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 19:00:24 crc kubenswrapper[4721]: I0128 19:00:24.006303 4721 generic.go:334] "Generic (PLEG): container finished" podID="71e5f7b3-cd41-40d9-ab6c-e90cff64e601" containerID="27cb43ac0d681f772672abee6a94028c2e378153ffea1ae5267f6796019f10e9" exitCode=0 Jan 28 19:00:24 crc kubenswrapper[4721]: I0128 19:00:24.006408 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-j86j7" event={"ID":"71e5f7b3-cd41-40d9-ab6c-e90cff64e601","Type":"ContainerDied","Data":"2c0f16b852949ea8ad5532d585821f02743e8df05db99c04fe94b6e6091339ae"} Jan 28 19:00:24 crc kubenswrapper[4721]: I0128 19:00:24.013666 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/71e5f7b3-cd41-40d9-ab6c-e90cff64e601-utilities" (OuterVolumeSpecName: "utilities") pod "71e5f7b3-cd41-40d9-ab6c-e90cff64e601" (UID: "71e5f7b3-cd41-40d9-ab6c-e90cff64e601"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 19:00:24 crc kubenswrapper[4721]: I0128 19:00:24.023639 4721 generic.go:334] "Generic (PLEG): container finished" podID="73d83a88-618a-4208-aaa8-e209c0d34b1d" containerID="df25b2d57cc0161105ae6bcc96fc2e8c0455ecf5c000f5b78c47a2ffc805591e" exitCode=143 Jan 28 19:00:24 crc kubenswrapper[4721]: I0128 19:00:24.023726 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-api-0" event={"ID":"73d83a88-618a-4208-aaa8-e209c0d34b1d","Type":"ContainerDied","Data":"df25b2d57cc0161105ae6bcc96fc2e8c0455ecf5c000f5b78c47a2ffc805591e"} Jan 28 19:00:24 crc kubenswrapper[4721]: I0128 19:00:24.026905 4721 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5fd9b586ff-mx67n" Jan 28 19:00:24 crc kubenswrapper[4721]: I0128 19:00:24.027668 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85f64749dc-zg2ch" event={"ID":"a2de6f20-e053-456e-860d-c85c1ae57874","Type":"ContainerStarted","Data":"eb1c81329e06ecf6208011e1d12f7e27588782001d2b080bfd85e2417649f124"} Jan 28 19:00:24 crc kubenswrapper[4721]: I0128 19:00:24.065581 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5fd9b586ff-mx67n"] Jan 28 19:00:24 crc kubenswrapper[4721]: I0128 19:00:24.080405 4721 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5fd9b586ff-mx67n"] Jan 28 19:00:24 crc kubenswrapper[4721]: I0128 19:00:24.091578 4721 scope.go:117] "RemoveContainer" containerID="5b4b11168436d060da26b67c42623f3e80b73abdfdca1743e2f279ef9d6f2233" Jan 28 19:00:24 crc kubenswrapper[4721]: I0128 19:00:24.094157 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/71e5f7b3-cd41-40d9-ab6c-e90cff64e601-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "71e5f7b3-cd41-40d9-ab6c-e90cff64e601" (UID: "71e5f7b3-cd41-40d9-ab6c-e90cff64e601"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 19:00:24 crc kubenswrapper[4721]: I0128 19:00:24.098633 4721 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/71e5f7b3-cd41-40d9-ab6c-e90cff64e601-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 19:00:24 crc kubenswrapper[4721]: I0128 19:00:24.098670 4721 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/71e5f7b3-cd41-40d9-ab6c-e90cff64e601-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 19:00:24 crc kubenswrapper[4721]: I0128 19:00:24.098686 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qxml4\" (UniqueName: \"kubernetes.io/projected/71e5f7b3-cd41-40d9-ab6c-e90cff64e601-kube-api-access-qxml4\") on node \"crc\" DevicePath \"\"" Jan 28 19:00:24 crc kubenswrapper[4721]: I0128 19:00:24.218097 4721 scope.go:117] "RemoveContainer" containerID="5421fc3563c619f39c83425982441fd28a7037746b9b96e0bff43b3fcc9edd07" Jan 28 19:00:24 crc kubenswrapper[4721]: I0128 19:00:24.253969 4721 scope.go:117] "RemoveContainer" containerID="27cb43ac0d681f772672abee6a94028c2e378153ffea1ae5267f6796019f10e9" Jan 28 19:00:24 crc kubenswrapper[4721]: E0128 19:00:24.254553 4721 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"27cb43ac0d681f772672abee6a94028c2e378153ffea1ae5267f6796019f10e9\": container with ID starting with 27cb43ac0d681f772672abee6a94028c2e378153ffea1ae5267f6796019f10e9 not found: ID does not exist" containerID="27cb43ac0d681f772672abee6a94028c2e378153ffea1ae5267f6796019f10e9" Jan 28 19:00:24 crc kubenswrapper[4721]: I0128 19:00:24.254583 4721 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"27cb43ac0d681f772672abee6a94028c2e378153ffea1ae5267f6796019f10e9"} err="failed to get container status \"27cb43ac0d681f772672abee6a94028c2e378153ffea1ae5267f6796019f10e9\": rpc error: code = NotFound desc = could not find container \"27cb43ac0d681f772672abee6a94028c2e378153ffea1ae5267f6796019f10e9\": container with ID starting with 27cb43ac0d681f772672abee6a94028c2e378153ffea1ae5267f6796019f10e9 not found: ID does 
not exist" Jan 28 19:00:24 crc kubenswrapper[4721]: I0128 19:00:24.254605 4721 scope.go:117] "RemoveContainer" containerID="5b4b11168436d060da26b67c42623f3e80b73abdfdca1743e2f279ef9d6f2233" Jan 28 19:00:24 crc kubenswrapper[4721]: E0128 19:00:24.255043 4721 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5b4b11168436d060da26b67c42623f3e80b73abdfdca1743e2f279ef9d6f2233\": container with ID starting with 5b4b11168436d060da26b67c42623f3e80b73abdfdca1743e2f279ef9d6f2233 not found: ID does not exist" containerID="5b4b11168436d060da26b67c42623f3e80b73abdfdca1743e2f279ef9d6f2233" Jan 28 19:00:24 crc kubenswrapper[4721]: I0128 19:00:24.255097 4721 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5b4b11168436d060da26b67c42623f3e80b73abdfdca1743e2f279ef9d6f2233"} err="failed to get container status \"5b4b11168436d060da26b67c42623f3e80b73abdfdca1743e2f279ef9d6f2233\": rpc error: code = NotFound desc = could not find container \"5b4b11168436d060da26b67c42623f3e80b73abdfdca1743e2f279ef9d6f2233\": container with ID starting with 5b4b11168436d060da26b67c42623f3e80b73abdfdca1743e2f279ef9d6f2233 not found: ID does not exist" Jan 28 19:00:24 crc kubenswrapper[4721]: I0128 19:00:24.255135 4721 scope.go:117] "RemoveContainer" containerID="5421fc3563c619f39c83425982441fd28a7037746b9b96e0bff43b3fcc9edd07" Jan 28 19:00:24 crc kubenswrapper[4721]: E0128 19:00:24.256817 4721 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5421fc3563c619f39c83425982441fd28a7037746b9b96e0bff43b3fcc9edd07\": container with ID starting with 5421fc3563c619f39c83425982441fd28a7037746b9b96e0bff43b3fcc9edd07 not found: ID does not exist" containerID="5421fc3563c619f39c83425982441fd28a7037746b9b96e0bff43b3fcc9edd07" Jan 28 19:00:24 crc kubenswrapper[4721]: I0128 19:00:24.256877 4721 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5421fc3563c619f39c83425982441fd28a7037746b9b96e0bff43b3fcc9edd07"} err="failed to get container status \"5421fc3563c619f39c83425982441fd28a7037746b9b96e0bff43b3fcc9edd07\": rpc error: code = NotFound desc = could not find container \"5421fc3563c619f39c83425982441fd28a7037746b9b96e0bff43b3fcc9edd07\": container with ID starting with 5421fc3563c619f39c83425982441fd28a7037746b9b96e0bff43b3fcc9edd07 not found: ID does not exist" Jan 28 19:00:24 crc kubenswrapper[4721]: I0128 19:00:24.398923 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-j86j7"] Jan 28 19:00:24 crc kubenswrapper[4721]: I0128 19:00:24.412267 4721 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-j86j7"] Jan 28 19:00:24 crc kubenswrapper[4721]: I0128 19:00:24.684461 4721 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cloudkitty-proc-0" Jan 28 19:00:24 crc kubenswrapper[4721]: I0128 19:00:24.814876 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8f4cfc8a-e4d7-4579-b2cd-303abce60b03-config-data\") pod \"8f4cfc8a-e4d7-4579-b2cd-303abce60b03\" (UID: \"8f4cfc8a-e4d7-4579-b2cd-303abce60b03\") " Jan 28 19:00:24 crc kubenswrapper[4721]: I0128 19:00:24.814979 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2k55n\" (UniqueName: \"kubernetes.io/projected/8f4cfc8a-e4d7-4579-b2cd-303abce60b03-kube-api-access-2k55n\") pod \"8f4cfc8a-e4d7-4579-b2cd-303abce60b03\" (UID: \"8f4cfc8a-e4d7-4579-b2cd-303abce60b03\") " Jan 28 19:00:24 crc kubenswrapper[4721]: I0128 19:00:24.815226 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f4cfc8a-e4d7-4579-b2cd-303abce60b03-combined-ca-bundle\") pod \"8f4cfc8a-e4d7-4579-b2cd-303abce60b03\" (UID: \"8f4cfc8a-e4d7-4579-b2cd-303abce60b03\") " Jan 28 19:00:24 crc kubenswrapper[4721]: I0128 19:00:24.815276 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/projected/8f4cfc8a-e4d7-4579-b2cd-303abce60b03-certs\") pod \"8f4cfc8a-e4d7-4579-b2cd-303abce60b03\" (UID: \"8f4cfc8a-e4d7-4579-b2cd-303abce60b03\") " Jan 28 19:00:24 crc kubenswrapper[4721]: I0128 19:00:24.815327 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8f4cfc8a-e4d7-4579-b2cd-303abce60b03-scripts\") pod \"8f4cfc8a-e4d7-4579-b2cd-303abce60b03\" (UID: \"8f4cfc8a-e4d7-4579-b2cd-303abce60b03\") " Jan 28 19:00:24 crc kubenswrapper[4721]: I0128 19:00:24.815492 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8f4cfc8a-e4d7-4579-b2cd-303abce60b03-config-data-custom\") pod \"8f4cfc8a-e4d7-4579-b2cd-303abce60b03\" (UID: \"8f4cfc8a-e4d7-4579-b2cd-303abce60b03\") " Jan 28 19:00:24 crc kubenswrapper[4721]: I0128 19:00:24.821758 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f4cfc8a-e4d7-4579-b2cd-303abce60b03-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "8f4cfc8a-e4d7-4579-b2cd-303abce60b03" (UID: "8f4cfc8a-e4d7-4579-b2cd-303abce60b03"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 19:00:24 crc kubenswrapper[4721]: I0128 19:00:24.823463 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f4cfc8a-e4d7-4579-b2cd-303abce60b03-kube-api-access-2k55n" (OuterVolumeSpecName: "kube-api-access-2k55n") pod "8f4cfc8a-e4d7-4579-b2cd-303abce60b03" (UID: "8f4cfc8a-e4d7-4579-b2cd-303abce60b03"). InnerVolumeSpecName "kube-api-access-2k55n". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 19:00:24 crc kubenswrapper[4721]: I0128 19:00:24.825536 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f4cfc8a-e4d7-4579-b2cd-303abce60b03-certs" (OuterVolumeSpecName: "certs") pod "8f4cfc8a-e4d7-4579-b2cd-303abce60b03" (UID: "8f4cfc8a-e4d7-4579-b2cd-303abce60b03"). InnerVolumeSpecName "certs". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 19:00:24 crc kubenswrapper[4721]: I0128 19:00:24.830819 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f4cfc8a-e4d7-4579-b2cd-303abce60b03-scripts" (OuterVolumeSpecName: "scripts") pod "8f4cfc8a-e4d7-4579-b2cd-303abce60b03" (UID: "8f4cfc8a-e4d7-4579-b2cd-303abce60b03"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 19:00:24 crc kubenswrapper[4721]: I0128 19:00:24.850597 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f4cfc8a-e4d7-4579-b2cd-303abce60b03-config-data" (OuterVolumeSpecName: "config-data") pod "8f4cfc8a-e4d7-4579-b2cd-303abce60b03" (UID: "8f4cfc8a-e4d7-4579-b2cd-303abce60b03"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 19:00:24 crc kubenswrapper[4721]: I0128 19:00:24.853425 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f4cfc8a-e4d7-4579-b2cd-303abce60b03-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8f4cfc8a-e4d7-4579-b2cd-303abce60b03" (UID: "8f4cfc8a-e4d7-4579-b2cd-303abce60b03"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 19:00:24 crc kubenswrapper[4721]: I0128 19:00:24.918366 4721 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f4cfc8a-e4d7-4579-b2cd-303abce60b03-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 19:00:24 crc kubenswrapper[4721]: I0128 19:00:24.918409 4721 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/projected/8f4cfc8a-e4d7-4579-b2cd-303abce60b03-certs\") on node \"crc\" DevicePath \"\"" Jan 28 19:00:24 crc kubenswrapper[4721]: I0128 19:00:24.918419 4721 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8f4cfc8a-e4d7-4579-b2cd-303abce60b03-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 19:00:24 crc kubenswrapper[4721]: I0128 19:00:24.918429 4721 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8f4cfc8a-e4d7-4579-b2cd-303abce60b03-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 28 19:00:24 crc kubenswrapper[4721]: I0128 19:00:24.918439 4721 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8f4cfc8a-e4d7-4579-b2cd-303abce60b03-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 19:00:24 crc kubenswrapper[4721]: I0128 19:00:24.918451 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2k55n\" (UniqueName: \"kubernetes.io/projected/8f4cfc8a-e4d7-4579-b2cd-303abce60b03-kube-api-access-2k55n\") on node \"crc\" DevicePath \"\"" Jan 28 19:00:25 crc kubenswrapper[4721]: I0128 19:00:25.044443 4721 generic.go:334] "Generic (PLEG): container finished" podID="8f4cfc8a-e4d7-4579-b2cd-303abce60b03" containerID="1f663c893d71ca662dd01efe362379a0bc78bb39e83a2b263110dc5c2ac0cbc9" exitCode=0 Jan 28 19:00:25 crc kubenswrapper[4721]: I0128 19:00:25.044503 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-proc-0" event={"ID":"8f4cfc8a-e4d7-4579-b2cd-303abce60b03","Type":"ContainerDied","Data":"1f663c893d71ca662dd01efe362379a0bc78bb39e83a2b263110dc5c2ac0cbc9"} Jan 28 19:00:25 crc kubenswrapper[4721]: I0128 19:00:25.044827 
4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-proc-0" event={"ID":"8f4cfc8a-e4d7-4579-b2cd-303abce60b03","Type":"ContainerDied","Data":"c36d8c2f060c52f18bdcda029d1cd79660047a05f936b784d45a9b942be53c1d"} Jan 28 19:00:25 crc kubenswrapper[4721]: I0128 19:00:25.044532 4721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-proc-0" Jan 28 19:00:25 crc kubenswrapper[4721]: I0128 19:00:25.044868 4721 scope.go:117] "RemoveContainer" containerID="1f663c893d71ca662dd01efe362379a0bc78bb39e83a2b263110dc5c2ac0cbc9" Jan 28 19:00:25 crc kubenswrapper[4721]: I0128 19:00:25.048786 4721 generic.go:334] "Generic (PLEG): container finished" podID="73d83a88-618a-4208-aaa8-e209c0d34b1d" containerID="2740c8bb07a2969fd701089a09fd1fe230cd0d121f6859930d10da8f91fe65c5" exitCode=0 Jan 28 19:00:25 crc kubenswrapper[4721]: I0128 19:00:25.048849 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-api-0" event={"ID":"73d83a88-618a-4208-aaa8-e209c0d34b1d","Type":"ContainerDied","Data":"2740c8bb07a2969fd701089a09fd1fe230cd0d121f6859930d10da8f91fe65c5"} Jan 28 19:00:25 crc kubenswrapper[4721]: I0128 19:00:25.051544 4721 generic.go:334] "Generic (PLEG): container finished" podID="a2de6f20-e053-456e-860d-c85c1ae57874" containerID="3cc029803e37bb5c38e3dd284eafe6652c63b6de3bd6b70b40cba10275b90a7d" exitCode=0 Jan 28 19:00:25 crc kubenswrapper[4721]: I0128 19:00:25.051605 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85f64749dc-zg2ch" event={"ID":"a2de6f20-e053-456e-860d-c85c1ae57874","Type":"ContainerDied","Data":"3cc029803e37bb5c38e3dd284eafe6652c63b6de3bd6b70b40cba10275b90a7d"} Jan 28 19:00:25 crc kubenswrapper[4721]: I0128 19:00:25.077713 4721 scope.go:117] "RemoveContainer" containerID="1f663c893d71ca662dd01efe362379a0bc78bb39e83a2b263110dc5c2ac0cbc9" Jan 28 19:00:25 crc kubenswrapper[4721]: E0128 19:00:25.078850 4721 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1f663c893d71ca662dd01efe362379a0bc78bb39e83a2b263110dc5c2ac0cbc9\": container with ID starting with 1f663c893d71ca662dd01efe362379a0bc78bb39e83a2b263110dc5c2ac0cbc9 not found: ID does not exist" containerID="1f663c893d71ca662dd01efe362379a0bc78bb39e83a2b263110dc5c2ac0cbc9" Jan 28 19:00:25 crc kubenswrapper[4721]: I0128 19:00:25.078888 4721 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1f663c893d71ca662dd01efe362379a0bc78bb39e83a2b263110dc5c2ac0cbc9"} err="failed to get container status \"1f663c893d71ca662dd01efe362379a0bc78bb39e83a2b263110dc5c2ac0cbc9\": rpc error: code = NotFound desc = could not find container \"1f663c893d71ca662dd01efe362379a0bc78bb39e83a2b263110dc5c2ac0cbc9\": container with ID starting with 1f663c893d71ca662dd01efe362379a0bc78bb39e83a2b263110dc5c2ac0cbc9 not found: ID does not exist" Jan 28 19:00:25 crc kubenswrapper[4721]: I0128 19:00:25.247248 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cloudkitty-proc-0"] Jan 28 19:00:25 crc kubenswrapper[4721]: I0128 19:00:25.303258 4721 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cloudkitty-proc-0"] Jan 28 19:00:25 crc kubenswrapper[4721]: E0128 19:00:25.332106 4721 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: 
[\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8f4cfc8a_e4d7_4579_b2cd_303abce60b03.slice/crio-c36d8c2f060c52f18bdcda029d1cd79660047a05f936b784d45a9b942be53c1d\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8f4cfc8a_e4d7_4579_b2cd_303abce60b03.slice\": RecentStats: unable to find data in memory cache]" Jan 28 19:00:25 crc kubenswrapper[4721]: I0128 19:00:25.340087 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cloudkitty-proc-0"] Jan 28 19:00:25 crc kubenswrapper[4721]: E0128 19:00:25.340831 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="71e5f7b3-cd41-40d9-ab6c-e90cff64e601" containerName="extract-content" Jan 28 19:00:25 crc kubenswrapper[4721]: I0128 19:00:25.340853 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="71e5f7b3-cd41-40d9-ab6c-e90cff64e601" containerName="extract-content" Jan 28 19:00:25 crc kubenswrapper[4721]: E0128 19:00:25.340880 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4bc30432-0868-448c-b124-8b9db2d2a6b2" containerName="dnsmasq-dns" Jan 28 19:00:25 crc kubenswrapper[4721]: I0128 19:00:25.340886 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="4bc30432-0868-448c-b124-8b9db2d2a6b2" containerName="dnsmasq-dns" Jan 28 19:00:25 crc kubenswrapper[4721]: E0128 19:00:25.340903 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4bc30432-0868-448c-b124-8b9db2d2a6b2" containerName="init" Jan 28 19:00:25 crc kubenswrapper[4721]: I0128 19:00:25.340909 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="4bc30432-0868-448c-b124-8b9db2d2a6b2" containerName="init" Jan 28 19:00:25 crc kubenswrapper[4721]: E0128 19:00:25.340919 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="71e5f7b3-cd41-40d9-ab6c-e90cff64e601" containerName="registry-server" Jan 28 19:00:25 crc kubenswrapper[4721]: I0128 19:00:25.340924 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="71e5f7b3-cd41-40d9-ab6c-e90cff64e601" containerName="registry-server" Jan 28 19:00:25 crc kubenswrapper[4721]: E0128 19:00:25.340945 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="71e5f7b3-cd41-40d9-ab6c-e90cff64e601" containerName="extract-utilities" Jan 28 19:00:25 crc kubenswrapper[4721]: I0128 19:00:25.340951 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="71e5f7b3-cd41-40d9-ab6c-e90cff64e601" containerName="extract-utilities" Jan 28 19:00:25 crc kubenswrapper[4721]: E0128 19:00:25.340967 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8f4cfc8a-e4d7-4579-b2cd-303abce60b03" containerName="cloudkitty-proc" Jan 28 19:00:25 crc kubenswrapper[4721]: I0128 19:00:25.340972 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="8f4cfc8a-e4d7-4579-b2cd-303abce60b03" containerName="cloudkitty-proc" Jan 28 19:00:25 crc kubenswrapper[4721]: I0128 19:00:25.341162 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="4bc30432-0868-448c-b124-8b9db2d2a6b2" containerName="dnsmasq-dns" Jan 28 19:00:25 crc kubenswrapper[4721]: I0128 19:00:25.341192 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="8f4cfc8a-e4d7-4579-b2cd-303abce60b03" containerName="cloudkitty-proc" Jan 28 19:00:25 crc kubenswrapper[4721]: I0128 19:00:25.341217 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="71e5f7b3-cd41-40d9-ab6c-e90cff64e601" containerName="registry-server" Jan 28 19:00:25 crc kubenswrapper[4721]: 
I0128 19:00:25.342122 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-proc-0" Jan 28 19:00:25 crc kubenswrapper[4721]: I0128 19:00:25.356332 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-proc-config-data" Jan 28 19:00:25 crc kubenswrapper[4721]: I0128 19:00:25.367647 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-proc-0"] Jan 28 19:00:25 crc kubenswrapper[4721]: I0128 19:00:25.439063 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/52682601-9d4b-4b45-a1e0-7143e9a31e7a-config-data-custom\") pod \"cloudkitty-proc-0\" (UID: \"52682601-9d4b-4b45-a1e0-7143e9a31e7a\") " pod="openstack/cloudkitty-proc-0" Jan 28 19:00:25 crc kubenswrapper[4721]: I0128 19:00:25.439135 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/52682601-9d4b-4b45-a1e0-7143e9a31e7a-config-data\") pod \"cloudkitty-proc-0\" (UID: \"52682601-9d4b-4b45-a1e0-7143e9a31e7a\") " pod="openstack/cloudkitty-proc-0" Jan 28 19:00:25 crc kubenswrapper[4721]: I0128 19:00:25.441709 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/52682601-9d4b-4b45-a1e0-7143e9a31e7a-scripts\") pod \"cloudkitty-proc-0\" (UID: \"52682601-9d4b-4b45-a1e0-7143e9a31e7a\") " pod="openstack/cloudkitty-proc-0" Jan 28 19:00:25 crc kubenswrapper[4721]: I0128 19:00:25.441814 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-btc97\" (UniqueName: \"kubernetes.io/projected/52682601-9d4b-4b45-a1e0-7143e9a31e7a-kube-api-access-btc97\") pod \"cloudkitty-proc-0\" (UID: \"52682601-9d4b-4b45-a1e0-7143e9a31e7a\") " pod="openstack/cloudkitty-proc-0" Jan 28 19:00:25 crc kubenswrapper[4721]: I0128 19:00:25.441876 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/52682601-9d4b-4b45-a1e0-7143e9a31e7a-combined-ca-bundle\") pod \"cloudkitty-proc-0\" (UID: \"52682601-9d4b-4b45-a1e0-7143e9a31e7a\") " pod="openstack/cloudkitty-proc-0" Jan 28 19:00:25 crc kubenswrapper[4721]: I0128 19:00:25.441908 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/projected/52682601-9d4b-4b45-a1e0-7143e9a31e7a-certs\") pod \"cloudkitty-proc-0\" (UID: \"52682601-9d4b-4b45-a1e0-7143e9a31e7a\") " pod="openstack/cloudkitty-proc-0" Jan 28 19:00:25 crc kubenswrapper[4721]: I0128 19:00:25.557113 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/52682601-9d4b-4b45-a1e0-7143e9a31e7a-config-data-custom\") pod \"cloudkitty-proc-0\" (UID: \"52682601-9d4b-4b45-a1e0-7143e9a31e7a\") " pod="openstack/cloudkitty-proc-0" Jan 28 19:00:25 crc kubenswrapper[4721]: I0128 19:00:25.557242 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/52682601-9d4b-4b45-a1e0-7143e9a31e7a-config-data\") pod \"cloudkitty-proc-0\" (UID: \"52682601-9d4b-4b45-a1e0-7143e9a31e7a\") " pod="openstack/cloudkitty-proc-0" Jan 28 19:00:25 crc kubenswrapper[4721]: I0128 19:00:25.557542 4721 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/52682601-9d4b-4b45-a1e0-7143e9a31e7a-scripts\") pod \"cloudkitty-proc-0\" (UID: \"52682601-9d4b-4b45-a1e0-7143e9a31e7a\") " pod="openstack/cloudkitty-proc-0" Jan 28 19:00:25 crc kubenswrapper[4721]: I0128 19:00:25.557615 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-btc97\" (UniqueName: \"kubernetes.io/projected/52682601-9d4b-4b45-a1e0-7143e9a31e7a-kube-api-access-btc97\") pod \"cloudkitty-proc-0\" (UID: \"52682601-9d4b-4b45-a1e0-7143e9a31e7a\") " pod="openstack/cloudkitty-proc-0" Jan 28 19:00:25 crc kubenswrapper[4721]: I0128 19:00:25.557668 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/52682601-9d4b-4b45-a1e0-7143e9a31e7a-combined-ca-bundle\") pod \"cloudkitty-proc-0\" (UID: \"52682601-9d4b-4b45-a1e0-7143e9a31e7a\") " pod="openstack/cloudkitty-proc-0" Jan 28 19:00:25 crc kubenswrapper[4721]: I0128 19:00:25.557688 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/projected/52682601-9d4b-4b45-a1e0-7143e9a31e7a-certs\") pod \"cloudkitty-proc-0\" (UID: \"52682601-9d4b-4b45-a1e0-7143e9a31e7a\") " pod="openstack/cloudkitty-proc-0" Jan 28 19:00:25 crc kubenswrapper[4721]: I0128 19:00:25.565990 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/projected/52682601-9d4b-4b45-a1e0-7143e9a31e7a-certs\") pod \"cloudkitty-proc-0\" (UID: \"52682601-9d4b-4b45-a1e0-7143e9a31e7a\") " pod="openstack/cloudkitty-proc-0" Jan 28 19:00:25 crc kubenswrapper[4721]: I0128 19:00:25.580365 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/52682601-9d4b-4b45-a1e0-7143e9a31e7a-combined-ca-bundle\") pod \"cloudkitty-proc-0\" (UID: \"52682601-9d4b-4b45-a1e0-7143e9a31e7a\") " pod="openstack/cloudkitty-proc-0" Jan 28 19:00:25 crc kubenswrapper[4721]: I0128 19:00:25.583227 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/52682601-9d4b-4b45-a1e0-7143e9a31e7a-config-data\") pod \"cloudkitty-proc-0\" (UID: \"52682601-9d4b-4b45-a1e0-7143e9a31e7a\") " pod="openstack/cloudkitty-proc-0" Jan 28 19:00:25 crc kubenswrapper[4721]: I0128 19:00:25.586187 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/52682601-9d4b-4b45-a1e0-7143e9a31e7a-scripts\") pod \"cloudkitty-proc-0\" (UID: \"52682601-9d4b-4b45-a1e0-7143e9a31e7a\") " pod="openstack/cloudkitty-proc-0" Jan 28 19:00:25 crc kubenswrapper[4721]: I0128 19:00:25.588898 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/52682601-9d4b-4b45-a1e0-7143e9a31e7a-config-data-custom\") pod \"cloudkitty-proc-0\" (UID: \"52682601-9d4b-4b45-a1e0-7143e9a31e7a\") " pod="openstack/cloudkitty-proc-0" Jan 28 19:00:25 crc kubenswrapper[4721]: I0128 19:00:25.595904 4721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bc30432-0868-448c-b124-8b9db2d2a6b2" path="/var/lib/kubelet/pods/4bc30432-0868-448c-b124-8b9db2d2a6b2/volumes" Jan 28 19:00:25 crc kubenswrapper[4721]: I0128 19:00:25.597019 4721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="71e5f7b3-cd41-40d9-ab6c-e90cff64e601" path="/var/lib/kubelet/pods/71e5f7b3-cd41-40d9-ab6c-e90cff64e601/volumes" Jan 28 19:00:25 crc kubenswrapper[4721]: I0128 19:00:25.598412 4721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f4cfc8a-e4d7-4579-b2cd-303abce60b03" path="/var/lib/kubelet/pods/8f4cfc8a-e4d7-4579-b2cd-303abce60b03/volumes" Jan 28 19:00:25 crc kubenswrapper[4721]: I0128 19:00:25.612296 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-btc97\" (UniqueName: \"kubernetes.io/projected/52682601-9d4b-4b45-a1e0-7143e9a31e7a-kube-api-access-btc97\") pod \"cloudkitty-proc-0\" (UID: \"52682601-9d4b-4b45-a1e0-7143e9a31e7a\") " pod="openstack/cloudkitty-proc-0" Jan 28 19:00:25 crc kubenswrapper[4721]: I0128 19:00:25.695300 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-proc-0" Jan 28 19:00:25 crc kubenswrapper[4721]: I0128 19:00:25.839271 4721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-api-0" Jan 28 19:00:25 crc kubenswrapper[4721]: I0128 19:00:25.973942 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/projected/73d83a88-618a-4208-aaa8-e209c0d34b1d-certs\") pod \"73d83a88-618a-4208-aaa8-e209c0d34b1d\" (UID: \"73d83a88-618a-4208-aaa8-e209c0d34b1d\") " Jan 28 19:00:25 crc kubenswrapper[4721]: I0128 19:00:25.974117 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/73d83a88-618a-4208-aaa8-e209c0d34b1d-internal-tls-certs\") pod \"73d83a88-618a-4208-aaa8-e209c0d34b1d\" (UID: \"73d83a88-618a-4208-aaa8-e209c0d34b1d\") " Jan 28 19:00:25 crc kubenswrapper[4721]: I0128 19:00:25.974202 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/73d83a88-618a-4208-aaa8-e209c0d34b1d-scripts\") pod \"73d83a88-618a-4208-aaa8-e209c0d34b1d\" (UID: \"73d83a88-618a-4208-aaa8-e209c0d34b1d\") " Jan 28 19:00:25 crc kubenswrapper[4721]: I0128 19:00:25.974389 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/73d83a88-618a-4208-aaa8-e209c0d34b1d-logs\") pod \"73d83a88-618a-4208-aaa8-e209c0d34b1d\" (UID: \"73d83a88-618a-4208-aaa8-e209c0d34b1d\") " Jan 28 19:00:25 crc kubenswrapper[4721]: I0128 19:00:25.974526 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/73d83a88-618a-4208-aaa8-e209c0d34b1d-public-tls-certs\") pod \"73d83a88-618a-4208-aaa8-e209c0d34b1d\" (UID: \"73d83a88-618a-4208-aaa8-e209c0d34b1d\") " Jan 28 19:00:25 crc kubenswrapper[4721]: I0128 19:00:25.974594 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/73d83a88-618a-4208-aaa8-e209c0d34b1d-config-data\") pod \"73d83a88-618a-4208-aaa8-e209c0d34b1d\" (UID: \"73d83a88-618a-4208-aaa8-e209c0d34b1d\") " Jan 28 19:00:25 crc kubenswrapper[4721]: I0128 19:00:25.974619 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zbwv2\" (UniqueName: \"kubernetes.io/projected/73d83a88-618a-4208-aaa8-e209c0d34b1d-kube-api-access-zbwv2\") pod \"73d83a88-618a-4208-aaa8-e209c0d34b1d\" (UID: \"73d83a88-618a-4208-aaa8-e209c0d34b1d\") " Jan 28 19:00:25 crc 
kubenswrapper[4721]: I0128 19:00:25.974698 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/73d83a88-618a-4208-aaa8-e209c0d34b1d-config-data-custom\") pod \"73d83a88-618a-4208-aaa8-e209c0d34b1d\" (UID: \"73d83a88-618a-4208-aaa8-e209c0d34b1d\") " Jan 28 19:00:25 crc kubenswrapper[4721]: I0128 19:00:25.974729 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/73d83a88-618a-4208-aaa8-e209c0d34b1d-combined-ca-bundle\") pod \"73d83a88-618a-4208-aaa8-e209c0d34b1d\" (UID: \"73d83a88-618a-4208-aaa8-e209c0d34b1d\") " Jan 28 19:00:25 crc kubenswrapper[4721]: I0128 19:00:25.979186 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/73d83a88-618a-4208-aaa8-e209c0d34b1d-logs" (OuterVolumeSpecName: "logs") pod "73d83a88-618a-4208-aaa8-e209c0d34b1d" (UID: "73d83a88-618a-4208-aaa8-e209c0d34b1d"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 19:00:25 crc kubenswrapper[4721]: I0128 19:00:25.981441 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/73d83a88-618a-4208-aaa8-e209c0d34b1d-certs" (OuterVolumeSpecName: "certs") pod "73d83a88-618a-4208-aaa8-e209c0d34b1d" (UID: "73d83a88-618a-4208-aaa8-e209c0d34b1d"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 19:00:25 crc kubenswrapper[4721]: I0128 19:00:25.983362 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/73d83a88-618a-4208-aaa8-e209c0d34b1d-scripts" (OuterVolumeSpecName: "scripts") pod "73d83a88-618a-4208-aaa8-e209c0d34b1d" (UID: "73d83a88-618a-4208-aaa8-e209c0d34b1d"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 19:00:25 crc kubenswrapper[4721]: I0128 19:00:25.990154 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/73d83a88-618a-4208-aaa8-e209c0d34b1d-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "73d83a88-618a-4208-aaa8-e209c0d34b1d" (UID: "73d83a88-618a-4208-aaa8-e209c0d34b1d"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 19:00:25 crc kubenswrapper[4721]: I0128 19:00:25.993311 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/73d83a88-618a-4208-aaa8-e209c0d34b1d-kube-api-access-zbwv2" (OuterVolumeSpecName: "kube-api-access-zbwv2") pod "73d83a88-618a-4208-aaa8-e209c0d34b1d" (UID: "73d83a88-618a-4208-aaa8-e209c0d34b1d"). InnerVolumeSpecName "kube-api-access-zbwv2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 19:00:26 crc kubenswrapper[4721]: I0128 19:00:26.013566 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/73d83a88-618a-4208-aaa8-e209c0d34b1d-config-data" (OuterVolumeSpecName: "config-data") pod "73d83a88-618a-4208-aaa8-e209c0d34b1d" (UID: "73d83a88-618a-4208-aaa8-e209c0d34b1d"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 19:00:26 crc kubenswrapper[4721]: I0128 19:00:26.034252 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/73d83a88-618a-4208-aaa8-e209c0d34b1d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "73d83a88-618a-4208-aaa8-e209c0d34b1d" (UID: "73d83a88-618a-4208-aaa8-e209c0d34b1d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 19:00:26 crc kubenswrapper[4721]: I0128 19:00:26.052430 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/73d83a88-618a-4208-aaa8-e209c0d34b1d-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "73d83a88-618a-4208-aaa8-e209c0d34b1d" (UID: "73d83a88-618a-4208-aaa8-e209c0d34b1d"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 19:00:26 crc kubenswrapper[4721]: I0128 19:00:26.074400 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/73d83a88-618a-4208-aaa8-e209c0d34b1d-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "73d83a88-618a-4208-aaa8-e209c0d34b1d" (UID: "73d83a88-618a-4208-aaa8-e209c0d34b1d"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 19:00:26 crc kubenswrapper[4721]: I0128 19:00:26.076464 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85f64749dc-zg2ch" event={"ID":"a2de6f20-e053-456e-860d-c85c1ae57874","Type":"ContainerStarted","Data":"367e0b238796e4415b51411367a440fae833ae8dbc8aa6dc965d3800b6603c63"} Jan 28 19:00:26 crc kubenswrapper[4721]: I0128 19:00:26.076618 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-85f64749dc-zg2ch" Jan 28 19:00:26 crc kubenswrapper[4721]: I0128 19:00:26.077660 4721 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/73d83a88-618a-4208-aaa8-e209c0d34b1d-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 28 19:00:26 crc kubenswrapper[4721]: I0128 19:00:26.077699 4721 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/73d83a88-618a-4208-aaa8-e209c0d34b1d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 19:00:26 crc kubenswrapper[4721]: I0128 19:00:26.077712 4721 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/projected/73d83a88-618a-4208-aaa8-e209c0d34b1d-certs\") on node \"crc\" DevicePath \"\"" Jan 28 19:00:26 crc kubenswrapper[4721]: I0128 19:00:26.077725 4721 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/73d83a88-618a-4208-aaa8-e209c0d34b1d-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 28 19:00:26 crc kubenswrapper[4721]: I0128 19:00:26.077735 4721 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/73d83a88-618a-4208-aaa8-e209c0d34b1d-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 19:00:26 crc kubenswrapper[4721]: I0128 19:00:26.077746 4721 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/73d83a88-618a-4208-aaa8-e209c0d34b1d-logs\") on node \"crc\" DevicePath \"\"" Jan 28 19:00:26 crc kubenswrapper[4721]: I0128 19:00:26.077758 4721 reconciler_common.go:293] "Volume 
detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/73d83a88-618a-4208-aaa8-e209c0d34b1d-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 28 19:00:26 crc kubenswrapper[4721]: I0128 19:00:26.077769 4721 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/73d83a88-618a-4208-aaa8-e209c0d34b1d-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 19:00:26 crc kubenswrapper[4721]: I0128 19:00:26.077780 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zbwv2\" (UniqueName: \"kubernetes.io/projected/73d83a88-618a-4208-aaa8-e209c0d34b1d-kube-api-access-zbwv2\") on node \"crc\" DevicePath \"\"" Jan 28 19:00:26 crc kubenswrapper[4721]: I0128 19:00:26.087264 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-api-0" event={"ID":"73d83a88-618a-4208-aaa8-e209c0d34b1d","Type":"ContainerDied","Data":"9f01753ea4d68bfdd9a83b588e32dd7197acf5914150aedd713831c20855cfbc"} Jan 28 19:00:26 crc kubenswrapper[4721]: I0128 19:00:26.087334 4721 scope.go:117] "RemoveContainer" containerID="2740c8bb07a2969fd701089a09fd1fe230cd0d121f6859930d10da8f91fe65c5" Jan 28 19:00:26 crc kubenswrapper[4721]: I0128 19:00:26.087553 4721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-api-0" Jan 28 19:00:26 crc kubenswrapper[4721]: I0128 19:00:26.109477 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-85f64749dc-zg2ch" podStartSLOduration=4.109452056 podStartE2EDuration="4.109452056s" podCreationTimestamp="2026-01-28 19:00:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 19:00:26.101614247 +0000 UTC m=+1591.826919827" watchObservedRunningTime="2026-01-28 19:00:26.109452056 +0000 UTC m=+1591.834757616" Jan 28 19:00:26 crc kubenswrapper[4721]: I0128 19:00:26.134741 4721 scope.go:117] "RemoveContainer" containerID="df25b2d57cc0161105ae6bcc96fc2e8c0455ecf5c000f5b78c47a2ffc805591e" Jan 28 19:00:26 crc kubenswrapper[4721]: I0128 19:00:26.140848 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cloudkitty-api-0"] Jan 28 19:00:26 crc kubenswrapper[4721]: I0128 19:00:26.161381 4721 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cloudkitty-api-0"] Jan 28 19:00:26 crc kubenswrapper[4721]: I0128 19:00:26.176798 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cloudkitty-api-0"] Jan 28 19:00:26 crc kubenswrapper[4721]: E0128 19:00:26.177531 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="73d83a88-618a-4208-aaa8-e209c0d34b1d" containerName="cloudkitty-api-log" Jan 28 19:00:26 crc kubenswrapper[4721]: I0128 19:00:26.177554 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="73d83a88-618a-4208-aaa8-e209c0d34b1d" containerName="cloudkitty-api-log" Jan 28 19:00:26 crc kubenswrapper[4721]: E0128 19:00:26.177589 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="73d83a88-618a-4208-aaa8-e209c0d34b1d" containerName="cloudkitty-api" Jan 28 19:00:26 crc kubenswrapper[4721]: I0128 19:00:26.177597 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="73d83a88-618a-4208-aaa8-e209c0d34b1d" containerName="cloudkitty-api" Jan 28 19:00:26 crc kubenswrapper[4721]: I0128 19:00:26.177825 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="73d83a88-618a-4208-aaa8-e209c0d34b1d" 
containerName="cloudkitty-api-log" Jan 28 19:00:26 crc kubenswrapper[4721]: I0128 19:00:26.177858 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="73d83a88-618a-4208-aaa8-e209c0d34b1d" containerName="cloudkitty-api" Jan 28 19:00:26 crc kubenswrapper[4721]: I0128 19:00:26.179550 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-api-0" Jan 28 19:00:26 crc kubenswrapper[4721]: I0128 19:00:26.183412 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cloudkitty-public-svc" Jan 28 19:00:26 crc kubenswrapper[4721]: I0128 19:00:26.183890 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cloudkitty-internal-svc" Jan 28 19:00:26 crc kubenswrapper[4721]: I0128 19:00:26.185866 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-api-config-data" Jan 28 19:00:26 crc kubenswrapper[4721]: I0128 19:00:26.198314 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-api-0"] Jan 28 19:00:26 crc kubenswrapper[4721]: I0128 19:00:26.218305 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-proc-0"] Jan 28 19:00:26 crc kubenswrapper[4721]: I0128 19:00:26.287257 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b3b4c3c6-7e93-4ea6-878c-7c7bce6768fd-scripts\") pod \"cloudkitty-api-0\" (UID: \"b3b4c3c6-7e93-4ea6-878c-7c7bce6768fd\") " pod="openstack/cloudkitty-api-0" Jan 28 19:00:26 crc kubenswrapper[4721]: I0128 19:00:26.287583 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b3b4c3c6-7e93-4ea6-878c-7c7bce6768fd-internal-tls-certs\") pod \"cloudkitty-api-0\" (UID: \"b3b4c3c6-7e93-4ea6-878c-7c7bce6768fd\") " pod="openstack/cloudkitty-api-0" Jan 28 19:00:26 crc kubenswrapper[4721]: I0128 19:00:26.287766 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b3b4c3c6-7e93-4ea6-878c-7c7bce6768fd-logs\") pod \"cloudkitty-api-0\" (UID: \"b3b4c3c6-7e93-4ea6-878c-7c7bce6768fd\") " pod="openstack/cloudkitty-api-0" Jan 28 19:00:26 crc kubenswrapper[4721]: I0128 19:00:26.288086 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b3b4c3c6-7e93-4ea6-878c-7c7bce6768fd-config-data\") pod \"cloudkitty-api-0\" (UID: \"b3b4c3c6-7e93-4ea6-878c-7c7bce6768fd\") " pod="openstack/cloudkitty-api-0" Jan 28 19:00:26 crc kubenswrapper[4721]: I0128 19:00:26.288189 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b3b4c3c6-7e93-4ea6-878c-7c7bce6768fd-public-tls-certs\") pod \"cloudkitty-api-0\" (UID: \"b3b4c3c6-7e93-4ea6-878c-7c7bce6768fd\") " pod="openstack/cloudkitty-api-0" Jan 28 19:00:26 crc kubenswrapper[4721]: I0128 19:00:26.288233 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b3b4c3c6-7e93-4ea6-878c-7c7bce6768fd-combined-ca-bundle\") pod \"cloudkitty-api-0\" (UID: \"b3b4c3c6-7e93-4ea6-878c-7c7bce6768fd\") " pod="openstack/cloudkitty-api-0" Jan 28 19:00:26 crc kubenswrapper[4721]: I0128 
19:00:26.288296 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/projected/b3b4c3c6-7e93-4ea6-878c-7c7bce6768fd-certs\") pod \"cloudkitty-api-0\" (UID: \"b3b4c3c6-7e93-4ea6-878c-7c7bce6768fd\") " pod="openstack/cloudkitty-api-0" Jan 28 19:00:26 crc kubenswrapper[4721]: I0128 19:00:26.288344 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b3b4c3c6-7e93-4ea6-878c-7c7bce6768fd-config-data-custom\") pod \"cloudkitty-api-0\" (UID: \"b3b4c3c6-7e93-4ea6-878c-7c7bce6768fd\") " pod="openstack/cloudkitty-api-0" Jan 28 19:00:26 crc kubenswrapper[4721]: I0128 19:00:26.288363 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qj7dc\" (UniqueName: \"kubernetes.io/projected/b3b4c3c6-7e93-4ea6-878c-7c7bce6768fd-kube-api-access-qj7dc\") pod \"cloudkitty-api-0\" (UID: \"b3b4c3c6-7e93-4ea6-878c-7c7bce6768fd\") " pod="openstack/cloudkitty-api-0" Jan 28 19:00:26 crc kubenswrapper[4721]: I0128 19:00:26.390685 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b3b4c3c6-7e93-4ea6-878c-7c7bce6768fd-config-data\") pod \"cloudkitty-api-0\" (UID: \"b3b4c3c6-7e93-4ea6-878c-7c7bce6768fd\") " pod="openstack/cloudkitty-api-0" Jan 28 19:00:26 crc kubenswrapper[4721]: I0128 19:00:26.390825 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b3b4c3c6-7e93-4ea6-878c-7c7bce6768fd-public-tls-certs\") pod \"cloudkitty-api-0\" (UID: \"b3b4c3c6-7e93-4ea6-878c-7c7bce6768fd\") " pod="openstack/cloudkitty-api-0" Jan 28 19:00:26 crc kubenswrapper[4721]: I0128 19:00:26.390897 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b3b4c3c6-7e93-4ea6-878c-7c7bce6768fd-combined-ca-bundle\") pod \"cloudkitty-api-0\" (UID: \"b3b4c3c6-7e93-4ea6-878c-7c7bce6768fd\") " pod="openstack/cloudkitty-api-0" Jan 28 19:00:26 crc kubenswrapper[4721]: I0128 19:00:26.391020 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/projected/b3b4c3c6-7e93-4ea6-878c-7c7bce6768fd-certs\") pod \"cloudkitty-api-0\" (UID: \"b3b4c3c6-7e93-4ea6-878c-7c7bce6768fd\") " pod="openstack/cloudkitty-api-0" Jan 28 19:00:26 crc kubenswrapper[4721]: I0128 19:00:26.391092 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b3b4c3c6-7e93-4ea6-878c-7c7bce6768fd-config-data-custom\") pod \"cloudkitty-api-0\" (UID: \"b3b4c3c6-7e93-4ea6-878c-7c7bce6768fd\") " pod="openstack/cloudkitty-api-0" Jan 28 19:00:26 crc kubenswrapper[4721]: I0128 19:00:26.391123 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qj7dc\" (UniqueName: \"kubernetes.io/projected/b3b4c3c6-7e93-4ea6-878c-7c7bce6768fd-kube-api-access-qj7dc\") pod \"cloudkitty-api-0\" (UID: \"b3b4c3c6-7e93-4ea6-878c-7c7bce6768fd\") " pod="openstack/cloudkitty-api-0" Jan 28 19:00:26 crc kubenswrapper[4721]: I0128 19:00:26.391207 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b3b4c3c6-7e93-4ea6-878c-7c7bce6768fd-scripts\") pod 
\"cloudkitty-api-0\" (UID: \"b3b4c3c6-7e93-4ea6-878c-7c7bce6768fd\") " pod="openstack/cloudkitty-api-0" Jan 28 19:00:26 crc kubenswrapper[4721]: I0128 19:00:26.391284 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b3b4c3c6-7e93-4ea6-878c-7c7bce6768fd-internal-tls-certs\") pod \"cloudkitty-api-0\" (UID: \"b3b4c3c6-7e93-4ea6-878c-7c7bce6768fd\") " pod="openstack/cloudkitty-api-0" Jan 28 19:00:26 crc kubenswrapper[4721]: I0128 19:00:26.391366 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b3b4c3c6-7e93-4ea6-878c-7c7bce6768fd-logs\") pod \"cloudkitty-api-0\" (UID: \"b3b4c3c6-7e93-4ea6-878c-7c7bce6768fd\") " pod="openstack/cloudkitty-api-0" Jan 28 19:00:26 crc kubenswrapper[4721]: I0128 19:00:26.391906 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b3b4c3c6-7e93-4ea6-878c-7c7bce6768fd-logs\") pod \"cloudkitty-api-0\" (UID: \"b3b4c3c6-7e93-4ea6-878c-7c7bce6768fd\") " pod="openstack/cloudkitty-api-0" Jan 28 19:00:26 crc kubenswrapper[4721]: I0128 19:00:26.396573 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b3b4c3c6-7e93-4ea6-878c-7c7bce6768fd-config-data\") pod \"cloudkitty-api-0\" (UID: \"b3b4c3c6-7e93-4ea6-878c-7c7bce6768fd\") " pod="openstack/cloudkitty-api-0" Jan 28 19:00:26 crc kubenswrapper[4721]: I0128 19:00:26.396693 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b3b4c3c6-7e93-4ea6-878c-7c7bce6768fd-public-tls-certs\") pod \"cloudkitty-api-0\" (UID: \"b3b4c3c6-7e93-4ea6-878c-7c7bce6768fd\") " pod="openstack/cloudkitty-api-0" Jan 28 19:00:26 crc kubenswrapper[4721]: I0128 19:00:26.396935 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b3b4c3c6-7e93-4ea6-878c-7c7bce6768fd-scripts\") pod \"cloudkitty-api-0\" (UID: \"b3b4c3c6-7e93-4ea6-878c-7c7bce6768fd\") " pod="openstack/cloudkitty-api-0" Jan 28 19:00:26 crc kubenswrapper[4721]: I0128 19:00:26.397472 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/projected/b3b4c3c6-7e93-4ea6-878c-7c7bce6768fd-certs\") pod \"cloudkitty-api-0\" (UID: \"b3b4c3c6-7e93-4ea6-878c-7c7bce6768fd\") " pod="openstack/cloudkitty-api-0" Jan 28 19:00:26 crc kubenswrapper[4721]: I0128 19:00:26.397797 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b3b4c3c6-7e93-4ea6-878c-7c7bce6768fd-combined-ca-bundle\") pod \"cloudkitty-api-0\" (UID: \"b3b4c3c6-7e93-4ea6-878c-7c7bce6768fd\") " pod="openstack/cloudkitty-api-0" Jan 28 19:00:26 crc kubenswrapper[4721]: I0128 19:00:26.400636 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b3b4c3c6-7e93-4ea6-878c-7c7bce6768fd-internal-tls-certs\") pod \"cloudkitty-api-0\" (UID: \"b3b4c3c6-7e93-4ea6-878c-7c7bce6768fd\") " pod="openstack/cloudkitty-api-0" Jan 28 19:00:26 crc kubenswrapper[4721]: I0128 19:00:26.402516 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b3b4c3c6-7e93-4ea6-878c-7c7bce6768fd-config-data-custom\") pod \"cloudkitty-api-0\" (UID: 
\"b3b4c3c6-7e93-4ea6-878c-7c7bce6768fd\") " pod="openstack/cloudkitty-api-0" Jan 28 19:00:26 crc kubenswrapper[4721]: I0128 19:00:26.410154 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qj7dc\" (UniqueName: \"kubernetes.io/projected/b3b4c3c6-7e93-4ea6-878c-7c7bce6768fd-kube-api-access-qj7dc\") pod \"cloudkitty-api-0\" (UID: \"b3b4c3c6-7e93-4ea6-878c-7c7bce6768fd\") " pod="openstack/cloudkitty-api-0" Jan 28 19:00:26 crc kubenswrapper[4721]: I0128 19:00:26.508711 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-api-0" Jan 28 19:00:26 crc kubenswrapper[4721]: I0128 19:00:26.668534 4721 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="026c3758-a794-4177-9412-8af411eeba01" containerName="proxy-httpd" probeResult="failure" output="Get \"https://10.217.0.239:3000/\": dial tcp 10.217.0.239:3000: connect: connection refused" Jan 28 19:00:27 crc kubenswrapper[4721]: I0128 19:00:27.048448 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-api-0"] Jan 28 19:00:27 crc kubenswrapper[4721]: I0128 19:00:27.103666 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-proc-0" event={"ID":"52682601-9d4b-4b45-a1e0-7143e9a31e7a","Type":"ContainerStarted","Data":"8a7e9a3a2d3f02e35ae3e195dbe4dbd34fcc55ad06893bb126c08b2c957fdf82"} Jan 28 19:00:27 crc kubenswrapper[4721]: W0128 19:00:27.109327 4721 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb3b4c3c6_7e93_4ea6_878c_7c7bce6768fd.slice/crio-89a811b619cba8ca7a7d4550b0002ca543eec9ac58883d36cf3cdb944609dda9 WatchSource:0}: Error finding container 89a811b619cba8ca7a7d4550b0002ca543eec9ac58883d36cf3cdb944609dda9: Status 404 returned error can't find the container with id 89a811b619cba8ca7a7d4550b0002ca543eec9ac58883d36cf3cdb944609dda9 Jan 28 19:00:27 crc kubenswrapper[4721]: I0128 19:00:27.542251 4721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="73d83a88-618a-4208-aaa8-e209c0d34b1d" path="/var/lib/kubelet/pods/73d83a88-618a-4208-aaa8-e209c0d34b1d/volumes" Jan 28 19:00:28 crc kubenswrapper[4721]: I0128 19:00:28.117778 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-proc-0" event={"ID":"52682601-9d4b-4b45-a1e0-7143e9a31e7a","Type":"ContainerStarted","Data":"47b8ed31630add658ee9770c66574d7ea03ad313e0929ecf1fd1cb3a22021599"} Jan 28 19:00:28 crc kubenswrapper[4721]: I0128 19:00:28.121051 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-api-0" event={"ID":"b3b4c3c6-7e93-4ea6-878c-7c7bce6768fd","Type":"ContainerStarted","Data":"63736249af8dc8fb2b3ad2239e7ee8010ec21d48d60bb0d650f6c8a9bb1dc81e"} Jan 28 19:00:28 crc kubenswrapper[4721]: I0128 19:00:28.121081 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-api-0" event={"ID":"b3b4c3c6-7e93-4ea6-878c-7c7bce6768fd","Type":"ContainerStarted","Data":"2b187241a06ce6b0190f9f71091152ce3699ec3f356e1108141ffa7751c1e083"} Jan 28 19:00:28 crc kubenswrapper[4721]: I0128 19:00:28.121090 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-api-0" event={"ID":"b3b4c3c6-7e93-4ea6-878c-7c7bce6768fd","Type":"ContainerStarted","Data":"89a811b619cba8ca7a7d4550b0002ca543eec9ac58883d36cf3cdb944609dda9"} Jan 28 19:00:28 crc kubenswrapper[4721]: I0128 19:00:28.121380 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="" pod="openstack/cloudkitty-api-0" Jan 28 19:00:28 crc kubenswrapper[4721]: I0128 19:00:28.150472 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cloudkitty-proc-0" podStartSLOduration=2.2104168299999998 podStartE2EDuration="3.150446707s" podCreationTimestamp="2026-01-28 19:00:25 +0000 UTC" firstStartedPulling="2026-01-28 19:00:26.207828675 +0000 UTC m=+1591.933134235" lastFinishedPulling="2026-01-28 19:00:27.147858552 +0000 UTC m=+1592.873164112" observedRunningTime="2026-01-28 19:00:28.138362584 +0000 UTC m=+1593.863668144" watchObservedRunningTime="2026-01-28 19:00:28.150446707 +0000 UTC m=+1593.875752267" Jan 28 19:00:28 crc kubenswrapper[4721]: I0128 19:00:28.173026 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cloudkitty-api-0" podStartSLOduration=2.172995784 podStartE2EDuration="2.172995784s" podCreationTimestamp="2026-01-28 19:00:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 19:00:28.156639464 +0000 UTC m=+1593.881945024" watchObservedRunningTime="2026-01-28 19:00:28.172995784 +0000 UTC m=+1593.898301344" Jan 28 19:00:31 crc kubenswrapper[4721]: I0128 19:00:31.041720 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-k6s42" Jan 28 19:00:31 crc kubenswrapper[4721]: I0128 19:00:31.108319 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-k6s42"] Jan 28 19:00:31 crc kubenswrapper[4721]: I0128 19:00:31.167869 4721 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-k6s42" podUID="bccc5c45-eb45-452f-8f40-9e83893bf636" containerName="registry-server" containerID="cri-o://010076448a9766f398d64f43fd203960265fd0710f916b581e2d662adcd3cbee" gracePeriod=2 Jan 28 19:00:31 crc kubenswrapper[4721]: I0128 19:00:31.754427 4721 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-k6s42" Jan 28 19:00:31 crc kubenswrapper[4721]: I0128 19:00:31.835502 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bccc5c45-eb45-452f-8f40-9e83893bf636-utilities\") pod \"bccc5c45-eb45-452f-8f40-9e83893bf636\" (UID: \"bccc5c45-eb45-452f-8f40-9e83893bf636\") " Jan 28 19:00:31 crc kubenswrapper[4721]: I0128 19:00:31.835635 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bccc5c45-eb45-452f-8f40-9e83893bf636-catalog-content\") pod \"bccc5c45-eb45-452f-8f40-9e83893bf636\" (UID: \"bccc5c45-eb45-452f-8f40-9e83893bf636\") " Jan 28 19:00:31 crc kubenswrapper[4721]: I0128 19:00:31.835736 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-trmbp\" (UniqueName: \"kubernetes.io/projected/bccc5c45-eb45-452f-8f40-9e83893bf636-kube-api-access-trmbp\") pod \"bccc5c45-eb45-452f-8f40-9e83893bf636\" (UID: \"bccc5c45-eb45-452f-8f40-9e83893bf636\") " Jan 28 19:00:31 crc kubenswrapper[4721]: I0128 19:00:31.836814 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bccc5c45-eb45-452f-8f40-9e83893bf636-utilities" (OuterVolumeSpecName: "utilities") pod "bccc5c45-eb45-452f-8f40-9e83893bf636" (UID: "bccc5c45-eb45-452f-8f40-9e83893bf636"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 19:00:31 crc kubenswrapper[4721]: I0128 19:00:31.842308 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bccc5c45-eb45-452f-8f40-9e83893bf636-kube-api-access-trmbp" (OuterVolumeSpecName: "kube-api-access-trmbp") pod "bccc5c45-eb45-452f-8f40-9e83893bf636" (UID: "bccc5c45-eb45-452f-8f40-9e83893bf636"). InnerVolumeSpecName "kube-api-access-trmbp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 19:00:31 crc kubenswrapper[4721]: I0128 19:00:31.867665 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bccc5c45-eb45-452f-8f40-9e83893bf636-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "bccc5c45-eb45-452f-8f40-9e83893bf636" (UID: "bccc5c45-eb45-452f-8f40-9e83893bf636"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 19:00:31 crc kubenswrapper[4721]: I0128 19:00:31.940269 4721 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bccc5c45-eb45-452f-8f40-9e83893bf636-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 19:00:31 crc kubenswrapper[4721]: I0128 19:00:31.940322 4721 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bccc5c45-eb45-452f-8f40-9e83893bf636-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 19:00:31 crc kubenswrapper[4721]: I0128 19:00:31.940384 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-trmbp\" (UniqueName: \"kubernetes.io/projected/bccc5c45-eb45-452f-8f40-9e83893bf636-kube-api-access-trmbp\") on node \"crc\" DevicePath \"\"" Jan 28 19:00:32 crc kubenswrapper[4721]: I0128 19:00:32.179366 4721 generic.go:334] "Generic (PLEG): container finished" podID="bccc5c45-eb45-452f-8f40-9e83893bf636" containerID="010076448a9766f398d64f43fd203960265fd0710f916b581e2d662adcd3cbee" exitCode=0 Jan 28 19:00:32 crc kubenswrapper[4721]: I0128 19:00:32.179434 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-k6s42" event={"ID":"bccc5c45-eb45-452f-8f40-9e83893bf636","Type":"ContainerDied","Data":"010076448a9766f398d64f43fd203960265fd0710f916b581e2d662adcd3cbee"} Jan 28 19:00:32 crc kubenswrapper[4721]: I0128 19:00:32.180702 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-k6s42" event={"ID":"bccc5c45-eb45-452f-8f40-9e83893bf636","Type":"ContainerDied","Data":"44bf25580943b3acfeba90fd50acc7d1b1cf5e5fb605ff1b141fe10bf18a8927"} Jan 28 19:00:32 crc kubenswrapper[4721]: I0128 19:00:32.180854 4721 scope.go:117] "RemoveContainer" containerID="010076448a9766f398d64f43fd203960265fd0710f916b581e2d662adcd3cbee" Jan 28 19:00:32 crc kubenswrapper[4721]: I0128 19:00:32.179470 4721 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-k6s42" Jan 28 19:00:32 crc kubenswrapper[4721]: I0128 19:00:32.216002 4721 scope.go:117] "RemoveContainer" containerID="f3cbe18a44fbdfa1cf56d18f36e22e86129ba378c3ee8f87edde19e1582bda62" Jan 28 19:00:32 crc kubenswrapper[4721]: I0128 19:00:32.224630 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-k6s42"] Jan 28 19:00:32 crc kubenswrapper[4721]: I0128 19:00:32.237148 4721 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-k6s42"] Jan 28 19:00:32 crc kubenswrapper[4721]: I0128 19:00:32.252327 4721 scope.go:117] "RemoveContainer" containerID="3d084618c849f26355eec03ef03f82e7da14e8fa892d4b2052eb377828fd9b2b" Jan 28 19:00:32 crc kubenswrapper[4721]: I0128 19:00:32.289813 4721 scope.go:117] "RemoveContainer" containerID="010076448a9766f398d64f43fd203960265fd0710f916b581e2d662adcd3cbee" Jan 28 19:00:32 crc kubenswrapper[4721]: E0128 19:00:32.290735 4721 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"010076448a9766f398d64f43fd203960265fd0710f916b581e2d662adcd3cbee\": container with ID starting with 010076448a9766f398d64f43fd203960265fd0710f916b581e2d662adcd3cbee not found: ID does not exist" containerID="010076448a9766f398d64f43fd203960265fd0710f916b581e2d662adcd3cbee" Jan 28 19:00:32 crc kubenswrapper[4721]: I0128 19:00:32.290771 4721 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"010076448a9766f398d64f43fd203960265fd0710f916b581e2d662adcd3cbee"} err="failed to get container status \"010076448a9766f398d64f43fd203960265fd0710f916b581e2d662adcd3cbee\": rpc error: code = NotFound desc = could not find container \"010076448a9766f398d64f43fd203960265fd0710f916b581e2d662adcd3cbee\": container with ID starting with 010076448a9766f398d64f43fd203960265fd0710f916b581e2d662adcd3cbee not found: ID does not exist" Jan 28 19:00:32 crc kubenswrapper[4721]: I0128 19:00:32.290796 4721 scope.go:117] "RemoveContainer" containerID="f3cbe18a44fbdfa1cf56d18f36e22e86129ba378c3ee8f87edde19e1582bda62" Jan 28 19:00:32 crc kubenswrapper[4721]: E0128 19:00:32.291127 4721 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f3cbe18a44fbdfa1cf56d18f36e22e86129ba378c3ee8f87edde19e1582bda62\": container with ID starting with f3cbe18a44fbdfa1cf56d18f36e22e86129ba378c3ee8f87edde19e1582bda62 not found: ID does not exist" containerID="f3cbe18a44fbdfa1cf56d18f36e22e86129ba378c3ee8f87edde19e1582bda62" Jan 28 19:00:32 crc kubenswrapper[4721]: I0128 19:00:32.291150 4721 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f3cbe18a44fbdfa1cf56d18f36e22e86129ba378c3ee8f87edde19e1582bda62"} err="failed to get container status \"f3cbe18a44fbdfa1cf56d18f36e22e86129ba378c3ee8f87edde19e1582bda62\": rpc error: code = NotFound desc = could not find container \"f3cbe18a44fbdfa1cf56d18f36e22e86129ba378c3ee8f87edde19e1582bda62\": container with ID starting with f3cbe18a44fbdfa1cf56d18f36e22e86129ba378c3ee8f87edde19e1582bda62 not found: ID does not exist" Jan 28 19:00:32 crc kubenswrapper[4721]: I0128 19:00:32.291163 4721 scope.go:117] "RemoveContainer" containerID="3d084618c849f26355eec03ef03f82e7da14e8fa892d4b2052eb377828fd9b2b" Jan 28 19:00:32 crc kubenswrapper[4721]: E0128 19:00:32.291447 4721 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"3d084618c849f26355eec03ef03f82e7da14e8fa892d4b2052eb377828fd9b2b\": container with ID starting with 3d084618c849f26355eec03ef03f82e7da14e8fa892d4b2052eb377828fd9b2b not found: ID does not exist" containerID="3d084618c849f26355eec03ef03f82e7da14e8fa892d4b2052eb377828fd9b2b" Jan 28 19:00:32 crc kubenswrapper[4721]: I0128 19:00:32.291469 4721 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3d084618c849f26355eec03ef03f82e7da14e8fa892d4b2052eb377828fd9b2b"} err="failed to get container status \"3d084618c849f26355eec03ef03f82e7da14e8fa892d4b2052eb377828fd9b2b\": rpc error: code = NotFound desc = could not find container \"3d084618c849f26355eec03ef03f82e7da14e8fa892d4b2052eb377828fd9b2b\": container with ID starting with 3d084618c849f26355eec03ef03f82e7da14e8fa892d4b2052eb377828fd9b2b not found: ID does not exist" Jan 28 19:00:33 crc kubenswrapper[4721]: I0128 19:00:33.033210 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-85f64749dc-zg2ch" Jan 28 19:00:33 crc kubenswrapper[4721]: I0128 19:00:33.110980 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-dbb88bf8c-jgvr4"] Jan 28 19:00:33 crc kubenswrapper[4721]: I0128 19:00:33.111330 4721 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-dbb88bf8c-jgvr4" podUID="48a806c6-cce7-47d8-83c7-dae682f2e80f" containerName="dnsmasq-dns" containerID="cri-o://69a243a95d079f02e0924ca367946cfe8bc30a0d6d1f983bb396edcef089d29c" gracePeriod=10 Jan 28 19:00:33 crc kubenswrapper[4721]: I0128 19:00:33.555089 4721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bccc5c45-eb45-452f-8f40-9e83893bf636" path="/var/lib/kubelet/pods/bccc5c45-eb45-452f-8f40-9e83893bf636/volumes" Jan 28 19:00:33 crc kubenswrapper[4721]: I0128 19:00:33.671789 4721 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-dbb88bf8c-jgvr4" Jan 28 19:00:33 crc kubenswrapper[4721]: I0128 19:00:33.789184 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/48a806c6-cce7-47d8-83c7-dae682f2e80f-ovsdbserver-sb\") pod \"48a806c6-cce7-47d8-83c7-dae682f2e80f\" (UID: \"48a806c6-cce7-47d8-83c7-dae682f2e80f\") " Jan 28 19:00:33 crc kubenswrapper[4721]: I0128 19:00:33.789585 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/48a806c6-cce7-47d8-83c7-dae682f2e80f-config\") pod \"48a806c6-cce7-47d8-83c7-dae682f2e80f\" (UID: \"48a806c6-cce7-47d8-83c7-dae682f2e80f\") " Jan 28 19:00:33 crc kubenswrapper[4721]: I0128 19:00:33.789629 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/48a806c6-cce7-47d8-83c7-dae682f2e80f-dns-swift-storage-0\") pod \"48a806c6-cce7-47d8-83c7-dae682f2e80f\" (UID: \"48a806c6-cce7-47d8-83c7-dae682f2e80f\") " Jan 28 19:00:33 crc kubenswrapper[4721]: I0128 19:00:33.789683 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/48a806c6-cce7-47d8-83c7-dae682f2e80f-ovsdbserver-nb\") pod \"48a806c6-cce7-47d8-83c7-dae682f2e80f\" (UID: \"48a806c6-cce7-47d8-83c7-dae682f2e80f\") " Jan 28 19:00:33 crc kubenswrapper[4721]: I0128 19:00:33.789734 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/48a806c6-cce7-47d8-83c7-dae682f2e80f-openstack-edpm-ipam\") pod \"48a806c6-cce7-47d8-83c7-dae682f2e80f\" (UID: \"48a806c6-cce7-47d8-83c7-dae682f2e80f\") " Jan 28 19:00:33 crc kubenswrapper[4721]: I0128 19:00:33.789789 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s5pxc\" (UniqueName: \"kubernetes.io/projected/48a806c6-cce7-47d8-83c7-dae682f2e80f-kube-api-access-s5pxc\") pod \"48a806c6-cce7-47d8-83c7-dae682f2e80f\" (UID: \"48a806c6-cce7-47d8-83c7-dae682f2e80f\") " Jan 28 19:00:33 crc kubenswrapper[4721]: I0128 19:00:33.789859 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/48a806c6-cce7-47d8-83c7-dae682f2e80f-dns-svc\") pod \"48a806c6-cce7-47d8-83c7-dae682f2e80f\" (UID: \"48a806c6-cce7-47d8-83c7-dae682f2e80f\") " Jan 28 19:00:33 crc kubenswrapper[4721]: I0128 19:00:33.797515 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/48a806c6-cce7-47d8-83c7-dae682f2e80f-kube-api-access-s5pxc" (OuterVolumeSpecName: "kube-api-access-s5pxc") pod "48a806c6-cce7-47d8-83c7-dae682f2e80f" (UID: "48a806c6-cce7-47d8-83c7-dae682f2e80f"). InnerVolumeSpecName "kube-api-access-s5pxc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 19:00:33 crc kubenswrapper[4721]: I0128 19:00:33.852040 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/48a806c6-cce7-47d8-83c7-dae682f2e80f-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "48a806c6-cce7-47d8-83c7-dae682f2e80f" (UID: "48a806c6-cce7-47d8-83c7-dae682f2e80f"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 19:00:33 crc kubenswrapper[4721]: I0128 19:00:33.852695 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/48a806c6-cce7-47d8-83c7-dae682f2e80f-openstack-edpm-ipam" (OuterVolumeSpecName: "openstack-edpm-ipam") pod "48a806c6-cce7-47d8-83c7-dae682f2e80f" (UID: "48a806c6-cce7-47d8-83c7-dae682f2e80f"). InnerVolumeSpecName "openstack-edpm-ipam". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 19:00:33 crc kubenswrapper[4721]: I0128 19:00:33.854712 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/48a806c6-cce7-47d8-83c7-dae682f2e80f-config" (OuterVolumeSpecName: "config") pod "48a806c6-cce7-47d8-83c7-dae682f2e80f" (UID: "48a806c6-cce7-47d8-83c7-dae682f2e80f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 19:00:33 crc kubenswrapper[4721]: I0128 19:00:33.858281 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/48a806c6-cce7-47d8-83c7-dae682f2e80f-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "48a806c6-cce7-47d8-83c7-dae682f2e80f" (UID: "48a806c6-cce7-47d8-83c7-dae682f2e80f"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 19:00:33 crc kubenswrapper[4721]: I0128 19:00:33.858641 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/48a806c6-cce7-47d8-83c7-dae682f2e80f-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "48a806c6-cce7-47d8-83c7-dae682f2e80f" (UID: "48a806c6-cce7-47d8-83c7-dae682f2e80f"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 19:00:33 crc kubenswrapper[4721]: I0128 19:00:33.861920 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/48a806c6-cce7-47d8-83c7-dae682f2e80f-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "48a806c6-cce7-47d8-83c7-dae682f2e80f" (UID: "48a806c6-cce7-47d8-83c7-dae682f2e80f"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 19:00:33 crc kubenswrapper[4721]: I0128 19:00:33.892716 4721 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/48a806c6-cce7-47d8-83c7-dae682f2e80f-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 28 19:00:33 crc kubenswrapper[4721]: I0128 19:00:33.892758 4721 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/48a806c6-cce7-47d8-83c7-dae682f2e80f-config\") on node \"crc\" DevicePath \"\"" Jan 28 19:00:33 crc kubenswrapper[4721]: I0128 19:00:33.892771 4721 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/48a806c6-cce7-47d8-83c7-dae682f2e80f-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 28 19:00:33 crc kubenswrapper[4721]: I0128 19:00:33.892784 4721 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/48a806c6-cce7-47d8-83c7-dae682f2e80f-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 28 19:00:33 crc kubenswrapper[4721]: I0128 19:00:33.892793 4721 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/48a806c6-cce7-47d8-83c7-dae682f2e80f-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 28 19:00:33 crc kubenswrapper[4721]: I0128 19:00:33.892802 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s5pxc\" (UniqueName: \"kubernetes.io/projected/48a806c6-cce7-47d8-83c7-dae682f2e80f-kube-api-access-s5pxc\") on node \"crc\" DevicePath \"\"" Jan 28 19:00:33 crc kubenswrapper[4721]: I0128 19:00:33.892814 4721 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/48a806c6-cce7-47d8-83c7-dae682f2e80f-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 28 19:00:34 crc kubenswrapper[4721]: I0128 19:00:34.210616 4721 generic.go:334] "Generic (PLEG): container finished" podID="48a806c6-cce7-47d8-83c7-dae682f2e80f" containerID="69a243a95d079f02e0924ca367946cfe8bc30a0d6d1f983bb396edcef089d29c" exitCode=0 Jan 28 19:00:34 crc kubenswrapper[4721]: I0128 19:00:34.210681 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-dbb88bf8c-jgvr4" event={"ID":"48a806c6-cce7-47d8-83c7-dae682f2e80f","Type":"ContainerDied","Data":"69a243a95d079f02e0924ca367946cfe8bc30a0d6d1f983bb396edcef089d29c"} Jan 28 19:00:34 crc kubenswrapper[4721]: I0128 19:00:34.210722 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-dbb88bf8c-jgvr4" event={"ID":"48a806c6-cce7-47d8-83c7-dae682f2e80f","Type":"ContainerDied","Data":"bc0e20f60e62ce58d6f535d73aafff3b16eec0d1191ffd65148e6b3109299418"} Jan 28 19:00:34 crc kubenswrapper[4721]: I0128 19:00:34.210744 4721 scope.go:117] "RemoveContainer" containerID="69a243a95d079f02e0924ca367946cfe8bc30a0d6d1f983bb396edcef089d29c" Jan 28 19:00:34 crc kubenswrapper[4721]: I0128 19:00:34.210689 4721 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-dbb88bf8c-jgvr4" Jan 28 19:00:34 crc kubenswrapper[4721]: I0128 19:00:34.246227 4721 scope.go:117] "RemoveContainer" containerID="209a01bab6e1375cbb4a601e9c378f65ae92485ee60fd6392617d94cfdc5884c" Jan 28 19:00:34 crc kubenswrapper[4721]: I0128 19:00:34.250343 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-dbb88bf8c-jgvr4"] Jan 28 19:00:34 crc kubenswrapper[4721]: I0128 19:00:34.261761 4721 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-dbb88bf8c-jgvr4"] Jan 28 19:00:34 crc kubenswrapper[4721]: I0128 19:00:34.276106 4721 scope.go:117] "RemoveContainer" containerID="69a243a95d079f02e0924ca367946cfe8bc30a0d6d1f983bb396edcef089d29c" Jan 28 19:00:34 crc kubenswrapper[4721]: E0128 19:00:34.276594 4721 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"69a243a95d079f02e0924ca367946cfe8bc30a0d6d1f983bb396edcef089d29c\": container with ID starting with 69a243a95d079f02e0924ca367946cfe8bc30a0d6d1f983bb396edcef089d29c not found: ID does not exist" containerID="69a243a95d079f02e0924ca367946cfe8bc30a0d6d1f983bb396edcef089d29c" Jan 28 19:00:34 crc kubenswrapper[4721]: I0128 19:00:34.276644 4721 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"69a243a95d079f02e0924ca367946cfe8bc30a0d6d1f983bb396edcef089d29c"} err="failed to get container status \"69a243a95d079f02e0924ca367946cfe8bc30a0d6d1f983bb396edcef089d29c\": rpc error: code = NotFound desc = could not find container \"69a243a95d079f02e0924ca367946cfe8bc30a0d6d1f983bb396edcef089d29c\": container with ID starting with 69a243a95d079f02e0924ca367946cfe8bc30a0d6d1f983bb396edcef089d29c not found: ID does not exist" Jan 28 19:00:34 crc kubenswrapper[4721]: I0128 19:00:34.276679 4721 scope.go:117] "RemoveContainer" containerID="209a01bab6e1375cbb4a601e9c378f65ae92485ee60fd6392617d94cfdc5884c" Jan 28 19:00:34 crc kubenswrapper[4721]: E0128 19:00:34.277050 4721 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"209a01bab6e1375cbb4a601e9c378f65ae92485ee60fd6392617d94cfdc5884c\": container with ID starting with 209a01bab6e1375cbb4a601e9c378f65ae92485ee60fd6392617d94cfdc5884c not found: ID does not exist" containerID="209a01bab6e1375cbb4a601e9c378f65ae92485ee60fd6392617d94cfdc5884c" Jan 28 19:00:34 crc kubenswrapper[4721]: I0128 19:00:34.277108 4721 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"209a01bab6e1375cbb4a601e9c378f65ae92485ee60fd6392617d94cfdc5884c"} err="failed to get container status \"209a01bab6e1375cbb4a601e9c378f65ae92485ee60fd6392617d94cfdc5884c\": rpc error: code = NotFound desc = could not find container \"209a01bab6e1375cbb4a601e9c378f65ae92485ee60fd6392617d94cfdc5884c\": container with ID starting with 209a01bab6e1375cbb4a601e9c378f65ae92485ee60fd6392617d94cfdc5884c not found: ID does not exist" Jan 28 19:00:35 crc kubenswrapper[4721]: I0128 19:00:35.538386 4721 scope.go:117] "RemoveContainer" containerID="2590a7cb210dbff0e84ad585e4d82733cd80880f3d1297edf670e3d6faf23070" Jan 28 19:00:35 crc kubenswrapper[4721]: E0128 19:00:35.539186 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-76rx2_openshift-machine-config-operator(6e3427a4-9a03-4a08-bf7f-7a5e96290ad6)\"" pod="openshift-machine-config-operator/machine-config-daemon-76rx2" podUID="6e3427a4-9a03-4a08-bf7f-7a5e96290ad6" Jan 28 19:00:35 crc kubenswrapper[4721]: I0128 19:00:35.541789 4721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="48a806c6-cce7-47d8-83c7-dae682f2e80f" path="/var/lib/kubelet/pods/48a806c6-cce7-47d8-83c7-dae682f2e80f/volumes" Jan 28 19:00:41 crc kubenswrapper[4721]: I0128 19:00:41.070515 4721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 28 19:00:41 crc kubenswrapper[4721]: I0128 19:00:41.257267 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/026c3758-a794-4177-9412-8af411eeba01-run-httpd\") pod \"026c3758-a794-4177-9412-8af411eeba01\" (UID: \"026c3758-a794-4177-9412-8af411eeba01\") " Jan 28 19:00:41 crc kubenswrapper[4721]: I0128 19:00:41.257743 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/026c3758-a794-4177-9412-8af411eeba01-ceilometer-tls-certs\") pod \"026c3758-a794-4177-9412-8af411eeba01\" (UID: \"026c3758-a794-4177-9412-8af411eeba01\") " Jan 28 19:00:41 crc kubenswrapper[4721]: I0128 19:00:41.257868 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/026c3758-a794-4177-9412-8af411eeba01-combined-ca-bundle\") pod \"026c3758-a794-4177-9412-8af411eeba01\" (UID: \"026c3758-a794-4177-9412-8af411eeba01\") " Jan 28 19:00:41 crc kubenswrapper[4721]: I0128 19:00:41.257972 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/026c3758-a794-4177-9412-8af411eeba01-config-data\") pod \"026c3758-a794-4177-9412-8af411eeba01\" (UID: \"026c3758-a794-4177-9412-8af411eeba01\") " Jan 28 19:00:41 crc kubenswrapper[4721]: I0128 19:00:41.258006 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/026c3758-a794-4177-9412-8af411eeba01-scripts\") pod \"026c3758-a794-4177-9412-8af411eeba01\" (UID: \"026c3758-a794-4177-9412-8af411eeba01\") " Jan 28 19:00:41 crc kubenswrapper[4721]: I0128 19:00:41.258031 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/026c3758-a794-4177-9412-8af411eeba01-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "026c3758-a794-4177-9412-8af411eeba01" (UID: "026c3758-a794-4177-9412-8af411eeba01"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 19:00:41 crc kubenswrapper[4721]: I0128 19:00:41.258077 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2jzjb\" (UniqueName: \"kubernetes.io/projected/026c3758-a794-4177-9412-8af411eeba01-kube-api-access-2jzjb\") pod \"026c3758-a794-4177-9412-8af411eeba01\" (UID: \"026c3758-a794-4177-9412-8af411eeba01\") " Jan 28 19:00:41 crc kubenswrapper[4721]: I0128 19:00:41.258155 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/026c3758-a794-4177-9412-8af411eeba01-sg-core-conf-yaml\") pod \"026c3758-a794-4177-9412-8af411eeba01\" (UID: \"026c3758-a794-4177-9412-8af411eeba01\") " Jan 28 19:00:41 crc kubenswrapper[4721]: I0128 19:00:41.258248 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/026c3758-a794-4177-9412-8af411eeba01-log-httpd\") pod \"026c3758-a794-4177-9412-8af411eeba01\" (UID: \"026c3758-a794-4177-9412-8af411eeba01\") " Jan 28 19:00:41 crc kubenswrapper[4721]: I0128 19:00:41.258709 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/026c3758-a794-4177-9412-8af411eeba01-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "026c3758-a794-4177-9412-8af411eeba01" (UID: "026c3758-a794-4177-9412-8af411eeba01"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 19:00:41 crc kubenswrapper[4721]: I0128 19:00:41.259164 4721 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/026c3758-a794-4177-9412-8af411eeba01-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 28 19:00:41 crc kubenswrapper[4721]: I0128 19:00:41.259211 4721 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/026c3758-a794-4177-9412-8af411eeba01-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 28 19:00:41 crc kubenswrapper[4721]: I0128 19:00:41.267585 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/026c3758-a794-4177-9412-8af411eeba01-kube-api-access-2jzjb" (OuterVolumeSpecName: "kube-api-access-2jzjb") pod "026c3758-a794-4177-9412-8af411eeba01" (UID: "026c3758-a794-4177-9412-8af411eeba01"). InnerVolumeSpecName "kube-api-access-2jzjb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 19:00:41 crc kubenswrapper[4721]: I0128 19:00:41.267593 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/026c3758-a794-4177-9412-8af411eeba01-scripts" (OuterVolumeSpecName: "scripts") pod "026c3758-a794-4177-9412-8af411eeba01" (UID: "026c3758-a794-4177-9412-8af411eeba01"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 19:00:41 crc kubenswrapper[4721]: I0128 19:00:41.297983 4721 generic.go:334] "Generic (PLEG): container finished" podID="026c3758-a794-4177-9412-8af411eeba01" containerID="edd90f25b0bb54ea10a0f978a515d4838a23b7f05a5bb3ee32e4d06a5d87ccc4" exitCode=137 Jan 28 19:00:41 crc kubenswrapper[4721]: I0128 19:00:41.298044 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"026c3758-a794-4177-9412-8af411eeba01","Type":"ContainerDied","Data":"edd90f25b0bb54ea10a0f978a515d4838a23b7f05a5bb3ee32e4d06a5d87ccc4"} Jan 28 19:00:41 crc kubenswrapper[4721]: I0128 19:00:41.298078 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"026c3758-a794-4177-9412-8af411eeba01","Type":"ContainerDied","Data":"9ce6923df7b62816846cfdfc549cc961ba572b05b70d631f497490bb20b1b9a7"} Jan 28 19:00:41 crc kubenswrapper[4721]: I0128 19:00:41.298102 4721 scope.go:117] "RemoveContainer" containerID="a0fceffb251cff73d060baddd2bd37631b84fa83d14b79e6a8821fedca99bde1" Jan 28 19:00:41 crc kubenswrapper[4721]: I0128 19:00:41.298111 4721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 28 19:00:41 crc kubenswrapper[4721]: I0128 19:00:41.307310 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/026c3758-a794-4177-9412-8af411eeba01-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "026c3758-a794-4177-9412-8af411eeba01" (UID: "026c3758-a794-4177-9412-8af411eeba01"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 19:00:41 crc kubenswrapper[4721]: I0128 19:00:41.335074 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/026c3758-a794-4177-9412-8af411eeba01-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "026c3758-a794-4177-9412-8af411eeba01" (UID: "026c3758-a794-4177-9412-8af411eeba01"). InnerVolumeSpecName "ceilometer-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 19:00:41 crc kubenswrapper[4721]: I0128 19:00:41.360587 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2jzjb\" (UniqueName: \"kubernetes.io/projected/026c3758-a794-4177-9412-8af411eeba01-kube-api-access-2jzjb\") on node \"crc\" DevicePath \"\"" Jan 28 19:00:41 crc kubenswrapper[4721]: I0128 19:00:41.360626 4721 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/026c3758-a794-4177-9412-8af411eeba01-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 28 19:00:41 crc kubenswrapper[4721]: I0128 19:00:41.360640 4721 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/026c3758-a794-4177-9412-8af411eeba01-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 28 19:00:41 crc kubenswrapper[4721]: I0128 19:00:41.360649 4721 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/026c3758-a794-4177-9412-8af411eeba01-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 19:00:41 crc kubenswrapper[4721]: I0128 19:00:41.361074 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/026c3758-a794-4177-9412-8af411eeba01-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "026c3758-a794-4177-9412-8af411eeba01" (UID: "026c3758-a794-4177-9412-8af411eeba01"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 19:00:41 crc kubenswrapper[4721]: I0128 19:00:41.393311 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/026c3758-a794-4177-9412-8af411eeba01-config-data" (OuterVolumeSpecName: "config-data") pod "026c3758-a794-4177-9412-8af411eeba01" (UID: "026c3758-a794-4177-9412-8af411eeba01"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 19:00:41 crc kubenswrapper[4721]: I0128 19:00:41.421684 4721 scope.go:117] "RemoveContainer" containerID="3fa65a97224ff0da376aec4fd1c26f6f0cb401fc05e7ba5ccf92d907c8bf5d18" Jan 28 19:00:41 crc kubenswrapper[4721]: I0128 19:00:41.447668 4721 scope.go:117] "RemoveContainer" containerID="081493259a167422ec90175454310eea963dca1a797b8a67f2ab4e25e5a5ae9f" Jan 28 19:00:41 crc kubenswrapper[4721]: I0128 19:00:41.462727 4721 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/026c3758-a794-4177-9412-8af411eeba01-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 19:00:41 crc kubenswrapper[4721]: I0128 19:00:41.462769 4721 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/026c3758-a794-4177-9412-8af411eeba01-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 19:00:41 crc kubenswrapper[4721]: I0128 19:00:41.474230 4721 scope.go:117] "RemoveContainer" containerID="edd90f25b0bb54ea10a0f978a515d4838a23b7f05a5bb3ee32e4d06a5d87ccc4" Jan 28 19:00:41 crc kubenswrapper[4721]: I0128 19:00:41.502643 4721 scope.go:117] "RemoveContainer" containerID="a0fceffb251cff73d060baddd2bd37631b84fa83d14b79e6a8821fedca99bde1" Jan 28 19:00:41 crc kubenswrapper[4721]: E0128 19:00:41.503050 4721 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a0fceffb251cff73d060baddd2bd37631b84fa83d14b79e6a8821fedca99bde1\": container with ID starting with a0fceffb251cff73d060baddd2bd37631b84fa83d14b79e6a8821fedca99bde1 not found: ID does not exist" containerID="a0fceffb251cff73d060baddd2bd37631b84fa83d14b79e6a8821fedca99bde1" Jan 28 19:00:41 crc kubenswrapper[4721]: I0128 19:00:41.503101 4721 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a0fceffb251cff73d060baddd2bd37631b84fa83d14b79e6a8821fedca99bde1"} err="failed to get container status \"a0fceffb251cff73d060baddd2bd37631b84fa83d14b79e6a8821fedca99bde1\": rpc error: code = NotFound desc = could not find container \"a0fceffb251cff73d060baddd2bd37631b84fa83d14b79e6a8821fedca99bde1\": container with ID starting with a0fceffb251cff73d060baddd2bd37631b84fa83d14b79e6a8821fedca99bde1 not found: ID does not exist" Jan 28 19:00:41 crc kubenswrapper[4721]: I0128 19:00:41.503136 4721 scope.go:117] "RemoveContainer" containerID="3fa65a97224ff0da376aec4fd1c26f6f0cb401fc05e7ba5ccf92d907c8bf5d18" Jan 28 19:00:41 crc kubenswrapper[4721]: E0128 19:00:41.503383 4721 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3fa65a97224ff0da376aec4fd1c26f6f0cb401fc05e7ba5ccf92d907c8bf5d18\": container with ID starting with 3fa65a97224ff0da376aec4fd1c26f6f0cb401fc05e7ba5ccf92d907c8bf5d18 not found: ID does not exist" containerID="3fa65a97224ff0da376aec4fd1c26f6f0cb401fc05e7ba5ccf92d907c8bf5d18" Jan 28 19:00:41 crc kubenswrapper[4721]: I0128 19:00:41.503400 4721 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3fa65a97224ff0da376aec4fd1c26f6f0cb401fc05e7ba5ccf92d907c8bf5d18"} err="failed to get container status \"3fa65a97224ff0da376aec4fd1c26f6f0cb401fc05e7ba5ccf92d907c8bf5d18\": rpc error: code = NotFound desc = could not find container \"3fa65a97224ff0da376aec4fd1c26f6f0cb401fc05e7ba5ccf92d907c8bf5d18\": container with ID starting with 
3fa65a97224ff0da376aec4fd1c26f6f0cb401fc05e7ba5ccf92d907c8bf5d18 not found: ID does not exist" Jan 28 19:00:41 crc kubenswrapper[4721]: I0128 19:00:41.503413 4721 scope.go:117] "RemoveContainer" containerID="081493259a167422ec90175454310eea963dca1a797b8a67f2ab4e25e5a5ae9f" Jan 28 19:00:41 crc kubenswrapper[4721]: E0128 19:00:41.503651 4721 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"081493259a167422ec90175454310eea963dca1a797b8a67f2ab4e25e5a5ae9f\": container with ID starting with 081493259a167422ec90175454310eea963dca1a797b8a67f2ab4e25e5a5ae9f not found: ID does not exist" containerID="081493259a167422ec90175454310eea963dca1a797b8a67f2ab4e25e5a5ae9f" Jan 28 19:00:41 crc kubenswrapper[4721]: I0128 19:00:41.503665 4721 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"081493259a167422ec90175454310eea963dca1a797b8a67f2ab4e25e5a5ae9f"} err="failed to get container status \"081493259a167422ec90175454310eea963dca1a797b8a67f2ab4e25e5a5ae9f\": rpc error: code = NotFound desc = could not find container \"081493259a167422ec90175454310eea963dca1a797b8a67f2ab4e25e5a5ae9f\": container with ID starting with 081493259a167422ec90175454310eea963dca1a797b8a67f2ab4e25e5a5ae9f not found: ID does not exist" Jan 28 19:00:41 crc kubenswrapper[4721]: I0128 19:00:41.503682 4721 scope.go:117] "RemoveContainer" containerID="edd90f25b0bb54ea10a0f978a515d4838a23b7f05a5bb3ee32e4d06a5d87ccc4" Jan 28 19:00:41 crc kubenswrapper[4721]: E0128 19:00:41.504039 4721 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"edd90f25b0bb54ea10a0f978a515d4838a23b7f05a5bb3ee32e4d06a5d87ccc4\": container with ID starting with edd90f25b0bb54ea10a0f978a515d4838a23b7f05a5bb3ee32e4d06a5d87ccc4 not found: ID does not exist" containerID="edd90f25b0bb54ea10a0f978a515d4838a23b7f05a5bb3ee32e4d06a5d87ccc4" Jan 28 19:00:41 crc kubenswrapper[4721]: I0128 19:00:41.504064 4721 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"edd90f25b0bb54ea10a0f978a515d4838a23b7f05a5bb3ee32e4d06a5d87ccc4"} err="failed to get container status \"edd90f25b0bb54ea10a0f978a515d4838a23b7f05a5bb3ee32e4d06a5d87ccc4\": rpc error: code = NotFound desc = could not find container \"edd90f25b0bb54ea10a0f978a515d4838a23b7f05a5bb3ee32e4d06a5d87ccc4\": container with ID starting with edd90f25b0bb54ea10a0f978a515d4838a23b7f05a5bb3ee32e4d06a5d87ccc4 not found: ID does not exist" Jan 28 19:00:41 crc kubenswrapper[4721]: I0128 19:00:41.627949 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 28 19:00:41 crc kubenswrapper[4721]: I0128 19:00:41.649244 4721 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 28 19:00:41 crc kubenswrapper[4721]: I0128 19:00:41.678873 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 28 19:00:41 crc kubenswrapper[4721]: E0128 19:00:41.679566 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="48a806c6-cce7-47d8-83c7-dae682f2e80f" containerName="dnsmasq-dns" Jan 28 19:00:41 crc kubenswrapper[4721]: I0128 19:00:41.679593 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="48a806c6-cce7-47d8-83c7-dae682f2e80f" containerName="dnsmasq-dns" Jan 28 19:00:41 crc kubenswrapper[4721]: E0128 19:00:41.679607 4721 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="bccc5c45-eb45-452f-8f40-9e83893bf636" containerName="registry-server" Jan 28 19:00:41 crc kubenswrapper[4721]: I0128 19:00:41.679614 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="bccc5c45-eb45-452f-8f40-9e83893bf636" containerName="registry-server" Jan 28 19:00:41 crc kubenswrapper[4721]: E0128 19:00:41.679631 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="48a806c6-cce7-47d8-83c7-dae682f2e80f" containerName="init" Jan 28 19:00:41 crc kubenswrapper[4721]: I0128 19:00:41.679638 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="48a806c6-cce7-47d8-83c7-dae682f2e80f" containerName="init" Jan 28 19:00:41 crc kubenswrapper[4721]: E0128 19:00:41.679653 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bccc5c45-eb45-452f-8f40-9e83893bf636" containerName="extract-content" Jan 28 19:00:41 crc kubenswrapper[4721]: I0128 19:00:41.679660 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="bccc5c45-eb45-452f-8f40-9e83893bf636" containerName="extract-content" Jan 28 19:00:41 crc kubenswrapper[4721]: E0128 19:00:41.679686 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="026c3758-a794-4177-9412-8af411eeba01" containerName="ceilometer-notification-agent" Jan 28 19:00:41 crc kubenswrapper[4721]: I0128 19:00:41.679693 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="026c3758-a794-4177-9412-8af411eeba01" containerName="ceilometer-notification-agent" Jan 28 19:00:41 crc kubenswrapper[4721]: E0128 19:00:41.679704 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="026c3758-a794-4177-9412-8af411eeba01" containerName="ceilometer-central-agent" Jan 28 19:00:41 crc kubenswrapper[4721]: I0128 19:00:41.679712 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="026c3758-a794-4177-9412-8af411eeba01" containerName="ceilometer-central-agent" Jan 28 19:00:41 crc kubenswrapper[4721]: E0128 19:00:41.679727 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bccc5c45-eb45-452f-8f40-9e83893bf636" containerName="extract-utilities" Jan 28 19:00:41 crc kubenswrapper[4721]: I0128 19:00:41.679736 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="bccc5c45-eb45-452f-8f40-9e83893bf636" containerName="extract-utilities" Jan 28 19:00:41 crc kubenswrapper[4721]: E0128 19:00:41.679755 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="026c3758-a794-4177-9412-8af411eeba01" containerName="proxy-httpd" Jan 28 19:00:41 crc kubenswrapper[4721]: I0128 19:00:41.679767 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="026c3758-a794-4177-9412-8af411eeba01" containerName="proxy-httpd" Jan 28 19:00:41 crc kubenswrapper[4721]: E0128 19:00:41.679791 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="026c3758-a794-4177-9412-8af411eeba01" containerName="sg-core" Jan 28 19:00:41 crc kubenswrapper[4721]: I0128 19:00:41.679799 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="026c3758-a794-4177-9412-8af411eeba01" containerName="sg-core" Jan 28 19:00:41 crc kubenswrapper[4721]: I0128 19:00:41.680036 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="026c3758-a794-4177-9412-8af411eeba01" containerName="sg-core" Jan 28 19:00:41 crc kubenswrapper[4721]: I0128 19:00:41.680055 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="bccc5c45-eb45-452f-8f40-9e83893bf636" containerName="registry-server" Jan 28 19:00:41 crc kubenswrapper[4721]: I0128 19:00:41.680072 4721 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="026c3758-a794-4177-9412-8af411eeba01" containerName="ceilometer-notification-agent" Jan 28 19:00:41 crc kubenswrapper[4721]: I0128 19:00:41.680091 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="48a806c6-cce7-47d8-83c7-dae682f2e80f" containerName="dnsmasq-dns" Jan 28 19:00:41 crc kubenswrapper[4721]: I0128 19:00:41.680113 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="026c3758-a794-4177-9412-8af411eeba01" containerName="ceilometer-central-agent" Jan 28 19:00:41 crc kubenswrapper[4721]: I0128 19:00:41.680129 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="026c3758-a794-4177-9412-8af411eeba01" containerName="proxy-httpd" Jan 28 19:00:41 crc kubenswrapper[4721]: I0128 19:00:41.686454 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 28 19:00:41 crc kubenswrapper[4721]: I0128 19:00:41.693392 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 28 19:00:41 crc kubenswrapper[4721]: I0128 19:00:41.693708 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 28 19:00:41 crc kubenswrapper[4721]: I0128 19:00:41.694513 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Jan 28 19:00:41 crc kubenswrapper[4721]: I0128 19:00:41.709762 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 28 19:00:41 crc kubenswrapper[4721]: I0128 19:00:41.830621 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-lmpdp"] Jan 28 19:00:41 crc kubenswrapper[4721]: I0128 19:00:41.833756 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-lmpdp" Jan 28 19:00:41 crc kubenswrapper[4721]: I0128 19:00:41.837530 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 28 19:00:41 crc kubenswrapper[4721]: I0128 19:00:41.839774 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 28 19:00:41 crc kubenswrapper[4721]: I0128 19:00:41.839947 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 28 19:00:41 crc kubenswrapper[4721]: I0128 19:00:41.840079 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-7sc4s" Jan 28 19:00:41 crc kubenswrapper[4721]: I0128 19:00:41.851295 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-lmpdp"] Jan 28 19:00:41 crc kubenswrapper[4721]: I0128 19:00:41.871855 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/92164365-9f87-4c26-b4c9-9d212e4aa1e1-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"92164365-9f87-4c26-b4c9-9d212e4aa1e1\") " pod="openstack/ceilometer-0" Jan 28 19:00:41 crc kubenswrapper[4721]: I0128 19:00:41.871907 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/92164365-9f87-4c26-b4c9-9d212e4aa1e1-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"92164365-9f87-4c26-b4c9-9d212e4aa1e1\") " pod="openstack/ceilometer-0" Jan 28 19:00:41 crc kubenswrapper[4721]: I0128 19:00:41.871960 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/92164365-9f87-4c26-b4c9-9d212e4aa1e1-log-httpd\") pod \"ceilometer-0\" (UID: \"92164365-9f87-4c26-b4c9-9d212e4aa1e1\") " pod="openstack/ceilometer-0" Jan 28 19:00:41 crc kubenswrapper[4721]: I0128 19:00:41.872072 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/92164365-9f87-4c26-b4c9-9d212e4aa1e1-scripts\") pod \"ceilometer-0\" (UID: \"92164365-9f87-4c26-b4c9-9d212e4aa1e1\") " pod="openstack/ceilometer-0" Jan 28 19:00:41 crc kubenswrapper[4721]: I0128 19:00:41.872131 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7pchf\" (UniqueName: \"kubernetes.io/projected/92164365-9f87-4c26-b4c9-9d212e4aa1e1-kube-api-access-7pchf\") pod \"ceilometer-0\" (UID: \"92164365-9f87-4c26-b4c9-9d212e4aa1e1\") " pod="openstack/ceilometer-0" Jan 28 19:00:41 crc kubenswrapper[4721]: I0128 19:00:41.872158 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/92164365-9f87-4c26-b4c9-9d212e4aa1e1-run-httpd\") pod \"ceilometer-0\" (UID: \"92164365-9f87-4c26-b4c9-9d212e4aa1e1\") " pod="openstack/ceilometer-0" Jan 28 19:00:41 crc kubenswrapper[4721]: I0128 19:00:41.872192 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/92164365-9f87-4c26-b4c9-9d212e4aa1e1-combined-ca-bundle\") pod \"ceilometer-0\" (UID: 
\"92164365-9f87-4c26-b4c9-9d212e4aa1e1\") " pod="openstack/ceilometer-0" Jan 28 19:00:41 crc kubenswrapper[4721]: I0128 19:00:41.872229 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/92164365-9f87-4c26-b4c9-9d212e4aa1e1-config-data\") pod \"ceilometer-0\" (UID: \"92164365-9f87-4c26-b4c9-9d212e4aa1e1\") " pod="openstack/ceilometer-0" Jan 28 19:00:41 crc kubenswrapper[4721]: I0128 19:00:41.974374 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/92164365-9f87-4c26-b4c9-9d212e4aa1e1-run-httpd\") pod \"ceilometer-0\" (UID: \"92164365-9f87-4c26-b4c9-9d212e4aa1e1\") " pod="openstack/ceilometer-0" Jan 28 19:00:41 crc kubenswrapper[4721]: I0128 19:00:41.974436 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/92164365-9f87-4c26-b4c9-9d212e4aa1e1-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"92164365-9f87-4c26-b4c9-9d212e4aa1e1\") " pod="openstack/ceilometer-0" Jan 28 19:00:41 crc kubenswrapper[4721]: I0128 19:00:41.974476 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6962dcfe-fe79-48fd-af49-7b4c644856d9-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-lmpdp\" (UID: \"6962dcfe-fe79-48fd-af49-7b4c644856d9\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-lmpdp" Jan 28 19:00:41 crc kubenswrapper[4721]: I0128 19:00:41.974510 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/92164365-9f87-4c26-b4c9-9d212e4aa1e1-config-data\") pod \"ceilometer-0\" (UID: \"92164365-9f87-4c26-b4c9-9d212e4aa1e1\") " pod="openstack/ceilometer-0" Jan 28 19:00:41 crc kubenswrapper[4721]: I0128 19:00:41.974552 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6962dcfe-fe79-48fd-af49-7b4c644856d9-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-lmpdp\" (UID: \"6962dcfe-fe79-48fd-af49-7b4c644856d9\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-lmpdp" Jan 28 19:00:41 crc kubenswrapper[4721]: I0128 19:00:41.974578 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/92164365-9f87-4c26-b4c9-9d212e4aa1e1-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"92164365-9f87-4c26-b4c9-9d212e4aa1e1\") " pod="openstack/ceilometer-0" Jan 28 19:00:41 crc kubenswrapper[4721]: I0128 19:00:41.974598 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/92164365-9f87-4c26-b4c9-9d212e4aa1e1-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"92164365-9f87-4c26-b4c9-9d212e4aa1e1\") " pod="openstack/ceilometer-0" Jan 28 19:00:41 crc kubenswrapper[4721]: I0128 19:00:41.974639 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8jb9q\" (UniqueName: \"kubernetes.io/projected/6962dcfe-fe79-48fd-af49-7b4c644856d9-kube-api-access-8jb9q\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-lmpdp\" (UID: \"6962dcfe-fe79-48fd-af49-7b4c644856d9\") " 
pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-lmpdp" Jan 28 19:00:41 crc kubenswrapper[4721]: I0128 19:00:41.974670 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/92164365-9f87-4c26-b4c9-9d212e4aa1e1-log-httpd\") pod \"ceilometer-0\" (UID: \"92164365-9f87-4c26-b4c9-9d212e4aa1e1\") " pod="openstack/ceilometer-0" Jan 28 19:00:41 crc kubenswrapper[4721]: I0128 19:00:41.974700 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6962dcfe-fe79-48fd-af49-7b4c644856d9-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-lmpdp\" (UID: \"6962dcfe-fe79-48fd-af49-7b4c644856d9\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-lmpdp" Jan 28 19:00:41 crc kubenswrapper[4721]: I0128 19:00:41.974761 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/92164365-9f87-4c26-b4c9-9d212e4aa1e1-scripts\") pod \"ceilometer-0\" (UID: \"92164365-9f87-4c26-b4c9-9d212e4aa1e1\") " pod="openstack/ceilometer-0" Jan 28 19:00:41 crc kubenswrapper[4721]: I0128 19:00:41.974815 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7pchf\" (UniqueName: \"kubernetes.io/projected/92164365-9f87-4c26-b4c9-9d212e4aa1e1-kube-api-access-7pchf\") pod \"ceilometer-0\" (UID: \"92164365-9f87-4c26-b4c9-9d212e4aa1e1\") " pod="openstack/ceilometer-0" Jan 28 19:00:41 crc kubenswrapper[4721]: I0128 19:00:41.974919 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/92164365-9f87-4c26-b4c9-9d212e4aa1e1-run-httpd\") pod \"ceilometer-0\" (UID: \"92164365-9f87-4c26-b4c9-9d212e4aa1e1\") " pod="openstack/ceilometer-0" Jan 28 19:00:41 crc kubenswrapper[4721]: I0128 19:00:41.975451 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/92164365-9f87-4c26-b4c9-9d212e4aa1e1-log-httpd\") pod \"ceilometer-0\" (UID: \"92164365-9f87-4c26-b4c9-9d212e4aa1e1\") " pod="openstack/ceilometer-0" Jan 28 19:00:41 crc kubenswrapper[4721]: I0128 19:00:41.981013 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/92164365-9f87-4c26-b4c9-9d212e4aa1e1-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"92164365-9f87-4c26-b4c9-9d212e4aa1e1\") " pod="openstack/ceilometer-0" Jan 28 19:00:41 crc kubenswrapper[4721]: I0128 19:00:41.981334 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/92164365-9f87-4c26-b4c9-9d212e4aa1e1-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"92164365-9f87-4c26-b4c9-9d212e4aa1e1\") " pod="openstack/ceilometer-0" Jan 28 19:00:41 crc kubenswrapper[4721]: I0128 19:00:41.982037 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/92164365-9f87-4c26-b4c9-9d212e4aa1e1-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"92164365-9f87-4c26-b4c9-9d212e4aa1e1\") " pod="openstack/ceilometer-0" Jan 28 19:00:41 crc kubenswrapper[4721]: I0128 19:00:41.983250 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/92164365-9f87-4c26-b4c9-9d212e4aa1e1-scripts\") pod \"ceilometer-0\" (UID: \"92164365-9f87-4c26-b4c9-9d212e4aa1e1\") " pod="openstack/ceilometer-0" Jan 28 19:00:41 crc kubenswrapper[4721]: I0128 19:00:41.987106 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/92164365-9f87-4c26-b4c9-9d212e4aa1e1-config-data\") pod \"ceilometer-0\" (UID: \"92164365-9f87-4c26-b4c9-9d212e4aa1e1\") " pod="openstack/ceilometer-0" Jan 28 19:00:41 crc kubenswrapper[4721]: I0128 19:00:41.994531 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7pchf\" (UniqueName: \"kubernetes.io/projected/92164365-9f87-4c26-b4c9-9d212e4aa1e1-kube-api-access-7pchf\") pod \"ceilometer-0\" (UID: \"92164365-9f87-4c26-b4c9-9d212e4aa1e1\") " pod="openstack/ceilometer-0" Jan 28 19:00:42 crc kubenswrapper[4721]: I0128 19:00:42.021987 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 28 19:00:42 crc kubenswrapper[4721]: I0128 19:00:42.077287 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8jb9q\" (UniqueName: \"kubernetes.io/projected/6962dcfe-fe79-48fd-af49-7b4c644856d9-kube-api-access-8jb9q\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-lmpdp\" (UID: \"6962dcfe-fe79-48fd-af49-7b4c644856d9\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-lmpdp" Jan 28 19:00:42 crc kubenswrapper[4721]: I0128 19:00:42.077357 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6962dcfe-fe79-48fd-af49-7b4c644856d9-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-lmpdp\" (UID: \"6962dcfe-fe79-48fd-af49-7b4c644856d9\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-lmpdp" Jan 28 19:00:42 crc kubenswrapper[4721]: I0128 19:00:42.077507 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6962dcfe-fe79-48fd-af49-7b4c644856d9-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-lmpdp\" (UID: \"6962dcfe-fe79-48fd-af49-7b4c644856d9\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-lmpdp" Jan 28 19:00:42 crc kubenswrapper[4721]: I0128 19:00:42.077564 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6962dcfe-fe79-48fd-af49-7b4c644856d9-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-lmpdp\" (UID: \"6962dcfe-fe79-48fd-af49-7b4c644856d9\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-lmpdp" Jan 28 19:00:42 crc kubenswrapper[4721]: I0128 19:00:42.083876 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6962dcfe-fe79-48fd-af49-7b4c644856d9-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-lmpdp\" (UID: \"6962dcfe-fe79-48fd-af49-7b4c644856d9\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-lmpdp" Jan 28 19:00:42 crc kubenswrapper[4721]: I0128 19:00:42.091803 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6962dcfe-fe79-48fd-af49-7b4c644856d9-inventory\") pod 
\"repo-setup-edpm-deployment-openstack-edpm-ipam-lmpdp\" (UID: \"6962dcfe-fe79-48fd-af49-7b4c644856d9\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-lmpdp" Jan 28 19:00:42 crc kubenswrapper[4721]: I0128 19:00:42.092975 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6962dcfe-fe79-48fd-af49-7b4c644856d9-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-lmpdp\" (UID: \"6962dcfe-fe79-48fd-af49-7b4c644856d9\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-lmpdp" Jan 28 19:00:42 crc kubenswrapper[4721]: I0128 19:00:42.107454 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8jb9q\" (UniqueName: \"kubernetes.io/projected/6962dcfe-fe79-48fd-af49-7b4c644856d9-kube-api-access-8jb9q\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-lmpdp\" (UID: \"6962dcfe-fe79-48fd-af49-7b4c644856d9\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-lmpdp" Jan 28 19:00:42 crc kubenswrapper[4721]: I0128 19:00:42.159684 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-lmpdp" Jan 28 19:00:42 crc kubenswrapper[4721]: W0128 19:00:42.547912 4721 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod92164365_9f87_4c26_b4c9_9d212e4aa1e1.slice/crio-45ccd97afa5095dbbc2abcd6d398276f22e06458b63155d0e449935aa93c18c3 WatchSource:0}: Error finding container 45ccd97afa5095dbbc2abcd6d398276f22e06458b63155d0e449935aa93c18c3: Status 404 returned error can't find the container with id 45ccd97afa5095dbbc2abcd6d398276f22e06458b63155d0e449935aa93c18c3 Jan 28 19:00:42 crc kubenswrapper[4721]: I0128 19:00:42.553195 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 28 19:00:42 crc kubenswrapper[4721]: I0128 19:00:42.784221 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-lmpdp"] Jan 28 19:00:42 crc kubenswrapper[4721]: W0128 19:00:42.794901 4721 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6962dcfe_fe79_48fd_af49_7b4c644856d9.slice/crio-ad11d61af5d58e154e0ad5e064c460cea94b4203d032f1a234114cf829916eb0 WatchSource:0}: Error finding container ad11d61af5d58e154e0ad5e064c460cea94b4203d032f1a234114cf829916eb0: Status 404 returned error can't find the container with id ad11d61af5d58e154e0ad5e064c460cea94b4203d032f1a234114cf829916eb0 Jan 28 19:00:43 crc kubenswrapper[4721]: I0128 19:00:43.333870 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-lmpdp" event={"ID":"6962dcfe-fe79-48fd-af49-7b4c644856d9","Type":"ContainerStarted","Data":"ad11d61af5d58e154e0ad5e064c460cea94b4203d032f1a234114cf829916eb0"} Jan 28 19:00:43 crc kubenswrapper[4721]: I0128 19:00:43.336913 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"92164365-9f87-4c26-b4c9-9d212e4aa1e1","Type":"ContainerStarted","Data":"45ccd97afa5095dbbc2abcd6d398276f22e06458b63155d0e449935aa93c18c3"} Jan 28 19:00:43 crc kubenswrapper[4721]: I0128 19:00:43.544687 4721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="026c3758-a794-4177-9412-8af411eeba01" 
path="/var/lib/kubelet/pods/026c3758-a794-4177-9412-8af411eeba01/volumes" Jan 28 19:00:46 crc kubenswrapper[4721]: I0128 19:00:46.371564 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"92164365-9f87-4c26-b4c9-9d212e4aa1e1","Type":"ContainerStarted","Data":"5d7d78bd9920f148cfcbf467ae0a765ac3c2e368fc3d6bcd4fbfdfc09e1fb666"} Jan 28 19:00:47 crc kubenswrapper[4721]: I0128 19:00:47.528625 4721 scope.go:117] "RemoveContainer" containerID="2590a7cb210dbff0e84ad585e4d82733cd80880f3d1297edf670e3d6faf23070" Jan 28 19:00:47 crc kubenswrapper[4721]: E0128 19:00:47.529311 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-76rx2_openshift-machine-config-operator(6e3427a4-9a03-4a08-bf7f-7a5e96290ad6)\"" pod="openshift-machine-config-operator/machine-config-daemon-76rx2" podUID="6e3427a4-9a03-4a08-bf7f-7a5e96290ad6" Jan 28 19:00:48 crc kubenswrapper[4721]: I0128 19:00:48.408740 4721 generic.go:334] "Generic (PLEG): container finished" podID="a493b27e-e634-4b09-ae05-2a69c5ad0d68" containerID="5bb8b16913fae619aa4c67b8f79f8e2acfba14af834cde80a4947bf0e9b8b398" exitCode=0 Jan 28 19:00:48 crc kubenswrapper[4721]: I0128 19:00:48.408819 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"a493b27e-e634-4b09-ae05-2a69c5ad0d68","Type":"ContainerDied","Data":"5bb8b16913fae619aa4c67b8f79f8e2acfba14af834cde80a4947bf0e9b8b398"} Jan 28 19:00:48 crc kubenswrapper[4721]: I0128 19:00:48.412249 4721 generic.go:334] "Generic (PLEG): container finished" podID="88f1129c-54fc-423a-993d-560aecdd75eb" containerID="4f892c8855c5c43cfd71e18a26303a0e5dc6bb57ccd7326172b9108ac9c15cb3" exitCode=0 Jan 28 19:00:48 crc kubenswrapper[4721]: I0128 19:00:48.412641 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"88f1129c-54fc-423a-993d-560aecdd75eb","Type":"ContainerDied","Data":"4f892c8855c5c43cfd71e18a26303a0e5dc6bb57ccd7326172b9108ac9c15cb3"} Jan 28 19:00:52 crc kubenswrapper[4721]: I0128 19:00:52.493728 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"88f1129c-54fc-423a-993d-560aecdd75eb","Type":"ContainerStarted","Data":"d4b96dbe586ad7fdb47dbee9231233a7b0d0eebc1b10e03cd00c1ce9c35db246"} Jan 28 19:00:52 crc kubenswrapper[4721]: I0128 19:00:52.496710 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Jan 28 19:00:52 crc kubenswrapper[4721]: I0128 19:00:52.502601 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"92164365-9f87-4c26-b4c9-9d212e4aa1e1","Type":"ContainerStarted","Data":"f2eba4d31433bd07908f53b75f90a44415cd9152c324b9a8801a5f9de86436b9"} Jan 28 19:00:52 crc kubenswrapper[4721]: I0128 19:00:52.507635 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"a493b27e-e634-4b09-ae05-2a69c5ad0d68","Type":"ContainerStarted","Data":"09a7b55724faa0462e7d0f3cbec652fbb1c5c45cde85a912a6c68c843345a323"} Jan 28 19:00:52 crc kubenswrapper[4721]: I0128 19:00:52.511449 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-lmpdp" 
event={"ID":"6962dcfe-fe79-48fd-af49-7b4c644856d9","Type":"ContainerStarted","Data":"f286e87640bbacd9e9cfa1d2ccddf809599fcba76ecda25645c961ed685bc6a4"} Jan 28 19:00:52 crc kubenswrapper[4721]: I0128 19:00:52.535390 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=40.535333713 podStartE2EDuration="40.535333713s" podCreationTimestamp="2026-01-28 19:00:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 19:00:52.529945752 +0000 UTC m=+1618.255251322" watchObservedRunningTime="2026-01-28 19:00:52.535333713 +0000 UTC m=+1618.260639273" Jan 28 19:00:52 crc kubenswrapper[4721]: I0128 19:00:52.558273 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-lmpdp" podStartSLOduration=2.842597454 podStartE2EDuration="11.558241782s" podCreationTimestamp="2026-01-28 19:00:41 +0000 UTC" firstStartedPulling="2026-01-28 19:00:42.798457736 +0000 UTC m=+1608.523763296" lastFinishedPulling="2026-01-28 19:00:51.514102064 +0000 UTC m=+1617.239407624" observedRunningTime="2026-01-28 19:00:52.552479459 +0000 UTC m=+1618.277785019" watchObservedRunningTime="2026-01-28 19:00:52.558241782 +0000 UTC m=+1618.283547342" Jan 28 19:00:52 crc kubenswrapper[4721]: I0128 19:00:52.587390 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=40.587367369 podStartE2EDuration="40.587367369s" podCreationTimestamp="2026-01-28 19:00:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 19:00:52.583885888 +0000 UTC m=+1618.309191448" watchObservedRunningTime="2026-01-28 19:00:52.587367369 +0000 UTC m=+1618.312672929" Jan 28 19:00:52 crc kubenswrapper[4721]: I0128 19:00:52.738393 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Jan 28 19:00:53 crc kubenswrapper[4721]: I0128 19:00:53.527076 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"92164365-9f87-4c26-b4c9-9d212e4aa1e1","Type":"ContainerStarted","Data":"d917b6d225fead4b9fa289ac8723c9fe0b60947a08928001c2a9d1949027e848"} Jan 28 19:00:55 crc kubenswrapper[4721]: I0128 19:00:55.580180 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"92164365-9f87-4c26-b4c9-9d212e4aa1e1","Type":"ContainerStarted","Data":"b4210006624071a4ebc10b0ef9658ca73fda8254a4731002709b2d96f1c2e0b6"} Jan 28 19:00:55 crc kubenswrapper[4721]: I0128 19:00:55.582581 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 28 19:00:55 crc kubenswrapper[4721]: I0128 19:00:55.612617 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.6661309600000003 podStartE2EDuration="14.612595843s" podCreationTimestamp="2026-01-28 19:00:41 +0000 UTC" firstStartedPulling="2026-01-28 19:00:42.550729568 +0000 UTC m=+1608.276035128" lastFinishedPulling="2026-01-28 19:00:54.497194461 +0000 UTC m=+1620.222500011" observedRunningTime="2026-01-28 19:00:55.608868621 +0000 UTC m=+1621.334174181" watchObservedRunningTime="2026-01-28 19:00:55.612595843 +0000 UTC m=+1621.337901403" Jan 28 19:00:59 crc kubenswrapper[4721]: I0128 19:00:59.529939 4721 
scope.go:117] "RemoveContainer" containerID="2590a7cb210dbff0e84ad585e4d82733cd80880f3d1297edf670e3d6faf23070" Jan 28 19:00:59 crc kubenswrapper[4721]: E0128 19:00:59.530942 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-76rx2_openshift-machine-config-operator(6e3427a4-9a03-4a08-bf7f-7a5e96290ad6)\"" pod="openshift-machine-config-operator/machine-config-daemon-76rx2" podUID="6e3427a4-9a03-4a08-bf7f-7a5e96290ad6" Jan 28 19:01:00 crc kubenswrapper[4721]: I0128 19:01:00.165328 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29493781-lgwjg"] Jan 28 19:01:00 crc kubenswrapper[4721]: I0128 19:01:00.166906 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29493781-lgwjg" Jan 28 19:01:00 crc kubenswrapper[4721]: I0128 19:01:00.182572 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29493781-lgwjg"] Jan 28 19:01:00 crc kubenswrapper[4721]: I0128 19:01:00.285964 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/16b77be6-6887-4534-a5e9-fc53746e8bde-combined-ca-bundle\") pod \"keystone-cron-29493781-lgwjg\" (UID: \"16b77be6-6887-4534-a5e9-fc53746e8bde\") " pod="openstack/keystone-cron-29493781-lgwjg" Jan 28 19:01:00 crc kubenswrapper[4721]: I0128 19:01:00.286083 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l6pkj\" (UniqueName: \"kubernetes.io/projected/16b77be6-6887-4534-a5e9-fc53746e8bde-kube-api-access-l6pkj\") pod \"keystone-cron-29493781-lgwjg\" (UID: \"16b77be6-6887-4534-a5e9-fc53746e8bde\") " pod="openstack/keystone-cron-29493781-lgwjg" Jan 28 19:01:00 crc kubenswrapper[4721]: I0128 19:01:00.286122 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/16b77be6-6887-4534-a5e9-fc53746e8bde-fernet-keys\") pod \"keystone-cron-29493781-lgwjg\" (UID: \"16b77be6-6887-4534-a5e9-fc53746e8bde\") " pod="openstack/keystone-cron-29493781-lgwjg" Jan 28 19:01:00 crc kubenswrapper[4721]: I0128 19:01:00.286199 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/16b77be6-6887-4534-a5e9-fc53746e8bde-config-data\") pod \"keystone-cron-29493781-lgwjg\" (UID: \"16b77be6-6887-4534-a5e9-fc53746e8bde\") " pod="openstack/keystone-cron-29493781-lgwjg" Jan 28 19:01:00 crc kubenswrapper[4721]: I0128 19:01:00.389776 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/16b77be6-6887-4534-a5e9-fc53746e8bde-combined-ca-bundle\") pod \"keystone-cron-29493781-lgwjg\" (UID: \"16b77be6-6887-4534-a5e9-fc53746e8bde\") " pod="openstack/keystone-cron-29493781-lgwjg" Jan 28 19:01:00 crc kubenswrapper[4721]: I0128 19:01:00.390027 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l6pkj\" (UniqueName: \"kubernetes.io/projected/16b77be6-6887-4534-a5e9-fc53746e8bde-kube-api-access-l6pkj\") pod \"keystone-cron-29493781-lgwjg\" (UID: \"16b77be6-6887-4534-a5e9-fc53746e8bde\") " pod="openstack/keystone-cron-29493781-lgwjg" Jan 28 
19:01:00 crc kubenswrapper[4721]: I0128 19:01:00.390056 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/16b77be6-6887-4534-a5e9-fc53746e8bde-fernet-keys\") pod \"keystone-cron-29493781-lgwjg\" (UID: \"16b77be6-6887-4534-a5e9-fc53746e8bde\") " pod="openstack/keystone-cron-29493781-lgwjg" Jan 28 19:01:00 crc kubenswrapper[4721]: I0128 19:01:00.391517 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/16b77be6-6887-4534-a5e9-fc53746e8bde-config-data\") pod \"keystone-cron-29493781-lgwjg\" (UID: \"16b77be6-6887-4534-a5e9-fc53746e8bde\") " pod="openstack/keystone-cron-29493781-lgwjg" Jan 28 19:01:00 crc kubenswrapper[4721]: I0128 19:01:00.396830 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/16b77be6-6887-4534-a5e9-fc53746e8bde-fernet-keys\") pod \"keystone-cron-29493781-lgwjg\" (UID: \"16b77be6-6887-4534-a5e9-fc53746e8bde\") " pod="openstack/keystone-cron-29493781-lgwjg" Jan 28 19:01:00 crc kubenswrapper[4721]: I0128 19:01:00.398285 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/16b77be6-6887-4534-a5e9-fc53746e8bde-config-data\") pod \"keystone-cron-29493781-lgwjg\" (UID: \"16b77be6-6887-4534-a5e9-fc53746e8bde\") " pod="openstack/keystone-cron-29493781-lgwjg" Jan 28 19:01:00 crc kubenswrapper[4721]: I0128 19:01:00.399410 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/16b77be6-6887-4534-a5e9-fc53746e8bde-combined-ca-bundle\") pod \"keystone-cron-29493781-lgwjg\" (UID: \"16b77be6-6887-4534-a5e9-fc53746e8bde\") " pod="openstack/keystone-cron-29493781-lgwjg" Jan 28 19:01:00 crc kubenswrapper[4721]: I0128 19:01:00.418789 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l6pkj\" (UniqueName: \"kubernetes.io/projected/16b77be6-6887-4534-a5e9-fc53746e8bde-kube-api-access-l6pkj\") pod \"keystone-cron-29493781-lgwjg\" (UID: \"16b77be6-6887-4534-a5e9-fc53746e8bde\") " pod="openstack/keystone-cron-29493781-lgwjg" Jan 28 19:01:00 crc kubenswrapper[4721]: I0128 19:01:00.500742 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29493781-lgwjg" Jan 28 19:01:02 crc kubenswrapper[4721]: I0128 19:01:02.739448 4721 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="a493b27e-e634-4b09-ae05-2a69c5ad0d68" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.244:5671: connect: connection refused" Jan 28 19:01:02 crc kubenswrapper[4721]: I0128 19:01:02.757765 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29493781-lgwjg"] Jan 28 19:01:03 crc kubenswrapper[4721]: I0128 19:01:03.282233 4721 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="88f1129c-54fc-423a-993d-560aecdd75eb" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.245:5671: connect: connection refused" Jan 28 19:01:03 crc kubenswrapper[4721]: I0128 19:01:03.679458 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29493781-lgwjg" event={"ID":"16b77be6-6887-4534-a5e9-fc53746e8bde","Type":"ContainerStarted","Data":"c654245780f7ec3303e8664be26d590f4e112a04d6e2ae477b624136cfd854d3"} Jan 28 19:01:03 crc kubenswrapper[4721]: I0128 19:01:03.679531 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29493781-lgwjg" event={"ID":"16b77be6-6887-4534-a5e9-fc53746e8bde","Type":"ContainerStarted","Data":"01b151d0f1721c62da05449d24ca6f2198df8430e2b5916df5422d409a872196"} Jan 28 19:01:03 crc kubenswrapper[4721]: I0128 19:01:03.718962 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cron-29493781-lgwjg" podStartSLOduration=3.718937182 podStartE2EDuration="3.718937182s" podCreationTimestamp="2026-01-28 19:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 19:01:03.709687749 +0000 UTC m=+1629.434993329" watchObservedRunningTime="2026-01-28 19:01:03.718937182 +0000 UTC m=+1629.444242742" Jan 28 19:01:03 crc kubenswrapper[4721]: I0128 19:01:03.772464 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cloudkitty-api-0" Jan 28 19:01:04 crc kubenswrapper[4721]: I0128 19:01:04.694480 4721 generic.go:334] "Generic (PLEG): container finished" podID="6962dcfe-fe79-48fd-af49-7b4c644856d9" containerID="f286e87640bbacd9e9cfa1d2ccddf809599fcba76ecda25645c961ed685bc6a4" exitCode=0 Jan 28 19:01:04 crc kubenswrapper[4721]: I0128 19:01:04.694608 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-lmpdp" event={"ID":"6962dcfe-fe79-48fd-af49-7b4c644856d9","Type":"ContainerDied","Data":"f286e87640bbacd9e9cfa1d2ccddf809599fcba76ecda25645c961ed685bc6a4"} Jan 28 19:01:05 crc kubenswrapper[4721]: I0128 19:01:05.709706 4721 generic.go:334] "Generic (PLEG): container finished" podID="16b77be6-6887-4534-a5e9-fc53746e8bde" containerID="c654245780f7ec3303e8664be26d590f4e112a04d6e2ae477b624136cfd854d3" exitCode=0 Jan 28 19:01:05 crc kubenswrapper[4721]: I0128 19:01:05.709879 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29493781-lgwjg" event={"ID":"16b77be6-6887-4534-a5e9-fc53746e8bde","Type":"ContainerDied","Data":"c654245780f7ec3303e8664be26d590f4e112a04d6e2ae477b624136cfd854d3"} Jan 28 19:01:06 crc kubenswrapper[4721]: I0128 19:01:06.267001 4721 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-lmpdp" Jan 28 19:01:06 crc kubenswrapper[4721]: I0128 19:01:06.460289 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8jb9q\" (UniqueName: \"kubernetes.io/projected/6962dcfe-fe79-48fd-af49-7b4c644856d9-kube-api-access-8jb9q\") pod \"6962dcfe-fe79-48fd-af49-7b4c644856d9\" (UID: \"6962dcfe-fe79-48fd-af49-7b4c644856d9\") " Jan 28 19:01:06 crc kubenswrapper[4721]: I0128 19:01:06.460526 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6962dcfe-fe79-48fd-af49-7b4c644856d9-inventory\") pod \"6962dcfe-fe79-48fd-af49-7b4c644856d9\" (UID: \"6962dcfe-fe79-48fd-af49-7b4c644856d9\") " Jan 28 19:01:06 crc kubenswrapper[4721]: I0128 19:01:06.460647 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6962dcfe-fe79-48fd-af49-7b4c644856d9-ssh-key-openstack-edpm-ipam\") pod \"6962dcfe-fe79-48fd-af49-7b4c644856d9\" (UID: \"6962dcfe-fe79-48fd-af49-7b4c644856d9\") " Jan 28 19:01:06 crc kubenswrapper[4721]: I0128 19:01:06.460716 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6962dcfe-fe79-48fd-af49-7b4c644856d9-repo-setup-combined-ca-bundle\") pod \"6962dcfe-fe79-48fd-af49-7b4c644856d9\" (UID: \"6962dcfe-fe79-48fd-af49-7b4c644856d9\") " Jan 28 19:01:06 crc kubenswrapper[4721]: I0128 19:01:06.484413 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6962dcfe-fe79-48fd-af49-7b4c644856d9-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "6962dcfe-fe79-48fd-af49-7b4c644856d9" (UID: "6962dcfe-fe79-48fd-af49-7b4c644856d9"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 19:01:06 crc kubenswrapper[4721]: I0128 19:01:06.484677 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6962dcfe-fe79-48fd-af49-7b4c644856d9-kube-api-access-8jb9q" (OuterVolumeSpecName: "kube-api-access-8jb9q") pod "6962dcfe-fe79-48fd-af49-7b4c644856d9" (UID: "6962dcfe-fe79-48fd-af49-7b4c644856d9"). InnerVolumeSpecName "kube-api-access-8jb9q". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 19:01:06 crc kubenswrapper[4721]: I0128 19:01:06.499532 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6962dcfe-fe79-48fd-af49-7b4c644856d9-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "6962dcfe-fe79-48fd-af49-7b4c644856d9" (UID: "6962dcfe-fe79-48fd-af49-7b4c644856d9"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 19:01:06 crc kubenswrapper[4721]: I0128 19:01:06.522284 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6962dcfe-fe79-48fd-af49-7b4c644856d9-inventory" (OuterVolumeSpecName: "inventory") pod "6962dcfe-fe79-48fd-af49-7b4c644856d9" (UID: "6962dcfe-fe79-48fd-af49-7b4c644856d9"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 19:01:06 crc kubenswrapper[4721]: I0128 19:01:06.564314 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8jb9q\" (UniqueName: \"kubernetes.io/projected/6962dcfe-fe79-48fd-af49-7b4c644856d9-kube-api-access-8jb9q\") on node \"crc\" DevicePath \"\"" Jan 28 19:01:06 crc kubenswrapper[4721]: I0128 19:01:06.564365 4721 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6962dcfe-fe79-48fd-af49-7b4c644856d9-inventory\") on node \"crc\" DevicePath \"\"" Jan 28 19:01:06 crc kubenswrapper[4721]: I0128 19:01:06.564381 4721 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6962dcfe-fe79-48fd-af49-7b4c644856d9-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 28 19:01:06 crc kubenswrapper[4721]: I0128 19:01:06.564396 4721 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6962dcfe-fe79-48fd-af49-7b4c644856d9-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 19:01:06 crc kubenswrapper[4721]: I0128 19:01:06.721845 4721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-lmpdp" Jan 28 19:01:06 crc kubenswrapper[4721]: I0128 19:01:06.722017 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-lmpdp" event={"ID":"6962dcfe-fe79-48fd-af49-7b4c644856d9","Type":"ContainerDied","Data":"ad11d61af5d58e154e0ad5e064c460cea94b4203d032f1a234114cf829916eb0"} Jan 28 19:01:06 crc kubenswrapper[4721]: I0128 19:01:06.722066 4721 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ad11d61af5d58e154e0ad5e064c460cea94b4203d032f1a234114cf829916eb0" Jan 28 19:01:06 crc kubenswrapper[4721]: I0128 19:01:06.818989 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-fbxbh"] Jan 28 19:01:06 crc kubenswrapper[4721]: E0128 19:01:06.821006 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6962dcfe-fe79-48fd-af49-7b4c644856d9" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Jan 28 19:01:06 crc kubenswrapper[4721]: I0128 19:01:06.821086 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="6962dcfe-fe79-48fd-af49-7b4c644856d9" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Jan 28 19:01:06 crc kubenswrapper[4721]: I0128 19:01:06.821419 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="6962dcfe-fe79-48fd-af49-7b4c644856d9" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Jan 28 19:01:06 crc kubenswrapper[4721]: I0128 19:01:06.822574 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-fbxbh" Jan 28 19:01:06 crc kubenswrapper[4721]: I0128 19:01:06.831018 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 28 19:01:06 crc kubenswrapper[4721]: I0128 19:01:06.831058 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 28 19:01:06 crc kubenswrapper[4721]: I0128 19:01:06.831375 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 28 19:01:06 crc kubenswrapper[4721]: I0128 19:01:06.831403 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-7sc4s" Jan 28 19:01:06 crc kubenswrapper[4721]: I0128 19:01:06.839200 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-fbxbh"] Jan 28 19:01:06 crc kubenswrapper[4721]: I0128 19:01:06.974673 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2a9cb018-b8e2-4f14-b146-2ad0b8c6f997-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-fbxbh\" (UID: \"2a9cb018-b8e2-4f14-b146-2ad0b8c6f997\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-fbxbh" Jan 28 19:01:06 crc kubenswrapper[4721]: I0128 19:01:06.974838 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2a9cb018-b8e2-4f14-b146-2ad0b8c6f997-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-fbxbh\" (UID: \"2a9cb018-b8e2-4f14-b146-2ad0b8c6f997\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-fbxbh" Jan 28 19:01:06 crc kubenswrapper[4721]: I0128 19:01:06.974948 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mgspf\" (UniqueName: \"kubernetes.io/projected/2a9cb018-b8e2-4f14-b146-2ad0b8c6f997-kube-api-access-mgspf\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-fbxbh\" (UID: \"2a9cb018-b8e2-4f14-b146-2ad0b8c6f997\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-fbxbh" Jan 28 19:01:07 crc kubenswrapper[4721]: I0128 19:01:07.077288 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2a9cb018-b8e2-4f14-b146-2ad0b8c6f997-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-fbxbh\" (UID: \"2a9cb018-b8e2-4f14-b146-2ad0b8c6f997\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-fbxbh" Jan 28 19:01:07 crc kubenswrapper[4721]: I0128 19:01:07.077439 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2a9cb018-b8e2-4f14-b146-2ad0b8c6f997-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-fbxbh\" (UID: \"2a9cb018-b8e2-4f14-b146-2ad0b8c6f997\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-fbxbh" Jan 28 19:01:07 crc kubenswrapper[4721]: I0128 19:01:07.077522 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mgspf\" (UniqueName: \"kubernetes.io/projected/2a9cb018-b8e2-4f14-b146-2ad0b8c6f997-kube-api-access-mgspf\") pod 
\"redhat-edpm-deployment-openstack-edpm-ipam-fbxbh\" (UID: \"2a9cb018-b8e2-4f14-b146-2ad0b8c6f997\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-fbxbh" Jan 28 19:01:07 crc kubenswrapper[4721]: I0128 19:01:07.084284 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2a9cb018-b8e2-4f14-b146-2ad0b8c6f997-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-fbxbh\" (UID: \"2a9cb018-b8e2-4f14-b146-2ad0b8c6f997\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-fbxbh" Jan 28 19:01:07 crc kubenswrapper[4721]: I0128 19:01:07.086345 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2a9cb018-b8e2-4f14-b146-2ad0b8c6f997-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-fbxbh\" (UID: \"2a9cb018-b8e2-4f14-b146-2ad0b8c6f997\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-fbxbh" Jan 28 19:01:07 crc kubenswrapper[4721]: I0128 19:01:07.106852 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mgspf\" (UniqueName: \"kubernetes.io/projected/2a9cb018-b8e2-4f14-b146-2ad0b8c6f997-kube-api-access-mgspf\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-fbxbh\" (UID: \"2a9cb018-b8e2-4f14-b146-2ad0b8c6f997\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-fbxbh" Jan 28 19:01:07 crc kubenswrapper[4721]: I0128 19:01:07.146415 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-fbxbh" Jan 28 19:01:07 crc kubenswrapper[4721]: I0128 19:01:07.282291 4721 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29493781-lgwjg"
Jan 28 19:01:07 crc kubenswrapper[4721]: I0128 19:01:07.383777 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l6pkj\" (UniqueName: \"kubernetes.io/projected/16b77be6-6887-4534-a5e9-fc53746e8bde-kube-api-access-l6pkj\") pod \"16b77be6-6887-4534-a5e9-fc53746e8bde\" (UID: \"16b77be6-6887-4534-a5e9-fc53746e8bde\") "
Jan 28 19:01:07 crc kubenswrapper[4721]: I0128 19:01:07.383839 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/16b77be6-6887-4534-a5e9-fc53746e8bde-combined-ca-bundle\") pod \"16b77be6-6887-4534-a5e9-fc53746e8bde\" (UID: \"16b77be6-6887-4534-a5e9-fc53746e8bde\") "
Jan 28 19:01:07 crc kubenswrapper[4721]: I0128 19:01:07.383910 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/16b77be6-6887-4534-a5e9-fc53746e8bde-fernet-keys\") pod \"16b77be6-6887-4534-a5e9-fc53746e8bde\" (UID: \"16b77be6-6887-4534-a5e9-fc53746e8bde\") "
Jan 28 19:01:07 crc kubenswrapper[4721]: I0128 19:01:07.384138 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/16b77be6-6887-4534-a5e9-fc53746e8bde-config-data\") pod \"16b77be6-6887-4534-a5e9-fc53746e8bde\" (UID: \"16b77be6-6887-4534-a5e9-fc53746e8bde\") "
Jan 28 19:01:07 crc kubenswrapper[4721]: I0128 19:01:07.387402 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/16b77be6-6887-4534-a5e9-fc53746e8bde-kube-api-access-l6pkj" (OuterVolumeSpecName: "kube-api-access-l6pkj") pod "16b77be6-6887-4534-a5e9-fc53746e8bde" (UID: "16b77be6-6887-4534-a5e9-fc53746e8bde"). InnerVolumeSpecName "kube-api-access-l6pkj". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 28 19:01:07 crc kubenswrapper[4721]: I0128 19:01:07.387936 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/16b77be6-6887-4534-a5e9-fc53746e8bde-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "16b77be6-6887-4534-a5e9-fc53746e8bde" (UID: "16b77be6-6887-4534-a5e9-fc53746e8bde"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 28 19:01:07 crc kubenswrapper[4721]: I0128 19:01:07.429747 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/16b77be6-6887-4534-a5e9-fc53746e8bde-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "16b77be6-6887-4534-a5e9-fc53746e8bde" (UID: "16b77be6-6887-4534-a5e9-fc53746e8bde"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 28 19:01:07 crc kubenswrapper[4721]: I0128 19:01:07.475879 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/16b77be6-6887-4534-a5e9-fc53746e8bde-config-data" (OuterVolumeSpecName: "config-data") pod "16b77be6-6887-4534-a5e9-fc53746e8bde" (UID: "16b77be6-6887-4534-a5e9-fc53746e8bde"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 28 19:01:07 crc kubenswrapper[4721]: I0128 19:01:07.488733 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l6pkj\" (UniqueName: \"kubernetes.io/projected/16b77be6-6887-4534-a5e9-fc53746e8bde-kube-api-access-l6pkj\") on node \"crc\" DevicePath \"\""
Jan 28 19:01:07 crc kubenswrapper[4721]: I0128 19:01:07.488777 4721 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/16b77be6-6887-4534-a5e9-fc53746e8bde-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 28 19:01:07 crc kubenswrapper[4721]: I0128 19:01:07.488821 4721 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/16b77be6-6887-4534-a5e9-fc53746e8bde-fernet-keys\") on node \"crc\" DevicePath \"\""
Jan 28 19:01:07 crc kubenswrapper[4721]: I0128 19:01:07.488835 4721 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/16b77be6-6887-4534-a5e9-fc53746e8bde-config-data\") on node \"crc\" DevicePath \"\""
Jan 28 19:01:07 crc kubenswrapper[4721]: W0128 19:01:07.710658 4721 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2a9cb018_b8e2_4f14_b146_2ad0b8c6f997.slice/crio-4b85297d936c3cedcaaf8a0fae087024aac2bcbd46f14b02da2d4f7169fe886d WatchSource:0}: Error finding container 4b85297d936c3cedcaaf8a0fae087024aac2bcbd46f14b02da2d4f7169fe886d: Status 404 returned error can't find the container with id 4b85297d936c3cedcaaf8a0fae087024aac2bcbd46f14b02da2d4f7169fe886d
Jan 28 19:01:07 crc kubenswrapper[4721]: I0128 19:01:07.714503 4721 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Jan 28 19:01:07 crc kubenswrapper[4721]: I0128 19:01:07.716701 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-fbxbh"]
Jan 28 19:01:07 crc kubenswrapper[4721]: I0128 19:01:07.742378 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-fbxbh" event={"ID":"2a9cb018-b8e2-4f14-b146-2ad0b8c6f997","Type":"ContainerStarted","Data":"4b85297d936c3cedcaaf8a0fae087024aac2bcbd46f14b02da2d4f7169fe886d"}
Jan 28 19:01:07 crc kubenswrapper[4721]: I0128 19:01:07.749826 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29493781-lgwjg" event={"ID":"16b77be6-6887-4534-a5e9-fc53746e8bde","Type":"ContainerDied","Data":"01b151d0f1721c62da05449d24ca6f2198df8430e2b5916df5422d409a872196"}
Jan 28 19:01:07 crc kubenswrapper[4721]: I0128 19:01:07.749942 4721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29493781-lgwjg"
Jan 28 19:01:07 crc kubenswrapper[4721]: I0128 19:01:07.749977 4721 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="01b151d0f1721c62da05449d24ca6f2198df8430e2b5916df5422d409a872196"
Jan 28 19:01:08 crc kubenswrapper[4721]: I0128 19:01:08.763926 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-fbxbh" event={"ID":"2a9cb018-b8e2-4f14-b146-2ad0b8c6f997","Type":"ContainerStarted","Data":"bba0337c671749b4c03837a7ee868d97efb72b558a1b0243c82f34b890a79a84"}
Jan 28 19:01:08 crc kubenswrapper[4721]: I0128 19:01:08.786376 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-fbxbh" podStartSLOduration=2.076955767 podStartE2EDuration="2.786332762s" podCreationTimestamp="2026-01-28 19:01:06 +0000 UTC" firstStartedPulling="2026-01-28 19:01:07.714118688 +0000 UTC m=+1633.439424248" lastFinishedPulling="2026-01-28 19:01:08.423495683 +0000 UTC m=+1634.148801243" observedRunningTime="2026-01-28 19:01:08.78016139 +0000 UTC m=+1634.505466960" watchObservedRunningTime="2026-01-28 19:01:08.786332762 +0000 UTC m=+1634.511638322"
Jan 28 19:01:11 crc kubenswrapper[4721]: I0128 19:01:11.798219 4721 generic.go:334] "Generic (PLEG): container finished" podID="2a9cb018-b8e2-4f14-b146-2ad0b8c6f997" containerID="bba0337c671749b4c03837a7ee868d97efb72b558a1b0243c82f34b890a79a84" exitCode=0
Jan 28 19:01:11 crc kubenswrapper[4721]: I0128 19:01:11.798338 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-fbxbh" event={"ID":"2a9cb018-b8e2-4f14-b146-2ad0b8c6f997","Type":"ContainerDied","Data":"bba0337c671749b4c03837a7ee868d97efb72b558a1b0243c82f34b890a79a84"}
Jan 28 19:01:12 crc kubenswrapper[4721]: I0128 19:01:12.034875 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0"
Jan 28 19:01:12 crc kubenswrapper[4721]: I0128 19:01:12.741462 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0"
Jan 28 19:01:13 crc kubenswrapper[4721]: I0128 19:01:13.283367 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0"
Jan 28 19:01:13 crc kubenswrapper[4721]: I0128 19:01:13.474824 4721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-fbxbh"
Jan 28 19:01:13 crc kubenswrapper[4721]: I0128 19:01:13.562325 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2a9cb018-b8e2-4f14-b146-2ad0b8c6f997-inventory\") pod \"2a9cb018-b8e2-4f14-b146-2ad0b8c6f997\" (UID: \"2a9cb018-b8e2-4f14-b146-2ad0b8c6f997\") "
Jan 28 19:01:13 crc kubenswrapper[4721]: I0128 19:01:13.562488 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mgspf\" (UniqueName: \"kubernetes.io/projected/2a9cb018-b8e2-4f14-b146-2ad0b8c6f997-kube-api-access-mgspf\") pod \"2a9cb018-b8e2-4f14-b146-2ad0b8c6f997\" (UID: \"2a9cb018-b8e2-4f14-b146-2ad0b8c6f997\") "
Jan 28 19:01:13 crc kubenswrapper[4721]: I0128 19:01:13.562812 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2a9cb018-b8e2-4f14-b146-2ad0b8c6f997-ssh-key-openstack-edpm-ipam\") pod \"2a9cb018-b8e2-4f14-b146-2ad0b8c6f997\" (UID: \"2a9cb018-b8e2-4f14-b146-2ad0b8c6f997\") "
Jan 28 19:01:13 crc kubenswrapper[4721]: I0128 19:01:13.570417 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2a9cb018-b8e2-4f14-b146-2ad0b8c6f997-kube-api-access-mgspf" (OuterVolumeSpecName: "kube-api-access-mgspf") pod "2a9cb018-b8e2-4f14-b146-2ad0b8c6f997" (UID: "2a9cb018-b8e2-4f14-b146-2ad0b8c6f997"). InnerVolumeSpecName "kube-api-access-mgspf". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 28 19:01:13 crc kubenswrapper[4721]: I0128 19:01:13.614435 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2a9cb018-b8e2-4f14-b146-2ad0b8c6f997-inventory" (OuterVolumeSpecName: "inventory") pod "2a9cb018-b8e2-4f14-b146-2ad0b8c6f997" (UID: "2a9cb018-b8e2-4f14-b146-2ad0b8c6f997"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 28 19:01:13 crc kubenswrapper[4721]: I0128 19:01:13.617805 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2a9cb018-b8e2-4f14-b146-2ad0b8c6f997-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "2a9cb018-b8e2-4f14-b146-2ad0b8c6f997" (UID: "2a9cb018-b8e2-4f14-b146-2ad0b8c6f997"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 28 19:01:13 crc kubenswrapper[4721]: I0128 19:01:13.666428 4721 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2a9cb018-b8e2-4f14-b146-2ad0b8c6f997-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Jan 28 19:01:13 crc kubenswrapper[4721]: I0128 19:01:13.666465 4721 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2a9cb018-b8e2-4f14-b146-2ad0b8c6f997-inventory\") on node \"crc\" DevicePath \"\""
Jan 28 19:01:13 crc kubenswrapper[4721]: I0128 19:01:13.666480 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mgspf\" (UniqueName: \"kubernetes.io/projected/2a9cb018-b8e2-4f14-b146-2ad0b8c6f997-kube-api-access-mgspf\") on node \"crc\" DevicePath \"\""
Jan 28 19:01:13 crc kubenswrapper[4721]: I0128 19:01:13.822713 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-fbxbh" event={"ID":"2a9cb018-b8e2-4f14-b146-2ad0b8c6f997","Type":"ContainerDied","Data":"4b85297d936c3cedcaaf8a0fae087024aac2bcbd46f14b02da2d4f7169fe886d"}
Jan 28 19:01:13 crc kubenswrapper[4721]: I0128 19:01:13.822770 4721 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4b85297d936c3cedcaaf8a0fae087024aac2bcbd46f14b02da2d4f7169fe886d"
Jan 28 19:01:13 crc kubenswrapper[4721]: I0128 19:01:13.822815 4721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-fbxbh"
Jan 28 19:01:13 crc kubenswrapper[4721]: I0128 19:01:13.898440 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-sw887"]
Jan 28 19:01:13 crc kubenswrapper[4721]: E0128 19:01:13.899051 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2a9cb018-b8e2-4f14-b146-2ad0b8c6f997" containerName="redhat-edpm-deployment-openstack-edpm-ipam"
Jan 28 19:01:13 crc kubenswrapper[4721]: I0128 19:01:13.899080 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="2a9cb018-b8e2-4f14-b146-2ad0b8c6f997" containerName="redhat-edpm-deployment-openstack-edpm-ipam"
Jan 28 19:01:13 crc kubenswrapper[4721]: E0128 19:01:13.899118 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="16b77be6-6887-4534-a5e9-fc53746e8bde" containerName="keystone-cron"
Jan 28 19:01:13 crc kubenswrapper[4721]: I0128 19:01:13.899128 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="16b77be6-6887-4534-a5e9-fc53746e8bde" containerName="keystone-cron"
Jan 28 19:01:13 crc kubenswrapper[4721]: I0128 19:01:13.899414 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="2a9cb018-b8e2-4f14-b146-2ad0b8c6f997" containerName="redhat-edpm-deployment-openstack-edpm-ipam"
Jan 28 19:01:13 crc kubenswrapper[4721]: I0128 19:01:13.899431 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="16b77be6-6887-4534-a5e9-fc53746e8bde" containerName="keystone-cron"
Jan 28 19:01:13 crc kubenswrapper[4721]: I0128 19:01:13.900507 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-sw887"
Jan 28 19:01:13 crc kubenswrapper[4721]: I0128 19:01:13.902914 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Jan 28 19:01:13 crc kubenswrapper[4721]: I0128 19:01:13.914676 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-7sc4s"
Jan 28 19:01:13 crc kubenswrapper[4721]: I0128 19:01:13.914911 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Jan 28 19:01:13 crc kubenswrapper[4721]: I0128 19:01:13.915006 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Jan 28 19:01:13 crc kubenswrapper[4721]: I0128 19:01:13.923306 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-sw887"]
Jan 28 19:01:13 crc kubenswrapper[4721]: I0128 19:01:13.975781 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/aaf9f122-a7d0-4f7f-b5d1-ee0333954fa1-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-sw887\" (UID: \"aaf9f122-a7d0-4f7f-b5d1-ee0333954fa1\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-sw887"
Jan 28 19:01:13 crc kubenswrapper[4721]: I0128 19:01:13.975869 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/aaf9f122-a7d0-4f7f-b5d1-ee0333954fa1-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-sw887\" (UID: \"aaf9f122-a7d0-4f7f-b5d1-ee0333954fa1\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-sw887"
Jan 28 19:01:13 crc kubenswrapper[4721]: I0128 19:01:13.975945 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aaf9f122-a7d0-4f7f-b5d1-ee0333954fa1-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-sw887\" (UID: \"aaf9f122-a7d0-4f7f-b5d1-ee0333954fa1\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-sw887"
Jan 28 19:01:13 crc kubenswrapper[4721]: I0128 19:01:13.975994 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7bktw\" (UniqueName: \"kubernetes.io/projected/aaf9f122-a7d0-4f7f-b5d1-ee0333954fa1-kube-api-access-7bktw\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-sw887\" (UID: \"aaf9f122-a7d0-4f7f-b5d1-ee0333954fa1\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-sw887"
Jan 28 19:01:14 crc kubenswrapper[4721]: I0128 19:01:14.084229 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7bktw\" (UniqueName: \"kubernetes.io/projected/aaf9f122-a7d0-4f7f-b5d1-ee0333954fa1-kube-api-access-7bktw\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-sw887\" (UID: \"aaf9f122-a7d0-4f7f-b5d1-ee0333954fa1\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-sw887"
Jan 28 19:01:14 crc kubenswrapper[4721]: I0128 19:01:14.084427 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/aaf9f122-a7d0-4f7f-b5d1-ee0333954fa1-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-sw887\" (UID: \"aaf9f122-a7d0-4f7f-b5d1-ee0333954fa1\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-sw887"
Jan 28 19:01:14 crc kubenswrapper[4721]: I0128 19:01:14.084478 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/aaf9f122-a7d0-4f7f-b5d1-ee0333954fa1-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-sw887\" (UID: \"aaf9f122-a7d0-4f7f-b5d1-ee0333954fa1\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-sw887"
Jan 28 19:01:14 crc kubenswrapper[4721]: I0128 19:01:14.084560 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aaf9f122-a7d0-4f7f-b5d1-ee0333954fa1-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-sw887\" (UID: \"aaf9f122-a7d0-4f7f-b5d1-ee0333954fa1\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-sw887"
Jan 28 19:01:14 crc kubenswrapper[4721]: I0128 19:01:14.088569 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/aaf9f122-a7d0-4f7f-b5d1-ee0333954fa1-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-sw887\" (UID: \"aaf9f122-a7d0-4f7f-b5d1-ee0333954fa1\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-sw887"
Jan 28 19:01:14 crc kubenswrapper[4721]: I0128 19:01:14.088996 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aaf9f122-a7d0-4f7f-b5d1-ee0333954fa1-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-sw887\" (UID: \"aaf9f122-a7d0-4f7f-b5d1-ee0333954fa1\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-sw887"
Jan 28 19:01:14 crc kubenswrapper[4721]: I0128 19:01:14.102367 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/aaf9f122-a7d0-4f7f-b5d1-ee0333954fa1-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-sw887\" (UID: \"aaf9f122-a7d0-4f7f-b5d1-ee0333954fa1\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-sw887"
Jan 28 19:01:14 crc kubenswrapper[4721]: I0128 19:01:14.103732 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7bktw\" (UniqueName: \"kubernetes.io/projected/aaf9f122-a7d0-4f7f-b5d1-ee0333954fa1-kube-api-access-7bktw\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-sw887\" (UID: \"aaf9f122-a7d0-4f7f-b5d1-ee0333954fa1\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-sw887"
Jan 28 19:01:14 crc kubenswrapper[4721]: I0128 19:01:14.226049 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-sw887"
Jan 28 19:01:14 crc kubenswrapper[4721]: I0128 19:01:14.529796 4721 scope.go:117] "RemoveContainer" containerID="2590a7cb210dbff0e84ad585e4d82733cd80880f3d1297edf670e3d6faf23070"
Jan 28 19:01:14 crc kubenswrapper[4721]: E0128 19:01:14.531726 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-76rx2_openshift-machine-config-operator(6e3427a4-9a03-4a08-bf7f-7a5e96290ad6)\"" pod="openshift-machine-config-operator/machine-config-daemon-76rx2" podUID="6e3427a4-9a03-4a08-bf7f-7a5e96290ad6"
Jan 28 19:01:14 crc kubenswrapper[4721]: I0128 19:01:14.836552 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-sw887"]
Jan 28 19:01:15 crc kubenswrapper[4721]: I0128 19:01:15.846192 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-sw887" event={"ID":"aaf9f122-a7d0-4f7f-b5d1-ee0333954fa1","Type":"ContainerStarted","Data":"f3d362401b2d2e8718dd2a07ef113b7dd6f023b2a78e2992ef4936e19d82e6a7"}
Jan 28 19:01:15 crc kubenswrapper[4721]: I0128 19:01:15.846721 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-sw887" event={"ID":"aaf9f122-a7d0-4f7f-b5d1-ee0333954fa1","Type":"ContainerStarted","Data":"4262d1c5c83719ba14cec77e3528a1473b5041f22328e281e41daf6cc4ada4de"}
Jan 28 19:01:16 crc kubenswrapper[4721]: I0128 19:01:16.736059 4721 scope.go:117] "RemoveContainer" containerID="cfc627ad0fc78c84a9e728d559b682afa1e87bec17599343ae60c1a5843ca673"
Jan 28 19:01:16 crc kubenswrapper[4721]: I0128 19:01:16.784630 4721 scope.go:117] "RemoveContainer" containerID="a4fe84c6a4aa9a1c38dc456aea3839d4b65cef37f826cd761a73edfe11338e19"
Jan 28 19:01:16 crc kubenswrapper[4721]: I0128 19:01:16.821319 4721 scope.go:117] "RemoveContainer" containerID="4bb868c782027b9450929d28db2ba013267b613e7f87e574cbc9a843f19d54ac"
Jan 28 19:01:26 crc kubenswrapper[4721]: I0128 19:01:26.529688 4721 scope.go:117] "RemoveContainer" containerID="2590a7cb210dbff0e84ad585e4d82733cd80880f3d1297edf670e3d6faf23070"
Jan 28 19:01:26 crc kubenswrapper[4721]: E0128 19:01:26.530699 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-76rx2_openshift-machine-config-operator(6e3427a4-9a03-4a08-bf7f-7a5e96290ad6)\"" pod="openshift-machine-config-operator/machine-config-daemon-76rx2" podUID="6e3427a4-9a03-4a08-bf7f-7a5e96290ad6"
Jan 28 19:01:38 crc kubenswrapper[4721]: I0128 19:01:38.528798 4721 scope.go:117] "RemoveContainer" containerID="2590a7cb210dbff0e84ad585e4d82733cd80880f3d1297edf670e3d6faf23070"
Jan 28 19:01:38 crc kubenswrapper[4721]: E0128 19:01:38.530134 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-76rx2_openshift-machine-config-operator(6e3427a4-9a03-4a08-bf7f-7a5e96290ad6)\"" pod="openshift-machine-config-operator/machine-config-daemon-76rx2" podUID="6e3427a4-9a03-4a08-bf7f-7a5e96290ad6"
Jan 28 19:01:53 crc kubenswrapper[4721]: I0128 19:01:53.535227 4721 scope.go:117] "RemoveContainer" containerID="2590a7cb210dbff0e84ad585e4d82733cd80880f3d1297edf670e3d6faf23070"
Jan 28 19:01:53 crc kubenswrapper[4721]: E0128 19:01:53.546858 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-76rx2_openshift-machine-config-operator(6e3427a4-9a03-4a08-bf7f-7a5e96290ad6)\"" pod="openshift-machine-config-operator/machine-config-daemon-76rx2" podUID="6e3427a4-9a03-4a08-bf7f-7a5e96290ad6"
Jan 28 19:02:08 crc kubenswrapper[4721]: I0128 19:02:08.529671 4721 scope.go:117] "RemoveContainer" containerID="2590a7cb210dbff0e84ad585e4d82733cd80880f3d1297edf670e3d6faf23070"
Jan 28 19:02:08 crc kubenswrapper[4721]: E0128 19:02:08.530608 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-76rx2_openshift-machine-config-operator(6e3427a4-9a03-4a08-bf7f-7a5e96290ad6)\"" pod="openshift-machine-config-operator/machine-config-daemon-76rx2" podUID="6e3427a4-9a03-4a08-bf7f-7a5e96290ad6"
Jan 28 19:02:17 crc kubenswrapper[4721]: I0128 19:02:17.095692 4721 scope.go:117] "RemoveContainer" containerID="e1d77b470ef972c00ece8bd31dd0f00d8bd0fecc4f5529a21075145a4929820f"
Jan 28 19:02:19 crc kubenswrapper[4721]: I0128 19:02:19.529112 4721 scope.go:117] "RemoveContainer" containerID="2590a7cb210dbff0e84ad585e4d82733cd80880f3d1297edf670e3d6faf23070"
Jan 28 19:02:19 crc kubenswrapper[4721]: E0128 19:02:19.529705 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-76rx2_openshift-machine-config-operator(6e3427a4-9a03-4a08-bf7f-7a5e96290ad6)\"" pod="openshift-machine-config-operator/machine-config-daemon-76rx2" podUID="6e3427a4-9a03-4a08-bf7f-7a5e96290ad6"
Jan 28 19:02:33 crc kubenswrapper[4721]: I0128 19:02:33.529562 4721 scope.go:117] "RemoveContainer" containerID="2590a7cb210dbff0e84ad585e4d82733cd80880f3d1297edf670e3d6faf23070"
Jan 28 19:02:33 crc kubenswrapper[4721]: E0128 19:02:33.530635 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-76rx2_openshift-machine-config-operator(6e3427a4-9a03-4a08-bf7f-7a5e96290ad6)\"" pod="openshift-machine-config-operator/machine-config-daemon-76rx2" podUID="6e3427a4-9a03-4a08-bf7f-7a5e96290ad6"
Jan 28 19:02:47 crc kubenswrapper[4721]: I0128 19:02:47.530721 4721 scope.go:117] "RemoveContainer" containerID="2590a7cb210dbff0e84ad585e4d82733cd80880f3d1297edf670e3d6faf23070"
Jan 28 19:02:47 crc kubenswrapper[4721]: E0128 19:02:47.531883 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-76rx2_openshift-machine-config-operator(6e3427a4-9a03-4a08-bf7f-7a5e96290ad6)\"" pod="openshift-machine-config-operator/machine-config-daemon-76rx2" podUID="6e3427a4-9a03-4a08-bf7f-7a5e96290ad6"
Jan 28 19:02:58 crc kubenswrapper[4721]: I0128 19:02:58.529863 4721 scope.go:117] "RemoveContainer" containerID="2590a7cb210dbff0e84ad585e4d82733cd80880f3d1297edf670e3d6faf23070"
Jan 28 19:02:58 crc kubenswrapper[4721]: E0128 19:02:58.530618 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-76rx2_openshift-machine-config-operator(6e3427a4-9a03-4a08-bf7f-7a5e96290ad6)\"" pod="openshift-machine-config-operator/machine-config-daemon-76rx2" podUID="6e3427a4-9a03-4a08-bf7f-7a5e96290ad6"
Jan 28 19:03:09 crc kubenswrapper[4721]: I0128 19:03:09.530184 4721 scope.go:117] "RemoveContainer" containerID="2590a7cb210dbff0e84ad585e4d82733cd80880f3d1297edf670e3d6faf23070"
Jan 28 19:03:09 crc kubenswrapper[4721]: E0128 19:03:09.530955 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-76rx2_openshift-machine-config-operator(6e3427a4-9a03-4a08-bf7f-7a5e96290ad6)\"" pod="openshift-machine-config-operator/machine-config-daemon-76rx2" podUID="6e3427a4-9a03-4a08-bf7f-7a5e96290ad6"
Jan 28 19:03:17 crc kubenswrapper[4721]: I0128 19:03:17.225909 4721 scope.go:117] "RemoveContainer" containerID="1d4b415623058842553907d8381640f0930cc4c750bf6fa7037c1a2afc1fcfc0"
Jan 28 19:03:24 crc kubenswrapper[4721]: I0128 19:03:24.529203 4721 scope.go:117] "RemoveContainer" containerID="2590a7cb210dbff0e84ad585e4d82733cd80880f3d1297edf670e3d6faf23070"
Jan 28 19:03:24 crc kubenswrapper[4721]: E0128 19:03:24.530092 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-76rx2_openshift-machine-config-operator(6e3427a4-9a03-4a08-bf7f-7a5e96290ad6)\"" pod="openshift-machine-config-operator/machine-config-daemon-76rx2" podUID="6e3427a4-9a03-4a08-bf7f-7a5e96290ad6"
Jan 28 19:03:36 crc kubenswrapper[4721]: I0128 19:03:36.530328 4721 scope.go:117] "RemoveContainer" containerID="2590a7cb210dbff0e84ad585e4d82733cd80880f3d1297edf670e3d6faf23070"
Jan 28 19:03:36 crc kubenswrapper[4721]: E0128 19:03:36.531421 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-76rx2_openshift-machine-config-operator(6e3427a4-9a03-4a08-bf7f-7a5e96290ad6)\"" pod="openshift-machine-config-operator/machine-config-daemon-76rx2" podUID="6e3427a4-9a03-4a08-bf7f-7a5e96290ad6"
Jan 28 19:03:48 crc kubenswrapper[4721]: I0128 19:03:48.530199 4721 scope.go:117] "RemoveContainer" containerID="2590a7cb210dbff0e84ad585e4d82733cd80880f3d1297edf670e3d6faf23070"
Jan 28 19:03:48 crc kubenswrapper[4721]: E0128 19:03:48.531104 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-76rx2_openshift-machine-config-operator(6e3427a4-9a03-4a08-bf7f-7a5e96290ad6)\"" pod="openshift-machine-config-operator/machine-config-daemon-76rx2" podUID="6e3427a4-9a03-4a08-bf7f-7a5e96290ad6"
Jan 28 19:04:00 crc kubenswrapper[4721]: I0128 19:04:00.529025 4721 scope.go:117] "RemoveContainer" containerID="2590a7cb210dbff0e84ad585e4d82733cd80880f3d1297edf670e3d6faf23070"
Jan 28 19:04:00 crc kubenswrapper[4721]: E0128 19:04:00.530427 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-76rx2_openshift-machine-config-operator(6e3427a4-9a03-4a08-bf7f-7a5e96290ad6)\"" pod="openshift-machine-config-operator/machine-config-daemon-76rx2" podUID="6e3427a4-9a03-4a08-bf7f-7a5e96290ad6"
Jan 28 19:04:14 crc kubenswrapper[4721]: I0128 19:04:14.528817 4721 scope.go:117] "RemoveContainer" containerID="2590a7cb210dbff0e84ad585e4d82733cd80880f3d1297edf670e3d6faf23070"
Jan 28 19:04:14 crc kubenswrapper[4721]: I0128 19:04:14.864484 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-76rx2" event={"ID":"6e3427a4-9a03-4a08-bf7f-7a5e96290ad6","Type":"ContainerStarted","Data":"fae7b05413d2179da0c14f97f482c9d932655828a3eba9c206bbef238e41c9d7"}
Jan 28 19:04:14 crc kubenswrapper[4721]: I0128 19:04:14.883400 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-sw887" podStartSLOduration=181.243715437 podStartE2EDuration="3m1.883378643s" podCreationTimestamp="2026-01-28 19:01:13 +0000 UTC" firstStartedPulling="2026-01-28 19:01:14.853705395 +0000 UTC m=+1640.579010955" lastFinishedPulling="2026-01-28 19:01:15.493368601 +0000 UTC m=+1641.218674161" observedRunningTime="2026-01-28 19:01:15.864787452 +0000 UTC m=+1641.590093022" watchObservedRunningTime="2026-01-28 19:04:14.883378643 +0000 UTC m=+1820.608684203"
Jan 28 19:04:28 crc kubenswrapper[4721]: I0128 19:04:28.010408 4721 generic.go:334] "Generic (PLEG): container finished" podID="aaf9f122-a7d0-4f7f-b5d1-ee0333954fa1" containerID="f3d362401b2d2e8718dd2a07ef113b7dd6f023b2a78e2992ef4936e19d82e6a7" exitCode=0
Jan 28 19:04:28 crc kubenswrapper[4721]: I0128 19:04:28.010913 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-sw887" event={"ID":"aaf9f122-a7d0-4f7f-b5d1-ee0333954fa1","Type":"ContainerDied","Data":"f3d362401b2d2e8718dd2a07ef113b7dd6f023b2a78e2992ef4936e19d82e6a7"}
Jan 28 19:04:29 crc kubenswrapper[4721]: I0128 19:04:29.525469 4721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-sw887"
Jan 28 19:04:29 crc kubenswrapper[4721]: I0128 19:04:29.623891 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/aaf9f122-a7d0-4f7f-b5d1-ee0333954fa1-ssh-key-openstack-edpm-ipam\") pod \"aaf9f122-a7d0-4f7f-b5d1-ee0333954fa1\" (UID: \"aaf9f122-a7d0-4f7f-b5d1-ee0333954fa1\") "
Jan 28 19:04:29 crc kubenswrapper[4721]: I0128 19:04:29.623988 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7bktw\" (UniqueName: \"kubernetes.io/projected/aaf9f122-a7d0-4f7f-b5d1-ee0333954fa1-kube-api-access-7bktw\") pod \"aaf9f122-a7d0-4f7f-b5d1-ee0333954fa1\" (UID: \"aaf9f122-a7d0-4f7f-b5d1-ee0333954fa1\") "
Jan 28 19:04:29 crc kubenswrapper[4721]: I0128 19:04:29.624046 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aaf9f122-a7d0-4f7f-b5d1-ee0333954fa1-bootstrap-combined-ca-bundle\") pod \"aaf9f122-a7d0-4f7f-b5d1-ee0333954fa1\" (UID: \"aaf9f122-a7d0-4f7f-b5d1-ee0333954fa1\") "
Jan 28 19:04:29 crc kubenswrapper[4721]: I0128 19:04:29.624106 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/aaf9f122-a7d0-4f7f-b5d1-ee0333954fa1-inventory\") pod \"aaf9f122-a7d0-4f7f-b5d1-ee0333954fa1\" (UID: \"aaf9f122-a7d0-4f7f-b5d1-ee0333954fa1\") "
Jan 28 19:04:29 crc kubenswrapper[4721]: I0128 19:04:29.631236 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aaf9f122-a7d0-4f7f-b5d1-ee0333954fa1-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "aaf9f122-a7d0-4f7f-b5d1-ee0333954fa1" (UID: "aaf9f122-a7d0-4f7f-b5d1-ee0333954fa1"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 28 19:04:29 crc kubenswrapper[4721]: I0128 19:04:29.631868 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aaf9f122-a7d0-4f7f-b5d1-ee0333954fa1-kube-api-access-7bktw" (OuterVolumeSpecName: "kube-api-access-7bktw") pod "aaf9f122-a7d0-4f7f-b5d1-ee0333954fa1" (UID: "aaf9f122-a7d0-4f7f-b5d1-ee0333954fa1"). InnerVolumeSpecName "kube-api-access-7bktw". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 28 19:04:29 crc kubenswrapper[4721]: I0128 19:04:29.657992 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aaf9f122-a7d0-4f7f-b5d1-ee0333954fa1-inventory" (OuterVolumeSpecName: "inventory") pod "aaf9f122-a7d0-4f7f-b5d1-ee0333954fa1" (UID: "aaf9f122-a7d0-4f7f-b5d1-ee0333954fa1"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 28 19:04:29 crc kubenswrapper[4721]: I0128 19:04:29.664922 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aaf9f122-a7d0-4f7f-b5d1-ee0333954fa1-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "aaf9f122-a7d0-4f7f-b5d1-ee0333954fa1" (UID: "aaf9f122-a7d0-4f7f-b5d1-ee0333954fa1"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 28 19:04:29 crc kubenswrapper[4721]: I0128 19:04:29.726255 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7bktw\" (UniqueName: \"kubernetes.io/projected/aaf9f122-a7d0-4f7f-b5d1-ee0333954fa1-kube-api-access-7bktw\") on node \"crc\" DevicePath \"\""
Jan 28 19:04:29 crc kubenswrapper[4721]: I0128 19:04:29.726301 4721 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aaf9f122-a7d0-4f7f-b5d1-ee0333954fa1-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 28 19:04:29 crc kubenswrapper[4721]: I0128 19:04:29.726313 4721 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/aaf9f122-a7d0-4f7f-b5d1-ee0333954fa1-inventory\") on node \"crc\" DevicePath \"\""
Jan 28 19:04:29 crc kubenswrapper[4721]: I0128 19:04:29.726325 4721 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/aaf9f122-a7d0-4f7f-b5d1-ee0333954fa1-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Jan 28 19:04:30 crc kubenswrapper[4721]: I0128 19:04:30.057286 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-sw887" event={"ID":"aaf9f122-a7d0-4f7f-b5d1-ee0333954fa1","Type":"ContainerDied","Data":"4262d1c5c83719ba14cec77e3528a1473b5041f22328e281e41daf6cc4ada4de"}
Jan 28 19:04:30 crc kubenswrapper[4721]: I0128 19:04:30.057610 4721 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4262d1c5c83719ba14cec77e3528a1473b5041f22328e281e41daf6cc4ada4de"
Jan 28 19:04:30 crc kubenswrapper[4721]: I0128 19:04:30.057818 4721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-sw887"
Jan 28 19:04:30 crc kubenswrapper[4721]: I0128 19:04:30.165654 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-89xnq"]
Jan 28 19:04:30 crc kubenswrapper[4721]: E0128 19:04:30.166259 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aaf9f122-a7d0-4f7f-b5d1-ee0333954fa1" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam"
Jan 28 19:04:30 crc kubenswrapper[4721]: I0128 19:04:30.166287 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="aaf9f122-a7d0-4f7f-b5d1-ee0333954fa1" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam"
Jan 28 19:04:30 crc kubenswrapper[4721]: I0128 19:04:30.166532 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="aaf9f122-a7d0-4f7f-b5d1-ee0333954fa1" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam"
Jan 28 19:04:30 crc kubenswrapper[4721]: I0128 19:04:30.169434 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-89xnq"
Jan 28 19:04:30 crc kubenswrapper[4721]: I0128 19:04:30.179104 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-89xnq"]
Jan 28 19:04:30 crc kubenswrapper[4721]: I0128 19:04:30.181958 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-7sc4s"
Jan 28 19:04:30 crc kubenswrapper[4721]: I0128 19:04:30.182322 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Jan 28 19:04:30 crc kubenswrapper[4721]: I0128 19:04:30.183244 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Jan 28 19:04:30 crc kubenswrapper[4721]: I0128 19:04:30.183531 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Jan 28 19:04:30 crc kubenswrapper[4721]: I0128 19:04:30.250237 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/df3fe0a6-94e7-4233-9fb8-cecad5bc5266-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-89xnq\" (UID: \"df3fe0a6-94e7-4233-9fb8-cecad5bc5266\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-89xnq"
Jan 28 19:04:30 crc kubenswrapper[4721]: I0128 19:04:30.250498 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/df3fe0a6-94e7-4233-9fb8-cecad5bc5266-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-89xnq\" (UID: \"df3fe0a6-94e7-4233-9fb8-cecad5bc5266\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-89xnq"
Jan 28 19:04:30 crc kubenswrapper[4721]: I0128 19:04:30.251070 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6lb82\" (UniqueName: \"kubernetes.io/projected/df3fe0a6-94e7-4233-9fb8-cecad5bc5266-kube-api-access-6lb82\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-89xnq\" (UID: \"df3fe0a6-94e7-4233-9fb8-cecad5bc5266\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-89xnq"
Jan 28 19:04:30 crc kubenswrapper[4721]: I0128 19:04:30.353268 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6lb82\" (UniqueName: \"kubernetes.io/projected/df3fe0a6-94e7-4233-9fb8-cecad5bc5266-kube-api-access-6lb82\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-89xnq\" (UID: \"df3fe0a6-94e7-4233-9fb8-cecad5bc5266\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-89xnq"
Jan 28 19:04:30 crc kubenswrapper[4721]: I0128 19:04:30.353367 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/df3fe0a6-94e7-4233-9fb8-cecad5bc5266-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-89xnq\" (UID: \"df3fe0a6-94e7-4233-9fb8-cecad5bc5266\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-89xnq"
Jan 28 19:04:30 crc kubenswrapper[4721]: I0128 19:04:30.353451 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/df3fe0a6-94e7-4233-9fb8-cecad5bc5266-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-89xnq\" (UID: \"df3fe0a6-94e7-4233-9fb8-cecad5bc5266\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-89xnq"
Jan 28 19:04:30 crc kubenswrapper[4721]: I0128 19:04:30.360426 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/df3fe0a6-94e7-4233-9fb8-cecad5bc5266-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-89xnq\" (UID: \"df3fe0a6-94e7-4233-9fb8-cecad5bc5266\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-89xnq"
Jan 28 19:04:30 crc kubenswrapper[4721]: I0128 19:04:30.360560 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/df3fe0a6-94e7-4233-9fb8-cecad5bc5266-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-89xnq\" (UID: \"df3fe0a6-94e7-4233-9fb8-cecad5bc5266\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-89xnq"
Jan 28 19:04:30 crc kubenswrapper[4721]: I0128 19:04:30.369555 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6lb82\" (UniqueName: \"kubernetes.io/projected/df3fe0a6-94e7-4233-9fb8-cecad5bc5266-kube-api-access-6lb82\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-89xnq\" (UID: \"df3fe0a6-94e7-4233-9fb8-cecad5bc5266\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-89xnq"
Jan 28 19:04:30 crc kubenswrapper[4721]: I0128 19:04:30.505744 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-89xnq"
Jan 28 19:04:31 crc kubenswrapper[4721]: I0128 19:04:31.056354 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-89xnq"]
Jan 28 19:04:31 crc kubenswrapper[4721]: I0128 19:04:31.770899 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-p496w"]
Jan 28 19:04:31 crc kubenswrapper[4721]: I0128 19:04:31.793233 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-p496w"
Jan 28 19:04:31 crc kubenswrapper[4721]: I0128 19:04:31.819515 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-p496w"]
Jan 28 19:04:31 crc kubenswrapper[4721]: I0128 19:04:31.892812 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/59f6b8f4-b085-4229-96f5-293379203922-utilities\") pod \"community-operators-p496w\" (UID: \"59f6b8f4-b085-4229-96f5-293379203922\") " pod="openshift-marketplace/community-operators-p496w"
Jan 28 19:04:31 crc kubenswrapper[4721]: I0128 19:04:31.892981 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tt5dm\" (UniqueName: \"kubernetes.io/projected/59f6b8f4-b085-4229-96f5-293379203922-kube-api-access-tt5dm\") pod \"community-operators-p496w\" (UID: \"59f6b8f4-b085-4229-96f5-293379203922\") " pod="openshift-marketplace/community-operators-p496w"
Jan 28 19:04:31 crc kubenswrapper[4721]: I0128 19:04:31.893022 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/59f6b8f4-b085-4229-96f5-293379203922-catalog-content\") pod \"community-operators-p496w\" (UID: \"59f6b8f4-b085-4229-96f5-293379203922\") " pod="openshift-marketplace/community-operators-p496w"
Jan 28 19:04:31 crc kubenswrapper[4721]: I0128 19:04:31.994729 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/59f6b8f4-b085-4229-96f5-293379203922-utilities\") pod \"community-operators-p496w\" (UID: \"59f6b8f4-b085-4229-96f5-293379203922\") " pod="openshift-marketplace/community-operators-p496w"
Jan 28 19:04:31 crc kubenswrapper[4721]: I0128 19:04:31.994861 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tt5dm\" (UniqueName: \"kubernetes.io/projected/59f6b8f4-b085-4229-96f5-293379203922-kube-api-access-tt5dm\") pod \"community-operators-p496w\" (UID: \"59f6b8f4-b085-4229-96f5-293379203922\") " pod="openshift-marketplace/community-operators-p496w"
Jan 28 19:04:31 crc kubenswrapper[4721]: I0128 19:04:31.994902 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/59f6b8f4-b085-4229-96f5-293379203922-catalog-content\") pod \"community-operators-p496w\" (UID: \"59f6b8f4-b085-4229-96f5-293379203922\") " pod="openshift-marketplace/community-operators-p496w"
Jan 28 19:04:31 crc kubenswrapper[4721]: I0128 19:04:31.995458 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/59f6b8f4-b085-4229-96f5-293379203922-catalog-content\") pod \"community-operators-p496w\" (UID: \"59f6b8f4-b085-4229-96f5-293379203922\") " pod="openshift-marketplace/community-operators-p496w"
Jan 28 19:04:31 crc kubenswrapper[4721]: I0128 19:04:31.995459 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/59f6b8f4-b085-4229-96f5-293379203922-utilities\") pod \"community-operators-p496w\" (UID: \"59f6b8f4-b085-4229-96f5-293379203922\") " pod="openshift-marketplace/community-operators-p496w"
Jan 28 19:04:32 crc kubenswrapper[4721]: I0128 19:04:32.033046 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tt5dm\" (UniqueName: \"kubernetes.io/projected/59f6b8f4-b085-4229-96f5-293379203922-kube-api-access-tt5dm\") pod \"community-operators-p496w\" (UID: \"59f6b8f4-b085-4229-96f5-293379203922\") " pod="openshift-marketplace/community-operators-p496w"
Jan 28 19:04:32 crc kubenswrapper[4721]: I0128 19:04:32.079268 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-kr7q2"]
Jan 28 19:04:32 crc kubenswrapper[4721]: I0128 19:04:32.100423 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-gm24k"]
Jan 28 19:04:32 crc kubenswrapper[4721]: I0128 19:04:32.101644 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-89xnq" event={"ID":"df3fe0a6-94e7-4233-9fb8-cecad5bc5266","Type":"ContainerStarted","Data":"12dee3ac3bb24e396c22081bbdde820cb2422723d3663b99da82a6a21fdc510b"}
Jan 28 19:04:32 crc kubenswrapper[4721]: I0128 19:04:32.101689 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-89xnq" event={"ID":"df3fe0a6-94e7-4233-9fb8-cecad5bc5266","Type":"ContainerStarted","Data":"ab8e0294de0f9c48a1e58c25384b4a620bc66a0c50131690341573144fbc48e2"}
Jan 28 19:04:32 crc kubenswrapper[4721]: I0128 19:04:32.118463 4721 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-kr7q2"]
Jan 28 19:04:32 crc kubenswrapper[4721]: I0128 19:04:32.133044 4721 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-gm24k"]
Jan 28 19:04:32 crc kubenswrapper[4721]: I0128 19:04:32.138572 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-p496w"
Jan 28 19:04:32 crc kubenswrapper[4721]: I0128 19:04:32.196429 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-57fd-account-create-update-g9drk"]
Jan 28 19:04:32 crc kubenswrapper[4721]: I0128 19:04:32.212240 4721 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-57fd-account-create-update-g9drk"]
Jan 28 19:04:32 crc kubenswrapper[4721]: I0128 19:04:32.212316 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-89xnq" podStartSLOduration=1.820612662 podStartE2EDuration="2.212281252s" podCreationTimestamp="2026-01-28 19:04:30 +0000 UTC" firstStartedPulling="2026-01-28 19:04:31.066385357 +0000 UTC m=+1836.791690917" lastFinishedPulling="2026-01-28 19:04:31.458053947 +0000 UTC m=+1837.183359507" observedRunningTime="2026-01-28 19:04:32.11842272 +0000 UTC m=+1837.843728280" watchObservedRunningTime="2026-01-28 19:04:32.212281252 +0000 UTC m=+1837.937586812"
Jan 28 19:04:32 crc kubenswrapper[4721]: W0128 19:04:32.724641 4721 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod59f6b8f4_b085_4229_96f5_293379203922.slice/crio-ec4b648f12ef6ac6c10e64c6629e432f39c8567d79b1c31ffb0fdb828d28ada2 WatchSource:0}: Error finding container ec4b648f12ef6ac6c10e64c6629e432f39c8567d79b1c31ffb0fdb828d28ada2: Status 404 returned error can't find the container with id ec4b648f12ef6ac6c10e64c6629e432f39c8567d79b1c31ffb0fdb828d28ada2
Jan 28 19:04:32 crc kubenswrapper[4721]: I0128 19:04:32.731967 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-p496w"]
Jan 28 19:04:33 crc kubenswrapper[4721]: I0128 19:04:33.048647 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-lbv9r"]
Jan 28 19:04:33 crc kubenswrapper[4721]: I0128 19:04:33.061207 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-9597-account-create-update-7bj94"]
Jan 28 19:04:33 crc kubenswrapper[4721]: I0128 19:04:33.072475 4721 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-lbv9r"]
Jan 28 19:04:33 crc kubenswrapper[4721]: I0128 19:04:33.083246 4721 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-9597-account-create-update-7bj94"]
Jan 28 19:04:33 crc kubenswrapper[4721]: I0128 19:04:33.121321 4721 generic.go:334] "Generic (PLEG): container finished" podID="59f6b8f4-b085-4229-96f5-293379203922" containerID="bcfad2496ed259e402e01caeb17a9f32698d1bc33dde51804b04c72f6d294eff" exitCode=0
Jan 28 19:04:33 crc kubenswrapper[4721]: I0128 19:04:33.121429 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-p496w" event={"ID":"59f6b8f4-b085-4229-96f5-293379203922","Type":"ContainerDied","Data":"bcfad2496ed259e402e01caeb17a9f32698d1bc33dde51804b04c72f6d294eff"}
Jan 28 19:04:33 crc kubenswrapper[4721]: I0128 19:04:33.121481 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-p496w" event={"ID":"59f6b8f4-b085-4229-96f5-293379203922","Type":"ContainerStarted","Data":"ec4b648f12ef6ac6c10e64c6629e432f39c8567d79b1c31ffb0fdb828d28ada2"}
Jan 28 19:04:33 crc kubenswrapper[4721]: I0128 19:04:33.542361 4721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4f959669-d607-4e65-9b7a-50f0a5d73c6a" path="/var/lib/kubelet/pods/4f959669-d607-4e65-9b7a-50f0a5d73c6a/volumes"
Jan 28 19:04:33 crc kubenswrapper[4721]: I0128 19:04:33.543365 4721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="75e7aaa4-cc36-4c7c-b5bf-79cc1ecceef0" path="/var/lib/kubelet/pods/75e7aaa4-cc36-4c7c-b5bf-79cc1ecceef0/volumes"
Jan 28 19:04:33 crc kubenswrapper[4721]: I0128 19:04:33.543961 4721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="80f5f923-3ee7-4416-bba4-03d51578c8c4" path="/var/lib/kubelet/pods/80f5f923-3ee7-4416-bba4-03d51578c8c4/volumes"
Jan 28 19:04:33 crc kubenswrapper[4721]: I0128 19:04:33.544781 4721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="af20b569-c763-4033-8b7b-df1ce95dcba2" path="/var/lib/kubelet/pods/af20b569-c763-4033-8b7b-df1ce95dcba2/volumes"
Jan 28 19:04:33 crc kubenswrapper[4721]: I0128 19:04:33.546700 4721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eba9db5f-dcb9-460b-abdd-144249ee3c13" path="/var/lib/kubelet/pods/eba9db5f-dcb9-460b-abdd-144249ee3c13/volumes"
Jan 28 19:04:34 crc kubenswrapper[4721]: I0128 19:04:34.036895 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-feb7-account-create-update-hztgg"]
Jan 28 19:04:34 crc kubenswrapper[4721]: I0128 19:04:34.047857 4721 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-feb7-account-create-update-hztgg"]
Jan 28 19:04:35 crc kubenswrapper[4721]: I0128 19:04:35.145417 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-p496w" event={"ID":"59f6b8f4-b085-4229-96f5-293379203922","Type":"ContainerStarted","Data":"9e2727fe8f3071c19ecf62c16f4533c038ca39d5e4e035f75a33f95361176d69"}
Jan 28 19:04:35 crc kubenswrapper[4721]: I0128 19:04:35.541258 4721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5bea9fca-3e0f-4158-ba76-aa184abd2d4c" path="/var/lib/kubelet/pods/5bea9fca-3e0f-4158-ba76-aa184abd2d4c/volumes"
Jan 28 19:04:37 crc kubenswrapper[4721]: I0128 19:04:37.168452 4721 generic.go:334] "Generic (PLEG): container finished" podID="59f6b8f4-b085-4229-96f5-293379203922" containerID="9e2727fe8f3071c19ecf62c16f4533c038ca39d5e4e035f75a33f95361176d69" exitCode=0
Jan 28 19:04:37 crc kubenswrapper[4721]: I0128 19:04:37.168537 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-p496w" event={"ID":"59f6b8f4-b085-4229-96f5-293379203922","Type":"ContainerDied","Data":"9e2727fe8f3071c19ecf62c16f4533c038ca39d5e4e035f75a33f95361176d69"}
Jan 28 19:04:38 crc kubenswrapper[4721]: I0128 19:04:38.182483 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-p496w" event={"ID":"59f6b8f4-b085-4229-96f5-293379203922","Type":"ContainerStarted","Data":"11363763c8182a7070d297ca4d1000e1d4ac334257c60abf44ba614028fa892d"}
Jan 28 19:04:38 crc kubenswrapper[4721]: I0128 19:04:38.201850 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-p496w" podStartSLOduration=2.5107825679999998 podStartE2EDuration="7.20182192s" podCreationTimestamp="2026-01-28 19:04:31 +0000 UTC" firstStartedPulling="2026-01-28 19:04:33.124107455 +0000 UTC m=+1838.849413015" lastFinishedPulling="2026-01-28 19:04:37.815146807 +0000 UTC m=+1843.540452367" observedRunningTime="2026-01-28 19:04:38.199844148 +0000 UTC m=+1843.925149708" watchObservedRunningTime="2026-01-28 19:04:38.20182192 +0000 UTC m=+1843.927127480"
Jan 28 19:04:42 crc kubenswrapper[4721]: I0128 19:04:42.139806 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-p496w"
Jan 28 19:04:42 crc kubenswrapper[4721]: I0128 19:04:42.141317 4721 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-p496w"
Jan 28 19:04:42 crc kubenswrapper[4721]: I0128 19:04:42.183722 4721 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-p496w"
Jan 28 19:04:48 crc kubenswrapper[4721]: I0128 19:04:48.047155 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-create-2mm9f"]
Jan 28 19:04:48 crc kubenswrapper[4721]: I0128 19:04:48.059853 4721 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-create-2mm9f"]
Jan 28 19:04:49 crc kubenswrapper[4721]: I0128 19:04:49.041838 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-774d-account-create-update-gkltd"]
Jan 28 19:04:49 crc kubenswrapper[4721]: I0128 19:04:49.060558 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-869a-account-create-update-ndnwg"]
Jan 28 19:04:49 crc kubenswrapper[4721]: I0128 19:04:49.072483 4721 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-774d-account-create-update-gkltd"]
Jan 28 19:04:49 crc kubenswrapper[4721]: I0128 19:04:49.083778 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-create-62wbn"]
Jan 28 19:04:49 crc kubenswrapper[4721]: I0128 19:04:49.095143 4721 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-869a-account-create-update-ndnwg"]
Jan 28 19:04:49 crc kubenswrapper[4721]: I0128 19:04:49.107362 4721 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-create-62wbn"]
Jan 28 19:04:49 crc kubenswrapper[4721]: I0128 19:04:49.540407 4721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="42933b28-6c8f-4536-9be5-69b88a0d1390" path="/var/lib/kubelet/pods/42933b28-6c8f-4536-9be5-69b88a0d1390/volumes"
Jan 28 19:04:49 crc kubenswrapper[4721]: I0128 19:04:49.541633 4721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="639fd412-92fd-4dc3-bc89-c75178b7d83e" path="/var/lib/kubelet/pods/639fd412-92fd-4dc3-bc89-c75178b7d83e/volumes"
Jan 28 19:04:49 crc kubenswrapper[4721]: I0128 19:04:49.542342 4721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cb21066a-3041-482d-a9bc-1e630bca568a" path="/var/lib/kubelet/pods/cb21066a-3041-482d-a9bc-1e630bca568a/volumes"
Jan 28 19:04:49 crc kubenswrapper[4721]: I0128 19:04:49.542890 4721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d93a26f9-04bd-4215-a6fb-230626a1e376" path="/var/lib/kubelet/pods/d93a26f9-04bd-4215-a6fb-230626a1e376/volumes"
Jan 28 19:04:50 crc kubenswrapper[4721]: I0128 19:04:50.037568 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cloudkitty-0f57-account-create-update-qgx95"]
Jan 28 19:04:50 crc kubenswrapper[4721]: I0128 19:04:50.066982 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-create-7tqqv"]
Jan 28 19:04:50 crc kubenswrapper[4721]: I0128 19:04:50.085490 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cloudkitty-db-create-hs5gk"]
Jan 28 19:04:50 crc kubenswrapper[4721]: I0128 19:04:50.106918 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-efa1-account-create-update-kvj9r"]
Jan 28 19:04:50 crc kubenswrapper[4721]: I0128 19:04:50.124827 4721 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cloudkitty-0f57-account-create-update-qgx95"]
Jan 28 19:04:50 crc kubenswrapper[4721]: I0128 19:04:50.139856 4721 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-create-7tqqv"]
Jan 28 19:04:50 crc kubenswrapper[4721]: I0128 19:04:50.150428 4721 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cloudkitty-db-create-hs5gk"]
Jan 28 19:04:50 crc kubenswrapper[4721]: I0128 19:04:50.160609 4721 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-efa1-account-create-update-kvj9r"]
Jan 28 19:04:51 crc kubenswrapper[4721]: I0128 19:04:51.542922 4721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0104f7d3-7be4-411b-8ca7-89c72b31b43d" path="/var/lib/kubelet/pods/0104f7d3-7be4-411b-8ca7-89c72b31b43d/volumes"
Jan 28 19:04:51 crc kubenswrapper[4721]: I0128 19:04:51.543908 4721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4ebde931-976a-4436-a92f-a5a5d44fdc11" path="/var/lib/kubelet/pods/4ebde931-976a-4436-a92f-a5a5d44fdc11/volumes"
Jan 28 19:04:51 crc kubenswrapper[4721]: I0128 19:04:51.544597 4721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5de37217-5d22-4a9e-9e27-f1ed05b2d63e" path="/var/lib/kubelet/pods/5de37217-5d22-4a9e-9e27-f1ed05b2d63e/volumes"
Jan 28 19:04:51 crc kubenswrapper[4721]: I0128 19:04:51.545328 4721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd381740-3e1d-456b-b5f1-e19f679513da" path="/var/lib/kubelet/pods/cd381740-3e1d-456b-b5f1-e19f679513da/volumes"
Jan 28 19:04:52 crc kubenswrapper[4721]: I0128 19:04:52.187586 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-p496w"
Jan 28 19:04:52 crc kubenswrapper[4721]: I0128 19:04:52.254440 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-p496w"]
Jan 28 19:04:52 crc kubenswrapper[4721]: I0128 19:04:52.329548 4721 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-p496w" podUID="59f6b8f4-b085-4229-96f5-293379203922" containerName="registry-server" containerID="cri-o://11363763c8182a7070d297ca4d1000e1d4ac334257c60abf44ba614028fa892d" gracePeriod=2
Jan 28 19:04:52 crc kubenswrapper[4721]: I0128 19:04:52.912255 4721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-p496w"
Jan 28 19:04:52 crc kubenswrapper[4721]: I0128 19:04:52.969104 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/59f6b8f4-b085-4229-96f5-293379203922-utilities\") pod \"59f6b8f4-b085-4229-96f5-293379203922\" (UID: \"59f6b8f4-b085-4229-96f5-293379203922\") "
Jan 28 19:04:52 crc kubenswrapper[4721]: I0128 19:04:52.969155 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/59f6b8f4-b085-4229-96f5-293379203922-catalog-content\") pod \"59f6b8f4-b085-4229-96f5-293379203922\" (UID: \"59f6b8f4-b085-4229-96f5-293379203922\") "
Jan 28 19:04:52 crc kubenswrapper[4721]: I0128 19:04:52.969247 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tt5dm\" (UniqueName: \"kubernetes.io/projected/59f6b8f4-b085-4229-96f5-293379203922-kube-api-access-tt5dm\") pod \"59f6b8f4-b085-4229-96f5-293379203922\" (UID: \"59f6b8f4-b085-4229-96f5-293379203922\") "
Jan 28 19:04:52 crc kubenswrapper[4721]: I0128 19:04:52.969885 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/59f6b8f4-b085-4229-96f5-293379203922-utilities" (OuterVolumeSpecName: "utilities") pod "59f6b8f4-b085-4229-96f5-293379203922" (UID: "59f6b8f4-b085-4229-96f5-293379203922"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 28 19:04:52 crc kubenswrapper[4721]: I0128 19:04:52.970355 4721 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/59f6b8f4-b085-4229-96f5-293379203922-utilities\") on node \"crc\" DevicePath \"\""
Jan 28 19:04:52 crc kubenswrapper[4721]: I0128 19:04:52.975812 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/59f6b8f4-b085-4229-96f5-293379203922-kube-api-access-tt5dm" (OuterVolumeSpecName: "kube-api-access-tt5dm") pod "59f6b8f4-b085-4229-96f5-293379203922" (UID: "59f6b8f4-b085-4229-96f5-293379203922"). InnerVolumeSpecName "kube-api-access-tt5dm". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 28 19:04:53 crc kubenswrapper[4721]: I0128 19:04:53.024552 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/59f6b8f4-b085-4229-96f5-293379203922-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "59f6b8f4-b085-4229-96f5-293379203922" (UID: "59f6b8f4-b085-4229-96f5-293379203922"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 28 19:04:53 crc kubenswrapper[4721]: I0128 19:04:53.074244 4721 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/59f6b8f4-b085-4229-96f5-293379203922-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 28 19:04:53 crc kubenswrapper[4721]: I0128 19:04:53.074298 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tt5dm\" (UniqueName: \"kubernetes.io/projected/59f6b8f4-b085-4229-96f5-293379203922-kube-api-access-tt5dm\") on node \"crc\" DevicePath \"\""
Jan 28 19:04:53 crc kubenswrapper[4721]: I0128 19:04:53.344673 4721 generic.go:334] "Generic (PLEG): container finished" podID="59f6b8f4-b085-4229-96f5-293379203922" containerID="11363763c8182a7070d297ca4d1000e1d4ac334257c60abf44ba614028fa892d" exitCode=0
Jan 28 19:04:53 crc kubenswrapper[4721]: I0128 19:04:53.344813 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-p496w" event={"ID":"59f6b8f4-b085-4229-96f5-293379203922","Type":"ContainerDied","Data":"11363763c8182a7070d297ca4d1000e1d4ac334257c60abf44ba614028fa892d"}
Jan 28 19:04:53 crc kubenswrapper[4721]: I0128 19:04:53.345291 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-p496w" event={"ID":"59f6b8f4-b085-4229-96f5-293379203922","Type":"ContainerDied","Data":"ec4b648f12ef6ac6c10e64c6629e432f39c8567d79b1c31ffb0fdb828d28ada2"}
Jan 28 19:04:53 crc kubenswrapper[4721]: I0128 19:04:53.345328 4721 scope.go:117] "RemoveContainer" containerID="11363763c8182a7070d297ca4d1000e1d4ac334257c60abf44ba614028fa892d"
Jan 28 19:04:53 crc kubenswrapper[4721]: I0128 19:04:53.344865 4721 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="openshift-marketplace/community-operators-p496w" Jan 28 19:04:53 crc kubenswrapper[4721]: I0128 19:04:53.380753 4721 scope.go:117] "RemoveContainer" containerID="9e2727fe8f3071c19ecf62c16f4533c038ca39d5e4e035f75a33f95361176d69" Jan 28 19:04:53 crc kubenswrapper[4721]: I0128 19:04:53.390316 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-p496w"] Jan 28 19:04:53 crc kubenswrapper[4721]: I0128 19:04:53.402529 4721 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-p496w"] Jan 28 19:04:53 crc kubenswrapper[4721]: I0128 19:04:53.420113 4721 scope.go:117] "RemoveContainer" containerID="bcfad2496ed259e402e01caeb17a9f32698d1bc33dde51804b04c72f6d294eff" Jan 28 19:04:53 crc kubenswrapper[4721]: I0128 19:04:53.473692 4721 scope.go:117] "RemoveContainer" containerID="11363763c8182a7070d297ca4d1000e1d4ac334257c60abf44ba614028fa892d" Jan 28 19:04:53 crc kubenswrapper[4721]: E0128 19:04:53.474358 4721 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"11363763c8182a7070d297ca4d1000e1d4ac334257c60abf44ba614028fa892d\": container with ID starting with 11363763c8182a7070d297ca4d1000e1d4ac334257c60abf44ba614028fa892d not found: ID does not exist" containerID="11363763c8182a7070d297ca4d1000e1d4ac334257c60abf44ba614028fa892d" Jan 28 19:04:53 crc kubenswrapper[4721]: I0128 19:04:53.474410 4721 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"11363763c8182a7070d297ca4d1000e1d4ac334257c60abf44ba614028fa892d"} err="failed to get container status \"11363763c8182a7070d297ca4d1000e1d4ac334257c60abf44ba614028fa892d\": rpc error: code = NotFound desc = could not find container \"11363763c8182a7070d297ca4d1000e1d4ac334257c60abf44ba614028fa892d\": container with ID starting with 11363763c8182a7070d297ca4d1000e1d4ac334257c60abf44ba614028fa892d not found: ID does not exist" Jan 28 19:04:53 crc kubenswrapper[4721]: I0128 19:04:53.474446 4721 scope.go:117] "RemoveContainer" containerID="9e2727fe8f3071c19ecf62c16f4533c038ca39d5e4e035f75a33f95361176d69" Jan 28 19:04:53 crc kubenswrapper[4721]: E0128 19:04:53.474756 4721 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9e2727fe8f3071c19ecf62c16f4533c038ca39d5e4e035f75a33f95361176d69\": container with ID starting with 9e2727fe8f3071c19ecf62c16f4533c038ca39d5e4e035f75a33f95361176d69 not found: ID does not exist" containerID="9e2727fe8f3071c19ecf62c16f4533c038ca39d5e4e035f75a33f95361176d69" Jan 28 19:04:53 crc kubenswrapper[4721]: I0128 19:04:53.474810 4721 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9e2727fe8f3071c19ecf62c16f4533c038ca39d5e4e035f75a33f95361176d69"} err="failed to get container status \"9e2727fe8f3071c19ecf62c16f4533c038ca39d5e4e035f75a33f95361176d69\": rpc error: code = NotFound desc = could not find container \"9e2727fe8f3071c19ecf62c16f4533c038ca39d5e4e035f75a33f95361176d69\": container with ID starting with 9e2727fe8f3071c19ecf62c16f4533c038ca39d5e4e035f75a33f95361176d69 not found: ID does not exist" Jan 28 19:04:53 crc kubenswrapper[4721]: I0128 19:04:53.474845 4721 scope.go:117] "RemoveContainer" containerID="bcfad2496ed259e402e01caeb17a9f32698d1bc33dde51804b04c72f6d294eff" Jan 28 19:04:53 crc kubenswrapper[4721]: E0128 19:04:53.475451 4721 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"bcfad2496ed259e402e01caeb17a9f32698d1bc33dde51804b04c72f6d294eff\": container with ID starting with bcfad2496ed259e402e01caeb17a9f32698d1bc33dde51804b04c72f6d294eff not found: ID does not exist" containerID="bcfad2496ed259e402e01caeb17a9f32698d1bc33dde51804b04c72f6d294eff" Jan 28 19:04:53 crc kubenswrapper[4721]: I0128 19:04:53.475484 4721 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bcfad2496ed259e402e01caeb17a9f32698d1bc33dde51804b04c72f6d294eff"} err="failed to get container status \"bcfad2496ed259e402e01caeb17a9f32698d1bc33dde51804b04c72f6d294eff\": rpc error: code = NotFound desc = could not find container \"bcfad2496ed259e402e01caeb17a9f32698d1bc33dde51804b04c72f6d294eff\": container with ID starting with bcfad2496ed259e402e01caeb17a9f32698d1bc33dde51804b04c72f6d294eff not found: ID does not exist" Jan 28 19:04:53 crc kubenswrapper[4721]: I0128 19:04:53.544835 4721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="59f6b8f4-b085-4229-96f5-293379203922" path="/var/lib/kubelet/pods/59f6b8f4-b085-4229-96f5-293379203922/volumes" Jan 28 19:05:09 crc kubenswrapper[4721]: I0128 19:05:09.034208 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-c9jld"] Jan 28 19:05:09 crc kubenswrapper[4721]: I0128 19:05:09.045096 4721 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-c9jld"] Jan 28 19:05:09 crc kubenswrapper[4721]: I0128 19:05:09.540942 4721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d9296aa-fff6-4aa4-afb6-56acc232bbc7" path="/var/lib/kubelet/pods/1d9296aa-fff6-4aa4-afb6-56acc232bbc7/volumes" Jan 28 19:05:11 crc kubenswrapper[4721]: I0128 19:05:11.047519 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-wppp8"] Jan 28 19:05:11 crc kubenswrapper[4721]: I0128 19:05:11.061670 4721 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-wppp8"] Jan 28 19:05:11 crc kubenswrapper[4721]: I0128 19:05:11.544455 4721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="06674c33-d387-4999-9e87-d72f80b98173" path="/var/lib/kubelet/pods/06674c33-d387-4999-9e87-d72f80b98173/volumes" Jan 28 19:05:17 crc kubenswrapper[4721]: I0128 19:05:17.332614 4721 scope.go:117] "RemoveContainer" containerID="c519cdf147973b988142d693d0e7b374342906ea5eab4c1b7dba7fe8a570693f" Jan 28 19:05:17 crc kubenswrapper[4721]: I0128 19:05:17.374824 4721 scope.go:117] "RemoveContainer" containerID="bf47bae6ef3c1b70abd14a4b919bb993808a56697a57f141e8443ce15d6f7e9c" Jan 28 19:05:17 crc kubenswrapper[4721]: I0128 19:05:17.438066 4721 scope.go:117] "RemoveContainer" containerID="6b1302db28f921c465f5629bcda6656cba736c2d2ead364062d9e7d8636b730d" Jan 28 19:05:17 crc kubenswrapper[4721]: I0128 19:05:17.507520 4721 scope.go:117] "RemoveContainer" containerID="87b52b27d9e18cb3bfef076ebb8b401f3b1d2e0cec15367a10090eed3dafb376" Jan 28 19:05:17 crc kubenswrapper[4721]: I0128 19:05:17.582679 4721 scope.go:117] "RemoveContainer" containerID="f8fe8f273067fd5aa26440f82d24220fb75b283b9e4944509f492473a1e565ec" Jan 28 19:05:18 crc kubenswrapper[4721]: I0128 19:05:18.290712 4721 scope.go:117] "RemoveContainer" containerID="142f6c127468dd0656340d9b6dc3a67d1a2a8ffc34f5655e80d62fda449184c9" Jan 28 19:05:18 crc kubenswrapper[4721]: I0128 19:05:18.343061 4721 scope.go:117] "RemoveContainer" 
containerID="d90feaace03cfa5caa58fbf53981257232daa585103c8a7a6929b4e7b58b3581" Jan 28 19:05:18 crc kubenswrapper[4721]: I0128 19:05:18.405274 4721 scope.go:117] "RemoveContainer" containerID="4c1f69bbe56a4bfefb6258d0b5d89ef49a79ac04222a53855a2000ea0e47f913" Jan 28 19:05:18 crc kubenswrapper[4721]: I0128 19:05:18.484894 4721 scope.go:117] "RemoveContainer" containerID="64a92fda9552be03fcca0561239e0c782cdd2538b99c6270cae1e5419793eef2" Jan 28 19:05:18 crc kubenswrapper[4721]: I0128 19:05:18.541837 4721 scope.go:117] "RemoveContainer" containerID="a2ac82e74e2ec28298b95675c7d0747ddfe6755e7a6d80ee6c02a96d121876e0" Jan 28 19:05:18 crc kubenswrapper[4721]: I0128 19:05:18.564853 4721 scope.go:117] "RemoveContainer" containerID="578547330f0c76fc1a97308f98cc4ad2453a550a0535fd61e65ddb32941cf36a" Jan 28 19:05:18 crc kubenswrapper[4721]: I0128 19:05:18.587942 4721 scope.go:117] "RemoveContainer" containerID="e49939efe79b61e57415a7d5e53c0f9cdab0563733da9d4be04b586be0385837" Jan 28 19:05:18 crc kubenswrapper[4721]: I0128 19:05:18.642286 4721 scope.go:117] "RemoveContainer" containerID="cee1bd39ce919d92de23bfdc6ec78393295deb10adae7e6898343d527cb44555" Jan 28 19:05:18 crc kubenswrapper[4721]: I0128 19:05:18.702208 4721 scope.go:117] "RemoveContainer" containerID="9f20de7824405b55e7c6fecfaa65eb0693cfc52624e4ca92b0e8158a5fdeef9f" Jan 28 19:05:18 crc kubenswrapper[4721]: I0128 19:05:18.733093 4721 scope.go:117] "RemoveContainer" containerID="2dfc24c236e278cc4c79f641c3e0e465eebffb8e1ee210da08df670eec2a3c49" Jan 28 19:05:18 crc kubenswrapper[4721]: I0128 19:05:18.757977 4721 scope.go:117] "RemoveContainer" containerID="0998ac1f150838c3b179689502137d9643af4a583bef0e57d4c847266deaeb80" Jan 28 19:05:18 crc kubenswrapper[4721]: I0128 19:05:18.800686 4721 scope.go:117] "RemoveContainer" containerID="1131293acabf7e063141eb04cb26bff8b8ff33f86e13d3617a9d84a5744a2a25" Jan 28 19:05:18 crc kubenswrapper[4721]: I0128 19:05:18.826718 4721 scope.go:117] "RemoveContainer" containerID="835c4a8ecea505868024d79d258e3fc5477ba3ab2a7f824d022af4baa82da044" Jan 28 19:05:18 crc kubenswrapper[4721]: I0128 19:05:18.855909 4721 scope.go:117] "RemoveContainer" containerID="2d5b90e6a30cc433594ec6b19e88fc4d298215cce89eb24e7cf852b538b363ae" Jan 28 19:05:18 crc kubenswrapper[4721]: I0128 19:05:18.879734 4721 scope.go:117] "RemoveContainer" containerID="06307089ab5efe0f0f5f4ca6a469540d89bf820eb634359963970b1808cd407e" Jan 28 19:05:18 crc kubenswrapper[4721]: I0128 19:05:18.909530 4721 scope.go:117] "RemoveContainer" containerID="80711faad5b6f4456cf84b27a0bbf117f4e397d1e6a89e93ec3e42813d189999" Jan 28 19:05:41 crc kubenswrapper[4721]: E0128 19:05:41.796321 4721 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: EOF, stdout: , stderr: , exit code -1" containerID="8838b6289c86dc56e2eb455d502d2f3a242ae9709573b87a2d20fad3ad1e9cc9" cmd=["/usr/local/bin/container-scripts/ovsdb_server_liveness.sh"] Jan 28 19:06:07 crc kubenswrapper[4721]: I0128 19:06:07.059166 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-sync-gdk5z"] Jan 28 19:06:07 crc kubenswrapper[4721]: I0128 19:06:07.070545 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-j284c"] Jan 28 19:06:07 crc kubenswrapper[4721]: I0128 19:06:07.082120 4721 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-sync-j284c"] Jan 28 19:06:07 crc kubenswrapper[4721]: I0128 19:06:07.093221 4721 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openstack/placement-db-sync-gdk5z"] Jan 28 19:06:07 crc kubenswrapper[4721]: I0128 19:06:07.543463 4721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4ceee9a0-8f8f-46cc-a090-f31b224fe8a9" path="/var/lib/kubelet/pods/4ceee9a0-8f8f-46cc-a090-f31b224fe8a9/volumes" Jan 28 19:06:07 crc kubenswrapper[4721]: I0128 19:06:07.544742 4721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7b2b2524-50e6-4d73-bdb9-8770b642481e" path="/var/lib/kubelet/pods/7b2b2524-50e6-4d73-bdb9-8770b642481e/volumes" Jan 28 19:06:08 crc kubenswrapper[4721]: I0128 19:06:08.034375 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-77gjx"] Jan 28 19:06:08 crc kubenswrapper[4721]: I0128 19:06:08.045024 4721 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-77gjx"] Jan 28 19:06:09 crc kubenswrapper[4721]: I0128 19:06:09.552185 4721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="19551c06-75df-4db7-805a-b7efc5e72018" path="/var/lib/kubelet/pods/19551c06-75df-4db7-805a-b7efc5e72018/volumes" Jan 28 19:06:19 crc kubenswrapper[4721]: I0128 19:06:19.358066 4721 scope.go:117] "RemoveContainer" containerID="2422a3e54852f47cb7dc219e614addb9764635f6263a0de9cc11095c91ee3b2d" Jan 28 19:06:19 crc kubenswrapper[4721]: I0128 19:06:19.394942 4721 scope.go:117] "RemoveContainer" containerID="bb47fdfff808823d5320c16e0aa4f39ad1c5fe30bac981c900e0e8bce17f5d24" Jan 28 19:06:19 crc kubenswrapper[4721]: I0128 19:06:19.465925 4721 scope.go:117] "RemoveContainer" containerID="8ef3f876c4ca4aa8d6bb644b809179eb7dd42addde04ed2b033309027a6a0c2b" Jan 28 19:06:20 crc kubenswrapper[4721]: I0128 19:06:20.046562 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-sync-g5v9q"] Jan 28 19:06:20 crc kubenswrapper[4721]: I0128 19:06:20.063307 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-sync-4rqtv"] Jan 28 19:06:20 crc kubenswrapper[4721]: I0128 19:06:20.073113 4721 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-sync-g5v9q"] Jan 28 19:06:20 crc kubenswrapper[4721]: I0128 19:06:20.081803 4721 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-sync-4rqtv"] Jan 28 19:06:21 crc kubenswrapper[4721]: I0128 19:06:21.542372 4721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5a0d808e-8db2-4d8b-a02e-5f04c991fb44" path="/var/lib/kubelet/pods/5a0d808e-8db2-4d8b-a02e-5f04c991fb44/volumes" Jan 28 19:06:21 crc kubenswrapper[4721]: I0128 19:06:21.543426 4721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d03058f5-d416-467a-b33c-36de7e5b6008" path="/var/lib/kubelet/pods/d03058f5-d416-467a-b33c-36de7e5b6008/volumes" Jan 28 19:06:29 crc kubenswrapper[4721]: I0128 19:06:29.037721 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-sync-spxh4"] Jan 28 19:06:29 crc kubenswrapper[4721]: I0128 19:06:29.049872 4721 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-sync-spxh4"] Jan 28 19:06:29 crc kubenswrapper[4721]: I0128 19:06:29.541331 4721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b4e7a8f9-bf9f-4093-86d5-b7f5f6d925d7" path="/var/lib/kubelet/pods/b4e7a8f9-bf9f-4093-86d5-b7f5f6d925d7/volumes" Jan 28 19:06:31 crc kubenswrapper[4721]: I0128 19:06:31.224981 4721 patch_prober.go:28] interesting pod/machine-config-daemon-76rx2 container/machine-config-daemon namespace/openshift-machine-config-operator: 
Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 19:06:31 crc kubenswrapper[4721]: I0128 19:06:31.226462 4721 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-76rx2" podUID="6e3427a4-9a03-4a08-bf7f-7a5e96290ad6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 19:06:51 crc kubenswrapper[4721]: I0128 19:06:51.625639 4721 generic.go:334] "Generic (PLEG): container finished" podID="df3fe0a6-94e7-4233-9fb8-cecad5bc5266" containerID="12dee3ac3bb24e396c22081bbdde820cb2422723d3663b99da82a6a21fdc510b" exitCode=0 Jan 28 19:06:51 crc kubenswrapper[4721]: I0128 19:06:51.625728 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-89xnq" event={"ID":"df3fe0a6-94e7-4233-9fb8-cecad5bc5266","Type":"ContainerDied","Data":"12dee3ac3bb24e396c22081bbdde820cb2422723d3663b99da82a6a21fdc510b"} Jan 28 19:06:53 crc kubenswrapper[4721]: I0128 19:06:53.109958 4721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-89xnq" Jan 28 19:06:53 crc kubenswrapper[4721]: I0128 19:06:53.309728 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/df3fe0a6-94e7-4233-9fb8-cecad5bc5266-inventory\") pod \"df3fe0a6-94e7-4233-9fb8-cecad5bc5266\" (UID: \"df3fe0a6-94e7-4233-9fb8-cecad5bc5266\") " Jan 28 19:06:53 crc kubenswrapper[4721]: I0128 19:06:53.310077 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6lb82\" (UniqueName: \"kubernetes.io/projected/df3fe0a6-94e7-4233-9fb8-cecad5bc5266-kube-api-access-6lb82\") pod \"df3fe0a6-94e7-4233-9fb8-cecad5bc5266\" (UID: \"df3fe0a6-94e7-4233-9fb8-cecad5bc5266\") " Jan 28 19:06:53 crc kubenswrapper[4721]: I0128 19:06:53.310380 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/df3fe0a6-94e7-4233-9fb8-cecad5bc5266-ssh-key-openstack-edpm-ipam\") pod \"df3fe0a6-94e7-4233-9fb8-cecad5bc5266\" (UID: \"df3fe0a6-94e7-4233-9fb8-cecad5bc5266\") " Jan 28 19:06:53 crc kubenswrapper[4721]: I0128 19:06:53.317093 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/df3fe0a6-94e7-4233-9fb8-cecad5bc5266-kube-api-access-6lb82" (OuterVolumeSpecName: "kube-api-access-6lb82") pod "df3fe0a6-94e7-4233-9fb8-cecad5bc5266" (UID: "df3fe0a6-94e7-4233-9fb8-cecad5bc5266"). InnerVolumeSpecName "kube-api-access-6lb82". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 19:06:53 crc kubenswrapper[4721]: I0128 19:06:53.340968 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/df3fe0a6-94e7-4233-9fb8-cecad5bc5266-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "df3fe0a6-94e7-4233-9fb8-cecad5bc5266" (UID: "df3fe0a6-94e7-4233-9fb8-cecad5bc5266"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 19:06:53 crc kubenswrapper[4721]: I0128 19:06:53.341345 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/df3fe0a6-94e7-4233-9fb8-cecad5bc5266-inventory" (OuterVolumeSpecName: "inventory") pod "df3fe0a6-94e7-4233-9fb8-cecad5bc5266" (UID: "df3fe0a6-94e7-4233-9fb8-cecad5bc5266"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 19:06:53 crc kubenswrapper[4721]: I0128 19:06:53.412822 4721 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/df3fe0a6-94e7-4233-9fb8-cecad5bc5266-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 28 19:06:53 crc kubenswrapper[4721]: I0128 19:06:53.412864 4721 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/df3fe0a6-94e7-4233-9fb8-cecad5bc5266-inventory\") on node \"crc\" DevicePath \"\"" Jan 28 19:06:53 crc kubenswrapper[4721]: I0128 19:06:53.412874 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6lb82\" (UniqueName: \"kubernetes.io/projected/df3fe0a6-94e7-4233-9fb8-cecad5bc5266-kube-api-access-6lb82\") on node \"crc\" DevicePath \"\"" Jan 28 19:06:53 crc kubenswrapper[4721]: I0128 19:06:53.647607 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-89xnq" event={"ID":"df3fe0a6-94e7-4233-9fb8-cecad5bc5266","Type":"ContainerDied","Data":"ab8e0294de0f9c48a1e58c25384b4a620bc66a0c50131690341573144fbc48e2"} Jan 28 19:06:53 crc kubenswrapper[4721]: I0128 19:06:53.647659 4721 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ab8e0294de0f9c48a1e58c25384b4a620bc66a0c50131690341573144fbc48e2" Jan 28 19:06:53 crc kubenswrapper[4721]: I0128 19:06:53.647662 4721 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-89xnq" Jan 28 19:06:53 crc kubenswrapper[4721]: I0128 19:06:53.737629 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-jqhr7"] Jan 28 19:06:53 crc kubenswrapper[4721]: E0128 19:06:53.738103 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="59f6b8f4-b085-4229-96f5-293379203922" containerName="extract-utilities" Jan 28 19:06:53 crc kubenswrapper[4721]: I0128 19:06:53.738116 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="59f6b8f4-b085-4229-96f5-293379203922" containerName="extract-utilities" Jan 28 19:06:53 crc kubenswrapper[4721]: E0128 19:06:53.738147 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="59f6b8f4-b085-4229-96f5-293379203922" containerName="extract-content" Jan 28 19:06:53 crc kubenswrapper[4721]: I0128 19:06:53.738154 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="59f6b8f4-b085-4229-96f5-293379203922" containerName="extract-content" Jan 28 19:06:53 crc kubenswrapper[4721]: E0128 19:06:53.738180 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="df3fe0a6-94e7-4233-9fb8-cecad5bc5266" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Jan 28 19:06:53 crc kubenswrapper[4721]: I0128 19:06:53.738187 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="df3fe0a6-94e7-4233-9fb8-cecad5bc5266" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Jan 28 19:06:53 crc kubenswrapper[4721]: E0128 19:06:53.738200 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="59f6b8f4-b085-4229-96f5-293379203922" containerName="registry-server" Jan 28 19:06:53 crc kubenswrapper[4721]: I0128 19:06:53.738206 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="59f6b8f4-b085-4229-96f5-293379203922" containerName="registry-server" Jan 28 19:06:53 crc kubenswrapper[4721]: I0128 19:06:53.738417 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="59f6b8f4-b085-4229-96f5-293379203922" containerName="registry-server" Jan 28 19:06:53 crc kubenswrapper[4721]: I0128 19:06:53.738438 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="df3fe0a6-94e7-4233-9fb8-cecad5bc5266" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Jan 28 19:06:53 crc kubenswrapper[4721]: I0128 19:06:53.739438 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-jqhr7" Jan 28 19:06:53 crc kubenswrapper[4721]: I0128 19:06:53.743727 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 28 19:06:53 crc kubenswrapper[4721]: I0128 19:06:53.744084 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-7sc4s" Jan 28 19:06:53 crc kubenswrapper[4721]: I0128 19:06:53.744232 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 28 19:06:53 crc kubenswrapper[4721]: I0128 19:06:53.745283 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 28 19:06:53 crc kubenswrapper[4721]: I0128 19:06:53.756413 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-jqhr7"] Jan 28 19:06:53 crc kubenswrapper[4721]: I0128 19:06:53.833251 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b9946ce2-5895-4b1a-ad88-c80a26d23265-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-jqhr7\" (UID: \"b9946ce2-5895-4b1a-ad88-c80a26d23265\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-jqhr7" Jan 28 19:06:53 crc kubenswrapper[4721]: I0128 19:06:53.833327 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b9946ce2-5895-4b1a-ad88-c80a26d23265-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-jqhr7\" (UID: \"b9946ce2-5895-4b1a-ad88-c80a26d23265\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-jqhr7" Jan 28 19:06:53 crc kubenswrapper[4721]: I0128 19:06:53.833584 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tpsc9\" (UniqueName: \"kubernetes.io/projected/b9946ce2-5895-4b1a-ad88-c80a26d23265-kube-api-access-tpsc9\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-jqhr7\" (UID: \"b9946ce2-5895-4b1a-ad88-c80a26d23265\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-jqhr7" Jan 28 19:06:53 crc kubenswrapper[4721]: I0128 19:06:53.935552 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b9946ce2-5895-4b1a-ad88-c80a26d23265-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-jqhr7\" (UID: \"b9946ce2-5895-4b1a-ad88-c80a26d23265\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-jqhr7" Jan 28 19:06:53 crc kubenswrapper[4721]: I0128 19:06:53.935607 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b9946ce2-5895-4b1a-ad88-c80a26d23265-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-jqhr7\" (UID: \"b9946ce2-5895-4b1a-ad88-c80a26d23265\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-jqhr7" Jan 28 19:06:53 crc kubenswrapper[4721]: I0128 19:06:53.935668 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tpsc9\" (UniqueName: 
\"kubernetes.io/projected/b9946ce2-5895-4b1a-ad88-c80a26d23265-kube-api-access-tpsc9\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-jqhr7\" (UID: \"b9946ce2-5895-4b1a-ad88-c80a26d23265\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-jqhr7" Jan 28 19:06:53 crc kubenswrapper[4721]: I0128 19:06:53.941331 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b9946ce2-5895-4b1a-ad88-c80a26d23265-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-jqhr7\" (UID: \"b9946ce2-5895-4b1a-ad88-c80a26d23265\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-jqhr7" Jan 28 19:06:53 crc kubenswrapper[4721]: I0128 19:06:53.941903 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b9946ce2-5895-4b1a-ad88-c80a26d23265-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-jqhr7\" (UID: \"b9946ce2-5895-4b1a-ad88-c80a26d23265\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-jqhr7" Jan 28 19:06:53 crc kubenswrapper[4721]: I0128 19:06:53.952922 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tpsc9\" (UniqueName: \"kubernetes.io/projected/b9946ce2-5895-4b1a-ad88-c80a26d23265-kube-api-access-tpsc9\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-jqhr7\" (UID: \"b9946ce2-5895-4b1a-ad88-c80a26d23265\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-jqhr7" Jan 28 19:06:54 crc kubenswrapper[4721]: I0128 19:06:54.061958 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-jqhr7" Jan 28 19:06:54 crc kubenswrapper[4721]: I0128 19:06:54.580203 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-jqhr7"] Jan 28 19:06:54 crc kubenswrapper[4721]: I0128 19:06:54.610555 4721 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 28 19:06:54 crc kubenswrapper[4721]: I0128 19:06:54.661533 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-jqhr7" event={"ID":"b9946ce2-5895-4b1a-ad88-c80a26d23265","Type":"ContainerStarted","Data":"fc6c3705914d7d7ae0c76f127ddeef7f8b39b6e63b41fd8379d671507b29c661"} Jan 28 19:06:55 crc kubenswrapper[4721]: I0128 19:06:55.674809 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-jqhr7" event={"ID":"b9946ce2-5895-4b1a-ad88-c80a26d23265","Type":"ContainerStarted","Data":"85b5683e4b8a51a7869ec410860173bc01c66b443c50fb0cba49932408f9f36b"} Jan 28 19:06:55 crc kubenswrapper[4721]: I0128 19:06:55.702632 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-jqhr7" podStartSLOduration=2.326230486 podStartE2EDuration="2.702601615s" podCreationTimestamp="2026-01-28 19:06:53 +0000 UTC" firstStartedPulling="2026-01-28 19:06:54.610139892 +0000 UTC m=+1980.335445452" lastFinishedPulling="2026-01-28 19:06:54.986511021 +0000 UTC m=+1980.711816581" observedRunningTime="2026-01-28 19:06:55.696801813 +0000 UTC m=+1981.422107363" watchObservedRunningTime="2026-01-28 19:06:55.702601615 +0000 UTC 
m=+1981.427907175" Jan 28 19:07:01 crc kubenswrapper[4721]: I0128 19:07:01.225332 4721 patch_prober.go:28] interesting pod/machine-config-daemon-76rx2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 19:07:01 crc kubenswrapper[4721]: I0128 19:07:01.225905 4721 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-76rx2" podUID="6e3427a4-9a03-4a08-bf7f-7a5e96290ad6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 19:07:19 crc kubenswrapper[4721]: I0128 19:07:19.057362 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-db-create-mpszx"] Jan 28 19:07:19 crc kubenswrapper[4721]: I0128 19:07:19.070347 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-db-create-6mx9s"] Jan 28 19:07:19 crc kubenswrapper[4721]: I0128 19:07:19.083230 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-5b4d-account-create-update-8nt6r"] Jan 28 19:07:19 crc kubenswrapper[4721]: I0128 19:07:19.092771 4721 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-db-create-mpszx"] Jan 28 19:07:19 crc kubenswrapper[4721]: I0128 19:07:19.098868 4721 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-5b4d-account-create-update-8nt6r"] Jan 28 19:07:19 crc kubenswrapper[4721]: I0128 19:07:19.107850 4721 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-db-create-6mx9s"] Jan 28 19:07:19 crc kubenswrapper[4721]: I0128 19:07:19.548848 4721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0c518e64-69b5-4360-a219-407693412130" path="/var/lib/kubelet/pods/0c518e64-69b5-4360-a219-407693412130/volumes" Jan 28 19:07:19 crc kubenswrapper[4721]: I0128 19:07:19.549878 4721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1a4b4c90-d4cd-4ccd-ae6d-9e071c6d3ed4" path="/var/lib/kubelet/pods/1a4b4c90-d4cd-4ccd-ae6d-9e071c6d3ed4/volumes" Jan 28 19:07:19 crc kubenswrapper[4721]: I0128 19:07:19.553822 4721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9fe76996-48bb-4656-8ce3-ac8098700636" path="/var/lib/kubelet/pods/9fe76996-48bb-4656-8ce3-ac8098700636/volumes" Jan 28 19:07:19 crc kubenswrapper[4721]: I0128 19:07:19.583159 4721 scope.go:117] "RemoveContainer" containerID="b445bc2491348a67304e8419dbea9a9ee5a3764ff161fd483a703bc1ebe6f122" Jan 28 19:07:19 crc kubenswrapper[4721]: I0128 19:07:19.629475 4721 scope.go:117] "RemoveContainer" containerID="597c8ff5bbfa741fc36a77a13a481561fd9d1f9c1b4b2f6d4b1ec4fc5311f690" Jan 28 19:07:19 crc kubenswrapper[4721]: I0128 19:07:19.664445 4721 scope.go:117] "RemoveContainer" containerID="29ccd2e322952548c13cc7d2af0107fc873f99ee27ce312b7118d16c9632610a" Jan 28 19:07:19 crc kubenswrapper[4721]: I0128 19:07:19.731845 4721 scope.go:117] "RemoveContainer" containerID="56ab69b31d63a6b1c62dd761dae51e64e5951280529007a760301c0b8d5362ef" Jan 28 19:07:19 crc kubenswrapper[4721]: I0128 19:07:19.773021 4721 scope.go:117] "RemoveContainer" containerID="dea0b4596c32b14aa6c542395de1c7c3b3e8187a0308f5d186e77f72d7edd84b" Jan 28 19:07:19 crc kubenswrapper[4721]: I0128 19:07:19.838855 4721 scope.go:117] "RemoveContainer" 
containerID="1335bd2be285e63cf4500de7e09d4d6f7b0ac2396bb6a2229984b8ca5236b6a3" Jan 28 19:07:20 crc kubenswrapper[4721]: I0128 19:07:20.037911 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-db-create-xlhjz"] Jan 28 19:07:20 crc kubenswrapper[4721]: I0128 19:07:20.051361 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-26e2-account-create-update-lb8jh"] Jan 28 19:07:20 crc kubenswrapper[4721]: I0128 19:07:20.063788 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-6060-account-create-update-6nn4d"] Jan 28 19:07:20 crc kubenswrapper[4721]: I0128 19:07:20.072327 4721 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-db-create-xlhjz"] Jan 28 19:07:20 crc kubenswrapper[4721]: I0128 19:07:20.083641 4721 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-6060-account-create-update-6nn4d"] Jan 28 19:07:20 crc kubenswrapper[4721]: I0128 19:07:20.093535 4721 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-26e2-account-create-update-lb8jh"] Jan 28 19:07:21 crc kubenswrapper[4721]: I0128 19:07:21.540374 4721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="866bb191-d801-4191-b725-52648c9d38bf" path="/var/lib/kubelet/pods/866bb191-d801-4191-b725-52648c9d38bf/volumes" Jan 28 19:07:21 crc kubenswrapper[4721]: I0128 19:07:21.541269 4721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f49e85fc-9126-4151-980f-56517e1752c1" path="/var/lib/kubelet/pods/f49e85fc-9126-4151-980f-56517e1752c1/volumes" Jan 28 19:07:21 crc kubenswrapper[4721]: I0128 19:07:21.541783 4721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fd0e7c7c-c624-4b67-ae51-1a40265dfeb9" path="/var/lib/kubelet/pods/fd0e7c7c-c624-4b67-ae51-1a40265dfeb9/volumes" Jan 28 19:07:31 crc kubenswrapper[4721]: I0128 19:07:31.224748 4721 patch_prober.go:28] interesting pod/machine-config-daemon-76rx2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 19:07:31 crc kubenswrapper[4721]: I0128 19:07:31.225400 4721 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-76rx2" podUID="6e3427a4-9a03-4a08-bf7f-7a5e96290ad6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 19:07:31 crc kubenswrapper[4721]: I0128 19:07:31.225507 4721 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-76rx2" Jan 28 19:07:31 crc kubenswrapper[4721]: I0128 19:07:31.226599 4721 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"fae7b05413d2179da0c14f97f482c9d932655828a3eba9c206bbef238e41c9d7"} pod="openshift-machine-config-operator/machine-config-daemon-76rx2" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 28 19:07:31 crc kubenswrapper[4721]: I0128 19:07:31.226665 4721 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-76rx2" podUID="6e3427a4-9a03-4a08-bf7f-7a5e96290ad6" containerName="machine-config-daemon" 
containerID="cri-o://fae7b05413d2179da0c14f97f482c9d932655828a3eba9c206bbef238e41c9d7" gracePeriod=600 Jan 28 19:07:32 crc kubenswrapper[4721]: I0128 19:07:32.074428 4721 generic.go:334] "Generic (PLEG): container finished" podID="6e3427a4-9a03-4a08-bf7f-7a5e96290ad6" containerID="fae7b05413d2179da0c14f97f482c9d932655828a3eba9c206bbef238e41c9d7" exitCode=0 Jan 28 19:07:32 crc kubenswrapper[4721]: I0128 19:07:32.074507 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-76rx2" event={"ID":"6e3427a4-9a03-4a08-bf7f-7a5e96290ad6","Type":"ContainerDied","Data":"fae7b05413d2179da0c14f97f482c9d932655828a3eba9c206bbef238e41c9d7"} Jan 28 19:07:32 crc kubenswrapper[4721]: I0128 19:07:32.075240 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-76rx2" event={"ID":"6e3427a4-9a03-4a08-bf7f-7a5e96290ad6","Type":"ContainerStarted","Data":"4ac60e4a43c972329d52f34a6184c3179be27b5f07c30d84c2acd45b4f1b5d47"} Jan 28 19:07:32 crc kubenswrapper[4721]: I0128 19:07:32.075272 4721 scope.go:117] "RemoveContainer" containerID="2590a7cb210dbff0e84ad585e4d82733cd80880f3d1297edf670e3d6faf23070" Jan 28 19:07:59 crc kubenswrapper[4721]: I0128 19:07:59.330914 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-jqhr7" event={"ID":"b9946ce2-5895-4b1a-ad88-c80a26d23265","Type":"ContainerDied","Data":"85b5683e4b8a51a7869ec410860173bc01c66b443c50fb0cba49932408f9f36b"} Jan 28 19:07:59 crc kubenswrapper[4721]: I0128 19:07:59.330864 4721 generic.go:334] "Generic (PLEG): container finished" podID="b9946ce2-5895-4b1a-ad88-c80a26d23265" containerID="85b5683e4b8a51a7869ec410860173bc01c66b443c50fb0cba49932408f9f36b" exitCode=0 Jan 28 19:08:00 crc kubenswrapper[4721]: I0128 19:08:00.901693 4721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-jqhr7" Jan 28 19:08:01 crc kubenswrapper[4721]: I0128 19:08:01.042729 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b9946ce2-5895-4b1a-ad88-c80a26d23265-inventory\") pod \"b9946ce2-5895-4b1a-ad88-c80a26d23265\" (UID: \"b9946ce2-5895-4b1a-ad88-c80a26d23265\") " Jan 28 19:08:01 crc kubenswrapper[4721]: I0128 19:08:01.042781 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tpsc9\" (UniqueName: \"kubernetes.io/projected/b9946ce2-5895-4b1a-ad88-c80a26d23265-kube-api-access-tpsc9\") pod \"b9946ce2-5895-4b1a-ad88-c80a26d23265\" (UID: \"b9946ce2-5895-4b1a-ad88-c80a26d23265\") " Jan 28 19:08:01 crc kubenswrapper[4721]: I0128 19:08:01.043112 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b9946ce2-5895-4b1a-ad88-c80a26d23265-ssh-key-openstack-edpm-ipam\") pod \"b9946ce2-5895-4b1a-ad88-c80a26d23265\" (UID: \"b9946ce2-5895-4b1a-ad88-c80a26d23265\") " Jan 28 19:08:01 crc kubenswrapper[4721]: I0128 19:08:01.060219 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b9946ce2-5895-4b1a-ad88-c80a26d23265-kube-api-access-tpsc9" (OuterVolumeSpecName: "kube-api-access-tpsc9") pod "b9946ce2-5895-4b1a-ad88-c80a26d23265" (UID: "b9946ce2-5895-4b1a-ad88-c80a26d23265"). InnerVolumeSpecName "kube-api-access-tpsc9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 19:08:01 crc kubenswrapper[4721]: I0128 19:08:01.078303 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b9946ce2-5895-4b1a-ad88-c80a26d23265-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "b9946ce2-5895-4b1a-ad88-c80a26d23265" (UID: "b9946ce2-5895-4b1a-ad88-c80a26d23265"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 19:08:01 crc kubenswrapper[4721]: I0128 19:08:01.091602 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b9946ce2-5895-4b1a-ad88-c80a26d23265-inventory" (OuterVolumeSpecName: "inventory") pod "b9946ce2-5895-4b1a-ad88-c80a26d23265" (UID: "b9946ce2-5895-4b1a-ad88-c80a26d23265"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 19:08:01 crc kubenswrapper[4721]: I0128 19:08:01.146220 4721 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b9946ce2-5895-4b1a-ad88-c80a26d23265-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 28 19:08:01 crc kubenswrapper[4721]: I0128 19:08:01.146259 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tpsc9\" (UniqueName: \"kubernetes.io/projected/b9946ce2-5895-4b1a-ad88-c80a26d23265-kube-api-access-tpsc9\") on node \"crc\" DevicePath \"\"" Jan 28 19:08:01 crc kubenswrapper[4721]: I0128 19:08:01.146274 4721 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b9946ce2-5895-4b1a-ad88-c80a26d23265-inventory\") on node \"crc\" DevicePath \"\"" Jan 28 19:08:01 crc kubenswrapper[4721]: I0128 19:08:01.354758 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-jqhr7" event={"ID":"b9946ce2-5895-4b1a-ad88-c80a26d23265","Type":"ContainerDied","Data":"fc6c3705914d7d7ae0c76f127ddeef7f8b39b6e63b41fd8379d671507b29c661"} Jan 28 19:08:01 crc kubenswrapper[4721]: I0128 19:08:01.354804 4721 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fc6c3705914d7d7ae0c76f127ddeef7f8b39b6e63b41fd8379d671507b29c661" Jan 28 19:08:01 crc kubenswrapper[4721]: I0128 19:08:01.354854 4721 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-jqhr7" Jan 28 19:08:01 crc kubenswrapper[4721]: I0128 19:08:01.460379 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-pkwbl"] Jan 28 19:08:01 crc kubenswrapper[4721]: E0128 19:08:01.461112 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b9946ce2-5895-4b1a-ad88-c80a26d23265" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Jan 28 19:08:01 crc kubenswrapper[4721]: I0128 19:08:01.461139 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="b9946ce2-5895-4b1a-ad88-c80a26d23265" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Jan 28 19:08:01 crc kubenswrapper[4721]: I0128 19:08:01.461419 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="b9946ce2-5895-4b1a-ad88-c80a26d23265" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Jan 28 19:08:01 crc kubenswrapper[4721]: I0128 19:08:01.462377 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-pkwbl" Jan 28 19:08:01 crc kubenswrapper[4721]: I0128 19:08:01.465362 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 28 19:08:01 crc kubenswrapper[4721]: I0128 19:08:01.466107 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 28 19:08:01 crc kubenswrapper[4721]: I0128 19:08:01.466275 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-7sc4s" Jan 28 19:08:01 crc kubenswrapper[4721]: I0128 19:08:01.466513 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 28 19:08:01 crc kubenswrapper[4721]: I0128 19:08:01.475480 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-pkwbl"] Jan 28 19:08:01 crc kubenswrapper[4721]: I0128 19:08:01.578947 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e3cd0640-8d09-4743-8e9e-cc3914803f8c-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-pkwbl\" (UID: \"e3cd0640-8d09-4743-8e9e-cc3914803f8c\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-pkwbl" Jan 28 19:08:01 crc kubenswrapper[4721]: I0128 19:08:01.579375 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-njc2t\" (UniqueName: \"kubernetes.io/projected/e3cd0640-8d09-4743-8e9e-cc3914803f8c-kube-api-access-njc2t\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-pkwbl\" (UID: \"e3cd0640-8d09-4743-8e9e-cc3914803f8c\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-pkwbl" Jan 28 19:08:01 crc kubenswrapper[4721]: I0128 19:08:01.579444 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e3cd0640-8d09-4743-8e9e-cc3914803f8c-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-pkwbl\" (UID: \"e3cd0640-8d09-4743-8e9e-cc3914803f8c\") " 
pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-pkwbl" Jan 28 19:08:01 crc kubenswrapper[4721]: I0128 19:08:01.682407 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e3cd0640-8d09-4743-8e9e-cc3914803f8c-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-pkwbl\" (UID: \"e3cd0640-8d09-4743-8e9e-cc3914803f8c\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-pkwbl" Jan 28 19:08:01 crc kubenswrapper[4721]: I0128 19:08:01.682528 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-njc2t\" (UniqueName: \"kubernetes.io/projected/e3cd0640-8d09-4743-8e9e-cc3914803f8c-kube-api-access-njc2t\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-pkwbl\" (UID: \"e3cd0640-8d09-4743-8e9e-cc3914803f8c\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-pkwbl" Jan 28 19:08:01 crc kubenswrapper[4721]: I0128 19:08:01.682581 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e3cd0640-8d09-4743-8e9e-cc3914803f8c-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-pkwbl\" (UID: \"e3cd0640-8d09-4743-8e9e-cc3914803f8c\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-pkwbl" Jan 28 19:08:01 crc kubenswrapper[4721]: I0128 19:08:01.694881 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e3cd0640-8d09-4743-8e9e-cc3914803f8c-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-pkwbl\" (UID: \"e3cd0640-8d09-4743-8e9e-cc3914803f8c\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-pkwbl" Jan 28 19:08:01 crc kubenswrapper[4721]: I0128 19:08:01.695281 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e3cd0640-8d09-4743-8e9e-cc3914803f8c-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-pkwbl\" (UID: \"e3cd0640-8d09-4743-8e9e-cc3914803f8c\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-pkwbl" Jan 28 19:08:01 crc kubenswrapper[4721]: I0128 19:08:01.704653 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-njc2t\" (UniqueName: \"kubernetes.io/projected/e3cd0640-8d09-4743-8e9e-cc3914803f8c-kube-api-access-njc2t\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-pkwbl\" (UID: \"e3cd0640-8d09-4743-8e9e-cc3914803f8c\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-pkwbl" Jan 28 19:08:01 crc kubenswrapper[4721]: I0128 19:08:01.780727 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-pkwbl" Jan 28 19:08:02 crc kubenswrapper[4721]: I0128 19:08:02.334016 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-pkwbl"] Jan 28 19:08:02 crc kubenswrapper[4721]: I0128 19:08:02.372876 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-pkwbl" event={"ID":"e3cd0640-8d09-4743-8e9e-cc3914803f8c","Type":"ContainerStarted","Data":"dee2b97dfd09e937030548e20dd6bf2ae65d2d6322bc33c264322bd4c55b3634"} Jan 28 19:08:03 crc kubenswrapper[4721]: I0128 19:08:03.385162 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-pkwbl" event={"ID":"e3cd0640-8d09-4743-8e9e-cc3914803f8c","Type":"ContainerStarted","Data":"a4bea13b0b9ee81c667eab1d415e5291a438066170d787e1ee354da687057243"} Jan 28 19:08:03 crc kubenswrapper[4721]: I0128 19:08:03.401586 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-pkwbl" podStartSLOduration=1.924925176 podStartE2EDuration="2.401569087s" podCreationTimestamp="2026-01-28 19:08:01 +0000 UTC" firstStartedPulling="2026-01-28 19:08:02.340263851 +0000 UTC m=+2048.065569411" lastFinishedPulling="2026-01-28 19:08:02.816907762 +0000 UTC m=+2048.542213322" observedRunningTime="2026-01-28 19:08:03.400805393 +0000 UTC m=+2049.126110953" watchObservedRunningTime="2026-01-28 19:08:03.401569087 +0000 UTC m=+2049.126874647" Jan 28 19:08:04 crc kubenswrapper[4721]: I0128 19:08:04.049700 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-lfg9f"] Jan 28 19:08:04 crc kubenswrapper[4721]: I0128 19:08:04.062375 4721 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-lfg9f"] Jan 28 19:08:05 crc kubenswrapper[4721]: I0128 19:08:05.543127 4721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d055f2af-0a9e-4a1e-af6b-b15c0287fc72" path="/var/lib/kubelet/pods/d055f2af-0a9e-4a1e-af6b-b15c0287fc72/volumes" Jan 28 19:08:07 crc kubenswrapper[4721]: E0128 19:08:07.935402 4721 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode3cd0640_8d09_4743_8e9e_cc3914803f8c.slice/crio-a4bea13b0b9ee81c667eab1d415e5291a438066170d787e1ee354da687057243.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode3cd0640_8d09_4743_8e9e_cc3914803f8c.slice/crio-conmon-a4bea13b0b9ee81c667eab1d415e5291a438066170d787e1ee354da687057243.scope\": RecentStats: unable to find data in memory cache]" Jan 28 19:08:08 crc kubenswrapper[4721]: I0128 19:08:08.434746 4721 generic.go:334] "Generic (PLEG): container finished" podID="e3cd0640-8d09-4743-8e9e-cc3914803f8c" containerID="a4bea13b0b9ee81c667eab1d415e5291a438066170d787e1ee354da687057243" exitCode=0 Jan 28 19:08:08 crc kubenswrapper[4721]: I0128 19:08:08.434844 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-pkwbl" event={"ID":"e3cd0640-8d09-4743-8e9e-cc3914803f8c","Type":"ContainerDied","Data":"a4bea13b0b9ee81c667eab1d415e5291a438066170d787e1ee354da687057243"} Jan 28 19:08:10 crc kubenswrapper[4721]: I0128 19:08:10.055065 
4721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-pkwbl" Jan 28 19:08:10 crc kubenswrapper[4721]: I0128 19:08:10.184969 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e3cd0640-8d09-4743-8e9e-cc3914803f8c-inventory\") pod \"e3cd0640-8d09-4743-8e9e-cc3914803f8c\" (UID: \"e3cd0640-8d09-4743-8e9e-cc3914803f8c\") " Jan 28 19:08:10 crc kubenswrapper[4721]: I0128 19:08:10.185367 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e3cd0640-8d09-4743-8e9e-cc3914803f8c-ssh-key-openstack-edpm-ipam\") pod \"e3cd0640-8d09-4743-8e9e-cc3914803f8c\" (UID: \"e3cd0640-8d09-4743-8e9e-cc3914803f8c\") " Jan 28 19:08:10 crc kubenswrapper[4721]: I0128 19:08:10.185453 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-njc2t\" (UniqueName: \"kubernetes.io/projected/e3cd0640-8d09-4743-8e9e-cc3914803f8c-kube-api-access-njc2t\") pod \"e3cd0640-8d09-4743-8e9e-cc3914803f8c\" (UID: \"e3cd0640-8d09-4743-8e9e-cc3914803f8c\") " Jan 28 19:08:10 crc kubenswrapper[4721]: I0128 19:08:10.191691 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e3cd0640-8d09-4743-8e9e-cc3914803f8c-kube-api-access-njc2t" (OuterVolumeSpecName: "kube-api-access-njc2t") pod "e3cd0640-8d09-4743-8e9e-cc3914803f8c" (UID: "e3cd0640-8d09-4743-8e9e-cc3914803f8c"). InnerVolumeSpecName "kube-api-access-njc2t". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 19:08:10 crc kubenswrapper[4721]: I0128 19:08:10.223704 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e3cd0640-8d09-4743-8e9e-cc3914803f8c-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "e3cd0640-8d09-4743-8e9e-cc3914803f8c" (UID: "e3cd0640-8d09-4743-8e9e-cc3914803f8c"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 19:08:10 crc kubenswrapper[4721]: I0128 19:08:10.227146 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e3cd0640-8d09-4743-8e9e-cc3914803f8c-inventory" (OuterVolumeSpecName: "inventory") pod "e3cd0640-8d09-4743-8e9e-cc3914803f8c" (UID: "e3cd0640-8d09-4743-8e9e-cc3914803f8c"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 19:08:10 crc kubenswrapper[4721]: I0128 19:08:10.288893 4721 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e3cd0640-8d09-4743-8e9e-cc3914803f8c-inventory\") on node \"crc\" DevicePath \"\"" Jan 28 19:08:10 crc kubenswrapper[4721]: I0128 19:08:10.288934 4721 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e3cd0640-8d09-4743-8e9e-cc3914803f8c-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 28 19:08:10 crc kubenswrapper[4721]: I0128 19:08:10.288944 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-njc2t\" (UniqueName: \"kubernetes.io/projected/e3cd0640-8d09-4743-8e9e-cc3914803f8c-kube-api-access-njc2t\") on node \"crc\" DevicePath \"\"" Jan 28 19:08:10 crc kubenswrapper[4721]: I0128 19:08:10.454867 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-pkwbl" event={"ID":"e3cd0640-8d09-4743-8e9e-cc3914803f8c","Type":"ContainerDied","Data":"dee2b97dfd09e937030548e20dd6bf2ae65d2d6322bc33c264322bd4c55b3634"} Jan 28 19:08:10 crc kubenswrapper[4721]: I0128 19:08:10.454919 4721 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dee2b97dfd09e937030548e20dd6bf2ae65d2d6322bc33c264322bd4c55b3634" Jan 28 19:08:10 crc kubenswrapper[4721]: I0128 19:08:10.454985 4721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-pkwbl" Jan 28 19:08:10 crc kubenswrapper[4721]: I0128 19:08:10.535968 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-pqbq8"] Jan 28 19:08:10 crc kubenswrapper[4721]: E0128 19:08:10.536831 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e3cd0640-8d09-4743-8e9e-cc3914803f8c" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Jan 28 19:08:10 crc kubenswrapper[4721]: I0128 19:08:10.536858 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="e3cd0640-8d09-4743-8e9e-cc3914803f8c" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Jan 28 19:08:10 crc kubenswrapper[4721]: I0128 19:08:10.537089 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="e3cd0640-8d09-4743-8e9e-cc3914803f8c" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Jan 28 19:08:10 crc kubenswrapper[4721]: I0128 19:08:10.538109 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-pqbq8" Jan 28 19:08:10 crc kubenswrapper[4721]: I0128 19:08:10.540689 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 28 19:08:10 crc kubenswrapper[4721]: I0128 19:08:10.541002 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 28 19:08:10 crc kubenswrapper[4721]: I0128 19:08:10.544520 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 28 19:08:10 crc kubenswrapper[4721]: I0128 19:08:10.544748 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-7sc4s" Jan 28 19:08:10 crc kubenswrapper[4721]: I0128 19:08:10.546146 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-pqbq8"] Jan 28 19:08:10 crc kubenswrapper[4721]: I0128 19:08:10.699216 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hrh9w\" (UniqueName: \"kubernetes.io/projected/240f3ed6-78d3-4839-9d63-71e54d447a8a-kube-api-access-hrh9w\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-pqbq8\" (UID: \"240f3ed6-78d3-4839-9d63-71e54d447a8a\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-pqbq8" Jan 28 19:08:10 crc kubenswrapper[4721]: I0128 19:08:10.699269 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/240f3ed6-78d3-4839-9d63-71e54d447a8a-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-pqbq8\" (UID: \"240f3ed6-78d3-4839-9d63-71e54d447a8a\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-pqbq8" Jan 28 19:08:10 crc kubenswrapper[4721]: I0128 19:08:10.699317 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/240f3ed6-78d3-4839-9d63-71e54d447a8a-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-pqbq8\" (UID: \"240f3ed6-78d3-4839-9d63-71e54d447a8a\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-pqbq8" Jan 28 19:08:10 crc kubenswrapper[4721]: I0128 19:08:10.801579 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hrh9w\" (UniqueName: \"kubernetes.io/projected/240f3ed6-78d3-4839-9d63-71e54d447a8a-kube-api-access-hrh9w\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-pqbq8\" (UID: \"240f3ed6-78d3-4839-9d63-71e54d447a8a\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-pqbq8" Jan 28 19:08:10 crc kubenswrapper[4721]: I0128 19:08:10.801643 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/240f3ed6-78d3-4839-9d63-71e54d447a8a-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-pqbq8\" (UID: \"240f3ed6-78d3-4839-9d63-71e54d447a8a\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-pqbq8" Jan 28 19:08:10 crc kubenswrapper[4721]: I0128 19:08:10.801728 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/240f3ed6-78d3-4839-9d63-71e54d447a8a-inventory\") pod 
\"install-os-edpm-deployment-openstack-edpm-ipam-pqbq8\" (UID: \"240f3ed6-78d3-4839-9d63-71e54d447a8a\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-pqbq8" Jan 28 19:08:10 crc kubenswrapper[4721]: I0128 19:08:10.806286 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/240f3ed6-78d3-4839-9d63-71e54d447a8a-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-pqbq8\" (UID: \"240f3ed6-78d3-4839-9d63-71e54d447a8a\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-pqbq8" Jan 28 19:08:10 crc kubenswrapper[4721]: I0128 19:08:10.810762 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/240f3ed6-78d3-4839-9d63-71e54d447a8a-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-pqbq8\" (UID: \"240f3ed6-78d3-4839-9d63-71e54d447a8a\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-pqbq8" Jan 28 19:08:10 crc kubenswrapper[4721]: I0128 19:08:10.823924 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hrh9w\" (UniqueName: \"kubernetes.io/projected/240f3ed6-78d3-4839-9d63-71e54d447a8a-kube-api-access-hrh9w\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-pqbq8\" (UID: \"240f3ed6-78d3-4839-9d63-71e54d447a8a\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-pqbq8" Jan 28 19:08:10 crc kubenswrapper[4721]: I0128 19:08:10.858587 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-pqbq8" Jan 28 19:08:11 crc kubenswrapper[4721]: I0128 19:08:11.424107 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-pqbq8"] Jan 28 19:08:11 crc kubenswrapper[4721]: I0128 19:08:11.468398 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-pqbq8" event={"ID":"240f3ed6-78d3-4839-9d63-71e54d447a8a","Type":"ContainerStarted","Data":"cf6d3d16e4c081166ed00265464c331d6e3f2367a7d0476db00b30ca9ff4b2a4"} Jan 28 19:08:12 crc kubenswrapper[4721]: I0128 19:08:12.494852 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-pqbq8" event={"ID":"240f3ed6-78d3-4839-9d63-71e54d447a8a","Type":"ContainerStarted","Data":"cf987898a96ce2be7fb6285c4781d802c57cba902c404717fd54246d70c7cdc3"} Jan 28 19:08:12 crc kubenswrapper[4721]: I0128 19:08:12.527197 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-pqbq8" podStartSLOduration=1.978745134 podStartE2EDuration="2.527148781s" podCreationTimestamp="2026-01-28 19:08:10 +0000 UTC" firstStartedPulling="2026-01-28 19:08:11.423605847 +0000 UTC m=+2057.148911407" lastFinishedPulling="2026-01-28 19:08:11.972009494 +0000 UTC m=+2057.697315054" observedRunningTime="2026-01-28 19:08:12.519583833 +0000 UTC m=+2058.244889403" watchObservedRunningTime="2026-01-28 19:08:12.527148781 +0000 UTC m=+2058.252454341" Jan 28 19:08:13 crc kubenswrapper[4721]: I0128 19:08:13.116721 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-226ql"] Jan 28 19:08:13 crc kubenswrapper[4721]: I0128 19:08:13.122890 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-226ql" Jan 28 19:08:13 crc kubenswrapper[4721]: I0128 19:08:13.137176 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-226ql"] Jan 28 19:08:13 crc kubenswrapper[4721]: I0128 19:08:13.289133 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/854821c4-fa82-4de5-a584-6bf22393166a-catalog-content\") pod \"redhat-operators-226ql\" (UID: \"854821c4-fa82-4de5-a584-6bf22393166a\") " pod="openshift-marketplace/redhat-operators-226ql" Jan 28 19:08:13 crc kubenswrapper[4721]: I0128 19:08:13.289731 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/854821c4-fa82-4de5-a584-6bf22393166a-utilities\") pod \"redhat-operators-226ql\" (UID: \"854821c4-fa82-4de5-a584-6bf22393166a\") " pod="openshift-marketplace/redhat-operators-226ql" Jan 28 19:08:13 crc kubenswrapper[4721]: I0128 19:08:13.289798 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gg4qj\" (UniqueName: \"kubernetes.io/projected/854821c4-fa82-4de5-a584-6bf22393166a-kube-api-access-gg4qj\") pod \"redhat-operators-226ql\" (UID: \"854821c4-fa82-4de5-a584-6bf22393166a\") " pod="openshift-marketplace/redhat-operators-226ql" Jan 28 19:08:13 crc kubenswrapper[4721]: I0128 19:08:13.392401 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/854821c4-fa82-4de5-a584-6bf22393166a-utilities\") pod \"redhat-operators-226ql\" (UID: \"854821c4-fa82-4de5-a584-6bf22393166a\") " pod="openshift-marketplace/redhat-operators-226ql" Jan 28 19:08:13 crc kubenswrapper[4721]: I0128 19:08:13.392494 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gg4qj\" (UniqueName: \"kubernetes.io/projected/854821c4-fa82-4de5-a584-6bf22393166a-kube-api-access-gg4qj\") pod \"redhat-operators-226ql\" (UID: \"854821c4-fa82-4de5-a584-6bf22393166a\") " pod="openshift-marketplace/redhat-operators-226ql" Jan 28 19:08:13 crc kubenswrapper[4721]: I0128 19:08:13.392664 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/854821c4-fa82-4de5-a584-6bf22393166a-catalog-content\") pod \"redhat-operators-226ql\" (UID: \"854821c4-fa82-4de5-a584-6bf22393166a\") " pod="openshift-marketplace/redhat-operators-226ql" Jan 28 19:08:13 crc kubenswrapper[4721]: I0128 19:08:13.392947 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/854821c4-fa82-4de5-a584-6bf22393166a-utilities\") pod \"redhat-operators-226ql\" (UID: \"854821c4-fa82-4de5-a584-6bf22393166a\") " pod="openshift-marketplace/redhat-operators-226ql" Jan 28 19:08:13 crc kubenswrapper[4721]: I0128 19:08:13.393293 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/854821c4-fa82-4de5-a584-6bf22393166a-catalog-content\") pod \"redhat-operators-226ql\" (UID: \"854821c4-fa82-4de5-a584-6bf22393166a\") " pod="openshift-marketplace/redhat-operators-226ql" Jan 28 19:08:13 crc kubenswrapper[4721]: I0128 19:08:13.414397 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-gg4qj\" (UniqueName: \"kubernetes.io/projected/854821c4-fa82-4de5-a584-6bf22393166a-kube-api-access-gg4qj\") pod \"redhat-operators-226ql\" (UID: \"854821c4-fa82-4de5-a584-6bf22393166a\") " pod="openshift-marketplace/redhat-operators-226ql" Jan 28 19:08:13 crc kubenswrapper[4721]: I0128 19:08:13.506029 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-226ql" Jan 28 19:08:14 crc kubenswrapper[4721]: I0128 19:08:14.006468 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-226ql"] Jan 28 19:08:14 crc kubenswrapper[4721]: W0128 19:08:14.016376 4721 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod854821c4_fa82_4de5_a584_6bf22393166a.slice/crio-65ad95594c092308d67357e95912a5821cea7e0bbb0f6779ee542f531da47833 WatchSource:0}: Error finding container 65ad95594c092308d67357e95912a5821cea7e0bbb0f6779ee542f531da47833: Status 404 returned error can't find the container with id 65ad95594c092308d67357e95912a5821cea7e0bbb0f6779ee542f531da47833 Jan 28 19:08:14 crc kubenswrapper[4721]: I0128 19:08:14.517935 4721 generic.go:334] "Generic (PLEG): container finished" podID="854821c4-fa82-4de5-a584-6bf22393166a" containerID="d3c27e03f3085a07c8422abaf8ef1e75f080a0cba312634a5b763cf0f9ab5e2e" exitCode=0 Jan 28 19:08:14 crc kubenswrapper[4721]: I0128 19:08:14.518062 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-226ql" event={"ID":"854821c4-fa82-4de5-a584-6bf22393166a","Type":"ContainerDied","Data":"d3c27e03f3085a07c8422abaf8ef1e75f080a0cba312634a5b763cf0f9ab5e2e"} Jan 28 19:08:14 crc kubenswrapper[4721]: I0128 19:08:14.519241 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-226ql" event={"ID":"854821c4-fa82-4de5-a584-6bf22393166a","Type":"ContainerStarted","Data":"65ad95594c092308d67357e95912a5821cea7e0bbb0f6779ee542f531da47833"} Jan 28 19:08:16 crc kubenswrapper[4721]: I0128 19:08:16.545603 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-226ql" event={"ID":"854821c4-fa82-4de5-a584-6bf22393166a","Type":"ContainerStarted","Data":"2cb2adeca474f6640790cee7b637a9eb86f31be9835c9e2cb4c77dc9d48dc77a"} Jan 28 19:08:19 crc kubenswrapper[4721]: I0128 19:08:19.991316 4721 scope.go:117] "RemoveContainer" containerID="dbeb009e175800373d048d66becbda38ceaa5b0de078d9eb7ef46ea812bb4f48" Jan 28 19:08:20 crc kubenswrapper[4721]: I0128 19:08:20.022100 4721 scope.go:117] "RemoveContainer" containerID="34155a1134cb338c0cce6443e34dd2d6f34691df46c0b62211d1a871d7d4ba4f" Jan 28 19:08:20 crc kubenswrapper[4721]: I0128 19:08:20.084752 4721 scope.go:117] "RemoveContainer" containerID="1f98430a5afea1fb88ac875610693f286a0dfea7a93072492ab2b95a8d1c1b91" Jan 28 19:08:20 crc kubenswrapper[4721]: I0128 19:08:20.150164 4721 scope.go:117] "RemoveContainer" containerID="e9f5996bd09b4c3e2461f29607268b2a47c043cb6c31391e226db5a728ba00ec" Jan 28 19:08:20 crc kubenswrapper[4721]: I0128 19:08:20.585783 4721 generic.go:334] "Generic (PLEG): container finished" podID="854821c4-fa82-4de5-a584-6bf22393166a" containerID="2cb2adeca474f6640790cee7b637a9eb86f31be9835c9e2cb4c77dc9d48dc77a" exitCode=0 Jan 28 19:08:20 crc kubenswrapper[4721]: I0128 19:08:20.585868 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-226ql" 
event={"ID":"854821c4-fa82-4de5-a584-6bf22393166a","Type":"ContainerDied","Data":"2cb2adeca474f6640790cee7b637a9eb86f31be9835c9e2cb4c77dc9d48dc77a"} Jan 28 19:08:21 crc kubenswrapper[4721]: I0128 19:08:21.634255 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-226ql" event={"ID":"854821c4-fa82-4de5-a584-6bf22393166a","Type":"ContainerStarted","Data":"01ab75a14ef8b5754c642af9224f7aa4379a346502dcdd4e72e127e553bdc62d"} Jan 28 19:08:21 crc kubenswrapper[4721]: I0128 19:08:21.669560 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-226ql" podStartSLOduration=2.166874387 podStartE2EDuration="8.669530854s" podCreationTimestamp="2026-01-28 19:08:13 +0000 UTC" firstStartedPulling="2026-01-28 19:08:14.520540753 +0000 UTC m=+2060.245846313" lastFinishedPulling="2026-01-28 19:08:21.02319721 +0000 UTC m=+2066.748502780" observedRunningTime="2026-01-28 19:08:21.658889549 +0000 UTC m=+2067.384195129" watchObservedRunningTime="2026-01-28 19:08:21.669530854 +0000 UTC m=+2067.394836414" Jan 28 19:08:23 crc kubenswrapper[4721]: I0128 19:08:23.507009 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-226ql" Jan 28 19:08:23 crc kubenswrapper[4721]: I0128 19:08:23.507463 4721 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-226ql" Jan 28 19:08:24 crc kubenswrapper[4721]: I0128 19:08:24.567913 4721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-226ql" podUID="854821c4-fa82-4de5-a584-6bf22393166a" containerName="registry-server" probeResult="failure" output=< Jan 28 19:08:24 crc kubenswrapper[4721]: timeout: failed to connect service ":50051" within 1s Jan 28 19:08:24 crc kubenswrapper[4721]: > Jan 28 19:08:32 crc kubenswrapper[4721]: I0128 19:08:32.061956 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cell-mapping-gnjqs"] Jan 28 19:08:32 crc kubenswrapper[4721]: I0128 19:08:32.073864 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-jhzxr"] Jan 28 19:08:32 crc kubenswrapper[4721]: I0128 19:08:32.086221 4721 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-jhzxr"] Jan 28 19:08:32 crc kubenswrapper[4721]: I0128 19:08:32.096730 4721 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cell-mapping-gnjqs"] Jan 28 19:08:33 crc kubenswrapper[4721]: I0128 19:08:33.551315 4721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1a0545b1-8866-4f13-b0a4-3425a39e103d" path="/var/lib/kubelet/pods/1a0545b1-8866-4f13-b0a4-3425a39e103d/volumes" Jan 28 19:08:33 crc kubenswrapper[4721]: I0128 19:08:33.552296 4721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fa94acc5-9ec9-4129-ac88-db06e56fa5e1" path="/var/lib/kubelet/pods/fa94acc5-9ec9-4129-ac88-db06e56fa5e1/volumes" Jan 28 19:08:34 crc kubenswrapper[4721]: I0128 19:08:34.554767 4721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-226ql" podUID="854821c4-fa82-4de5-a584-6bf22393166a" containerName="registry-server" probeResult="failure" output=< Jan 28 19:08:34 crc kubenswrapper[4721]: timeout: failed to connect service ":50051" within 1s Jan 28 19:08:34 crc kubenswrapper[4721]: > Jan 28 19:08:44 crc kubenswrapper[4721]: I0128 19:08:44.562406 4721 
prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-226ql" podUID="854821c4-fa82-4de5-a584-6bf22393166a" containerName="registry-server" probeResult="failure" output=< Jan 28 19:08:44 crc kubenswrapper[4721]: timeout: failed to connect service ":50051" within 1s Jan 28 19:08:44 crc kubenswrapper[4721]: > Jan 28 19:08:48 crc kubenswrapper[4721]: I0128 19:08:48.910947 4721 generic.go:334] "Generic (PLEG): container finished" podID="240f3ed6-78d3-4839-9d63-71e54d447a8a" containerID="cf987898a96ce2be7fb6285c4781d802c57cba902c404717fd54246d70c7cdc3" exitCode=0 Jan 28 19:08:48 crc kubenswrapper[4721]: I0128 19:08:48.911054 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-pqbq8" event={"ID":"240f3ed6-78d3-4839-9d63-71e54d447a8a","Type":"ContainerDied","Data":"cf987898a96ce2be7fb6285c4781d802c57cba902c404717fd54246d70c7cdc3"} Jan 28 19:08:50 crc kubenswrapper[4721]: I0128 19:08:50.482684 4721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-pqbq8" Jan 28 19:08:50 crc kubenswrapper[4721]: I0128 19:08:50.672637 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hrh9w\" (UniqueName: \"kubernetes.io/projected/240f3ed6-78d3-4839-9d63-71e54d447a8a-kube-api-access-hrh9w\") pod \"240f3ed6-78d3-4839-9d63-71e54d447a8a\" (UID: \"240f3ed6-78d3-4839-9d63-71e54d447a8a\") " Jan 28 19:08:50 crc kubenswrapper[4721]: I0128 19:08:50.672884 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/240f3ed6-78d3-4839-9d63-71e54d447a8a-ssh-key-openstack-edpm-ipam\") pod \"240f3ed6-78d3-4839-9d63-71e54d447a8a\" (UID: \"240f3ed6-78d3-4839-9d63-71e54d447a8a\") " Jan 28 19:08:50 crc kubenswrapper[4721]: I0128 19:08:50.672973 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/240f3ed6-78d3-4839-9d63-71e54d447a8a-inventory\") pod \"240f3ed6-78d3-4839-9d63-71e54d447a8a\" (UID: \"240f3ed6-78d3-4839-9d63-71e54d447a8a\") " Jan 28 19:08:50 crc kubenswrapper[4721]: I0128 19:08:50.679652 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/240f3ed6-78d3-4839-9d63-71e54d447a8a-kube-api-access-hrh9w" (OuterVolumeSpecName: "kube-api-access-hrh9w") pod "240f3ed6-78d3-4839-9d63-71e54d447a8a" (UID: "240f3ed6-78d3-4839-9d63-71e54d447a8a"). InnerVolumeSpecName "kube-api-access-hrh9w". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 19:08:50 crc kubenswrapper[4721]: I0128 19:08:50.706396 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/240f3ed6-78d3-4839-9d63-71e54d447a8a-inventory" (OuterVolumeSpecName: "inventory") pod "240f3ed6-78d3-4839-9d63-71e54d447a8a" (UID: "240f3ed6-78d3-4839-9d63-71e54d447a8a"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 19:08:50 crc kubenswrapper[4721]: I0128 19:08:50.716876 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/240f3ed6-78d3-4839-9d63-71e54d447a8a-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "240f3ed6-78d3-4839-9d63-71e54d447a8a" (UID: "240f3ed6-78d3-4839-9d63-71e54d447a8a"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 19:08:50 crc kubenswrapper[4721]: I0128 19:08:50.776213 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hrh9w\" (UniqueName: \"kubernetes.io/projected/240f3ed6-78d3-4839-9d63-71e54d447a8a-kube-api-access-hrh9w\") on node \"crc\" DevicePath \"\"" Jan 28 19:08:50 crc kubenswrapper[4721]: I0128 19:08:50.776505 4721 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/240f3ed6-78d3-4839-9d63-71e54d447a8a-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 28 19:08:50 crc kubenswrapper[4721]: I0128 19:08:50.776517 4721 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/240f3ed6-78d3-4839-9d63-71e54d447a8a-inventory\") on node \"crc\" DevicePath \"\"" Jan 28 19:08:50 crc kubenswrapper[4721]: I0128 19:08:50.941929 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-pqbq8" event={"ID":"240f3ed6-78d3-4839-9d63-71e54d447a8a","Type":"ContainerDied","Data":"cf6d3d16e4c081166ed00265464c331d6e3f2367a7d0476db00b30ca9ff4b2a4"} Jan 28 19:08:50 crc kubenswrapper[4721]: I0128 19:08:50.941973 4721 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cf6d3d16e4c081166ed00265464c331d6e3f2367a7d0476db00b30ca9ff4b2a4" Jan 28 19:08:50 crc kubenswrapper[4721]: I0128 19:08:50.941984 4721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-pqbq8" Jan 28 19:08:51 crc kubenswrapper[4721]: I0128 19:08:51.124151 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-jhb4n"] Jan 28 19:08:51 crc kubenswrapper[4721]: E0128 19:08:51.124701 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="240f3ed6-78d3-4839-9d63-71e54d447a8a" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Jan 28 19:08:51 crc kubenswrapper[4721]: I0128 19:08:51.124727 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="240f3ed6-78d3-4839-9d63-71e54d447a8a" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Jan 28 19:08:51 crc kubenswrapper[4721]: I0128 19:08:51.124942 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="240f3ed6-78d3-4839-9d63-71e54d447a8a" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Jan 28 19:08:51 crc kubenswrapper[4721]: I0128 19:08:51.125996 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-jhb4n" Jan 28 19:08:51 crc kubenswrapper[4721]: I0128 19:08:51.128730 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 28 19:08:51 crc kubenswrapper[4721]: I0128 19:08:51.128875 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-7sc4s" Jan 28 19:08:51 crc kubenswrapper[4721]: I0128 19:08:51.129681 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 28 19:08:51 crc kubenswrapper[4721]: I0128 19:08:51.131203 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 28 19:08:51 crc kubenswrapper[4721]: I0128 19:08:51.140922 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-jhb4n"] Jan 28 19:08:51 crc kubenswrapper[4721]: I0128 19:08:51.288335 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4d206415-b580-4e09-a6f5-715ea9c2ff06-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-jhb4n\" (UID: \"4d206415-b580-4e09-a6f5-715ea9c2ff06\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-jhb4n" Jan 28 19:08:51 crc kubenswrapper[4721]: I0128 19:08:51.289245 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wp7tc\" (UniqueName: \"kubernetes.io/projected/4d206415-b580-4e09-a6f5-715ea9c2ff06-kube-api-access-wp7tc\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-jhb4n\" (UID: \"4d206415-b580-4e09-a6f5-715ea9c2ff06\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-jhb4n" Jan 28 19:08:51 crc kubenswrapper[4721]: I0128 19:08:51.289409 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4d206415-b580-4e09-a6f5-715ea9c2ff06-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-jhb4n\" (UID: \"4d206415-b580-4e09-a6f5-715ea9c2ff06\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-jhb4n" Jan 28 19:08:51 crc kubenswrapper[4721]: I0128 19:08:51.392193 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wp7tc\" (UniqueName: \"kubernetes.io/projected/4d206415-b580-4e09-a6f5-715ea9c2ff06-kube-api-access-wp7tc\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-jhb4n\" (UID: \"4d206415-b580-4e09-a6f5-715ea9c2ff06\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-jhb4n" Jan 28 19:08:51 crc kubenswrapper[4721]: I0128 19:08:51.392311 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4d206415-b580-4e09-a6f5-715ea9c2ff06-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-jhb4n\" (UID: \"4d206415-b580-4e09-a6f5-715ea9c2ff06\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-jhb4n" Jan 28 19:08:51 crc kubenswrapper[4721]: I0128 19:08:51.392424 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/4d206415-b580-4e09-a6f5-715ea9c2ff06-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-jhb4n\" (UID: \"4d206415-b580-4e09-a6f5-715ea9c2ff06\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-jhb4n" Jan 28 19:08:51 crc kubenswrapper[4721]: I0128 19:08:51.397337 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4d206415-b580-4e09-a6f5-715ea9c2ff06-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-jhb4n\" (UID: \"4d206415-b580-4e09-a6f5-715ea9c2ff06\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-jhb4n" Jan 28 19:08:51 crc kubenswrapper[4721]: I0128 19:08:51.397981 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4d206415-b580-4e09-a6f5-715ea9c2ff06-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-jhb4n\" (UID: \"4d206415-b580-4e09-a6f5-715ea9c2ff06\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-jhb4n" Jan 28 19:08:51 crc kubenswrapper[4721]: I0128 19:08:51.408937 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wp7tc\" (UniqueName: \"kubernetes.io/projected/4d206415-b580-4e09-a6f5-715ea9c2ff06-kube-api-access-wp7tc\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-jhb4n\" (UID: \"4d206415-b580-4e09-a6f5-715ea9c2ff06\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-jhb4n" Jan 28 19:08:51 crc kubenswrapper[4721]: I0128 19:08:51.446132 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-jhb4n" Jan 28 19:08:52 crc kubenswrapper[4721]: I0128 19:08:52.135542 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-jhb4n"] Jan 28 19:08:52 crc kubenswrapper[4721]: I0128 19:08:52.979058 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-jhb4n" event={"ID":"4d206415-b580-4e09-a6f5-715ea9c2ff06","Type":"ContainerStarted","Data":"8a163eb7d4dfb392bc942389109e2084d0ffab24961215c3e227f2dc7e89c572"} Jan 28 19:08:53 crc kubenswrapper[4721]: I0128 19:08:53.565326 4721 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-226ql" Jan 28 19:08:53 crc kubenswrapper[4721]: I0128 19:08:53.629506 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-226ql" Jan 28 19:08:53 crc kubenswrapper[4721]: I0128 19:08:53.825729 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-226ql"] Jan 28 19:08:53 crc kubenswrapper[4721]: I0128 19:08:53.992492 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-jhb4n" event={"ID":"4d206415-b580-4e09-a6f5-715ea9c2ff06","Type":"ContainerStarted","Data":"50368bd832412608c55d3f751a2df67d0572400842f8436816d93a2f0dfe90f1"} Jan 28 19:08:54 crc kubenswrapper[4721]: I0128 19:08:54.019042 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-jhb4n" podStartSLOduration=2.368254287 podStartE2EDuration="3.01901346s" podCreationTimestamp="2026-01-28 19:08:51 +0000 UTC" 
firstStartedPulling="2026-01-28 19:08:52.153500138 +0000 UTC m=+2097.878805698" lastFinishedPulling="2026-01-28 19:08:52.804259311 +0000 UTC m=+2098.529564871" observedRunningTime="2026-01-28 19:08:54.010916336 +0000 UTC m=+2099.736221926" watchObservedRunningTime="2026-01-28 19:08:54.01901346 +0000 UTC m=+2099.744319040" Jan 28 19:08:55 crc kubenswrapper[4721]: I0128 19:08:55.002430 4721 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-226ql" podUID="854821c4-fa82-4de5-a584-6bf22393166a" containerName="registry-server" containerID="cri-o://01ab75a14ef8b5754c642af9224f7aa4379a346502dcdd4e72e127e553bdc62d" gracePeriod=2 Jan 28 19:08:55 crc kubenswrapper[4721]: I0128 19:08:55.638879 4721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-226ql" Jan 28 19:08:55 crc kubenswrapper[4721]: I0128 19:08:55.809718 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gg4qj\" (UniqueName: \"kubernetes.io/projected/854821c4-fa82-4de5-a584-6bf22393166a-kube-api-access-gg4qj\") pod \"854821c4-fa82-4de5-a584-6bf22393166a\" (UID: \"854821c4-fa82-4de5-a584-6bf22393166a\") " Jan 28 19:08:55 crc kubenswrapper[4721]: I0128 19:08:55.810412 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/854821c4-fa82-4de5-a584-6bf22393166a-catalog-content\") pod \"854821c4-fa82-4de5-a584-6bf22393166a\" (UID: \"854821c4-fa82-4de5-a584-6bf22393166a\") " Jan 28 19:08:55 crc kubenswrapper[4721]: I0128 19:08:55.810534 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/854821c4-fa82-4de5-a584-6bf22393166a-utilities\") pod \"854821c4-fa82-4de5-a584-6bf22393166a\" (UID: \"854821c4-fa82-4de5-a584-6bf22393166a\") " Jan 28 19:08:55 crc kubenswrapper[4721]: I0128 19:08:55.811248 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/854821c4-fa82-4de5-a584-6bf22393166a-utilities" (OuterVolumeSpecName: "utilities") pod "854821c4-fa82-4de5-a584-6bf22393166a" (UID: "854821c4-fa82-4de5-a584-6bf22393166a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 19:08:55 crc kubenswrapper[4721]: I0128 19:08:55.811349 4721 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/854821c4-fa82-4de5-a584-6bf22393166a-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 19:08:55 crc kubenswrapper[4721]: I0128 19:08:55.815768 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/854821c4-fa82-4de5-a584-6bf22393166a-kube-api-access-gg4qj" (OuterVolumeSpecName: "kube-api-access-gg4qj") pod "854821c4-fa82-4de5-a584-6bf22393166a" (UID: "854821c4-fa82-4de5-a584-6bf22393166a"). InnerVolumeSpecName "kube-api-access-gg4qj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 19:08:55 crc kubenswrapper[4721]: I0128 19:08:55.913888 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gg4qj\" (UniqueName: \"kubernetes.io/projected/854821c4-fa82-4de5-a584-6bf22393166a-kube-api-access-gg4qj\") on node \"crc\" DevicePath \"\"" Jan 28 19:08:55 crc kubenswrapper[4721]: I0128 19:08:55.938497 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/854821c4-fa82-4de5-a584-6bf22393166a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "854821c4-fa82-4de5-a584-6bf22393166a" (UID: "854821c4-fa82-4de5-a584-6bf22393166a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 19:08:56 crc kubenswrapper[4721]: I0128 19:08:56.015885 4721 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/854821c4-fa82-4de5-a584-6bf22393166a-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 19:08:56 crc kubenswrapper[4721]: I0128 19:08:56.016510 4721 generic.go:334] "Generic (PLEG): container finished" podID="854821c4-fa82-4de5-a584-6bf22393166a" containerID="01ab75a14ef8b5754c642af9224f7aa4379a346502dcdd4e72e127e553bdc62d" exitCode=0 Jan 28 19:08:56 crc kubenswrapper[4721]: I0128 19:08:56.016550 4721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-226ql" Jan 28 19:08:56 crc kubenswrapper[4721]: I0128 19:08:56.016585 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-226ql" event={"ID":"854821c4-fa82-4de5-a584-6bf22393166a","Type":"ContainerDied","Data":"01ab75a14ef8b5754c642af9224f7aa4379a346502dcdd4e72e127e553bdc62d"} Jan 28 19:08:56 crc kubenswrapper[4721]: I0128 19:08:56.016633 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-226ql" event={"ID":"854821c4-fa82-4de5-a584-6bf22393166a","Type":"ContainerDied","Data":"65ad95594c092308d67357e95912a5821cea7e0bbb0f6779ee542f531da47833"} Jan 28 19:08:56 crc kubenswrapper[4721]: I0128 19:08:56.016667 4721 scope.go:117] "RemoveContainer" containerID="01ab75a14ef8b5754c642af9224f7aa4379a346502dcdd4e72e127e553bdc62d" Jan 28 19:08:56 crc kubenswrapper[4721]: I0128 19:08:56.072265 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-226ql"] Jan 28 19:08:56 crc kubenswrapper[4721]: I0128 19:08:56.091446 4721 scope.go:117] "RemoveContainer" containerID="2cb2adeca474f6640790cee7b637a9eb86f31be9835c9e2cb4c77dc9d48dc77a" Jan 28 19:08:56 crc kubenswrapper[4721]: I0128 19:08:56.107072 4721 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-226ql"] Jan 28 19:08:56 crc kubenswrapper[4721]: I0128 19:08:56.205413 4721 scope.go:117] "RemoveContainer" containerID="d3c27e03f3085a07c8422abaf8ef1e75f080a0cba312634a5b763cf0f9ab5e2e" Jan 28 19:08:56 crc kubenswrapper[4721]: I0128 19:08:56.267459 4721 scope.go:117] "RemoveContainer" containerID="01ab75a14ef8b5754c642af9224f7aa4379a346502dcdd4e72e127e553bdc62d" Jan 28 19:08:56 crc kubenswrapper[4721]: E0128 19:08:56.268142 4721 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"01ab75a14ef8b5754c642af9224f7aa4379a346502dcdd4e72e127e553bdc62d\": container with ID starting with 01ab75a14ef8b5754c642af9224f7aa4379a346502dcdd4e72e127e553bdc62d 
not found: ID does not exist" containerID="01ab75a14ef8b5754c642af9224f7aa4379a346502dcdd4e72e127e553bdc62d" Jan 28 19:08:56 crc kubenswrapper[4721]: I0128 19:08:56.268205 4721 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"01ab75a14ef8b5754c642af9224f7aa4379a346502dcdd4e72e127e553bdc62d"} err="failed to get container status \"01ab75a14ef8b5754c642af9224f7aa4379a346502dcdd4e72e127e553bdc62d\": rpc error: code = NotFound desc = could not find container \"01ab75a14ef8b5754c642af9224f7aa4379a346502dcdd4e72e127e553bdc62d\": container with ID starting with 01ab75a14ef8b5754c642af9224f7aa4379a346502dcdd4e72e127e553bdc62d not found: ID does not exist" Jan 28 19:08:56 crc kubenswrapper[4721]: I0128 19:08:56.268245 4721 scope.go:117] "RemoveContainer" containerID="2cb2adeca474f6640790cee7b637a9eb86f31be9835c9e2cb4c77dc9d48dc77a" Jan 28 19:08:56 crc kubenswrapper[4721]: E0128 19:08:56.268569 4721 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2cb2adeca474f6640790cee7b637a9eb86f31be9835c9e2cb4c77dc9d48dc77a\": container with ID starting with 2cb2adeca474f6640790cee7b637a9eb86f31be9835c9e2cb4c77dc9d48dc77a not found: ID does not exist" containerID="2cb2adeca474f6640790cee7b637a9eb86f31be9835c9e2cb4c77dc9d48dc77a" Jan 28 19:08:56 crc kubenswrapper[4721]: I0128 19:08:56.268609 4721 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2cb2adeca474f6640790cee7b637a9eb86f31be9835c9e2cb4c77dc9d48dc77a"} err="failed to get container status \"2cb2adeca474f6640790cee7b637a9eb86f31be9835c9e2cb4c77dc9d48dc77a\": rpc error: code = NotFound desc = could not find container \"2cb2adeca474f6640790cee7b637a9eb86f31be9835c9e2cb4c77dc9d48dc77a\": container with ID starting with 2cb2adeca474f6640790cee7b637a9eb86f31be9835c9e2cb4c77dc9d48dc77a not found: ID does not exist" Jan 28 19:08:56 crc kubenswrapper[4721]: I0128 19:08:56.268633 4721 scope.go:117] "RemoveContainer" containerID="d3c27e03f3085a07c8422abaf8ef1e75f080a0cba312634a5b763cf0f9ab5e2e" Jan 28 19:08:56 crc kubenswrapper[4721]: E0128 19:08:56.268901 4721 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d3c27e03f3085a07c8422abaf8ef1e75f080a0cba312634a5b763cf0f9ab5e2e\": container with ID starting with d3c27e03f3085a07c8422abaf8ef1e75f080a0cba312634a5b763cf0f9ab5e2e not found: ID does not exist" containerID="d3c27e03f3085a07c8422abaf8ef1e75f080a0cba312634a5b763cf0f9ab5e2e" Jan 28 19:08:56 crc kubenswrapper[4721]: I0128 19:08:56.268928 4721 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d3c27e03f3085a07c8422abaf8ef1e75f080a0cba312634a5b763cf0f9ab5e2e"} err="failed to get container status \"d3c27e03f3085a07c8422abaf8ef1e75f080a0cba312634a5b763cf0f9ab5e2e\": rpc error: code = NotFound desc = could not find container \"d3c27e03f3085a07c8422abaf8ef1e75f080a0cba312634a5b763cf0f9ab5e2e\": container with ID starting with d3c27e03f3085a07c8422abaf8ef1e75f080a0cba312634a5b763cf0f9ab5e2e not found: ID does not exist" Jan 28 19:08:57 crc kubenswrapper[4721]: I0128 19:08:57.540793 4721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="854821c4-fa82-4de5-a584-6bf22393166a" path="/var/lib/kubelet/pods/854821c4-fa82-4de5-a584-6bf22393166a/volumes" Jan 28 19:09:16 crc kubenswrapper[4721]: I0128 19:09:16.050422 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/nova-cell1-cell-mapping-ztjx8"] Jan 28 19:09:16 crc kubenswrapper[4721]: I0128 19:09:16.063300 4721 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-cell-mapping-ztjx8"] Jan 28 19:09:17 crc kubenswrapper[4721]: I0128 19:09:17.540401 4721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8717a4d7-cca2-4bd2-bb79-6a034cd7081c" path="/var/lib/kubelet/pods/8717a4d7-cca2-4bd2-bb79-6a034cd7081c/volumes" Jan 28 19:09:20 crc kubenswrapper[4721]: I0128 19:09:20.327385 4721 scope.go:117] "RemoveContainer" containerID="8f1014013f8125055f2dbd76ef01cd7678cacb719231e62e40cec25e622c6bee" Jan 28 19:09:20 crc kubenswrapper[4721]: I0128 19:09:20.385950 4721 scope.go:117] "RemoveContainer" containerID="c48c7d07a9d5bf6ea57ca99af75f3d29c355b924f6a3414c92fb6d5d564782ed" Jan 28 19:09:20 crc kubenswrapper[4721]: I0128 19:09:20.452563 4721 scope.go:117] "RemoveContainer" containerID="f37bc3c1b8fe009a164f59159e105ce9781f64e2db81f8802fe0c83ee99e7799" Jan 28 19:09:31 crc kubenswrapper[4721]: I0128 19:09:31.225004 4721 patch_prober.go:28] interesting pod/machine-config-daemon-76rx2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 19:09:31 crc kubenswrapper[4721]: I0128 19:09:31.225731 4721 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-76rx2" podUID="6e3427a4-9a03-4a08-bf7f-7a5e96290ad6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 19:09:45 crc kubenswrapper[4721]: I0128 19:09:45.514352 4721 generic.go:334] "Generic (PLEG): container finished" podID="4d206415-b580-4e09-a6f5-715ea9c2ff06" containerID="50368bd832412608c55d3f751a2df67d0572400842f8436816d93a2f0dfe90f1" exitCode=0 Jan 28 19:09:45 crc kubenswrapper[4721]: I0128 19:09:45.514450 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-jhb4n" event={"ID":"4d206415-b580-4e09-a6f5-715ea9c2ff06","Type":"ContainerDied","Data":"50368bd832412608c55d3f751a2df67d0572400842f8436816d93a2f0dfe90f1"} Jan 28 19:09:47 crc kubenswrapper[4721]: I0128 19:09:47.108123 4721 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-jhb4n" Jan 28 19:09:47 crc kubenswrapper[4721]: I0128 19:09:47.208219 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4d206415-b580-4e09-a6f5-715ea9c2ff06-ssh-key-openstack-edpm-ipam\") pod \"4d206415-b580-4e09-a6f5-715ea9c2ff06\" (UID: \"4d206415-b580-4e09-a6f5-715ea9c2ff06\") " Jan 28 19:09:47 crc kubenswrapper[4721]: I0128 19:09:47.208477 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4d206415-b580-4e09-a6f5-715ea9c2ff06-inventory\") pod \"4d206415-b580-4e09-a6f5-715ea9c2ff06\" (UID: \"4d206415-b580-4e09-a6f5-715ea9c2ff06\") " Jan 28 19:09:47 crc kubenswrapper[4721]: I0128 19:09:47.208734 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wp7tc\" (UniqueName: \"kubernetes.io/projected/4d206415-b580-4e09-a6f5-715ea9c2ff06-kube-api-access-wp7tc\") pod \"4d206415-b580-4e09-a6f5-715ea9c2ff06\" (UID: \"4d206415-b580-4e09-a6f5-715ea9c2ff06\") " Jan 28 19:09:47 crc kubenswrapper[4721]: I0128 19:09:47.234374 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4d206415-b580-4e09-a6f5-715ea9c2ff06-kube-api-access-wp7tc" (OuterVolumeSpecName: "kube-api-access-wp7tc") pod "4d206415-b580-4e09-a6f5-715ea9c2ff06" (UID: "4d206415-b580-4e09-a6f5-715ea9c2ff06"). InnerVolumeSpecName "kube-api-access-wp7tc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 19:09:47 crc kubenswrapper[4721]: I0128 19:09:47.245742 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4d206415-b580-4e09-a6f5-715ea9c2ff06-inventory" (OuterVolumeSpecName: "inventory") pod "4d206415-b580-4e09-a6f5-715ea9c2ff06" (UID: "4d206415-b580-4e09-a6f5-715ea9c2ff06"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 19:09:47 crc kubenswrapper[4721]: I0128 19:09:47.246356 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4d206415-b580-4e09-a6f5-715ea9c2ff06-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "4d206415-b580-4e09-a6f5-715ea9c2ff06" (UID: "4d206415-b580-4e09-a6f5-715ea9c2ff06"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 19:09:47 crc kubenswrapper[4721]: I0128 19:09:47.311930 4721 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4d206415-b580-4e09-a6f5-715ea9c2ff06-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 28 19:09:47 crc kubenswrapper[4721]: I0128 19:09:47.311968 4721 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4d206415-b580-4e09-a6f5-715ea9c2ff06-inventory\") on node \"crc\" DevicePath \"\"" Jan 28 19:09:47 crc kubenswrapper[4721]: I0128 19:09:47.311979 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wp7tc\" (UniqueName: \"kubernetes.io/projected/4d206415-b580-4e09-a6f5-715ea9c2ff06-kube-api-access-wp7tc\") on node \"crc\" DevicePath \"\"" Jan 28 19:09:47 crc kubenswrapper[4721]: I0128 19:09:47.553709 4721 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-jhb4n" Jan 28 19:09:47 crc kubenswrapper[4721]: I0128 19:09:47.559135 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-jhb4n" event={"ID":"4d206415-b580-4e09-a6f5-715ea9c2ff06","Type":"ContainerDied","Data":"8a163eb7d4dfb392bc942389109e2084d0ffab24961215c3e227f2dc7e89c572"} Jan 28 19:09:47 crc kubenswrapper[4721]: I0128 19:09:47.559449 4721 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8a163eb7d4dfb392bc942389109e2084d0ffab24961215c3e227f2dc7e89c572" Jan 28 19:09:47 crc kubenswrapper[4721]: I0128 19:09:47.651374 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-4647t"] Jan 28 19:09:47 crc kubenswrapper[4721]: E0128 19:09:47.652238 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="854821c4-fa82-4de5-a584-6bf22393166a" containerName="registry-server" Jan 28 19:09:47 crc kubenswrapper[4721]: I0128 19:09:47.652337 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="854821c4-fa82-4de5-a584-6bf22393166a" containerName="registry-server" Jan 28 19:09:47 crc kubenswrapper[4721]: E0128 19:09:47.652443 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4d206415-b580-4e09-a6f5-715ea9c2ff06" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Jan 28 19:09:47 crc kubenswrapper[4721]: I0128 19:09:47.652500 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="4d206415-b580-4e09-a6f5-715ea9c2ff06" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Jan 28 19:09:47 crc kubenswrapper[4721]: E0128 19:09:47.652563 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="854821c4-fa82-4de5-a584-6bf22393166a" containerName="extract-utilities" Jan 28 19:09:47 crc kubenswrapper[4721]: I0128 19:09:47.652618 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="854821c4-fa82-4de5-a584-6bf22393166a" containerName="extract-utilities" Jan 28 19:09:47 crc kubenswrapper[4721]: E0128 19:09:47.652695 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="854821c4-fa82-4de5-a584-6bf22393166a" containerName="extract-content" Jan 28 19:09:47 crc kubenswrapper[4721]: I0128 19:09:47.652755 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="854821c4-fa82-4de5-a584-6bf22393166a" containerName="extract-content" Jan 28 19:09:47 crc kubenswrapper[4721]: I0128 19:09:47.653061 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="854821c4-fa82-4de5-a584-6bf22393166a" containerName="registry-server" Jan 28 19:09:47 crc kubenswrapper[4721]: I0128 19:09:47.653139 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="4d206415-b580-4e09-a6f5-715ea9c2ff06" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Jan 28 19:09:47 crc kubenswrapper[4721]: I0128 19:09:47.654057 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-4647t" Jan 28 19:09:47 crc kubenswrapper[4721]: I0128 19:09:47.661102 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 28 19:09:47 crc kubenswrapper[4721]: I0128 19:09:47.661412 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 28 19:09:47 crc kubenswrapper[4721]: I0128 19:09:47.661917 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 28 19:09:47 crc kubenswrapper[4721]: I0128 19:09:47.667270 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-7sc4s" Jan 28 19:09:47 crc kubenswrapper[4721]: I0128 19:09:47.689441 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-4647t"] Jan 28 19:09:47 crc kubenswrapper[4721]: I0128 19:09:47.721640 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7481db6a-22d8-4e79-a0fc-8dc696d5d209-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-4647t\" (UID: \"7481db6a-22d8-4e79-a0fc-8dc696d5d209\") " pod="openstack/ssh-known-hosts-edpm-deployment-4647t" Jan 28 19:09:47 crc kubenswrapper[4721]: I0128 19:09:47.721710 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qnbzj\" (UniqueName: \"kubernetes.io/projected/7481db6a-22d8-4e79-a0fc-8dc696d5d209-kube-api-access-qnbzj\") pod \"ssh-known-hosts-edpm-deployment-4647t\" (UID: \"7481db6a-22d8-4e79-a0fc-8dc696d5d209\") " pod="openstack/ssh-known-hosts-edpm-deployment-4647t" Jan 28 19:09:47 crc kubenswrapper[4721]: I0128 19:09:47.722264 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/7481db6a-22d8-4e79-a0fc-8dc696d5d209-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-4647t\" (UID: \"7481db6a-22d8-4e79-a0fc-8dc696d5d209\") " pod="openstack/ssh-known-hosts-edpm-deployment-4647t" Jan 28 19:09:47 crc kubenswrapper[4721]: I0128 19:09:47.825278 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/7481db6a-22d8-4e79-a0fc-8dc696d5d209-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-4647t\" (UID: \"7481db6a-22d8-4e79-a0fc-8dc696d5d209\") " pod="openstack/ssh-known-hosts-edpm-deployment-4647t" Jan 28 19:09:47 crc kubenswrapper[4721]: I0128 19:09:47.825463 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7481db6a-22d8-4e79-a0fc-8dc696d5d209-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-4647t\" (UID: \"7481db6a-22d8-4e79-a0fc-8dc696d5d209\") " pod="openstack/ssh-known-hosts-edpm-deployment-4647t" Jan 28 19:09:47 crc kubenswrapper[4721]: I0128 19:09:47.825490 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qnbzj\" (UniqueName: \"kubernetes.io/projected/7481db6a-22d8-4e79-a0fc-8dc696d5d209-kube-api-access-qnbzj\") pod \"ssh-known-hosts-edpm-deployment-4647t\" (UID: \"7481db6a-22d8-4e79-a0fc-8dc696d5d209\") " pod="openstack/ssh-known-hosts-edpm-deployment-4647t" Jan 28 19:09:47 crc 
kubenswrapper[4721]: I0128 19:09:47.834257 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7481db6a-22d8-4e79-a0fc-8dc696d5d209-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-4647t\" (UID: \"7481db6a-22d8-4e79-a0fc-8dc696d5d209\") " pod="openstack/ssh-known-hosts-edpm-deployment-4647t" Jan 28 19:09:47 crc kubenswrapper[4721]: I0128 19:09:47.836042 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/7481db6a-22d8-4e79-a0fc-8dc696d5d209-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-4647t\" (UID: \"7481db6a-22d8-4e79-a0fc-8dc696d5d209\") " pod="openstack/ssh-known-hosts-edpm-deployment-4647t" Jan 28 19:09:47 crc kubenswrapper[4721]: I0128 19:09:47.845414 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qnbzj\" (UniqueName: \"kubernetes.io/projected/7481db6a-22d8-4e79-a0fc-8dc696d5d209-kube-api-access-qnbzj\") pod \"ssh-known-hosts-edpm-deployment-4647t\" (UID: \"7481db6a-22d8-4e79-a0fc-8dc696d5d209\") " pod="openstack/ssh-known-hosts-edpm-deployment-4647t" Jan 28 19:09:47 crc kubenswrapper[4721]: I0128 19:09:47.981516 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-4647t" Jan 28 19:09:48 crc kubenswrapper[4721]: I0128 19:09:48.537451 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-4647t"] Jan 28 19:09:48 crc kubenswrapper[4721]: I0128 19:09:48.565327 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-4647t" event={"ID":"7481db6a-22d8-4e79-a0fc-8dc696d5d209","Type":"ContainerStarted","Data":"ce1b19e5c9704452c98ed6b8e49ef497bbe2b438f7d0213d386d9d0728c4f6c2"} Jan 28 19:09:49 crc kubenswrapper[4721]: I0128 19:09:49.577032 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-4647t" event={"ID":"7481db6a-22d8-4e79-a0fc-8dc696d5d209","Type":"ContainerStarted","Data":"1f30f384764aecf4e4d97d3bd4b93956c0e0f13064860582cb11ddf3ae6cafba"} Jan 28 19:09:49 crc kubenswrapper[4721]: I0128 19:09:49.599198 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ssh-known-hosts-edpm-deployment-4647t" podStartSLOduration=2.042906262 podStartE2EDuration="2.599157064s" podCreationTimestamp="2026-01-28 19:09:47 +0000 UTC" firstStartedPulling="2026-01-28 19:09:48.541801093 +0000 UTC m=+2154.267106653" lastFinishedPulling="2026-01-28 19:09:49.098051895 +0000 UTC m=+2154.823357455" observedRunningTime="2026-01-28 19:09:49.592418783 +0000 UTC m=+2155.317724363" watchObservedRunningTime="2026-01-28 19:09:49.599157064 +0000 UTC m=+2155.324462624" Jan 28 19:09:56 crc kubenswrapper[4721]: I0128 19:09:56.643103 4721 generic.go:334] "Generic (PLEG): container finished" podID="7481db6a-22d8-4e79-a0fc-8dc696d5d209" containerID="1f30f384764aecf4e4d97d3bd4b93956c0e0f13064860582cb11ddf3ae6cafba" exitCode=0 Jan 28 19:09:56 crc kubenswrapper[4721]: I0128 19:09:56.643185 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-4647t" event={"ID":"7481db6a-22d8-4e79-a0fc-8dc696d5d209","Type":"ContainerDied","Data":"1f30f384764aecf4e4d97d3bd4b93956c0e0f13064860582cb11ddf3ae6cafba"} Jan 28 19:09:58 crc kubenswrapper[4721]: I0128 19:09:58.227082 4721 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-4647t" Jan 28 19:09:58 crc kubenswrapper[4721]: I0128 19:09:58.269029 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/7481db6a-22d8-4e79-a0fc-8dc696d5d209-inventory-0\") pod \"7481db6a-22d8-4e79-a0fc-8dc696d5d209\" (UID: \"7481db6a-22d8-4e79-a0fc-8dc696d5d209\") " Jan 28 19:09:58 crc kubenswrapper[4721]: I0128 19:09:58.269138 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7481db6a-22d8-4e79-a0fc-8dc696d5d209-ssh-key-openstack-edpm-ipam\") pod \"7481db6a-22d8-4e79-a0fc-8dc696d5d209\" (UID: \"7481db6a-22d8-4e79-a0fc-8dc696d5d209\") " Jan 28 19:09:58 crc kubenswrapper[4721]: I0128 19:09:58.269300 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qnbzj\" (UniqueName: \"kubernetes.io/projected/7481db6a-22d8-4e79-a0fc-8dc696d5d209-kube-api-access-qnbzj\") pod \"7481db6a-22d8-4e79-a0fc-8dc696d5d209\" (UID: \"7481db6a-22d8-4e79-a0fc-8dc696d5d209\") " Jan 28 19:09:58 crc kubenswrapper[4721]: I0128 19:09:58.277014 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7481db6a-22d8-4e79-a0fc-8dc696d5d209-kube-api-access-qnbzj" (OuterVolumeSpecName: "kube-api-access-qnbzj") pod "7481db6a-22d8-4e79-a0fc-8dc696d5d209" (UID: "7481db6a-22d8-4e79-a0fc-8dc696d5d209"). InnerVolumeSpecName "kube-api-access-qnbzj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 19:09:58 crc kubenswrapper[4721]: I0128 19:09:58.299033 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7481db6a-22d8-4e79-a0fc-8dc696d5d209-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "7481db6a-22d8-4e79-a0fc-8dc696d5d209" (UID: "7481db6a-22d8-4e79-a0fc-8dc696d5d209"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 19:09:58 crc kubenswrapper[4721]: I0128 19:09:58.302674 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7481db6a-22d8-4e79-a0fc-8dc696d5d209-inventory-0" (OuterVolumeSpecName: "inventory-0") pod "7481db6a-22d8-4e79-a0fc-8dc696d5d209" (UID: "7481db6a-22d8-4e79-a0fc-8dc696d5d209"). InnerVolumeSpecName "inventory-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 19:09:58 crc kubenswrapper[4721]: I0128 19:09:58.372391 4721 reconciler_common.go:293] "Volume detached for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/7481db6a-22d8-4e79-a0fc-8dc696d5d209-inventory-0\") on node \"crc\" DevicePath \"\"" Jan 28 19:09:58 crc kubenswrapper[4721]: I0128 19:09:58.372436 4721 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7481db6a-22d8-4e79-a0fc-8dc696d5d209-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 28 19:09:58 crc kubenswrapper[4721]: I0128 19:09:58.372451 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qnbzj\" (UniqueName: \"kubernetes.io/projected/7481db6a-22d8-4e79-a0fc-8dc696d5d209-kube-api-access-qnbzj\") on node \"crc\" DevicePath \"\"" Jan 28 19:09:58 crc kubenswrapper[4721]: I0128 19:09:58.665630 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-4647t" event={"ID":"7481db6a-22d8-4e79-a0fc-8dc696d5d209","Type":"ContainerDied","Data":"ce1b19e5c9704452c98ed6b8e49ef497bbe2b438f7d0213d386d9d0728c4f6c2"} Jan 28 19:09:58 crc kubenswrapper[4721]: I0128 19:09:58.665679 4721 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ce1b19e5c9704452c98ed6b8e49ef497bbe2b438f7d0213d386d9d0728c4f6c2" Jan 28 19:09:58 crc kubenswrapper[4721]: I0128 19:09:58.665731 4721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-4647t" Jan 28 19:09:58 crc kubenswrapper[4721]: I0128 19:09:58.761655 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-hsczp"] Jan 28 19:09:58 crc kubenswrapper[4721]: E0128 19:09:58.762312 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7481db6a-22d8-4e79-a0fc-8dc696d5d209" containerName="ssh-known-hosts-edpm-deployment" Jan 28 19:09:58 crc kubenswrapper[4721]: I0128 19:09:58.762331 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="7481db6a-22d8-4e79-a0fc-8dc696d5d209" containerName="ssh-known-hosts-edpm-deployment" Jan 28 19:09:58 crc kubenswrapper[4721]: I0128 19:09:58.762609 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="7481db6a-22d8-4e79-a0fc-8dc696d5d209" containerName="ssh-known-hosts-edpm-deployment" Jan 28 19:09:58 crc kubenswrapper[4721]: I0128 19:09:58.763732 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-hsczp" Jan 28 19:09:58 crc kubenswrapper[4721]: I0128 19:09:58.766828 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 28 19:09:58 crc kubenswrapper[4721]: I0128 19:09:58.766957 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-7sc4s" Jan 28 19:09:58 crc kubenswrapper[4721]: I0128 19:09:58.769839 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 28 19:09:58 crc kubenswrapper[4721]: I0128 19:09:58.771487 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 28 19:09:58 crc kubenswrapper[4721]: I0128 19:09:58.775592 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-hsczp"] Jan 28 19:09:58 crc kubenswrapper[4721]: I0128 19:09:58.885237 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/547e59b7-a7b0-4db5-b05c-cb2ed4d0ad67-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-hsczp\" (UID: \"547e59b7-a7b0-4db5-b05c-cb2ed4d0ad67\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-hsczp" Jan 28 19:09:58 crc kubenswrapper[4721]: I0128 19:09:58.885402 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5482t\" (UniqueName: \"kubernetes.io/projected/547e59b7-a7b0-4db5-b05c-cb2ed4d0ad67-kube-api-access-5482t\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-hsczp\" (UID: \"547e59b7-a7b0-4db5-b05c-cb2ed4d0ad67\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-hsczp" Jan 28 19:09:58 crc kubenswrapper[4721]: I0128 19:09:58.885456 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/547e59b7-a7b0-4db5-b05c-cb2ed4d0ad67-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-hsczp\" (UID: \"547e59b7-a7b0-4db5-b05c-cb2ed4d0ad67\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-hsczp" Jan 28 19:09:58 crc kubenswrapper[4721]: I0128 19:09:58.987630 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/547e59b7-a7b0-4db5-b05c-cb2ed4d0ad67-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-hsczp\" (UID: \"547e59b7-a7b0-4db5-b05c-cb2ed4d0ad67\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-hsczp" Jan 28 19:09:58 crc kubenswrapper[4721]: I0128 19:09:58.987912 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5482t\" (UniqueName: \"kubernetes.io/projected/547e59b7-a7b0-4db5-b05c-cb2ed4d0ad67-kube-api-access-5482t\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-hsczp\" (UID: \"547e59b7-a7b0-4db5-b05c-cb2ed4d0ad67\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-hsczp" Jan 28 19:09:58 crc kubenswrapper[4721]: I0128 19:09:58.988501 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/547e59b7-a7b0-4db5-b05c-cb2ed4d0ad67-inventory\") pod 
\"run-os-edpm-deployment-openstack-edpm-ipam-hsczp\" (UID: \"547e59b7-a7b0-4db5-b05c-cb2ed4d0ad67\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-hsczp" Jan 28 19:09:59 crc kubenswrapper[4721]: I0128 19:09:58.993341 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/547e59b7-a7b0-4db5-b05c-cb2ed4d0ad67-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-hsczp\" (UID: \"547e59b7-a7b0-4db5-b05c-cb2ed4d0ad67\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-hsczp" Jan 28 19:09:59 crc kubenswrapper[4721]: I0128 19:09:59.003885 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/547e59b7-a7b0-4db5-b05c-cb2ed4d0ad67-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-hsczp\" (UID: \"547e59b7-a7b0-4db5-b05c-cb2ed4d0ad67\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-hsczp" Jan 28 19:09:59 crc kubenswrapper[4721]: I0128 19:09:59.015121 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5482t\" (UniqueName: \"kubernetes.io/projected/547e59b7-a7b0-4db5-b05c-cb2ed4d0ad67-kube-api-access-5482t\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-hsczp\" (UID: \"547e59b7-a7b0-4db5-b05c-cb2ed4d0ad67\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-hsczp" Jan 28 19:09:59 crc kubenswrapper[4721]: I0128 19:09:59.082723 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-hsczp" Jan 28 19:09:59 crc kubenswrapper[4721]: I0128 19:09:59.663083 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-hsczp"] Jan 28 19:09:59 crc kubenswrapper[4721]: I0128 19:09:59.677249 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-hsczp" event={"ID":"547e59b7-a7b0-4db5-b05c-cb2ed4d0ad67","Type":"ContainerStarted","Data":"237cfa2548f479f2f58abaa0bf2cba549119c530262ee8838446fe7f7688a6b4"} Jan 28 19:10:00 crc kubenswrapper[4721]: I0128 19:10:00.688862 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-hsczp" event={"ID":"547e59b7-a7b0-4db5-b05c-cb2ed4d0ad67","Type":"ContainerStarted","Data":"e9c2553d3811e29ca8e2b6c2740588e378b5ba3549e2d3f3bdfee35c19983519"} Jan 28 19:10:00 crc kubenswrapper[4721]: I0128 19:10:00.716029 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-hsczp" podStartSLOduration=2.3124535440000002 podStartE2EDuration="2.716005416s" podCreationTimestamp="2026-01-28 19:09:58 +0000 UTC" firstStartedPulling="2026-01-28 19:09:59.666069819 +0000 UTC m=+2165.391375389" lastFinishedPulling="2026-01-28 19:10:00.069621691 +0000 UTC m=+2165.794927261" observedRunningTime="2026-01-28 19:10:00.706606141 +0000 UTC m=+2166.431911721" watchObservedRunningTime="2026-01-28 19:10:00.716005416 +0000 UTC m=+2166.441310966" Jan 28 19:10:01 crc kubenswrapper[4721]: I0128 19:10:01.225525 4721 patch_prober.go:28] interesting pod/machine-config-daemon-76rx2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 19:10:01 crc 
kubenswrapper[4721]: I0128 19:10:01.225640 4721 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-76rx2" podUID="6e3427a4-9a03-4a08-bf7f-7a5e96290ad6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 19:10:08 crc kubenswrapper[4721]: I0128 19:10:08.775498 4721 generic.go:334] "Generic (PLEG): container finished" podID="547e59b7-a7b0-4db5-b05c-cb2ed4d0ad67" containerID="e9c2553d3811e29ca8e2b6c2740588e378b5ba3549e2d3f3bdfee35c19983519" exitCode=0 Jan 28 19:10:08 crc kubenswrapper[4721]: I0128 19:10:08.775590 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-hsczp" event={"ID":"547e59b7-a7b0-4db5-b05c-cb2ed4d0ad67","Type":"ContainerDied","Data":"e9c2553d3811e29ca8e2b6c2740588e378b5ba3549e2d3f3bdfee35c19983519"} Jan 28 19:10:10 crc kubenswrapper[4721]: I0128 19:10:10.290059 4721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-hsczp" Jan 28 19:10:10 crc kubenswrapper[4721]: I0128 19:10:10.360074 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5482t\" (UniqueName: \"kubernetes.io/projected/547e59b7-a7b0-4db5-b05c-cb2ed4d0ad67-kube-api-access-5482t\") pod \"547e59b7-a7b0-4db5-b05c-cb2ed4d0ad67\" (UID: \"547e59b7-a7b0-4db5-b05c-cb2ed4d0ad67\") " Jan 28 19:10:10 crc kubenswrapper[4721]: I0128 19:10:10.360142 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/547e59b7-a7b0-4db5-b05c-cb2ed4d0ad67-ssh-key-openstack-edpm-ipam\") pod \"547e59b7-a7b0-4db5-b05c-cb2ed4d0ad67\" (UID: \"547e59b7-a7b0-4db5-b05c-cb2ed4d0ad67\") " Jan 28 19:10:10 crc kubenswrapper[4721]: I0128 19:10:10.360397 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/547e59b7-a7b0-4db5-b05c-cb2ed4d0ad67-inventory\") pod \"547e59b7-a7b0-4db5-b05c-cb2ed4d0ad67\" (UID: \"547e59b7-a7b0-4db5-b05c-cb2ed4d0ad67\") " Jan 28 19:10:10 crc kubenswrapper[4721]: I0128 19:10:10.368573 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/547e59b7-a7b0-4db5-b05c-cb2ed4d0ad67-kube-api-access-5482t" (OuterVolumeSpecName: "kube-api-access-5482t") pod "547e59b7-a7b0-4db5-b05c-cb2ed4d0ad67" (UID: "547e59b7-a7b0-4db5-b05c-cb2ed4d0ad67"). InnerVolumeSpecName "kube-api-access-5482t". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 19:10:10 crc kubenswrapper[4721]: I0128 19:10:10.394620 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/547e59b7-a7b0-4db5-b05c-cb2ed4d0ad67-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "547e59b7-a7b0-4db5-b05c-cb2ed4d0ad67" (UID: "547e59b7-a7b0-4db5-b05c-cb2ed4d0ad67"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 19:10:10 crc kubenswrapper[4721]: I0128 19:10:10.394780 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/547e59b7-a7b0-4db5-b05c-cb2ed4d0ad67-inventory" (OuterVolumeSpecName: "inventory") pod "547e59b7-a7b0-4db5-b05c-cb2ed4d0ad67" (UID: "547e59b7-a7b0-4db5-b05c-cb2ed4d0ad67"). 
InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 19:10:10 crc kubenswrapper[4721]: I0128 19:10:10.464556 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5482t\" (UniqueName: \"kubernetes.io/projected/547e59b7-a7b0-4db5-b05c-cb2ed4d0ad67-kube-api-access-5482t\") on node \"crc\" DevicePath \"\"" Jan 28 19:10:10 crc kubenswrapper[4721]: I0128 19:10:10.464618 4721 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/547e59b7-a7b0-4db5-b05c-cb2ed4d0ad67-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 28 19:10:10 crc kubenswrapper[4721]: I0128 19:10:10.464634 4721 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/547e59b7-a7b0-4db5-b05c-cb2ed4d0ad67-inventory\") on node \"crc\" DevicePath \"\"" Jan 28 19:10:10 crc kubenswrapper[4721]: I0128 19:10:10.799461 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-hsczp" event={"ID":"547e59b7-a7b0-4db5-b05c-cb2ed4d0ad67","Type":"ContainerDied","Data":"237cfa2548f479f2f58abaa0bf2cba549119c530262ee8838446fe7f7688a6b4"} Jan 28 19:10:10 crc kubenswrapper[4721]: I0128 19:10:10.799527 4721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-hsczp" Jan 28 19:10:10 crc kubenswrapper[4721]: I0128 19:10:10.799547 4721 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="237cfa2548f479f2f58abaa0bf2cba549119c530262ee8838446fe7f7688a6b4" Jan 28 19:10:10 crc kubenswrapper[4721]: I0128 19:10:10.870546 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-5tdrc"] Jan 28 19:10:10 crc kubenswrapper[4721]: E0128 19:10:10.871217 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="547e59b7-a7b0-4db5-b05c-cb2ed4d0ad67" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Jan 28 19:10:10 crc kubenswrapper[4721]: I0128 19:10:10.871245 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="547e59b7-a7b0-4db5-b05c-cb2ed4d0ad67" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Jan 28 19:10:10 crc kubenswrapper[4721]: I0128 19:10:10.871543 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="547e59b7-a7b0-4db5-b05c-cb2ed4d0ad67" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Jan 28 19:10:10 crc kubenswrapper[4721]: I0128 19:10:10.872645 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-5tdrc" Jan 28 19:10:10 crc kubenswrapper[4721]: I0128 19:10:10.874954 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 28 19:10:10 crc kubenswrapper[4721]: I0128 19:10:10.875535 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-7sc4s" Jan 28 19:10:10 crc kubenswrapper[4721]: I0128 19:10:10.875900 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 28 19:10:10 crc kubenswrapper[4721]: I0128 19:10:10.879115 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 28 19:10:10 crc kubenswrapper[4721]: I0128 19:10:10.882520 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-5tdrc"] Jan 28 19:10:10 crc kubenswrapper[4721]: I0128 19:10:10.977066 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/5dc69ebb-35f6-4a5f-ac8a-58747df158a1-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-5tdrc\" (UID: \"5dc69ebb-35f6-4a5f-ac8a-58747df158a1\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-5tdrc" Jan 28 19:10:10 crc kubenswrapper[4721]: I0128 19:10:10.977216 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-825kt\" (UniqueName: \"kubernetes.io/projected/5dc69ebb-35f6-4a5f-ac8a-58747df158a1-kube-api-access-825kt\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-5tdrc\" (UID: \"5dc69ebb-35f6-4a5f-ac8a-58747df158a1\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-5tdrc" Jan 28 19:10:10 crc kubenswrapper[4721]: I0128 19:10:10.977254 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5dc69ebb-35f6-4a5f-ac8a-58747df158a1-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-5tdrc\" (UID: \"5dc69ebb-35f6-4a5f-ac8a-58747df158a1\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-5tdrc" Jan 28 19:10:11 crc kubenswrapper[4721]: I0128 19:10:11.080061 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/5dc69ebb-35f6-4a5f-ac8a-58747df158a1-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-5tdrc\" (UID: \"5dc69ebb-35f6-4a5f-ac8a-58747df158a1\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-5tdrc" Jan 28 19:10:11 crc kubenswrapper[4721]: I0128 19:10:11.080245 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5dc69ebb-35f6-4a5f-ac8a-58747df158a1-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-5tdrc\" (UID: \"5dc69ebb-35f6-4a5f-ac8a-58747df158a1\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-5tdrc" Jan 28 19:10:11 crc kubenswrapper[4721]: I0128 19:10:11.080284 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-825kt\" (UniqueName: \"kubernetes.io/projected/5dc69ebb-35f6-4a5f-ac8a-58747df158a1-kube-api-access-825kt\") pod 
\"reboot-os-edpm-deployment-openstack-edpm-ipam-5tdrc\" (UID: \"5dc69ebb-35f6-4a5f-ac8a-58747df158a1\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-5tdrc" Jan 28 19:10:11 crc kubenswrapper[4721]: I0128 19:10:11.086014 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5dc69ebb-35f6-4a5f-ac8a-58747df158a1-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-5tdrc\" (UID: \"5dc69ebb-35f6-4a5f-ac8a-58747df158a1\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-5tdrc" Jan 28 19:10:11 crc kubenswrapper[4721]: I0128 19:10:11.096070 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/5dc69ebb-35f6-4a5f-ac8a-58747df158a1-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-5tdrc\" (UID: \"5dc69ebb-35f6-4a5f-ac8a-58747df158a1\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-5tdrc" Jan 28 19:10:11 crc kubenswrapper[4721]: I0128 19:10:11.099491 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-825kt\" (UniqueName: \"kubernetes.io/projected/5dc69ebb-35f6-4a5f-ac8a-58747df158a1-kube-api-access-825kt\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-5tdrc\" (UID: \"5dc69ebb-35f6-4a5f-ac8a-58747df158a1\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-5tdrc" Jan 28 19:10:11 crc kubenswrapper[4721]: I0128 19:10:11.191249 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-5tdrc" Jan 28 19:10:11 crc kubenswrapper[4721]: I0128 19:10:11.781228 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-5tdrc"] Jan 28 19:10:11 crc kubenswrapper[4721]: W0128 19:10:11.798004 4721 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5dc69ebb_35f6_4a5f_ac8a_58747df158a1.slice/crio-e7dd60e145fe715c44828a2cc1f44d8930560e7ac176163edb5367ebc4969829 WatchSource:0}: Error finding container e7dd60e145fe715c44828a2cc1f44d8930560e7ac176163edb5367ebc4969829: Status 404 returned error can't find the container with id e7dd60e145fe715c44828a2cc1f44d8930560e7ac176163edb5367ebc4969829 Jan 28 19:10:11 crc kubenswrapper[4721]: I0128 19:10:11.813579 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-5tdrc" event={"ID":"5dc69ebb-35f6-4a5f-ac8a-58747df158a1","Type":"ContainerStarted","Data":"e7dd60e145fe715c44828a2cc1f44d8930560e7ac176163edb5367ebc4969829"} Jan 28 19:10:12 crc kubenswrapper[4721]: I0128 19:10:12.829288 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-5tdrc" event={"ID":"5dc69ebb-35f6-4a5f-ac8a-58747df158a1","Type":"ContainerStarted","Data":"1b637f6d8624f57d35a454bba31409ad62b8d0677ce803f6f78a4276ca7a3fe7"} Jan 28 19:10:12 crc kubenswrapper[4721]: I0128 19:10:12.850199 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-5tdrc" podStartSLOduration=2.300381421 podStartE2EDuration="2.85015962s" podCreationTimestamp="2026-01-28 19:10:10 +0000 UTC" firstStartedPulling="2026-01-28 19:10:11.803686201 +0000 UTC m=+2177.528991761" lastFinishedPulling="2026-01-28 19:10:12.3534644 +0000 UTC 
m=+2178.078769960" observedRunningTime="2026-01-28 19:10:12.846019221 +0000 UTC m=+2178.571324791" watchObservedRunningTime="2026-01-28 19:10:12.85015962 +0000 UTC m=+2178.575465180" Jan 28 19:10:16 crc kubenswrapper[4721]: I0128 19:10:16.052616 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cloudkitty-db-sync-knwlk"] Jan 28 19:10:16 crc kubenswrapper[4721]: I0128 19:10:16.065003 4721 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cloudkitty-db-sync-knwlk"] Jan 28 19:10:17 crc kubenswrapper[4721]: I0128 19:10:17.549559 4721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6dda9049-3b48-4939-93cc-542bf5badc4d" path="/var/lib/kubelet/pods/6dda9049-3b48-4939-93cc-542bf5badc4d/volumes" Jan 28 19:10:20 crc kubenswrapper[4721]: I0128 19:10:20.581001 4721 scope.go:117] "RemoveContainer" containerID="29c2b733a8c5cae8d48116aa58b128fe3cd775423db7b04b86a93edeb156faa6" Jan 28 19:10:21 crc kubenswrapper[4721]: E0128 19:10:21.630977 4721 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5dc69ebb_35f6_4a5f_ac8a_58747df158a1.slice/crio-conmon-1b637f6d8624f57d35a454bba31409ad62b8d0677ce803f6f78a4276ca7a3fe7.scope\": RecentStats: unable to find data in memory cache]" Jan 28 19:10:21 crc kubenswrapper[4721]: I0128 19:10:21.917691 4721 generic.go:334] "Generic (PLEG): container finished" podID="5dc69ebb-35f6-4a5f-ac8a-58747df158a1" containerID="1b637f6d8624f57d35a454bba31409ad62b8d0677ce803f6f78a4276ca7a3fe7" exitCode=0 Jan 28 19:10:21 crc kubenswrapper[4721]: I0128 19:10:21.917724 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-5tdrc" event={"ID":"5dc69ebb-35f6-4a5f-ac8a-58747df158a1","Type":"ContainerDied","Data":"1b637f6d8624f57d35a454bba31409ad62b8d0677ce803f6f78a4276ca7a3fe7"} Jan 28 19:10:22 crc kubenswrapper[4721]: I0128 19:10:22.040506 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cloudkitty-storageinit-pwsl7"] Jan 28 19:10:22 crc kubenswrapper[4721]: I0128 19:10:22.055528 4721 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cloudkitty-storageinit-pwsl7"] Jan 28 19:10:23 crc kubenswrapper[4721]: I0128 19:10:23.428391 4721 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-5tdrc" Jan 28 19:10:23 crc kubenswrapper[4721]: I0128 19:10:23.508683 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5dc69ebb-35f6-4a5f-ac8a-58747df158a1-inventory\") pod \"5dc69ebb-35f6-4a5f-ac8a-58747df158a1\" (UID: \"5dc69ebb-35f6-4a5f-ac8a-58747df158a1\") " Jan 28 19:10:23 crc kubenswrapper[4721]: I0128 19:10:23.509017 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/5dc69ebb-35f6-4a5f-ac8a-58747df158a1-ssh-key-openstack-edpm-ipam\") pod \"5dc69ebb-35f6-4a5f-ac8a-58747df158a1\" (UID: \"5dc69ebb-35f6-4a5f-ac8a-58747df158a1\") " Jan 28 19:10:23 crc kubenswrapper[4721]: I0128 19:10:23.509113 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-825kt\" (UniqueName: \"kubernetes.io/projected/5dc69ebb-35f6-4a5f-ac8a-58747df158a1-kube-api-access-825kt\") pod \"5dc69ebb-35f6-4a5f-ac8a-58747df158a1\" (UID: \"5dc69ebb-35f6-4a5f-ac8a-58747df158a1\") " Jan 28 19:10:23 crc kubenswrapper[4721]: I0128 19:10:23.513924 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5dc69ebb-35f6-4a5f-ac8a-58747df158a1-kube-api-access-825kt" (OuterVolumeSpecName: "kube-api-access-825kt") pod "5dc69ebb-35f6-4a5f-ac8a-58747df158a1" (UID: "5dc69ebb-35f6-4a5f-ac8a-58747df158a1"). InnerVolumeSpecName "kube-api-access-825kt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 19:10:23 crc kubenswrapper[4721]: I0128 19:10:23.540039 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5dc69ebb-35f6-4a5f-ac8a-58747df158a1-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "5dc69ebb-35f6-4a5f-ac8a-58747df158a1" (UID: "5dc69ebb-35f6-4a5f-ac8a-58747df158a1"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 19:10:23 crc kubenswrapper[4721]: I0128 19:10:23.542599 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5dc69ebb-35f6-4a5f-ac8a-58747df158a1-inventory" (OuterVolumeSpecName: "inventory") pod "5dc69ebb-35f6-4a5f-ac8a-58747df158a1" (UID: "5dc69ebb-35f6-4a5f-ac8a-58747df158a1"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 19:10:23 crc kubenswrapper[4721]: I0128 19:10:23.543691 4721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f7fc7453-7e1b-4e3f-bac3-f045c7b6a1c4" path="/var/lib/kubelet/pods/f7fc7453-7e1b-4e3f-bac3-f045c7b6a1c4/volumes" Jan 28 19:10:23 crc kubenswrapper[4721]: I0128 19:10:23.611844 4721 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/5dc69ebb-35f6-4a5f-ac8a-58747df158a1-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 28 19:10:23 crc kubenswrapper[4721]: I0128 19:10:23.611892 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-825kt\" (UniqueName: \"kubernetes.io/projected/5dc69ebb-35f6-4a5f-ac8a-58747df158a1-kube-api-access-825kt\") on node \"crc\" DevicePath \"\"" Jan 28 19:10:23 crc kubenswrapper[4721]: I0128 19:10:23.611906 4721 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5dc69ebb-35f6-4a5f-ac8a-58747df158a1-inventory\") on node \"crc\" DevicePath \"\"" Jan 28 19:10:23 crc kubenswrapper[4721]: I0128 19:10:23.939918 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-5tdrc" event={"ID":"5dc69ebb-35f6-4a5f-ac8a-58747df158a1","Type":"ContainerDied","Data":"e7dd60e145fe715c44828a2cc1f44d8930560e7ac176163edb5367ebc4969829"} Jan 28 19:10:23 crc kubenswrapper[4721]: I0128 19:10:23.940313 4721 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e7dd60e145fe715c44828a2cc1f44d8930560e7ac176163edb5367ebc4969829" Jan 28 19:10:23 crc kubenswrapper[4721]: I0128 19:10:23.939976 4721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-5tdrc" Jan 28 19:10:24 crc kubenswrapper[4721]: I0128 19:10:24.106235 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-px4d2"] Jan 28 19:10:24 crc kubenswrapper[4721]: E0128 19:10:24.106795 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5dc69ebb-35f6-4a5f-ac8a-58747df158a1" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Jan 28 19:10:24 crc kubenswrapper[4721]: I0128 19:10:24.106813 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="5dc69ebb-35f6-4a5f-ac8a-58747df158a1" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Jan 28 19:10:24 crc kubenswrapper[4721]: I0128 19:10:24.107030 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="5dc69ebb-35f6-4a5f-ac8a-58747df158a1" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Jan 28 19:10:24 crc kubenswrapper[4721]: I0128 19:10:24.108117 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-px4d2" Jan 28 19:10:24 crc kubenswrapper[4721]: I0128 19:10:24.115071 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-neutron-metadata-default-certs-0" Jan 28 19:10:24 crc kubenswrapper[4721]: I0128 19:10:24.115371 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-7sc4s" Jan 28 19:10:24 crc kubenswrapper[4721]: I0128 19:10:24.115530 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-telemetry-default-certs-0" Jan 28 19:10:24 crc kubenswrapper[4721]: I0128 19:10:24.115761 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-libvirt-default-certs-0" Jan 28 19:10:24 crc kubenswrapper[4721]: I0128 19:10:24.115906 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 28 19:10:24 crc kubenswrapper[4721]: I0128 19:10:24.116051 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 28 19:10:24 crc kubenswrapper[4721]: I0128 19:10:24.116240 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 28 19:10:24 crc kubenswrapper[4721]: I0128 19:10:24.121084 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-ovn-default-certs-0" Jan 28 19:10:24 crc kubenswrapper[4721]: I0128 19:10:24.132561 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-px4d2"] Jan 28 19:10:24 crc kubenswrapper[4721]: I0128 19:10:24.230886 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e6d48255-8474-4c70-afc7-ddda7df2ff65-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-px4d2\" (UID: \"e6d48255-8474-4c70-afc7-ddda7df2ff65\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-px4d2" Jan 28 19:10:24 crc kubenswrapper[4721]: I0128 19:10:24.230936 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-57mw7\" (UniqueName: \"kubernetes.io/projected/e6d48255-8474-4c70-afc7-ddda7df2ff65-kube-api-access-57mw7\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-px4d2\" (UID: \"e6d48255-8474-4c70-afc7-ddda7df2ff65\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-px4d2" Jan 28 19:10:24 crc kubenswrapper[4721]: I0128 19:10:24.230974 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e6d48255-8474-4c70-afc7-ddda7df2ff65-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-px4d2\" (UID: \"e6d48255-8474-4c70-afc7-ddda7df2ff65\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-px4d2" Jan 28 19:10:24 crc kubenswrapper[4721]: I0128 19:10:24.231002 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e6d48255-8474-4c70-afc7-ddda7df2ff65-repo-setup-combined-ca-bundle\") pod 
\"install-certs-edpm-deployment-openstack-edpm-ipam-px4d2\" (UID: \"e6d48255-8474-4c70-afc7-ddda7df2ff65\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-px4d2" Jan 28 19:10:24 crc kubenswrapper[4721]: I0128 19:10:24.231028 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e6d48255-8474-4c70-afc7-ddda7df2ff65-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-px4d2\" (UID: \"e6d48255-8474-4c70-afc7-ddda7df2ff65\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-px4d2" Jan 28 19:10:24 crc kubenswrapper[4721]: I0128 19:10:24.231218 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e6d48255-8474-4c70-afc7-ddda7df2ff65-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-px4d2\" (UID: \"e6d48255-8474-4c70-afc7-ddda7df2ff65\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-px4d2" Jan 28 19:10:24 crc kubenswrapper[4721]: I0128 19:10:24.231359 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/e6d48255-8474-4c70-afc7-ddda7df2ff65-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-px4d2\" (UID: \"e6d48255-8474-4c70-afc7-ddda7df2ff65\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-px4d2" Jan 28 19:10:24 crc kubenswrapper[4721]: I0128 19:10:24.231418 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e6d48255-8474-4c70-afc7-ddda7df2ff65-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-px4d2\" (UID: \"e6d48255-8474-4c70-afc7-ddda7df2ff65\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-px4d2" Jan 28 19:10:24 crc kubenswrapper[4721]: I0128 19:10:24.231458 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e6d48255-8474-4c70-afc7-ddda7df2ff65-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-px4d2\" (UID: \"e6d48255-8474-4c70-afc7-ddda7df2ff65\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-px4d2" Jan 28 19:10:24 crc kubenswrapper[4721]: I0128 19:10:24.231512 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/e6d48255-8474-4c70-afc7-ddda7df2ff65-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-px4d2\" (UID: \"e6d48255-8474-4c70-afc7-ddda7df2ff65\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-px4d2" Jan 28 19:10:24 crc kubenswrapper[4721]: I0128 19:10:24.231555 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/e6d48255-8474-4c70-afc7-ddda7df2ff65-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-px4d2\" (UID: 
\"e6d48255-8474-4c70-afc7-ddda7df2ff65\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-px4d2" Jan 28 19:10:24 crc kubenswrapper[4721]: I0128 19:10:24.231587 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e6d48255-8474-4c70-afc7-ddda7df2ff65-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-px4d2\" (UID: \"e6d48255-8474-4c70-afc7-ddda7df2ff65\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-px4d2" Jan 28 19:10:24 crc kubenswrapper[4721]: I0128 19:10:24.231625 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/e6d48255-8474-4c70-afc7-ddda7df2ff65-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-px4d2\" (UID: \"e6d48255-8474-4c70-afc7-ddda7df2ff65\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-px4d2" Jan 28 19:10:24 crc kubenswrapper[4721]: I0128 19:10:24.231732 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e6d48255-8474-4c70-afc7-ddda7df2ff65-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-px4d2\" (UID: \"e6d48255-8474-4c70-afc7-ddda7df2ff65\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-px4d2" Jan 28 19:10:24 crc kubenswrapper[4721]: I0128 19:10:24.333803 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e6d48255-8474-4c70-afc7-ddda7df2ff65-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-px4d2\" (UID: \"e6d48255-8474-4c70-afc7-ddda7df2ff65\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-px4d2" Jan 28 19:10:24 crc kubenswrapper[4721]: I0128 19:10:24.333855 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e6d48255-8474-4c70-afc7-ddda7df2ff65-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-px4d2\" (UID: \"e6d48255-8474-4c70-afc7-ddda7df2ff65\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-px4d2" Jan 28 19:10:24 crc kubenswrapper[4721]: I0128 19:10:24.333902 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e6d48255-8474-4c70-afc7-ddda7df2ff65-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-px4d2\" (UID: \"e6d48255-8474-4c70-afc7-ddda7df2ff65\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-px4d2" Jan 28 19:10:24 crc kubenswrapper[4721]: I0128 19:10:24.333987 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/e6d48255-8474-4c70-afc7-ddda7df2ff65-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-px4d2\" (UID: \"e6d48255-8474-4c70-afc7-ddda7df2ff65\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-px4d2" Jan 28 19:10:24 crc kubenswrapper[4721]: I0128 
19:10:24.334021 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e6d48255-8474-4c70-afc7-ddda7df2ff65-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-px4d2\" (UID: \"e6d48255-8474-4c70-afc7-ddda7df2ff65\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-px4d2" Jan 28 19:10:24 crc kubenswrapper[4721]: I0128 19:10:24.334042 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e6d48255-8474-4c70-afc7-ddda7df2ff65-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-px4d2\" (UID: \"e6d48255-8474-4c70-afc7-ddda7df2ff65\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-px4d2" Jan 28 19:10:24 crc kubenswrapper[4721]: I0128 19:10:24.334211 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/e6d48255-8474-4c70-afc7-ddda7df2ff65-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-px4d2\" (UID: \"e6d48255-8474-4c70-afc7-ddda7df2ff65\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-px4d2" Jan 28 19:10:24 crc kubenswrapper[4721]: I0128 19:10:24.334259 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/e6d48255-8474-4c70-afc7-ddda7df2ff65-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-px4d2\" (UID: \"e6d48255-8474-4c70-afc7-ddda7df2ff65\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-px4d2" Jan 28 19:10:24 crc kubenswrapper[4721]: I0128 19:10:24.334293 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e6d48255-8474-4c70-afc7-ddda7df2ff65-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-px4d2\" (UID: \"e6d48255-8474-4c70-afc7-ddda7df2ff65\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-px4d2" Jan 28 19:10:24 crc kubenswrapper[4721]: I0128 19:10:24.334326 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/e6d48255-8474-4c70-afc7-ddda7df2ff65-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-px4d2\" (UID: \"e6d48255-8474-4c70-afc7-ddda7df2ff65\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-px4d2" Jan 28 19:10:24 crc kubenswrapper[4721]: I0128 19:10:24.334352 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e6d48255-8474-4c70-afc7-ddda7df2ff65-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-px4d2\" (UID: \"e6d48255-8474-4c70-afc7-ddda7df2ff65\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-px4d2" Jan 28 19:10:24 crc kubenswrapper[4721]: I0128 19:10:24.334392 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/e6d48255-8474-4c70-afc7-ddda7df2ff65-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-px4d2\" (UID: \"e6d48255-8474-4c70-afc7-ddda7df2ff65\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-px4d2" Jan 28 19:10:24 crc kubenswrapper[4721]: I0128 19:10:24.334408 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-57mw7\" (UniqueName: \"kubernetes.io/projected/e6d48255-8474-4c70-afc7-ddda7df2ff65-kube-api-access-57mw7\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-px4d2\" (UID: \"e6d48255-8474-4c70-afc7-ddda7df2ff65\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-px4d2" Jan 28 19:10:24 crc kubenswrapper[4721]: I0128 19:10:24.334437 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e6d48255-8474-4c70-afc7-ddda7df2ff65-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-px4d2\" (UID: \"e6d48255-8474-4c70-afc7-ddda7df2ff65\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-px4d2" Jan 28 19:10:24 crc kubenswrapper[4721]: I0128 19:10:24.339425 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e6d48255-8474-4c70-afc7-ddda7df2ff65-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-px4d2\" (UID: \"e6d48255-8474-4c70-afc7-ddda7df2ff65\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-px4d2" Jan 28 19:10:24 crc kubenswrapper[4721]: I0128 19:10:24.339771 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e6d48255-8474-4c70-afc7-ddda7df2ff65-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-px4d2\" (UID: \"e6d48255-8474-4c70-afc7-ddda7df2ff65\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-px4d2" Jan 28 19:10:24 crc kubenswrapper[4721]: I0128 19:10:24.340085 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e6d48255-8474-4c70-afc7-ddda7df2ff65-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-px4d2\" (UID: \"e6d48255-8474-4c70-afc7-ddda7df2ff65\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-px4d2" Jan 28 19:10:24 crc kubenswrapper[4721]: I0128 19:10:24.340151 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/e6d48255-8474-4c70-afc7-ddda7df2ff65-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-px4d2\" (UID: \"e6d48255-8474-4c70-afc7-ddda7df2ff65\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-px4d2" Jan 28 19:10:24 crc kubenswrapper[4721]: I0128 19:10:24.340283 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e6d48255-8474-4c70-afc7-ddda7df2ff65-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-px4d2\" (UID: \"e6d48255-8474-4c70-afc7-ddda7df2ff65\") " 
pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-px4d2" Jan 28 19:10:24 crc kubenswrapper[4721]: I0128 19:10:24.340291 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/e6d48255-8474-4c70-afc7-ddda7df2ff65-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-px4d2\" (UID: \"e6d48255-8474-4c70-afc7-ddda7df2ff65\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-px4d2" Jan 28 19:10:24 crc kubenswrapper[4721]: I0128 19:10:24.341118 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e6d48255-8474-4c70-afc7-ddda7df2ff65-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-px4d2\" (UID: \"e6d48255-8474-4c70-afc7-ddda7df2ff65\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-px4d2" Jan 28 19:10:24 crc kubenswrapper[4721]: I0128 19:10:24.342210 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e6d48255-8474-4c70-afc7-ddda7df2ff65-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-px4d2\" (UID: \"e6d48255-8474-4c70-afc7-ddda7df2ff65\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-px4d2" Jan 28 19:10:24 crc kubenswrapper[4721]: I0128 19:10:24.342517 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e6d48255-8474-4c70-afc7-ddda7df2ff65-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-px4d2\" (UID: \"e6d48255-8474-4c70-afc7-ddda7df2ff65\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-px4d2" Jan 28 19:10:24 crc kubenswrapper[4721]: I0128 19:10:24.344009 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e6d48255-8474-4c70-afc7-ddda7df2ff65-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-px4d2\" (UID: \"e6d48255-8474-4c70-afc7-ddda7df2ff65\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-px4d2" Jan 28 19:10:24 crc kubenswrapper[4721]: I0128 19:10:24.344550 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e6d48255-8474-4c70-afc7-ddda7df2ff65-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-px4d2\" (UID: \"e6d48255-8474-4c70-afc7-ddda7df2ff65\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-px4d2" Jan 28 19:10:24 crc kubenswrapper[4721]: I0128 19:10:24.345214 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/e6d48255-8474-4c70-afc7-ddda7df2ff65-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-px4d2\" (UID: \"e6d48255-8474-4c70-afc7-ddda7df2ff65\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-px4d2" Jan 28 19:10:24 crc kubenswrapper[4721]: I0128 19:10:24.345292 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: 
\"kubernetes.io/projected/e6d48255-8474-4c70-afc7-ddda7df2ff65-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-px4d2\" (UID: \"e6d48255-8474-4c70-afc7-ddda7df2ff65\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-px4d2" Jan 28 19:10:24 crc kubenswrapper[4721]: I0128 19:10:24.352606 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-57mw7\" (UniqueName: \"kubernetes.io/projected/e6d48255-8474-4c70-afc7-ddda7df2ff65-kube-api-access-57mw7\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-px4d2\" (UID: \"e6d48255-8474-4c70-afc7-ddda7df2ff65\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-px4d2" Jan 28 19:10:24 crc kubenswrapper[4721]: I0128 19:10:24.476524 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-px4d2" Jan 28 19:10:25 crc kubenswrapper[4721]: I0128 19:10:25.023684 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-px4d2"] Jan 28 19:10:25 crc kubenswrapper[4721]: I0128 19:10:25.961395 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-px4d2" event={"ID":"e6d48255-8474-4c70-afc7-ddda7df2ff65","Type":"ContainerStarted","Data":"1658d1dce17e939e4ae78db5e67e2f84258e960e1c7e6ef9d74d52069e72425b"} Jan 28 19:10:25 crc kubenswrapper[4721]: I0128 19:10:25.962591 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-px4d2" event={"ID":"e6d48255-8474-4c70-afc7-ddda7df2ff65","Type":"ContainerStarted","Data":"089db0b433a9dd5abab6c9c4ec50dde5f83d2741ba6ed1cccefce19c834cc5b0"} Jan 28 19:10:25 crc kubenswrapper[4721]: I0128 19:10:25.989624 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-px4d2" podStartSLOduration=1.583337692 podStartE2EDuration="1.98959937s" podCreationTimestamp="2026-01-28 19:10:24 +0000 UTC" firstStartedPulling="2026-01-28 19:10:25.030856767 +0000 UTC m=+2190.756162327" lastFinishedPulling="2026-01-28 19:10:25.437118445 +0000 UTC m=+2191.162424005" observedRunningTime="2026-01-28 19:10:25.982017342 +0000 UTC m=+2191.707322922" watchObservedRunningTime="2026-01-28 19:10:25.98959937 +0000 UTC m=+2191.714904930" Jan 28 19:10:31 crc kubenswrapper[4721]: I0128 19:10:31.224613 4721 patch_prober.go:28] interesting pod/machine-config-daemon-76rx2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 19:10:31 crc kubenswrapper[4721]: I0128 19:10:31.225227 4721 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-76rx2" podUID="6e3427a4-9a03-4a08-bf7f-7a5e96290ad6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 19:10:31 crc kubenswrapper[4721]: I0128 19:10:31.225280 4721 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-76rx2" Jan 28 19:10:31 crc kubenswrapper[4721]: I0128 19:10:31.226135 4721 kuberuntime_manager.go:1027] "Message for 
Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"4ac60e4a43c972329d52f34a6184c3179be27b5f07c30d84c2acd45b4f1b5d47"} pod="openshift-machine-config-operator/machine-config-daemon-76rx2" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 28 19:10:31 crc kubenswrapper[4721]: I0128 19:10:31.226208 4721 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-76rx2" podUID="6e3427a4-9a03-4a08-bf7f-7a5e96290ad6" containerName="machine-config-daemon" containerID="cri-o://4ac60e4a43c972329d52f34a6184c3179be27b5f07c30d84c2acd45b4f1b5d47" gracePeriod=600 Jan 28 19:10:31 crc kubenswrapper[4721]: E0128 19:10:31.350675 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-76rx2_openshift-machine-config-operator(6e3427a4-9a03-4a08-bf7f-7a5e96290ad6)\"" pod="openshift-machine-config-operator/machine-config-daemon-76rx2" podUID="6e3427a4-9a03-4a08-bf7f-7a5e96290ad6" Jan 28 19:10:32 crc kubenswrapper[4721]: I0128 19:10:32.033996 4721 generic.go:334] "Generic (PLEG): container finished" podID="6e3427a4-9a03-4a08-bf7f-7a5e96290ad6" containerID="4ac60e4a43c972329d52f34a6184c3179be27b5f07c30d84c2acd45b4f1b5d47" exitCode=0 Jan 28 19:10:32 crc kubenswrapper[4721]: I0128 19:10:32.034055 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-76rx2" event={"ID":"6e3427a4-9a03-4a08-bf7f-7a5e96290ad6","Type":"ContainerDied","Data":"4ac60e4a43c972329d52f34a6184c3179be27b5f07c30d84c2acd45b4f1b5d47"} Jan 28 19:10:32 crc kubenswrapper[4721]: I0128 19:10:32.034110 4721 scope.go:117] "RemoveContainer" containerID="fae7b05413d2179da0c14f97f482c9d932655828a3eba9c206bbef238e41c9d7" Jan 28 19:10:32 crc kubenswrapper[4721]: I0128 19:10:32.034929 4721 scope.go:117] "RemoveContainer" containerID="4ac60e4a43c972329d52f34a6184c3179be27b5f07c30d84c2acd45b4f1b5d47" Jan 28 19:10:32 crc kubenswrapper[4721]: E0128 19:10:32.035478 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-76rx2_openshift-machine-config-operator(6e3427a4-9a03-4a08-bf7f-7a5e96290ad6)\"" pod="openshift-machine-config-operator/machine-config-daemon-76rx2" podUID="6e3427a4-9a03-4a08-bf7f-7a5e96290ad6" Jan 28 19:10:38 crc kubenswrapper[4721]: I0128 19:10:38.193406 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-qp944"] Jan 28 19:10:38 crc kubenswrapper[4721]: I0128 19:10:38.197632 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-qp944" Jan 28 19:10:38 crc kubenswrapper[4721]: I0128 19:10:38.225353 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-qp944"] Jan 28 19:10:38 crc kubenswrapper[4721]: I0128 19:10:38.276386 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e268af2f-7665-456e-8a5e-7214e83f4b4a-utilities\") pod \"certified-operators-qp944\" (UID: \"e268af2f-7665-456e-8a5e-7214e83f4b4a\") " pod="openshift-marketplace/certified-operators-qp944" Jan 28 19:10:38 crc kubenswrapper[4721]: I0128 19:10:38.276538 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jqmhm\" (UniqueName: \"kubernetes.io/projected/e268af2f-7665-456e-8a5e-7214e83f4b4a-kube-api-access-jqmhm\") pod \"certified-operators-qp944\" (UID: \"e268af2f-7665-456e-8a5e-7214e83f4b4a\") " pod="openshift-marketplace/certified-operators-qp944" Jan 28 19:10:38 crc kubenswrapper[4721]: I0128 19:10:38.276578 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e268af2f-7665-456e-8a5e-7214e83f4b4a-catalog-content\") pod \"certified-operators-qp944\" (UID: \"e268af2f-7665-456e-8a5e-7214e83f4b4a\") " pod="openshift-marketplace/certified-operators-qp944" Jan 28 19:10:38 crc kubenswrapper[4721]: I0128 19:10:38.378494 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e268af2f-7665-456e-8a5e-7214e83f4b4a-utilities\") pod \"certified-operators-qp944\" (UID: \"e268af2f-7665-456e-8a5e-7214e83f4b4a\") " pod="openshift-marketplace/certified-operators-qp944" Jan 28 19:10:38 crc kubenswrapper[4721]: I0128 19:10:38.378619 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jqmhm\" (UniqueName: \"kubernetes.io/projected/e268af2f-7665-456e-8a5e-7214e83f4b4a-kube-api-access-jqmhm\") pod \"certified-operators-qp944\" (UID: \"e268af2f-7665-456e-8a5e-7214e83f4b4a\") " pod="openshift-marketplace/certified-operators-qp944" Jan 28 19:10:38 crc kubenswrapper[4721]: I0128 19:10:38.378648 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e268af2f-7665-456e-8a5e-7214e83f4b4a-catalog-content\") pod \"certified-operators-qp944\" (UID: \"e268af2f-7665-456e-8a5e-7214e83f4b4a\") " pod="openshift-marketplace/certified-operators-qp944" Jan 28 19:10:38 crc kubenswrapper[4721]: I0128 19:10:38.379100 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e268af2f-7665-456e-8a5e-7214e83f4b4a-utilities\") pod \"certified-operators-qp944\" (UID: \"e268af2f-7665-456e-8a5e-7214e83f4b4a\") " pod="openshift-marketplace/certified-operators-qp944" Jan 28 19:10:38 crc kubenswrapper[4721]: I0128 19:10:38.379120 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e268af2f-7665-456e-8a5e-7214e83f4b4a-catalog-content\") pod \"certified-operators-qp944\" (UID: \"e268af2f-7665-456e-8a5e-7214e83f4b4a\") " pod="openshift-marketplace/certified-operators-qp944" Jan 28 19:10:38 crc kubenswrapper[4721]: I0128 19:10:38.397264 4721 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-jqmhm\" (UniqueName: \"kubernetes.io/projected/e268af2f-7665-456e-8a5e-7214e83f4b4a-kube-api-access-jqmhm\") pod \"certified-operators-qp944\" (UID: \"e268af2f-7665-456e-8a5e-7214e83f4b4a\") " pod="openshift-marketplace/certified-operators-qp944" Jan 28 19:10:38 crc kubenswrapper[4721]: I0128 19:10:38.527048 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-qp944" Jan 28 19:10:39 crc kubenswrapper[4721]: I0128 19:10:39.083707 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-qp944"] Jan 28 19:10:39 crc kubenswrapper[4721]: W0128 19:10:39.086968 4721 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode268af2f_7665_456e_8a5e_7214e83f4b4a.slice/crio-a01ced449e56fd24929c5e6fd604eb7cc4e277901280772e5cbd740c445fcdd8 WatchSource:0}: Error finding container a01ced449e56fd24929c5e6fd604eb7cc4e277901280772e5cbd740c445fcdd8: Status 404 returned error can't find the container with id a01ced449e56fd24929c5e6fd604eb7cc4e277901280772e5cbd740c445fcdd8 Jan 28 19:10:39 crc kubenswrapper[4721]: I0128 19:10:39.106755 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qp944" event={"ID":"e268af2f-7665-456e-8a5e-7214e83f4b4a","Type":"ContainerStarted","Data":"a01ced449e56fd24929c5e6fd604eb7cc4e277901280772e5cbd740c445fcdd8"} Jan 28 19:10:40 crc kubenswrapper[4721]: I0128 19:10:40.121581 4721 generic.go:334] "Generic (PLEG): container finished" podID="e268af2f-7665-456e-8a5e-7214e83f4b4a" containerID="9f36a41e67d37975dace5461ff7d5b0291eb712211a2ec180077c461022ead55" exitCode=0 Jan 28 19:10:40 crc kubenswrapper[4721]: I0128 19:10:40.121890 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qp944" event={"ID":"e268af2f-7665-456e-8a5e-7214e83f4b4a","Type":"ContainerDied","Data":"9f36a41e67d37975dace5461ff7d5b0291eb712211a2ec180077c461022ead55"} Jan 28 19:10:41 crc kubenswrapper[4721]: I0128 19:10:41.134772 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qp944" event={"ID":"e268af2f-7665-456e-8a5e-7214e83f4b4a","Type":"ContainerStarted","Data":"bd1c73a4ab2b522b88928f95c87981e6a4a64e9aa6ee8075441b76d3b4e0bde9"} Jan 28 19:10:42 crc kubenswrapper[4721]: I0128 19:10:42.146894 4721 generic.go:334] "Generic (PLEG): container finished" podID="e268af2f-7665-456e-8a5e-7214e83f4b4a" containerID="bd1c73a4ab2b522b88928f95c87981e6a4a64e9aa6ee8075441b76d3b4e0bde9" exitCode=0 Jan 28 19:10:42 crc kubenswrapper[4721]: I0128 19:10:42.146998 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qp944" event={"ID":"e268af2f-7665-456e-8a5e-7214e83f4b4a","Type":"ContainerDied","Data":"bd1c73a4ab2b522b88928f95c87981e6a4a64e9aa6ee8075441b76d3b4e0bde9"} Jan 28 19:10:43 crc kubenswrapper[4721]: I0128 19:10:43.162287 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qp944" event={"ID":"e268af2f-7665-456e-8a5e-7214e83f4b4a","Type":"ContainerStarted","Data":"8d8c27d92c4ee20caddfc704d9892cb1da561382c730560e3c061d2f772b5ad8"} Jan 28 19:10:43 crc kubenswrapper[4721]: I0128 19:10:43.184960 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-qp944" 
podStartSLOduration=2.543158162 podStartE2EDuration="5.184935601s" podCreationTimestamp="2026-01-28 19:10:38 +0000 UTC" firstStartedPulling="2026-01-28 19:10:40.126472638 +0000 UTC m=+2205.851778198" lastFinishedPulling="2026-01-28 19:10:42.768250077 +0000 UTC m=+2208.493555637" observedRunningTime="2026-01-28 19:10:43.182108803 +0000 UTC m=+2208.907414383" watchObservedRunningTime="2026-01-28 19:10:43.184935601 +0000 UTC m=+2208.910241171" Jan 28 19:10:44 crc kubenswrapper[4721]: I0128 19:10:44.528757 4721 scope.go:117] "RemoveContainer" containerID="4ac60e4a43c972329d52f34a6184c3179be27b5f07c30d84c2acd45b4f1b5d47" Jan 28 19:10:44 crc kubenswrapper[4721]: E0128 19:10:44.529445 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-76rx2_openshift-machine-config-operator(6e3427a4-9a03-4a08-bf7f-7a5e96290ad6)\"" pod="openshift-machine-config-operator/machine-config-daemon-76rx2" podUID="6e3427a4-9a03-4a08-bf7f-7a5e96290ad6" Jan 28 19:10:48 crc kubenswrapper[4721]: I0128 19:10:48.527214 4721 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-qp944" Jan 28 19:10:48 crc kubenswrapper[4721]: I0128 19:10:48.527837 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-qp944" Jan 28 19:10:48 crc kubenswrapper[4721]: I0128 19:10:48.576189 4721 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-qp944" Jan 28 19:10:49 crc kubenswrapper[4721]: I0128 19:10:49.289555 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-qp944" Jan 28 19:10:49 crc kubenswrapper[4721]: I0128 19:10:49.347131 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-qp944"] Jan 28 19:10:51 crc kubenswrapper[4721]: I0128 19:10:51.250823 4721 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-qp944" podUID="e268af2f-7665-456e-8a5e-7214e83f4b4a" containerName="registry-server" containerID="cri-o://8d8c27d92c4ee20caddfc704d9892cb1da561382c730560e3c061d2f772b5ad8" gracePeriod=2 Jan 28 19:10:52 crc kubenswrapper[4721]: I0128 19:10:52.226543 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-cf6sb"] Jan 28 19:10:52 crc kubenswrapper[4721]: I0128 19:10:52.251357 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-cf6sb" Jan 28 19:10:52 crc kubenswrapper[4721]: I0128 19:10:52.261107 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-cf6sb"] Jan 28 19:10:52 crc kubenswrapper[4721]: I0128 19:10:52.310960 4721 generic.go:334] "Generic (PLEG): container finished" podID="e268af2f-7665-456e-8a5e-7214e83f4b4a" containerID="8d8c27d92c4ee20caddfc704d9892cb1da561382c730560e3c061d2f772b5ad8" exitCode=0 Jan 28 19:10:52 crc kubenswrapper[4721]: I0128 19:10:52.311016 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qp944" event={"ID":"e268af2f-7665-456e-8a5e-7214e83f4b4a","Type":"ContainerDied","Data":"8d8c27d92c4ee20caddfc704d9892cb1da561382c730560e3c061d2f772b5ad8"} Jan 28 19:10:52 crc kubenswrapper[4721]: I0128 19:10:52.326897 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2bj5h\" (UniqueName: \"kubernetes.io/projected/8f7b0dd7-af22-4fdf-a712-e0e4a7541ef6-kube-api-access-2bj5h\") pod \"redhat-marketplace-cf6sb\" (UID: \"8f7b0dd7-af22-4fdf-a712-e0e4a7541ef6\") " pod="openshift-marketplace/redhat-marketplace-cf6sb" Jan 28 19:10:52 crc kubenswrapper[4721]: I0128 19:10:52.326984 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8f7b0dd7-af22-4fdf-a712-e0e4a7541ef6-catalog-content\") pod \"redhat-marketplace-cf6sb\" (UID: \"8f7b0dd7-af22-4fdf-a712-e0e4a7541ef6\") " pod="openshift-marketplace/redhat-marketplace-cf6sb" Jan 28 19:10:52 crc kubenswrapper[4721]: I0128 19:10:52.327032 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8f7b0dd7-af22-4fdf-a712-e0e4a7541ef6-utilities\") pod \"redhat-marketplace-cf6sb\" (UID: \"8f7b0dd7-af22-4fdf-a712-e0e4a7541ef6\") " pod="openshift-marketplace/redhat-marketplace-cf6sb" Jan 28 19:10:52 crc kubenswrapper[4721]: I0128 19:10:52.429232 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2bj5h\" (UniqueName: \"kubernetes.io/projected/8f7b0dd7-af22-4fdf-a712-e0e4a7541ef6-kube-api-access-2bj5h\") pod \"redhat-marketplace-cf6sb\" (UID: \"8f7b0dd7-af22-4fdf-a712-e0e4a7541ef6\") " pod="openshift-marketplace/redhat-marketplace-cf6sb" Jan 28 19:10:52 crc kubenswrapper[4721]: I0128 19:10:52.429369 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8f7b0dd7-af22-4fdf-a712-e0e4a7541ef6-catalog-content\") pod \"redhat-marketplace-cf6sb\" (UID: \"8f7b0dd7-af22-4fdf-a712-e0e4a7541ef6\") " pod="openshift-marketplace/redhat-marketplace-cf6sb" Jan 28 19:10:52 crc kubenswrapper[4721]: I0128 19:10:52.429436 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8f7b0dd7-af22-4fdf-a712-e0e4a7541ef6-utilities\") pod \"redhat-marketplace-cf6sb\" (UID: \"8f7b0dd7-af22-4fdf-a712-e0e4a7541ef6\") " pod="openshift-marketplace/redhat-marketplace-cf6sb" Jan 28 19:10:52 crc kubenswrapper[4721]: I0128 19:10:52.429873 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8f7b0dd7-af22-4fdf-a712-e0e4a7541ef6-catalog-content\") pod \"redhat-marketplace-cf6sb\" (UID: 
\"8f7b0dd7-af22-4fdf-a712-e0e4a7541ef6\") " pod="openshift-marketplace/redhat-marketplace-cf6sb" Jan 28 19:10:52 crc kubenswrapper[4721]: I0128 19:10:52.430030 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8f7b0dd7-af22-4fdf-a712-e0e4a7541ef6-utilities\") pod \"redhat-marketplace-cf6sb\" (UID: \"8f7b0dd7-af22-4fdf-a712-e0e4a7541ef6\") " pod="openshift-marketplace/redhat-marketplace-cf6sb" Jan 28 19:10:52 crc kubenswrapper[4721]: I0128 19:10:52.453009 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2bj5h\" (UniqueName: \"kubernetes.io/projected/8f7b0dd7-af22-4fdf-a712-e0e4a7541ef6-kube-api-access-2bj5h\") pod \"redhat-marketplace-cf6sb\" (UID: \"8f7b0dd7-af22-4fdf-a712-e0e4a7541ef6\") " pod="openshift-marketplace/redhat-marketplace-cf6sb" Jan 28 19:10:52 crc kubenswrapper[4721]: I0128 19:10:52.603403 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-cf6sb" Jan 28 19:10:53 crc kubenswrapper[4721]: I0128 19:10:53.184809 4721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-qp944" Jan 28 19:10:53 crc kubenswrapper[4721]: I0128 19:10:53.193863 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-cf6sb"] Jan 28 19:10:53 crc kubenswrapper[4721]: I0128 19:10:53.252948 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e268af2f-7665-456e-8a5e-7214e83f4b4a-catalog-content\") pod \"e268af2f-7665-456e-8a5e-7214e83f4b4a\" (UID: \"e268af2f-7665-456e-8a5e-7214e83f4b4a\") " Jan 28 19:10:53 crc kubenswrapper[4721]: I0128 19:10:53.253111 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jqmhm\" (UniqueName: \"kubernetes.io/projected/e268af2f-7665-456e-8a5e-7214e83f4b4a-kube-api-access-jqmhm\") pod \"e268af2f-7665-456e-8a5e-7214e83f4b4a\" (UID: \"e268af2f-7665-456e-8a5e-7214e83f4b4a\") " Jan 28 19:10:53 crc kubenswrapper[4721]: I0128 19:10:53.253211 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e268af2f-7665-456e-8a5e-7214e83f4b4a-utilities\") pod \"e268af2f-7665-456e-8a5e-7214e83f4b4a\" (UID: \"e268af2f-7665-456e-8a5e-7214e83f4b4a\") " Jan 28 19:10:53 crc kubenswrapper[4721]: I0128 19:10:53.255893 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e268af2f-7665-456e-8a5e-7214e83f4b4a-utilities" (OuterVolumeSpecName: "utilities") pod "e268af2f-7665-456e-8a5e-7214e83f4b4a" (UID: "e268af2f-7665-456e-8a5e-7214e83f4b4a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 19:10:53 crc kubenswrapper[4721]: I0128 19:10:53.259447 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e268af2f-7665-456e-8a5e-7214e83f4b4a-kube-api-access-jqmhm" (OuterVolumeSpecName: "kube-api-access-jqmhm") pod "e268af2f-7665-456e-8a5e-7214e83f4b4a" (UID: "e268af2f-7665-456e-8a5e-7214e83f4b4a"). InnerVolumeSpecName "kube-api-access-jqmhm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 19:10:53 crc kubenswrapper[4721]: I0128 19:10:53.305734 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e268af2f-7665-456e-8a5e-7214e83f4b4a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e268af2f-7665-456e-8a5e-7214e83f4b4a" (UID: "e268af2f-7665-456e-8a5e-7214e83f4b4a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 19:10:53 crc kubenswrapper[4721]: I0128 19:10:53.321098 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cf6sb" event={"ID":"8f7b0dd7-af22-4fdf-a712-e0e4a7541ef6","Type":"ContainerStarted","Data":"aabafe40b74f59c34a00217484127ebe47ea498c350aa1f84882cf156e47a14c"} Jan 28 19:10:53 crc kubenswrapper[4721]: I0128 19:10:53.323289 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qp944" event={"ID":"e268af2f-7665-456e-8a5e-7214e83f4b4a","Type":"ContainerDied","Data":"a01ced449e56fd24929c5e6fd604eb7cc4e277901280772e5cbd740c445fcdd8"} Jan 28 19:10:53 crc kubenswrapper[4721]: I0128 19:10:53.323345 4721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-qp944" Jan 28 19:10:53 crc kubenswrapper[4721]: I0128 19:10:53.323367 4721 scope.go:117] "RemoveContainer" containerID="8d8c27d92c4ee20caddfc704d9892cb1da561382c730560e3c061d2f772b5ad8" Jan 28 19:10:53 crc kubenswrapper[4721]: I0128 19:10:53.347472 4721 scope.go:117] "RemoveContainer" containerID="bd1c73a4ab2b522b88928f95c87981e6a4a64e9aa6ee8075441b76d3b4e0bde9" Jan 28 19:10:53 crc kubenswrapper[4721]: I0128 19:10:53.356791 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jqmhm\" (UniqueName: \"kubernetes.io/projected/e268af2f-7665-456e-8a5e-7214e83f4b4a-kube-api-access-jqmhm\") on node \"crc\" DevicePath \"\"" Jan 28 19:10:53 crc kubenswrapper[4721]: I0128 19:10:53.356857 4721 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e268af2f-7665-456e-8a5e-7214e83f4b4a-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 19:10:53 crc kubenswrapper[4721]: I0128 19:10:53.356869 4721 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e268af2f-7665-456e-8a5e-7214e83f4b4a-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 19:10:53 crc kubenswrapper[4721]: I0128 19:10:53.361507 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-qp944"] Jan 28 19:10:53 crc kubenswrapper[4721]: I0128 19:10:53.372457 4721 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-qp944"] Jan 28 19:10:53 crc kubenswrapper[4721]: I0128 19:10:53.384193 4721 scope.go:117] "RemoveContainer" containerID="9f36a41e67d37975dace5461ff7d5b0291eb712211a2ec180077c461022ead55" Jan 28 19:10:53 crc kubenswrapper[4721]: I0128 19:10:53.542874 4721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e268af2f-7665-456e-8a5e-7214e83f4b4a" path="/var/lib/kubelet/pods/e268af2f-7665-456e-8a5e-7214e83f4b4a/volumes" Jan 28 19:10:54 crc kubenswrapper[4721]: I0128 19:10:54.334464 4721 generic.go:334] "Generic (PLEG): container finished" podID="8f7b0dd7-af22-4fdf-a712-e0e4a7541ef6" containerID="62ff72694260663233d661c87d764e22b04d52cf167bc24352cfa8dde6666d62" exitCode=0 Jan 28 
19:10:54 crc kubenswrapper[4721]: I0128 19:10:54.334596 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cf6sb" event={"ID":"8f7b0dd7-af22-4fdf-a712-e0e4a7541ef6","Type":"ContainerDied","Data":"62ff72694260663233d661c87d764e22b04d52cf167bc24352cfa8dde6666d62"} Jan 28 19:10:55 crc kubenswrapper[4721]: I0128 19:10:55.348898 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cf6sb" event={"ID":"8f7b0dd7-af22-4fdf-a712-e0e4a7541ef6","Type":"ContainerStarted","Data":"289415925e84720be8b7a579e33aa07453addd46d90f08e21245a97919af7895"} Jan 28 19:10:56 crc kubenswrapper[4721]: I0128 19:10:56.362578 4721 generic.go:334] "Generic (PLEG): container finished" podID="8f7b0dd7-af22-4fdf-a712-e0e4a7541ef6" containerID="289415925e84720be8b7a579e33aa07453addd46d90f08e21245a97919af7895" exitCode=0 Jan 28 19:10:56 crc kubenswrapper[4721]: I0128 19:10:56.362689 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cf6sb" event={"ID":"8f7b0dd7-af22-4fdf-a712-e0e4a7541ef6","Type":"ContainerDied","Data":"289415925e84720be8b7a579e33aa07453addd46d90f08e21245a97919af7895"} Jan 28 19:10:57 crc kubenswrapper[4721]: I0128 19:10:57.376636 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cf6sb" event={"ID":"8f7b0dd7-af22-4fdf-a712-e0e4a7541ef6","Type":"ContainerStarted","Data":"3d9214b942a56b48d1fbc750fd46e9857a49cafe83e67797cf572742e346040f"} Jan 28 19:10:57 crc kubenswrapper[4721]: I0128 19:10:57.398267 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-cf6sb" podStartSLOduration=2.9395487559999998 podStartE2EDuration="5.39824179s" podCreationTimestamp="2026-01-28 19:10:52 +0000 UTC" firstStartedPulling="2026-01-28 19:10:54.337437562 +0000 UTC m=+2220.062743122" lastFinishedPulling="2026-01-28 19:10:56.796130596 +0000 UTC m=+2222.521436156" observedRunningTime="2026-01-28 19:10:57.39665532 +0000 UTC m=+2223.121960890" watchObservedRunningTime="2026-01-28 19:10:57.39824179 +0000 UTC m=+2223.123547350" Jan 28 19:10:59 crc kubenswrapper[4721]: I0128 19:10:59.529307 4721 scope.go:117] "RemoveContainer" containerID="4ac60e4a43c972329d52f34a6184c3179be27b5f07c30d84c2acd45b4f1b5d47" Jan 28 19:10:59 crc kubenswrapper[4721]: E0128 19:10:59.529910 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-76rx2_openshift-machine-config-operator(6e3427a4-9a03-4a08-bf7f-7a5e96290ad6)\"" pod="openshift-machine-config-operator/machine-config-daemon-76rx2" podUID="6e3427a4-9a03-4a08-bf7f-7a5e96290ad6" Jan 28 19:11:00 crc kubenswrapper[4721]: I0128 19:11:00.409872 4721 generic.go:334] "Generic (PLEG): container finished" podID="e6d48255-8474-4c70-afc7-ddda7df2ff65" containerID="1658d1dce17e939e4ae78db5e67e2f84258e960e1c7e6ef9d74d52069e72425b" exitCode=0 Jan 28 19:11:00 crc kubenswrapper[4721]: I0128 19:11:00.409924 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-px4d2" event={"ID":"e6d48255-8474-4c70-afc7-ddda7df2ff65","Type":"ContainerDied","Data":"1658d1dce17e939e4ae78db5e67e2f84258e960e1c7e6ef9d74d52069e72425b"} Jan 28 19:11:02 crc kubenswrapper[4721]: I0128 19:11:02.015736 4721 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-px4d2" Jan 28 19:11:02 crc kubenswrapper[4721]: I0128 19:11:02.178199 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e6d48255-8474-4c70-afc7-ddda7df2ff65-bootstrap-combined-ca-bundle\") pod \"e6d48255-8474-4c70-afc7-ddda7df2ff65\" (UID: \"e6d48255-8474-4c70-afc7-ddda7df2ff65\") " Jan 28 19:11:02 crc kubenswrapper[4721]: I0128 19:11:02.178561 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e6d48255-8474-4c70-afc7-ddda7df2ff65-neutron-metadata-combined-ca-bundle\") pod \"e6d48255-8474-4c70-afc7-ddda7df2ff65\" (UID: \"e6d48255-8474-4c70-afc7-ddda7df2ff65\") " Jan 28 19:11:02 crc kubenswrapper[4721]: I0128 19:11:02.178784 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e6d48255-8474-4c70-afc7-ddda7df2ff65-libvirt-combined-ca-bundle\") pod \"e6d48255-8474-4c70-afc7-ddda7df2ff65\" (UID: \"e6d48255-8474-4c70-afc7-ddda7df2ff65\") " Jan 28 19:11:02 crc kubenswrapper[4721]: I0128 19:11:02.178912 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/e6d48255-8474-4c70-afc7-ddda7df2ff65-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"e6d48255-8474-4c70-afc7-ddda7df2ff65\" (UID: \"e6d48255-8474-4c70-afc7-ddda7df2ff65\") " Jan 28 19:11:02 crc kubenswrapper[4721]: I0128 19:11:02.179068 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e6d48255-8474-4c70-afc7-ddda7df2ff65-nova-combined-ca-bundle\") pod \"e6d48255-8474-4c70-afc7-ddda7df2ff65\" (UID: \"e6d48255-8474-4c70-afc7-ddda7df2ff65\") " Jan 28 19:11:02 crc kubenswrapper[4721]: I0128 19:11:02.179833 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e6d48255-8474-4c70-afc7-ddda7df2ff65-inventory\") pod \"e6d48255-8474-4c70-afc7-ddda7df2ff65\" (UID: \"e6d48255-8474-4c70-afc7-ddda7df2ff65\") " Jan 28 19:11:02 crc kubenswrapper[4721]: I0128 19:11:02.180102 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-57mw7\" (UniqueName: \"kubernetes.io/projected/e6d48255-8474-4c70-afc7-ddda7df2ff65-kube-api-access-57mw7\") pod \"e6d48255-8474-4c70-afc7-ddda7df2ff65\" (UID: \"e6d48255-8474-4c70-afc7-ddda7df2ff65\") " Jan 28 19:11:02 crc kubenswrapper[4721]: I0128 19:11:02.180222 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/e6d48255-8474-4c70-afc7-ddda7df2ff65-openstack-edpm-ipam-ovn-default-certs-0\") pod \"e6d48255-8474-4c70-afc7-ddda7df2ff65\" (UID: \"e6d48255-8474-4c70-afc7-ddda7df2ff65\") " Jan 28 19:11:02 crc kubenswrapper[4721]: I0128 19:11:02.180328 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/e6d48255-8474-4c70-afc7-ddda7df2ff65-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"e6d48255-8474-4c70-afc7-ddda7df2ff65\" (UID: 
\"e6d48255-8474-4c70-afc7-ddda7df2ff65\") " Jan 28 19:11:02 crc kubenswrapper[4721]: I0128 19:11:02.180536 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e6d48255-8474-4c70-afc7-ddda7df2ff65-telemetry-combined-ca-bundle\") pod \"e6d48255-8474-4c70-afc7-ddda7df2ff65\" (UID: \"e6d48255-8474-4c70-afc7-ddda7df2ff65\") " Jan 28 19:11:02 crc kubenswrapper[4721]: I0128 19:11:02.180705 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/e6d48255-8474-4c70-afc7-ddda7df2ff65-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"e6d48255-8474-4c70-afc7-ddda7df2ff65\" (UID: \"e6d48255-8474-4c70-afc7-ddda7df2ff65\") " Jan 28 19:11:02 crc kubenswrapper[4721]: I0128 19:11:02.180830 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e6d48255-8474-4c70-afc7-ddda7df2ff65-repo-setup-combined-ca-bundle\") pod \"e6d48255-8474-4c70-afc7-ddda7df2ff65\" (UID: \"e6d48255-8474-4c70-afc7-ddda7df2ff65\") " Jan 28 19:11:02 crc kubenswrapper[4721]: I0128 19:11:02.180973 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e6d48255-8474-4c70-afc7-ddda7df2ff65-ovn-combined-ca-bundle\") pod \"e6d48255-8474-4c70-afc7-ddda7df2ff65\" (UID: \"e6d48255-8474-4c70-afc7-ddda7df2ff65\") " Jan 28 19:11:02 crc kubenswrapper[4721]: I0128 19:11:02.181071 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e6d48255-8474-4c70-afc7-ddda7df2ff65-ssh-key-openstack-edpm-ipam\") pod \"e6d48255-8474-4c70-afc7-ddda7df2ff65\" (UID: \"e6d48255-8474-4c70-afc7-ddda7df2ff65\") " Jan 28 19:11:02 crc kubenswrapper[4721]: I0128 19:11:02.186731 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e6d48255-8474-4c70-afc7-ddda7df2ff65-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "e6d48255-8474-4c70-afc7-ddda7df2ff65" (UID: "e6d48255-8474-4c70-afc7-ddda7df2ff65"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 19:11:02 crc kubenswrapper[4721]: I0128 19:11:02.187305 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e6d48255-8474-4c70-afc7-ddda7df2ff65-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "e6d48255-8474-4c70-afc7-ddda7df2ff65" (UID: "e6d48255-8474-4c70-afc7-ddda7df2ff65"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 19:11:02 crc kubenswrapper[4721]: I0128 19:11:02.188510 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e6d48255-8474-4c70-afc7-ddda7df2ff65-openstack-edpm-ipam-libvirt-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-libvirt-default-certs-0") pod "e6d48255-8474-4c70-afc7-ddda7df2ff65" (UID: "e6d48255-8474-4c70-afc7-ddda7df2ff65"). InnerVolumeSpecName "openstack-edpm-ipam-libvirt-default-certs-0". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 19:11:02 crc kubenswrapper[4721]: I0128 19:11:02.188554 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e6d48255-8474-4c70-afc7-ddda7df2ff65-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "e6d48255-8474-4c70-afc7-ddda7df2ff65" (UID: "e6d48255-8474-4c70-afc7-ddda7df2ff65"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 19:11:02 crc kubenswrapper[4721]: I0128 19:11:02.188807 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e6d48255-8474-4c70-afc7-ddda7df2ff65-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "e6d48255-8474-4c70-afc7-ddda7df2ff65" (UID: "e6d48255-8474-4c70-afc7-ddda7df2ff65"). InnerVolumeSpecName "telemetry-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 19:11:02 crc kubenswrapper[4721]: I0128 19:11:02.189027 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e6d48255-8474-4c70-afc7-ddda7df2ff65-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "e6d48255-8474-4c70-afc7-ddda7df2ff65" (UID: "e6d48255-8474-4c70-afc7-ddda7df2ff65"). InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 19:11:02 crc kubenswrapper[4721]: I0128 19:11:02.189207 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e6d48255-8474-4c70-afc7-ddda7df2ff65-kube-api-access-57mw7" (OuterVolumeSpecName: "kube-api-access-57mw7") pod "e6d48255-8474-4c70-afc7-ddda7df2ff65" (UID: "e6d48255-8474-4c70-afc7-ddda7df2ff65"). InnerVolumeSpecName "kube-api-access-57mw7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 19:11:02 crc kubenswrapper[4721]: I0128 19:11:02.190436 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e6d48255-8474-4c70-afc7-ddda7df2ff65-openstack-edpm-ipam-neutron-metadata-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-neutron-metadata-default-certs-0") pod "e6d48255-8474-4c70-afc7-ddda7df2ff65" (UID: "e6d48255-8474-4c70-afc7-ddda7df2ff65"). InnerVolumeSpecName "openstack-edpm-ipam-neutron-metadata-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 19:11:02 crc kubenswrapper[4721]: I0128 19:11:02.190810 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e6d48255-8474-4c70-afc7-ddda7df2ff65-openstack-edpm-ipam-telemetry-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-telemetry-default-certs-0") pod "e6d48255-8474-4c70-afc7-ddda7df2ff65" (UID: "e6d48255-8474-4c70-afc7-ddda7df2ff65"). InnerVolumeSpecName "openstack-edpm-ipam-telemetry-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 19:11:02 crc kubenswrapper[4721]: I0128 19:11:02.191966 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e6d48255-8474-4c70-afc7-ddda7df2ff65-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "e6d48255-8474-4c70-afc7-ddda7df2ff65" (UID: "e6d48255-8474-4c70-afc7-ddda7df2ff65"). InnerVolumeSpecName "ovn-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 19:11:02 crc kubenswrapper[4721]: I0128 19:11:02.193412 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e6d48255-8474-4c70-afc7-ddda7df2ff65-openstack-edpm-ipam-ovn-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-ovn-default-certs-0") pod "e6d48255-8474-4c70-afc7-ddda7df2ff65" (UID: "e6d48255-8474-4c70-afc7-ddda7df2ff65"). InnerVolumeSpecName "openstack-edpm-ipam-ovn-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 19:11:02 crc kubenswrapper[4721]: I0128 19:11:02.197893 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e6d48255-8474-4c70-afc7-ddda7df2ff65-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "e6d48255-8474-4c70-afc7-ddda7df2ff65" (UID: "e6d48255-8474-4c70-afc7-ddda7df2ff65"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 19:11:02 crc kubenswrapper[4721]: E0128 19:11:02.212548 4721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e6d48255-8474-4c70-afc7-ddda7df2ff65-inventory podName:e6d48255-8474-4c70-afc7-ddda7df2ff65 nodeName:}" failed. No retries permitted until 2026-01-28 19:11:02.712512897 +0000 UTC m=+2228.437818457 (durationBeforeRetry 500ms). Error: error cleaning subPath mounts for volume "inventory" (UniqueName: "kubernetes.io/secret/e6d48255-8474-4c70-afc7-ddda7df2ff65-inventory") pod "e6d48255-8474-4c70-afc7-ddda7df2ff65" (UID: "e6d48255-8474-4c70-afc7-ddda7df2ff65") : error deleting /var/lib/kubelet/pods/e6d48255-8474-4c70-afc7-ddda7df2ff65/volume-subpaths: remove /var/lib/kubelet/pods/e6d48255-8474-4c70-afc7-ddda7df2ff65/volume-subpaths: no such file or directory Jan 28 19:11:02 crc kubenswrapper[4721]: I0128 19:11:02.214852 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e6d48255-8474-4c70-afc7-ddda7df2ff65-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "e6d48255-8474-4c70-afc7-ddda7df2ff65" (UID: "e6d48255-8474-4c70-afc7-ddda7df2ff65"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 19:11:02 crc kubenswrapper[4721]: I0128 19:11:02.283101 4721 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e6d48255-8474-4c70-afc7-ddda7df2ff65-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 19:11:02 crc kubenswrapper[4721]: I0128 19:11:02.283141 4721 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e6d48255-8474-4c70-afc7-ddda7df2ff65-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 19:11:02 crc kubenswrapper[4721]: I0128 19:11:02.283154 4721 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/e6d48255-8474-4c70-afc7-ddda7df2ff65-openstack-edpm-ipam-telemetry-default-certs-0\") on node \"crc\" DevicePath \"\"" Jan 28 19:11:02 crc kubenswrapper[4721]: I0128 19:11:02.283179 4721 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e6d48255-8474-4c70-afc7-ddda7df2ff65-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 19:11:02 crc kubenswrapper[4721]: I0128 19:11:02.283189 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-57mw7\" (UniqueName: \"kubernetes.io/projected/e6d48255-8474-4c70-afc7-ddda7df2ff65-kube-api-access-57mw7\") on node \"crc\" DevicePath \"\"" Jan 28 19:11:02 crc kubenswrapper[4721]: I0128 19:11:02.283200 4721 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/e6d48255-8474-4c70-afc7-ddda7df2ff65-openstack-edpm-ipam-ovn-default-certs-0\") on node \"crc\" DevicePath \"\"" Jan 28 19:11:02 crc kubenswrapper[4721]: I0128 19:11:02.283210 4721 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/e6d48255-8474-4c70-afc7-ddda7df2ff65-openstack-edpm-ipam-neutron-metadata-default-certs-0\") on node \"crc\" DevicePath \"\"" Jan 28 19:11:02 crc kubenswrapper[4721]: I0128 19:11:02.283224 4721 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e6d48255-8474-4c70-afc7-ddda7df2ff65-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 19:11:02 crc kubenswrapper[4721]: I0128 19:11:02.283234 4721 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/e6d48255-8474-4c70-afc7-ddda7df2ff65-openstack-edpm-ipam-libvirt-default-certs-0\") on node \"crc\" DevicePath \"\"" Jan 28 19:11:02 crc kubenswrapper[4721]: I0128 19:11:02.283244 4721 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e6d48255-8474-4c70-afc7-ddda7df2ff65-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 19:11:02 crc kubenswrapper[4721]: I0128 19:11:02.283254 4721 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e6d48255-8474-4c70-afc7-ddda7df2ff65-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 19:11:02 crc kubenswrapper[4721]: I0128 19:11:02.283263 4721 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" 
(UniqueName: \"kubernetes.io/secret/e6d48255-8474-4c70-afc7-ddda7df2ff65-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 28 19:11:02 crc kubenswrapper[4721]: I0128 19:11:02.283272 4721 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e6d48255-8474-4c70-afc7-ddda7df2ff65-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 19:11:02 crc kubenswrapper[4721]: I0128 19:11:02.431971 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-px4d2" event={"ID":"e6d48255-8474-4c70-afc7-ddda7df2ff65","Type":"ContainerDied","Data":"089db0b433a9dd5abab6c9c4ec50dde5f83d2741ba6ed1cccefce19c834cc5b0"} Jan 28 19:11:02 crc kubenswrapper[4721]: I0128 19:11:02.432013 4721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-px4d2" Jan 28 19:11:02 crc kubenswrapper[4721]: I0128 19:11:02.432020 4721 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="089db0b433a9dd5abab6c9c4ec50dde5f83d2741ba6ed1cccefce19c834cc5b0" Jan 28 19:11:02 crc kubenswrapper[4721]: I0128 19:11:02.544158 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-lrdl9"] Jan 28 19:11:02 crc kubenswrapper[4721]: E0128 19:11:02.544809 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e6d48255-8474-4c70-afc7-ddda7df2ff65" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Jan 28 19:11:02 crc kubenswrapper[4721]: I0128 19:11:02.544832 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="e6d48255-8474-4c70-afc7-ddda7df2ff65" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Jan 28 19:11:02 crc kubenswrapper[4721]: E0128 19:11:02.544850 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e268af2f-7665-456e-8a5e-7214e83f4b4a" containerName="extract-content" Jan 28 19:11:02 crc kubenswrapper[4721]: I0128 19:11:02.544858 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="e268af2f-7665-456e-8a5e-7214e83f4b4a" containerName="extract-content" Jan 28 19:11:02 crc kubenswrapper[4721]: E0128 19:11:02.544876 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e268af2f-7665-456e-8a5e-7214e83f4b4a" containerName="extract-utilities" Jan 28 19:11:02 crc kubenswrapper[4721]: I0128 19:11:02.544884 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="e268af2f-7665-456e-8a5e-7214e83f4b4a" containerName="extract-utilities" Jan 28 19:11:02 crc kubenswrapper[4721]: E0128 19:11:02.544924 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e268af2f-7665-456e-8a5e-7214e83f4b4a" containerName="registry-server" Jan 28 19:11:02 crc kubenswrapper[4721]: I0128 19:11:02.544935 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="e268af2f-7665-456e-8a5e-7214e83f4b4a" containerName="registry-server" Jan 28 19:11:02 crc kubenswrapper[4721]: I0128 19:11:02.545203 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="e268af2f-7665-456e-8a5e-7214e83f4b4a" containerName="registry-server" Jan 28 19:11:02 crc kubenswrapper[4721]: I0128 19:11:02.545251 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="e6d48255-8474-4c70-afc7-ddda7df2ff65" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Jan 28 19:11:02 crc kubenswrapper[4721]: I0128 19:11:02.546511 4721 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-lrdl9" Jan 28 19:11:02 crc kubenswrapper[4721]: I0128 19:11:02.548833 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-config" Jan 28 19:11:02 crc kubenswrapper[4721]: I0128 19:11:02.557739 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-lrdl9"] Jan 28 19:11:02 crc kubenswrapper[4721]: I0128 19:11:02.603693 4721 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-cf6sb" Jan 28 19:11:02 crc kubenswrapper[4721]: I0128 19:11:02.604090 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-cf6sb" Jan 28 19:11:02 crc kubenswrapper[4721]: I0128 19:11:02.660692 4721 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-cf6sb" Jan 28 19:11:02 crc kubenswrapper[4721]: I0128 19:11:02.717519 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/445fc577-89a5-4f74-b7a4-65979c88af6b-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-lrdl9\" (UID: \"445fc577-89a5-4f74-b7a4-65979c88af6b\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-lrdl9" Jan 28 19:11:02 crc kubenswrapper[4721]: I0128 19:11:02.719918 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qq6jg\" (UniqueName: \"kubernetes.io/projected/445fc577-89a5-4f74-b7a4-65979c88af6b-kube-api-access-qq6jg\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-lrdl9\" (UID: \"445fc577-89a5-4f74-b7a4-65979c88af6b\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-lrdl9" Jan 28 19:11:02 crc kubenswrapper[4721]: I0128 19:11:02.720217 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/445fc577-89a5-4f74-b7a4-65979c88af6b-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-lrdl9\" (UID: \"445fc577-89a5-4f74-b7a4-65979c88af6b\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-lrdl9" Jan 28 19:11:02 crc kubenswrapper[4721]: I0128 19:11:02.720326 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/445fc577-89a5-4f74-b7a4-65979c88af6b-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-lrdl9\" (UID: \"445fc577-89a5-4f74-b7a4-65979c88af6b\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-lrdl9" Jan 28 19:11:02 crc kubenswrapper[4721]: I0128 19:11:02.720694 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/445fc577-89a5-4f74-b7a4-65979c88af6b-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-lrdl9\" (UID: \"445fc577-89a5-4f74-b7a4-65979c88af6b\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-lrdl9" Jan 28 19:11:02 crc kubenswrapper[4721]: I0128 19:11:02.822554 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/e6d48255-8474-4c70-afc7-ddda7df2ff65-inventory\") pod \"e6d48255-8474-4c70-afc7-ddda7df2ff65\" (UID: \"e6d48255-8474-4c70-afc7-ddda7df2ff65\") " Jan 28 19:11:02 crc kubenswrapper[4721]: I0128 19:11:02.823789 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/445fc577-89a5-4f74-b7a4-65979c88af6b-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-lrdl9\" (UID: \"445fc577-89a5-4f74-b7a4-65979c88af6b\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-lrdl9" Jan 28 19:11:02 crc kubenswrapper[4721]: I0128 19:11:02.823845 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qq6jg\" (UniqueName: \"kubernetes.io/projected/445fc577-89a5-4f74-b7a4-65979c88af6b-kube-api-access-qq6jg\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-lrdl9\" (UID: \"445fc577-89a5-4f74-b7a4-65979c88af6b\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-lrdl9" Jan 28 19:11:02 crc kubenswrapper[4721]: I0128 19:11:02.823953 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/445fc577-89a5-4f74-b7a4-65979c88af6b-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-lrdl9\" (UID: \"445fc577-89a5-4f74-b7a4-65979c88af6b\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-lrdl9" Jan 28 19:11:02 crc kubenswrapper[4721]: I0128 19:11:02.824001 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/445fc577-89a5-4f74-b7a4-65979c88af6b-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-lrdl9\" (UID: \"445fc577-89a5-4f74-b7a4-65979c88af6b\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-lrdl9" Jan 28 19:11:02 crc kubenswrapper[4721]: I0128 19:11:02.824151 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/445fc577-89a5-4f74-b7a4-65979c88af6b-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-lrdl9\" (UID: \"445fc577-89a5-4f74-b7a4-65979c88af6b\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-lrdl9" Jan 28 19:11:02 crc kubenswrapper[4721]: I0128 19:11:02.826044 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/445fc577-89a5-4f74-b7a4-65979c88af6b-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-lrdl9\" (UID: \"445fc577-89a5-4f74-b7a4-65979c88af6b\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-lrdl9" Jan 28 19:11:02 crc kubenswrapper[4721]: I0128 19:11:02.826334 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e6d48255-8474-4c70-afc7-ddda7df2ff65-inventory" (OuterVolumeSpecName: "inventory") pod "e6d48255-8474-4c70-afc7-ddda7df2ff65" (UID: "e6d48255-8474-4c70-afc7-ddda7df2ff65"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 19:11:02 crc kubenswrapper[4721]: I0128 19:11:02.829534 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/445fc577-89a5-4f74-b7a4-65979c88af6b-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-lrdl9\" (UID: \"445fc577-89a5-4f74-b7a4-65979c88af6b\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-lrdl9" Jan 28 19:11:02 crc kubenswrapper[4721]: I0128 19:11:02.837397 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/445fc577-89a5-4f74-b7a4-65979c88af6b-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-lrdl9\" (UID: \"445fc577-89a5-4f74-b7a4-65979c88af6b\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-lrdl9" Jan 28 19:11:02 crc kubenswrapper[4721]: I0128 19:11:02.846101 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qq6jg\" (UniqueName: \"kubernetes.io/projected/445fc577-89a5-4f74-b7a4-65979c88af6b-kube-api-access-qq6jg\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-lrdl9\" (UID: \"445fc577-89a5-4f74-b7a4-65979c88af6b\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-lrdl9" Jan 28 19:11:02 crc kubenswrapper[4721]: I0128 19:11:02.847080 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/445fc577-89a5-4f74-b7a4-65979c88af6b-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-lrdl9\" (UID: \"445fc577-89a5-4f74-b7a4-65979c88af6b\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-lrdl9" Jan 28 19:11:02 crc kubenswrapper[4721]: I0128 19:11:02.879583 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-lrdl9" Jan 28 19:11:02 crc kubenswrapper[4721]: I0128 19:11:02.926724 4721 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e6d48255-8474-4c70-afc7-ddda7df2ff65-inventory\") on node \"crc\" DevicePath \"\"" Jan 28 19:11:03 crc kubenswrapper[4721]: I0128 19:11:03.397540 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-lrdl9"] Jan 28 19:11:03 crc kubenswrapper[4721]: I0128 19:11:03.443803 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-lrdl9" event={"ID":"445fc577-89a5-4f74-b7a4-65979c88af6b","Type":"ContainerStarted","Data":"566f71d74b1ee378db36563b70b135c4d21de5269e4c5010ce0eb78b89cf9009"} Jan 28 19:11:03 crc kubenswrapper[4721]: I0128 19:11:03.496496 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-cf6sb" Jan 28 19:11:03 crc kubenswrapper[4721]: I0128 19:11:03.553902 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-cf6sb"] Jan 28 19:11:04 crc kubenswrapper[4721]: I0128 19:11:04.457010 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-lrdl9" event={"ID":"445fc577-89a5-4f74-b7a4-65979c88af6b","Type":"ContainerStarted","Data":"3bd4e01a97724ae64a4278ccde4e111b1f8a61d8d854686a0a1f1bc8e808d06f"} Jan 28 19:11:04 crc kubenswrapper[4721]: I0128 19:11:04.491889 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-lrdl9" podStartSLOduration=2.076076997 podStartE2EDuration="2.491864225s" podCreationTimestamp="2026-01-28 19:11:02 +0000 UTC" firstStartedPulling="2026-01-28 19:11:03.409359533 +0000 UTC m=+2229.134665093" lastFinishedPulling="2026-01-28 19:11:03.825146751 +0000 UTC m=+2229.550452321" observedRunningTime="2026-01-28 19:11:04.479990542 +0000 UTC m=+2230.205296102" watchObservedRunningTime="2026-01-28 19:11:04.491864225 +0000 UTC m=+2230.217169785" Jan 28 19:11:05 crc kubenswrapper[4721]: I0128 19:11:05.466135 4721 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-cf6sb" podUID="8f7b0dd7-af22-4fdf-a712-e0e4a7541ef6" containerName="registry-server" containerID="cri-o://3d9214b942a56b48d1fbc750fd46e9857a49cafe83e67797cf572742e346040f" gracePeriod=2 Jan 28 19:11:06 crc kubenswrapper[4721]: I0128 19:11:06.028322 4721 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-cf6sb" Jan 28 19:11:06 crc kubenswrapper[4721]: I0128 19:11:06.201419 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2bj5h\" (UniqueName: \"kubernetes.io/projected/8f7b0dd7-af22-4fdf-a712-e0e4a7541ef6-kube-api-access-2bj5h\") pod \"8f7b0dd7-af22-4fdf-a712-e0e4a7541ef6\" (UID: \"8f7b0dd7-af22-4fdf-a712-e0e4a7541ef6\") " Jan 28 19:11:06 crc kubenswrapper[4721]: I0128 19:11:06.201781 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8f7b0dd7-af22-4fdf-a712-e0e4a7541ef6-catalog-content\") pod \"8f7b0dd7-af22-4fdf-a712-e0e4a7541ef6\" (UID: \"8f7b0dd7-af22-4fdf-a712-e0e4a7541ef6\") " Jan 28 19:11:06 crc kubenswrapper[4721]: I0128 19:11:06.201883 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8f7b0dd7-af22-4fdf-a712-e0e4a7541ef6-utilities\") pod \"8f7b0dd7-af22-4fdf-a712-e0e4a7541ef6\" (UID: \"8f7b0dd7-af22-4fdf-a712-e0e4a7541ef6\") " Jan 28 19:11:06 crc kubenswrapper[4721]: I0128 19:11:06.202945 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f7b0dd7-af22-4fdf-a712-e0e4a7541ef6-utilities" (OuterVolumeSpecName: "utilities") pod "8f7b0dd7-af22-4fdf-a712-e0e4a7541ef6" (UID: "8f7b0dd7-af22-4fdf-a712-e0e4a7541ef6"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 19:11:06 crc kubenswrapper[4721]: I0128 19:11:06.212790 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f7b0dd7-af22-4fdf-a712-e0e4a7541ef6-kube-api-access-2bj5h" (OuterVolumeSpecName: "kube-api-access-2bj5h") pod "8f7b0dd7-af22-4fdf-a712-e0e4a7541ef6" (UID: "8f7b0dd7-af22-4fdf-a712-e0e4a7541ef6"). InnerVolumeSpecName "kube-api-access-2bj5h". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 19:11:06 crc kubenswrapper[4721]: I0128 19:11:06.226681 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f7b0dd7-af22-4fdf-a712-e0e4a7541ef6-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8f7b0dd7-af22-4fdf-a712-e0e4a7541ef6" (UID: "8f7b0dd7-af22-4fdf-a712-e0e4a7541ef6"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 19:11:06 crc kubenswrapper[4721]: I0128 19:11:06.305343 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2bj5h\" (UniqueName: \"kubernetes.io/projected/8f7b0dd7-af22-4fdf-a712-e0e4a7541ef6-kube-api-access-2bj5h\") on node \"crc\" DevicePath \"\"" Jan 28 19:11:06 crc kubenswrapper[4721]: I0128 19:11:06.305736 4721 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8f7b0dd7-af22-4fdf-a712-e0e4a7541ef6-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 19:11:06 crc kubenswrapper[4721]: I0128 19:11:06.305750 4721 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8f7b0dd7-af22-4fdf-a712-e0e4a7541ef6-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 19:11:06 crc kubenswrapper[4721]: I0128 19:11:06.487611 4721 generic.go:334] "Generic (PLEG): container finished" podID="8f7b0dd7-af22-4fdf-a712-e0e4a7541ef6" containerID="3d9214b942a56b48d1fbc750fd46e9857a49cafe83e67797cf572742e346040f" exitCode=0 Jan 28 19:11:06 crc kubenswrapper[4721]: I0128 19:11:06.487674 4721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-cf6sb" Jan 28 19:11:06 crc kubenswrapper[4721]: I0128 19:11:06.487648 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cf6sb" event={"ID":"8f7b0dd7-af22-4fdf-a712-e0e4a7541ef6","Type":"ContainerDied","Data":"3d9214b942a56b48d1fbc750fd46e9857a49cafe83e67797cf572742e346040f"} Jan 28 19:11:06 crc kubenswrapper[4721]: I0128 19:11:06.489551 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cf6sb" event={"ID":"8f7b0dd7-af22-4fdf-a712-e0e4a7541ef6","Type":"ContainerDied","Data":"aabafe40b74f59c34a00217484127ebe47ea498c350aa1f84882cf156e47a14c"} Jan 28 19:11:06 crc kubenswrapper[4721]: I0128 19:11:06.489653 4721 scope.go:117] "RemoveContainer" containerID="3d9214b942a56b48d1fbc750fd46e9857a49cafe83e67797cf572742e346040f" Jan 28 19:11:06 crc kubenswrapper[4721]: I0128 19:11:06.515381 4721 scope.go:117] "RemoveContainer" containerID="289415925e84720be8b7a579e33aa07453addd46d90f08e21245a97919af7895" Jan 28 19:11:06 crc kubenswrapper[4721]: I0128 19:11:06.540148 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-cf6sb"] Jan 28 19:11:06 crc kubenswrapper[4721]: I0128 19:11:06.550951 4721 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-cf6sb"] Jan 28 19:11:06 crc kubenswrapper[4721]: I0128 19:11:06.568520 4721 scope.go:117] "RemoveContainer" containerID="62ff72694260663233d661c87d764e22b04d52cf167bc24352cfa8dde6666d62" Jan 28 19:11:06 crc kubenswrapper[4721]: I0128 19:11:06.606850 4721 scope.go:117] "RemoveContainer" containerID="3d9214b942a56b48d1fbc750fd46e9857a49cafe83e67797cf572742e346040f" Jan 28 19:11:06 crc kubenswrapper[4721]: E0128 19:11:06.607667 4721 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3d9214b942a56b48d1fbc750fd46e9857a49cafe83e67797cf572742e346040f\": container with ID starting with 3d9214b942a56b48d1fbc750fd46e9857a49cafe83e67797cf572742e346040f not found: ID does not exist" containerID="3d9214b942a56b48d1fbc750fd46e9857a49cafe83e67797cf572742e346040f" Jan 28 19:11:06 crc kubenswrapper[4721]: I0128 19:11:06.607702 4721 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3d9214b942a56b48d1fbc750fd46e9857a49cafe83e67797cf572742e346040f"} err="failed to get container status \"3d9214b942a56b48d1fbc750fd46e9857a49cafe83e67797cf572742e346040f\": rpc error: code = NotFound desc = could not find container \"3d9214b942a56b48d1fbc750fd46e9857a49cafe83e67797cf572742e346040f\": container with ID starting with 3d9214b942a56b48d1fbc750fd46e9857a49cafe83e67797cf572742e346040f not found: ID does not exist" Jan 28 19:11:06 crc kubenswrapper[4721]: I0128 19:11:06.607727 4721 scope.go:117] "RemoveContainer" containerID="289415925e84720be8b7a579e33aa07453addd46d90f08e21245a97919af7895" Jan 28 19:11:06 crc kubenswrapper[4721]: E0128 19:11:06.608298 4721 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"289415925e84720be8b7a579e33aa07453addd46d90f08e21245a97919af7895\": container with ID starting with 289415925e84720be8b7a579e33aa07453addd46d90f08e21245a97919af7895 not found: ID does not exist" containerID="289415925e84720be8b7a579e33aa07453addd46d90f08e21245a97919af7895" Jan 28 19:11:06 crc kubenswrapper[4721]: I0128 19:11:06.608324 4721 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"289415925e84720be8b7a579e33aa07453addd46d90f08e21245a97919af7895"} err="failed to get container status \"289415925e84720be8b7a579e33aa07453addd46d90f08e21245a97919af7895\": rpc error: code = NotFound desc = could not find container \"289415925e84720be8b7a579e33aa07453addd46d90f08e21245a97919af7895\": container with ID starting with 289415925e84720be8b7a579e33aa07453addd46d90f08e21245a97919af7895 not found: ID does not exist" Jan 28 19:11:06 crc kubenswrapper[4721]: I0128 19:11:06.608337 4721 scope.go:117] "RemoveContainer" containerID="62ff72694260663233d661c87d764e22b04d52cf167bc24352cfa8dde6666d62" Jan 28 19:11:06 crc kubenswrapper[4721]: E0128 19:11:06.608585 4721 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"62ff72694260663233d661c87d764e22b04d52cf167bc24352cfa8dde6666d62\": container with ID starting with 62ff72694260663233d661c87d764e22b04d52cf167bc24352cfa8dde6666d62 not found: ID does not exist" containerID="62ff72694260663233d661c87d764e22b04d52cf167bc24352cfa8dde6666d62" Jan 28 19:11:06 crc kubenswrapper[4721]: I0128 19:11:06.608602 4721 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"62ff72694260663233d661c87d764e22b04d52cf167bc24352cfa8dde6666d62"} err="failed to get container status \"62ff72694260663233d661c87d764e22b04d52cf167bc24352cfa8dde6666d62\": rpc error: code = NotFound desc = could not find container \"62ff72694260663233d661c87d764e22b04d52cf167bc24352cfa8dde6666d62\": container with ID starting with 62ff72694260663233d661c87d764e22b04d52cf167bc24352cfa8dde6666d62 not found: ID does not exist" Jan 28 19:11:07 crc kubenswrapper[4721]: I0128 19:11:07.541589 4721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f7b0dd7-af22-4fdf-a712-e0e4a7541ef6" path="/var/lib/kubelet/pods/8f7b0dd7-af22-4fdf-a712-e0e4a7541ef6/volumes" Jan 28 19:11:13 crc kubenswrapper[4721]: I0128 19:11:13.530446 4721 scope.go:117] "RemoveContainer" containerID="4ac60e4a43c972329d52f34a6184c3179be27b5f07c30d84c2acd45b4f1b5d47" Jan 28 19:11:13 crc kubenswrapper[4721]: E0128 19:11:13.532815 4721 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-76rx2_openshift-machine-config-operator(6e3427a4-9a03-4a08-bf7f-7a5e96290ad6)\"" pod="openshift-machine-config-operator/machine-config-daemon-76rx2" podUID="6e3427a4-9a03-4a08-bf7f-7a5e96290ad6" Jan 28 19:11:20 crc kubenswrapper[4721]: I0128 19:11:20.649029 4721 scope.go:117] "RemoveContainer" containerID="c9347359b05c7170adeef3caaebd6a81cc6189a67ee6aae1b082059a009b3697" Jan 28 19:11:27 crc kubenswrapper[4721]: I0128 19:11:27.528777 4721 scope.go:117] "RemoveContainer" containerID="4ac60e4a43c972329d52f34a6184c3179be27b5f07c30d84c2acd45b4f1b5d47" Jan 28 19:11:27 crc kubenswrapper[4721]: E0128 19:11:27.529695 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-76rx2_openshift-machine-config-operator(6e3427a4-9a03-4a08-bf7f-7a5e96290ad6)\"" pod="openshift-machine-config-operator/machine-config-daemon-76rx2" podUID="6e3427a4-9a03-4a08-bf7f-7a5e96290ad6" Jan 28 19:11:40 crc kubenswrapper[4721]: I0128 19:11:40.529070 4721 scope.go:117] "RemoveContainer" containerID="4ac60e4a43c972329d52f34a6184c3179be27b5f07c30d84c2acd45b4f1b5d47" Jan 28 19:11:40 crc kubenswrapper[4721]: E0128 19:11:40.531069 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-76rx2_openshift-machine-config-operator(6e3427a4-9a03-4a08-bf7f-7a5e96290ad6)\"" pod="openshift-machine-config-operator/machine-config-daemon-76rx2" podUID="6e3427a4-9a03-4a08-bf7f-7a5e96290ad6" Jan 28 19:11:53 crc kubenswrapper[4721]: I0128 19:11:53.528839 4721 scope.go:117] "RemoveContainer" containerID="4ac60e4a43c972329d52f34a6184c3179be27b5f07c30d84c2acd45b4f1b5d47" Jan 28 19:11:53 crc kubenswrapper[4721]: E0128 19:11:53.529767 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-76rx2_openshift-machine-config-operator(6e3427a4-9a03-4a08-bf7f-7a5e96290ad6)\"" pod="openshift-machine-config-operator/machine-config-daemon-76rx2" podUID="6e3427a4-9a03-4a08-bf7f-7a5e96290ad6" Jan 28 19:12:06 crc kubenswrapper[4721]: I0128 19:12:06.087486 4721 generic.go:334] "Generic (PLEG): container finished" podID="445fc577-89a5-4f74-b7a4-65979c88af6b" containerID="3bd4e01a97724ae64a4278ccde4e111b1f8a61d8d854686a0a1f1bc8e808d06f" exitCode=0 Jan 28 19:12:06 crc kubenswrapper[4721]: I0128 19:12:06.087575 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-lrdl9" event={"ID":"445fc577-89a5-4f74-b7a4-65979c88af6b","Type":"ContainerDied","Data":"3bd4e01a97724ae64a4278ccde4e111b1f8a61d8d854686a0a1f1bc8e808d06f"} Jan 28 19:12:07 crc kubenswrapper[4721]: I0128 19:12:07.529474 4721 scope.go:117] "RemoveContainer" containerID="4ac60e4a43c972329d52f34a6184c3179be27b5f07c30d84c2acd45b4f1b5d47" Jan 28 19:12:07 crc kubenswrapper[4721]: E0128 19:12:07.530212 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-76rx2_openshift-machine-config-operator(6e3427a4-9a03-4a08-bf7f-7a5e96290ad6)\"" pod="openshift-machine-config-operator/machine-config-daemon-76rx2" podUID="6e3427a4-9a03-4a08-bf7f-7a5e96290ad6" Jan 28 19:12:07 crc kubenswrapper[4721]: I0128 19:12:07.693136 4721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-lrdl9" Jan 28 19:12:07 crc kubenswrapper[4721]: I0128 19:12:07.770782 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/445fc577-89a5-4f74-b7a4-65979c88af6b-inventory\") pod \"445fc577-89a5-4f74-b7a4-65979c88af6b\" (UID: \"445fc577-89a5-4f74-b7a4-65979c88af6b\") " Jan 28 19:12:07 crc kubenswrapper[4721]: I0128 19:12:07.770900 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/445fc577-89a5-4f74-b7a4-65979c88af6b-ovncontroller-config-0\") pod \"445fc577-89a5-4f74-b7a4-65979c88af6b\" (UID: \"445fc577-89a5-4f74-b7a4-65979c88af6b\") " Jan 28 19:12:07 crc kubenswrapper[4721]: I0128 19:12:07.771239 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qq6jg\" (UniqueName: \"kubernetes.io/projected/445fc577-89a5-4f74-b7a4-65979c88af6b-kube-api-access-qq6jg\") pod \"445fc577-89a5-4f74-b7a4-65979c88af6b\" (UID: \"445fc577-89a5-4f74-b7a4-65979c88af6b\") " Jan 28 19:12:07 crc kubenswrapper[4721]: I0128 19:12:07.771483 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/445fc577-89a5-4f74-b7a4-65979c88af6b-ssh-key-openstack-edpm-ipam\") pod \"445fc577-89a5-4f74-b7a4-65979c88af6b\" (UID: \"445fc577-89a5-4f74-b7a4-65979c88af6b\") " Jan 28 19:12:07 crc kubenswrapper[4721]: I0128 19:12:07.771559 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/445fc577-89a5-4f74-b7a4-65979c88af6b-ovn-combined-ca-bundle\") pod \"445fc577-89a5-4f74-b7a4-65979c88af6b\" (UID: \"445fc577-89a5-4f74-b7a4-65979c88af6b\") " Jan 28 19:12:07 crc kubenswrapper[4721]: I0128 19:12:07.781816 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/445fc577-89a5-4f74-b7a4-65979c88af6b-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "445fc577-89a5-4f74-b7a4-65979c88af6b" (UID: "445fc577-89a5-4f74-b7a4-65979c88af6b"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 19:12:07 crc kubenswrapper[4721]: I0128 19:12:07.781920 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/445fc577-89a5-4f74-b7a4-65979c88af6b-kube-api-access-qq6jg" (OuterVolumeSpecName: "kube-api-access-qq6jg") pod "445fc577-89a5-4f74-b7a4-65979c88af6b" (UID: "445fc577-89a5-4f74-b7a4-65979c88af6b"). InnerVolumeSpecName "kube-api-access-qq6jg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 19:12:07 crc kubenswrapper[4721]: I0128 19:12:07.808503 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/445fc577-89a5-4f74-b7a4-65979c88af6b-ovncontroller-config-0" (OuterVolumeSpecName: "ovncontroller-config-0") pod "445fc577-89a5-4f74-b7a4-65979c88af6b" (UID: "445fc577-89a5-4f74-b7a4-65979c88af6b"). InnerVolumeSpecName "ovncontroller-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 19:12:07 crc kubenswrapper[4721]: I0128 19:12:07.809074 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/445fc577-89a5-4f74-b7a4-65979c88af6b-inventory" (OuterVolumeSpecName: "inventory") pod "445fc577-89a5-4f74-b7a4-65979c88af6b" (UID: "445fc577-89a5-4f74-b7a4-65979c88af6b"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 19:12:07 crc kubenswrapper[4721]: I0128 19:12:07.818436 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/445fc577-89a5-4f74-b7a4-65979c88af6b-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "445fc577-89a5-4f74-b7a4-65979c88af6b" (UID: "445fc577-89a5-4f74-b7a4-65979c88af6b"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 19:12:07 crc kubenswrapper[4721]: I0128 19:12:07.875726 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qq6jg\" (UniqueName: \"kubernetes.io/projected/445fc577-89a5-4f74-b7a4-65979c88af6b-kube-api-access-qq6jg\") on node \"crc\" DevicePath \"\"" Jan 28 19:12:07 crc kubenswrapper[4721]: I0128 19:12:07.875778 4721 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/445fc577-89a5-4f74-b7a4-65979c88af6b-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 28 19:12:07 crc kubenswrapper[4721]: I0128 19:12:07.875793 4721 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/445fc577-89a5-4f74-b7a4-65979c88af6b-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 19:12:07 crc kubenswrapper[4721]: I0128 19:12:07.875806 4721 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/445fc577-89a5-4f74-b7a4-65979c88af6b-inventory\") on node \"crc\" DevicePath \"\"" Jan 28 19:12:07 crc kubenswrapper[4721]: I0128 19:12:07.875817 4721 reconciler_common.go:293] "Volume detached for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/445fc577-89a5-4f74-b7a4-65979c88af6b-ovncontroller-config-0\") on node \"crc\" DevicePath \"\"" Jan 28 19:12:08 crc kubenswrapper[4721]: I0128 19:12:08.111285 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-lrdl9" event={"ID":"445fc577-89a5-4f74-b7a4-65979c88af6b","Type":"ContainerDied","Data":"566f71d74b1ee378db36563b70b135c4d21de5269e4c5010ce0eb78b89cf9009"} Jan 28 19:12:08 crc kubenswrapper[4721]: I0128 19:12:08.111340 4721 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="566f71d74b1ee378db36563b70b135c4d21de5269e4c5010ce0eb78b89cf9009" Jan 28 19:12:08 crc kubenswrapper[4721]: I0128 19:12:08.111390 4721 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-lrdl9" Jan 28 19:12:08 crc kubenswrapper[4721]: I0128 19:12:08.223191 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-xvlk4"] Jan 28 19:12:08 crc kubenswrapper[4721]: E0128 19:12:08.223755 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8f7b0dd7-af22-4fdf-a712-e0e4a7541ef6" containerName="registry-server" Jan 28 19:12:08 crc kubenswrapper[4721]: I0128 19:12:08.223776 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="8f7b0dd7-af22-4fdf-a712-e0e4a7541ef6" containerName="registry-server" Jan 28 19:12:08 crc kubenswrapper[4721]: E0128 19:12:08.223793 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="445fc577-89a5-4f74-b7a4-65979c88af6b" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Jan 28 19:12:08 crc kubenswrapper[4721]: I0128 19:12:08.223800 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="445fc577-89a5-4f74-b7a4-65979c88af6b" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Jan 28 19:12:08 crc kubenswrapper[4721]: E0128 19:12:08.223832 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8f7b0dd7-af22-4fdf-a712-e0e4a7541ef6" containerName="extract-utilities" Jan 28 19:12:08 crc kubenswrapper[4721]: I0128 19:12:08.223839 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="8f7b0dd7-af22-4fdf-a712-e0e4a7541ef6" containerName="extract-utilities" Jan 28 19:12:08 crc kubenswrapper[4721]: E0128 19:12:08.223852 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8f7b0dd7-af22-4fdf-a712-e0e4a7541ef6" containerName="extract-content" Jan 28 19:12:08 crc kubenswrapper[4721]: I0128 19:12:08.223858 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="8f7b0dd7-af22-4fdf-a712-e0e4a7541ef6" containerName="extract-content" Jan 28 19:12:08 crc kubenswrapper[4721]: I0128 19:12:08.224087 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="8f7b0dd7-af22-4fdf-a712-e0e4a7541ef6" containerName="registry-server" Jan 28 19:12:08 crc kubenswrapper[4721]: I0128 19:12:08.224118 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="445fc577-89a5-4f74-b7a4-65979c88af6b" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Jan 28 19:12:08 crc kubenswrapper[4721]: I0128 19:12:08.225108 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-xvlk4" Jan 28 19:12:08 crc kubenswrapper[4721]: I0128 19:12:08.227629 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 28 19:12:08 crc kubenswrapper[4721]: I0128 19:12:08.227629 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-neutron-config" Jan 28 19:12:08 crc kubenswrapper[4721]: I0128 19:12:08.227637 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-ovn-metadata-agent-neutron-config" Jan 28 19:12:08 crc kubenswrapper[4721]: I0128 19:12:08.231218 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 28 19:12:08 crc kubenswrapper[4721]: I0128 19:12:08.231415 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 28 19:12:08 crc kubenswrapper[4721]: I0128 19:12:08.231689 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-7sc4s" Jan 28 19:12:08 crc kubenswrapper[4721]: I0128 19:12:08.240646 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-xvlk4"] Jan 28 19:12:08 crc kubenswrapper[4721]: I0128 19:12:08.284722 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/7004522f-8584-4fca-851b-1d9f9195cb0d-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-xvlk4\" (UID: \"7004522f-8584-4fca-851b-1d9f9195cb0d\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-xvlk4" Jan 28 19:12:08 crc kubenswrapper[4721]: I0128 19:12:08.285044 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7004522f-8584-4fca-851b-1d9f9195cb0d-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-xvlk4\" (UID: \"7004522f-8584-4fca-851b-1d9f9195cb0d\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-xvlk4" Jan 28 19:12:08 crc kubenswrapper[4721]: I0128 19:12:08.285280 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7004522f-8584-4fca-851b-1d9f9195cb0d-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-xvlk4\" (UID: \"7004522f-8584-4fca-851b-1d9f9195cb0d\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-xvlk4" Jan 28 19:12:08 crc kubenswrapper[4721]: I0128 19:12:08.285518 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2rlm9\" (UniqueName: \"kubernetes.io/projected/7004522f-8584-4fca-851b-1d9f9195cb0d-kube-api-access-2rlm9\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-xvlk4\" (UID: \"7004522f-8584-4fca-851b-1d9f9195cb0d\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-xvlk4" Jan 28 19:12:08 crc kubenswrapper[4721]: I0128 19:12:08.285664 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: 
\"kubernetes.io/secret/7004522f-8584-4fca-851b-1d9f9195cb0d-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-xvlk4\" (UID: \"7004522f-8584-4fca-851b-1d9f9195cb0d\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-xvlk4" Jan 28 19:12:08 crc kubenswrapper[4721]: I0128 19:12:08.286125 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7004522f-8584-4fca-851b-1d9f9195cb0d-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-xvlk4\" (UID: \"7004522f-8584-4fca-851b-1d9f9195cb0d\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-xvlk4" Jan 28 19:12:08 crc kubenswrapper[4721]: I0128 19:12:08.389640 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/7004522f-8584-4fca-851b-1d9f9195cb0d-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-xvlk4\" (UID: \"7004522f-8584-4fca-851b-1d9f9195cb0d\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-xvlk4" Jan 28 19:12:08 crc kubenswrapper[4721]: I0128 19:12:08.390099 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7004522f-8584-4fca-851b-1d9f9195cb0d-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-xvlk4\" (UID: \"7004522f-8584-4fca-851b-1d9f9195cb0d\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-xvlk4" Jan 28 19:12:08 crc kubenswrapper[4721]: I0128 19:12:08.390278 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/7004522f-8584-4fca-851b-1d9f9195cb0d-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-xvlk4\" (UID: \"7004522f-8584-4fca-851b-1d9f9195cb0d\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-xvlk4" Jan 28 19:12:08 crc kubenswrapper[4721]: I0128 19:12:08.391082 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7004522f-8584-4fca-851b-1d9f9195cb0d-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-xvlk4\" (UID: \"7004522f-8584-4fca-851b-1d9f9195cb0d\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-xvlk4" Jan 28 19:12:08 crc kubenswrapper[4721]: I0128 19:12:08.391199 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7004522f-8584-4fca-851b-1d9f9195cb0d-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-xvlk4\" (UID: \"7004522f-8584-4fca-851b-1d9f9195cb0d\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-xvlk4" Jan 28 19:12:08 crc kubenswrapper[4721]: I0128 19:12:08.391389 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2rlm9\" (UniqueName: \"kubernetes.io/projected/7004522f-8584-4fca-851b-1d9f9195cb0d-kube-api-access-2rlm9\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-xvlk4\" (UID: 
\"7004522f-8584-4fca-851b-1d9f9195cb0d\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-xvlk4" Jan 28 19:12:08 crc kubenswrapper[4721]: I0128 19:12:08.395099 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/7004522f-8584-4fca-851b-1d9f9195cb0d-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-xvlk4\" (UID: \"7004522f-8584-4fca-851b-1d9f9195cb0d\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-xvlk4" Jan 28 19:12:08 crc kubenswrapper[4721]: I0128 19:12:08.395279 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/7004522f-8584-4fca-851b-1d9f9195cb0d-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-xvlk4\" (UID: \"7004522f-8584-4fca-851b-1d9f9195cb0d\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-xvlk4" Jan 28 19:12:08 crc kubenswrapper[4721]: I0128 19:12:08.395287 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7004522f-8584-4fca-851b-1d9f9195cb0d-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-xvlk4\" (UID: \"7004522f-8584-4fca-851b-1d9f9195cb0d\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-xvlk4" Jan 28 19:12:08 crc kubenswrapper[4721]: I0128 19:12:08.399155 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7004522f-8584-4fca-851b-1d9f9195cb0d-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-xvlk4\" (UID: \"7004522f-8584-4fca-851b-1d9f9195cb0d\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-xvlk4" Jan 28 19:12:08 crc kubenswrapper[4721]: I0128 19:12:08.400494 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7004522f-8584-4fca-851b-1d9f9195cb0d-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-xvlk4\" (UID: \"7004522f-8584-4fca-851b-1d9f9195cb0d\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-xvlk4" Jan 28 19:12:08 crc kubenswrapper[4721]: I0128 19:12:08.411517 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2rlm9\" (UniqueName: \"kubernetes.io/projected/7004522f-8584-4fca-851b-1d9f9195cb0d-kube-api-access-2rlm9\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-xvlk4\" (UID: \"7004522f-8584-4fca-851b-1d9f9195cb0d\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-xvlk4" Jan 28 19:12:08 crc kubenswrapper[4721]: I0128 19:12:08.553257 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-xvlk4" Jan 28 19:12:09 crc kubenswrapper[4721]: I0128 19:12:09.101756 4721 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 28 19:12:09 crc kubenswrapper[4721]: I0128 19:12:09.108664 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-xvlk4"] Jan 28 19:12:09 crc kubenswrapper[4721]: I0128 19:12:09.132683 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-xvlk4" event={"ID":"7004522f-8584-4fca-851b-1d9f9195cb0d","Type":"ContainerStarted","Data":"c419c358e7e7e9c67f52c9407d3647ddb75ad2cf139627279ae692c4a0add32c"} Jan 28 19:12:11 crc kubenswrapper[4721]: I0128 19:12:11.155502 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-xvlk4" event={"ID":"7004522f-8584-4fca-851b-1d9f9195cb0d","Type":"ContainerStarted","Data":"b5d49e3b14942c1865f14deb3b2101bdaed5e026b9965d95638909e359db0ee5"} Jan 28 19:12:22 crc kubenswrapper[4721]: I0128 19:12:22.528494 4721 scope.go:117] "RemoveContainer" containerID="4ac60e4a43c972329d52f34a6184c3179be27b5f07c30d84c2acd45b4f1b5d47" Jan 28 19:12:22 crc kubenswrapper[4721]: E0128 19:12:22.529339 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-76rx2_openshift-machine-config-operator(6e3427a4-9a03-4a08-bf7f-7a5e96290ad6)\"" pod="openshift-machine-config-operator/machine-config-daemon-76rx2" podUID="6e3427a4-9a03-4a08-bf7f-7a5e96290ad6" Jan 28 19:12:35 crc kubenswrapper[4721]: I0128 19:12:35.537658 4721 scope.go:117] "RemoveContainer" containerID="4ac60e4a43c972329d52f34a6184c3179be27b5f07c30d84c2acd45b4f1b5d47" Jan 28 19:12:35 crc kubenswrapper[4721]: E0128 19:12:35.538662 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-76rx2_openshift-machine-config-operator(6e3427a4-9a03-4a08-bf7f-7a5e96290ad6)\"" pod="openshift-machine-config-operator/machine-config-daemon-76rx2" podUID="6e3427a4-9a03-4a08-bf7f-7a5e96290ad6" Jan 28 19:12:46 crc kubenswrapper[4721]: I0128 19:12:46.529849 4721 scope.go:117] "RemoveContainer" containerID="4ac60e4a43c972329d52f34a6184c3179be27b5f07c30d84c2acd45b4f1b5d47" Jan 28 19:12:46 crc kubenswrapper[4721]: E0128 19:12:46.530617 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-76rx2_openshift-machine-config-operator(6e3427a4-9a03-4a08-bf7f-7a5e96290ad6)\"" pod="openshift-machine-config-operator/machine-config-daemon-76rx2" podUID="6e3427a4-9a03-4a08-bf7f-7a5e96290ad6" Jan 28 19:12:59 crc kubenswrapper[4721]: I0128 19:12:59.530211 4721 scope.go:117] "RemoveContainer" containerID="4ac60e4a43c972329d52f34a6184c3179be27b5f07c30d84c2acd45b4f1b5d47" Jan 28 19:12:59 crc kubenswrapper[4721]: E0128 19:12:59.531292 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: 
\"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-76rx2_openshift-machine-config-operator(6e3427a4-9a03-4a08-bf7f-7a5e96290ad6)\"" pod="openshift-machine-config-operator/machine-config-daemon-76rx2" podUID="6e3427a4-9a03-4a08-bf7f-7a5e96290ad6" Jan 28 19:12:59 crc kubenswrapper[4721]: I0128 19:12:59.652953 4721 generic.go:334] "Generic (PLEG): container finished" podID="7004522f-8584-4fca-851b-1d9f9195cb0d" containerID="b5d49e3b14942c1865f14deb3b2101bdaed5e026b9965d95638909e359db0ee5" exitCode=0 Jan 28 19:12:59 crc kubenswrapper[4721]: I0128 19:12:59.653013 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-xvlk4" event={"ID":"7004522f-8584-4fca-851b-1d9f9195cb0d","Type":"ContainerDied","Data":"b5d49e3b14942c1865f14deb3b2101bdaed5e026b9965d95638909e359db0ee5"} Jan 28 19:13:01 crc kubenswrapper[4721]: I0128 19:13:01.184812 4721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-xvlk4" Jan 28 19:13:01 crc kubenswrapper[4721]: I0128 19:13:01.286048 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7004522f-8584-4fca-851b-1d9f9195cb0d-inventory\") pod \"7004522f-8584-4fca-851b-1d9f9195cb0d\" (UID: \"7004522f-8584-4fca-851b-1d9f9195cb0d\") " Jan 28 19:13:01 crc kubenswrapper[4721]: I0128 19:13:01.286345 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7004522f-8584-4fca-851b-1d9f9195cb0d-neutron-metadata-combined-ca-bundle\") pod \"7004522f-8584-4fca-851b-1d9f9195cb0d\" (UID: \"7004522f-8584-4fca-851b-1d9f9195cb0d\") " Jan 28 19:13:01 crc kubenswrapper[4721]: I0128 19:13:01.286453 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/7004522f-8584-4fca-851b-1d9f9195cb0d-nova-metadata-neutron-config-0\") pod \"7004522f-8584-4fca-851b-1d9f9195cb0d\" (UID: \"7004522f-8584-4fca-851b-1d9f9195cb0d\") " Jan 28 19:13:01 crc kubenswrapper[4721]: I0128 19:13:01.286709 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/7004522f-8584-4fca-851b-1d9f9195cb0d-neutron-ovn-metadata-agent-neutron-config-0\") pod \"7004522f-8584-4fca-851b-1d9f9195cb0d\" (UID: \"7004522f-8584-4fca-851b-1d9f9195cb0d\") " Jan 28 19:13:01 crc kubenswrapper[4721]: I0128 19:13:01.286780 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7004522f-8584-4fca-851b-1d9f9195cb0d-ssh-key-openstack-edpm-ipam\") pod \"7004522f-8584-4fca-851b-1d9f9195cb0d\" (UID: \"7004522f-8584-4fca-851b-1d9f9195cb0d\") " Jan 28 19:13:01 crc kubenswrapper[4721]: I0128 19:13:01.286842 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2rlm9\" (UniqueName: \"kubernetes.io/projected/7004522f-8584-4fca-851b-1d9f9195cb0d-kube-api-access-2rlm9\") pod \"7004522f-8584-4fca-851b-1d9f9195cb0d\" (UID: \"7004522f-8584-4fca-851b-1d9f9195cb0d\") " Jan 28 19:13:01 crc kubenswrapper[4721]: I0128 19:13:01.292964 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/7004522f-8584-4fca-851b-1d9f9195cb0d-kube-api-access-2rlm9" (OuterVolumeSpecName: "kube-api-access-2rlm9") pod "7004522f-8584-4fca-851b-1d9f9195cb0d" (UID: "7004522f-8584-4fca-851b-1d9f9195cb0d"). InnerVolumeSpecName "kube-api-access-2rlm9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 19:13:01 crc kubenswrapper[4721]: I0128 19:13:01.298913 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7004522f-8584-4fca-851b-1d9f9195cb0d-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "7004522f-8584-4fca-851b-1d9f9195cb0d" (UID: "7004522f-8584-4fca-851b-1d9f9195cb0d"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 19:13:01 crc kubenswrapper[4721]: I0128 19:13:01.319399 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7004522f-8584-4fca-851b-1d9f9195cb0d-inventory" (OuterVolumeSpecName: "inventory") pod "7004522f-8584-4fca-851b-1d9f9195cb0d" (UID: "7004522f-8584-4fca-851b-1d9f9195cb0d"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 19:13:01 crc kubenswrapper[4721]: I0128 19:13:01.320227 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7004522f-8584-4fca-851b-1d9f9195cb0d-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "7004522f-8584-4fca-851b-1d9f9195cb0d" (UID: "7004522f-8584-4fca-851b-1d9f9195cb0d"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 19:13:01 crc kubenswrapper[4721]: I0128 19:13:01.321025 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7004522f-8584-4fca-851b-1d9f9195cb0d-neutron-ovn-metadata-agent-neutron-config-0" (OuterVolumeSpecName: "neutron-ovn-metadata-agent-neutron-config-0") pod "7004522f-8584-4fca-851b-1d9f9195cb0d" (UID: "7004522f-8584-4fca-851b-1d9f9195cb0d"). InnerVolumeSpecName "neutron-ovn-metadata-agent-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 19:13:01 crc kubenswrapper[4721]: I0128 19:13:01.322561 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7004522f-8584-4fca-851b-1d9f9195cb0d-nova-metadata-neutron-config-0" (OuterVolumeSpecName: "nova-metadata-neutron-config-0") pod "7004522f-8584-4fca-851b-1d9f9195cb0d" (UID: "7004522f-8584-4fca-851b-1d9f9195cb0d"). InnerVolumeSpecName "nova-metadata-neutron-config-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 19:13:01 crc kubenswrapper[4721]: I0128 19:13:01.389980 4721 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7004522f-8584-4fca-851b-1d9f9195cb0d-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 28 19:13:01 crc kubenswrapper[4721]: I0128 19:13:01.390368 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2rlm9\" (UniqueName: \"kubernetes.io/projected/7004522f-8584-4fca-851b-1d9f9195cb0d-kube-api-access-2rlm9\") on node \"crc\" DevicePath \"\"" Jan 28 19:13:01 crc kubenswrapper[4721]: I0128 19:13:01.390385 4721 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7004522f-8584-4fca-851b-1d9f9195cb0d-inventory\") on node \"crc\" DevicePath \"\"" Jan 28 19:13:01 crc kubenswrapper[4721]: I0128 19:13:01.390401 4721 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7004522f-8584-4fca-851b-1d9f9195cb0d-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 19:13:01 crc kubenswrapper[4721]: I0128 19:13:01.390417 4721 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/7004522f-8584-4fca-851b-1d9f9195cb0d-nova-metadata-neutron-config-0\") on node \"crc\" DevicePath \"\"" Jan 28 19:13:01 crc kubenswrapper[4721]: I0128 19:13:01.390431 4721 reconciler_common.go:293] "Volume detached for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/7004522f-8584-4fca-851b-1d9f9195cb0d-neutron-ovn-metadata-agent-neutron-config-0\") on node \"crc\" DevicePath \"\"" Jan 28 19:13:01 crc kubenswrapper[4721]: I0128 19:13:01.706815 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-xvlk4" event={"ID":"7004522f-8584-4fca-851b-1d9f9195cb0d","Type":"ContainerDied","Data":"c419c358e7e7e9c67f52c9407d3647ddb75ad2cf139627279ae692c4a0add32c"} Jan 28 19:13:01 crc kubenswrapper[4721]: I0128 19:13:01.706880 4721 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c419c358e7e7e9c67f52c9407d3647ddb75ad2cf139627279ae692c4a0add32c" Jan 28 19:13:01 crc kubenswrapper[4721]: I0128 19:13:01.706992 4721 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-xvlk4" Jan 28 19:13:01 crc kubenswrapper[4721]: I0128 19:13:01.805596 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-s49zh"] Jan 28 19:13:01 crc kubenswrapper[4721]: E0128 19:13:01.806083 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7004522f-8584-4fca-851b-1d9f9195cb0d" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Jan 28 19:13:01 crc kubenswrapper[4721]: I0128 19:13:01.806102 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="7004522f-8584-4fca-851b-1d9f9195cb0d" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Jan 28 19:13:01 crc kubenswrapper[4721]: I0128 19:13:01.806419 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="7004522f-8584-4fca-851b-1d9f9195cb0d" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Jan 28 19:13:01 crc kubenswrapper[4721]: I0128 19:13:01.807311 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-s49zh" Jan 28 19:13:01 crc kubenswrapper[4721]: I0128 19:13:01.810495 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 28 19:13:01 crc kubenswrapper[4721]: I0128 19:13:01.810503 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 28 19:13:01 crc kubenswrapper[4721]: I0128 19:13:01.810689 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 28 19:13:01 crc kubenswrapper[4721]: I0128 19:13:01.810752 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"libvirt-secret" Jan 28 19:13:01 crc kubenswrapper[4721]: I0128 19:13:01.810840 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-7sc4s" Jan 28 19:13:01 crc kubenswrapper[4721]: I0128 19:13:01.832233 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-s49zh"] Jan 28 19:13:01 crc kubenswrapper[4721]: I0128 19:13:01.905736 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/349859e1-1716-4304-9352-b9caa4c046be-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-s49zh\" (UID: \"349859e1-1716-4304-9352-b9caa4c046be\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-s49zh" Jan 28 19:13:01 crc kubenswrapper[4721]: I0128 19:13:01.905846 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cq67n\" (UniqueName: \"kubernetes.io/projected/349859e1-1716-4304-9352-b9caa4c046be-kube-api-access-cq67n\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-s49zh\" (UID: \"349859e1-1716-4304-9352-b9caa4c046be\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-s49zh" Jan 28 19:13:01 crc kubenswrapper[4721]: I0128 19:13:01.905990 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/349859e1-1716-4304-9352-b9caa4c046be-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-s49zh\" (UID: 
\"349859e1-1716-4304-9352-b9caa4c046be\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-s49zh" Jan 28 19:13:01 crc kubenswrapper[4721]: I0128 19:13:01.906029 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/349859e1-1716-4304-9352-b9caa4c046be-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-s49zh\" (UID: \"349859e1-1716-4304-9352-b9caa4c046be\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-s49zh" Jan 28 19:13:01 crc kubenswrapper[4721]: I0128 19:13:01.906270 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/349859e1-1716-4304-9352-b9caa4c046be-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-s49zh\" (UID: \"349859e1-1716-4304-9352-b9caa4c046be\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-s49zh" Jan 28 19:13:02 crc kubenswrapper[4721]: I0128 19:13:02.008479 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/349859e1-1716-4304-9352-b9caa4c046be-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-s49zh\" (UID: \"349859e1-1716-4304-9352-b9caa4c046be\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-s49zh" Jan 28 19:13:02 crc kubenswrapper[4721]: I0128 19:13:02.008848 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cq67n\" (UniqueName: \"kubernetes.io/projected/349859e1-1716-4304-9352-b9caa4c046be-kube-api-access-cq67n\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-s49zh\" (UID: \"349859e1-1716-4304-9352-b9caa4c046be\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-s49zh" Jan 28 19:13:02 crc kubenswrapper[4721]: I0128 19:13:02.009017 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/349859e1-1716-4304-9352-b9caa4c046be-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-s49zh\" (UID: \"349859e1-1716-4304-9352-b9caa4c046be\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-s49zh" Jan 28 19:13:02 crc kubenswrapper[4721]: I0128 19:13:02.009110 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/349859e1-1716-4304-9352-b9caa4c046be-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-s49zh\" (UID: \"349859e1-1716-4304-9352-b9caa4c046be\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-s49zh" Jan 28 19:13:02 crc kubenswrapper[4721]: I0128 19:13:02.009207 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/349859e1-1716-4304-9352-b9caa4c046be-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-s49zh\" (UID: \"349859e1-1716-4304-9352-b9caa4c046be\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-s49zh" Jan 28 19:13:02 crc kubenswrapper[4721]: I0128 19:13:02.012997 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/349859e1-1716-4304-9352-b9caa4c046be-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-s49zh\" (UID: 
\"349859e1-1716-4304-9352-b9caa4c046be\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-s49zh" Jan 28 19:13:02 crc kubenswrapper[4721]: I0128 19:13:02.013820 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/349859e1-1716-4304-9352-b9caa4c046be-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-s49zh\" (UID: \"349859e1-1716-4304-9352-b9caa4c046be\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-s49zh" Jan 28 19:13:02 crc kubenswrapper[4721]: I0128 19:13:02.013956 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/349859e1-1716-4304-9352-b9caa4c046be-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-s49zh\" (UID: \"349859e1-1716-4304-9352-b9caa4c046be\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-s49zh" Jan 28 19:13:02 crc kubenswrapper[4721]: I0128 19:13:02.014196 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/349859e1-1716-4304-9352-b9caa4c046be-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-s49zh\" (UID: \"349859e1-1716-4304-9352-b9caa4c046be\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-s49zh" Jan 28 19:13:02 crc kubenswrapper[4721]: I0128 19:13:02.027239 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cq67n\" (UniqueName: \"kubernetes.io/projected/349859e1-1716-4304-9352-b9caa4c046be-kube-api-access-cq67n\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-s49zh\" (UID: \"349859e1-1716-4304-9352-b9caa4c046be\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-s49zh" Jan 28 19:13:02 crc kubenswrapper[4721]: I0128 19:13:02.130471 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-s49zh" Jan 28 19:13:02 crc kubenswrapper[4721]: I0128 19:13:02.672065 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-s49zh"] Jan 28 19:13:02 crc kubenswrapper[4721]: I0128 19:13:02.720217 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-s49zh" event={"ID":"349859e1-1716-4304-9352-b9caa4c046be","Type":"ContainerStarted","Data":"a188fd4bc9c52a87b9c1a30c06cd752df2a6e1b82ac524f67cef08234daa3f36"} Jan 28 19:13:03 crc kubenswrapper[4721]: I0128 19:13:03.738276 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-s49zh" event={"ID":"349859e1-1716-4304-9352-b9caa4c046be","Type":"ContainerStarted","Data":"85c89277d8eca886b9e0a32e1d0345ee0d3d8e6b6ecc0cc5fe572b02bf7375dc"} Jan 28 19:13:03 crc kubenswrapper[4721]: I0128 19:13:03.762150 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-s49zh" podStartSLOduration=2.298732002 podStartE2EDuration="2.762090798s" podCreationTimestamp="2026-01-28 19:13:01 +0000 UTC" firstStartedPulling="2026-01-28 19:13:02.674551553 +0000 UTC m=+2348.399857123" lastFinishedPulling="2026-01-28 19:13:03.137910359 +0000 UTC m=+2348.863215919" observedRunningTime="2026-01-28 19:13:03.755774781 +0000 UTC m=+2349.481080351" watchObservedRunningTime="2026-01-28 19:13:03.762090798 +0000 UTC m=+2349.487396378" Jan 28 19:13:10 crc kubenswrapper[4721]: I0128 19:13:10.530346 4721 scope.go:117] "RemoveContainer" containerID="4ac60e4a43c972329d52f34a6184c3179be27b5f07c30d84c2acd45b4f1b5d47" Jan 28 19:13:10 crc kubenswrapper[4721]: E0128 19:13:10.532078 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-76rx2_openshift-machine-config-operator(6e3427a4-9a03-4a08-bf7f-7a5e96290ad6)\"" pod="openshift-machine-config-operator/machine-config-daemon-76rx2" podUID="6e3427a4-9a03-4a08-bf7f-7a5e96290ad6" Jan 28 19:13:25 crc kubenswrapper[4721]: I0128 19:13:25.538048 4721 scope.go:117] "RemoveContainer" containerID="4ac60e4a43c972329d52f34a6184c3179be27b5f07c30d84c2acd45b4f1b5d47" Jan 28 19:13:25 crc kubenswrapper[4721]: E0128 19:13:25.539133 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-76rx2_openshift-machine-config-operator(6e3427a4-9a03-4a08-bf7f-7a5e96290ad6)\"" pod="openshift-machine-config-operator/machine-config-daemon-76rx2" podUID="6e3427a4-9a03-4a08-bf7f-7a5e96290ad6" Jan 28 19:13:38 crc kubenswrapper[4721]: I0128 19:13:38.528656 4721 scope.go:117] "RemoveContainer" containerID="4ac60e4a43c972329d52f34a6184c3179be27b5f07c30d84c2acd45b4f1b5d47" Jan 28 19:13:38 crc kubenswrapper[4721]: E0128 19:13:38.529640 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-76rx2_openshift-machine-config-operator(6e3427a4-9a03-4a08-bf7f-7a5e96290ad6)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-76rx2" podUID="6e3427a4-9a03-4a08-bf7f-7a5e96290ad6" Jan 28 19:13:52 crc kubenswrapper[4721]: I0128 19:13:52.529515 4721 scope.go:117] "RemoveContainer" containerID="4ac60e4a43c972329d52f34a6184c3179be27b5f07c30d84c2acd45b4f1b5d47" Jan 28 19:13:52 crc kubenswrapper[4721]: E0128 19:13:52.530934 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-76rx2_openshift-machine-config-operator(6e3427a4-9a03-4a08-bf7f-7a5e96290ad6)\"" pod="openshift-machine-config-operator/machine-config-daemon-76rx2" podUID="6e3427a4-9a03-4a08-bf7f-7a5e96290ad6" Jan 28 19:14:04 crc kubenswrapper[4721]: I0128 19:14:04.529072 4721 scope.go:117] "RemoveContainer" containerID="4ac60e4a43c972329d52f34a6184c3179be27b5f07c30d84c2acd45b4f1b5d47" Jan 28 19:14:04 crc kubenswrapper[4721]: E0128 19:14:04.530583 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-76rx2_openshift-machine-config-operator(6e3427a4-9a03-4a08-bf7f-7a5e96290ad6)\"" pod="openshift-machine-config-operator/machine-config-daemon-76rx2" podUID="6e3427a4-9a03-4a08-bf7f-7a5e96290ad6" Jan 28 19:14:17 crc kubenswrapper[4721]: I0128 19:14:17.529050 4721 scope.go:117] "RemoveContainer" containerID="4ac60e4a43c972329d52f34a6184c3179be27b5f07c30d84c2acd45b4f1b5d47" Jan 28 19:14:17 crc kubenswrapper[4721]: E0128 19:14:17.529937 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-76rx2_openshift-machine-config-operator(6e3427a4-9a03-4a08-bf7f-7a5e96290ad6)\"" pod="openshift-machine-config-operator/machine-config-daemon-76rx2" podUID="6e3427a4-9a03-4a08-bf7f-7a5e96290ad6" Jan 28 19:14:32 crc kubenswrapper[4721]: I0128 19:14:32.528745 4721 scope.go:117] "RemoveContainer" containerID="4ac60e4a43c972329d52f34a6184c3179be27b5f07c30d84c2acd45b4f1b5d47" Jan 28 19:14:32 crc kubenswrapper[4721]: E0128 19:14:32.529606 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-76rx2_openshift-machine-config-operator(6e3427a4-9a03-4a08-bf7f-7a5e96290ad6)\"" pod="openshift-machine-config-operator/machine-config-daemon-76rx2" podUID="6e3427a4-9a03-4a08-bf7f-7a5e96290ad6" Jan 28 19:14:43 crc kubenswrapper[4721]: I0128 19:14:43.529239 4721 scope.go:117] "RemoveContainer" containerID="4ac60e4a43c972329d52f34a6184c3179be27b5f07c30d84c2acd45b4f1b5d47" Jan 28 19:14:43 crc kubenswrapper[4721]: E0128 19:14:43.530190 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-76rx2_openshift-machine-config-operator(6e3427a4-9a03-4a08-bf7f-7a5e96290ad6)\"" pod="openshift-machine-config-operator/machine-config-daemon-76rx2" podUID="6e3427a4-9a03-4a08-bf7f-7a5e96290ad6" Jan 28 19:14:56 crc kubenswrapper[4721]: I0128 19:14:56.529314 4721 
scope.go:117] "RemoveContainer" containerID="4ac60e4a43c972329d52f34a6184c3179be27b5f07c30d84c2acd45b4f1b5d47" Jan 28 19:14:56 crc kubenswrapper[4721]: E0128 19:14:56.530304 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-76rx2_openshift-machine-config-operator(6e3427a4-9a03-4a08-bf7f-7a5e96290ad6)\"" pod="openshift-machine-config-operator/machine-config-daemon-76rx2" podUID="6e3427a4-9a03-4a08-bf7f-7a5e96290ad6" Jan 28 19:15:00 crc kubenswrapper[4721]: I0128 19:15:00.157822 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29493795-gj5pq"] Jan 28 19:15:00 crc kubenswrapper[4721]: I0128 19:15:00.160150 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29493795-gj5pq" Jan 28 19:15:00 crc kubenswrapper[4721]: I0128 19:15:00.163671 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 28 19:15:00 crc kubenswrapper[4721]: I0128 19:15:00.172010 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 28 19:15:00 crc kubenswrapper[4721]: I0128 19:15:00.176396 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29493795-gj5pq"] Jan 28 19:15:00 crc kubenswrapper[4721]: I0128 19:15:00.251158 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d9d24793-50b0-4807-ba64-5ee25bf8e5ff-secret-volume\") pod \"collect-profiles-29493795-gj5pq\" (UID: \"d9d24793-50b0-4807-ba64-5ee25bf8e5ff\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493795-gj5pq" Jan 28 19:15:00 crc kubenswrapper[4721]: I0128 19:15:00.251613 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hpkph\" (UniqueName: \"kubernetes.io/projected/d9d24793-50b0-4807-ba64-5ee25bf8e5ff-kube-api-access-hpkph\") pod \"collect-profiles-29493795-gj5pq\" (UID: \"d9d24793-50b0-4807-ba64-5ee25bf8e5ff\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493795-gj5pq" Jan 28 19:15:00 crc kubenswrapper[4721]: I0128 19:15:00.251684 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d9d24793-50b0-4807-ba64-5ee25bf8e5ff-config-volume\") pod \"collect-profiles-29493795-gj5pq\" (UID: \"d9d24793-50b0-4807-ba64-5ee25bf8e5ff\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493795-gj5pq" Jan 28 19:15:00 crc kubenswrapper[4721]: I0128 19:15:00.353992 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hpkph\" (UniqueName: \"kubernetes.io/projected/d9d24793-50b0-4807-ba64-5ee25bf8e5ff-kube-api-access-hpkph\") pod \"collect-profiles-29493795-gj5pq\" (UID: \"d9d24793-50b0-4807-ba64-5ee25bf8e5ff\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493795-gj5pq" Jan 28 19:15:00 crc kubenswrapper[4721]: I0128 19:15:00.354387 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/d9d24793-50b0-4807-ba64-5ee25bf8e5ff-config-volume\") pod \"collect-profiles-29493795-gj5pq\" (UID: \"d9d24793-50b0-4807-ba64-5ee25bf8e5ff\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493795-gj5pq" Jan 28 19:15:00 crc kubenswrapper[4721]: I0128 19:15:00.354536 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d9d24793-50b0-4807-ba64-5ee25bf8e5ff-secret-volume\") pod \"collect-profiles-29493795-gj5pq\" (UID: \"d9d24793-50b0-4807-ba64-5ee25bf8e5ff\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493795-gj5pq" Jan 28 19:15:00 crc kubenswrapper[4721]: I0128 19:15:00.355499 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d9d24793-50b0-4807-ba64-5ee25bf8e5ff-config-volume\") pod \"collect-profiles-29493795-gj5pq\" (UID: \"d9d24793-50b0-4807-ba64-5ee25bf8e5ff\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493795-gj5pq" Jan 28 19:15:00 crc kubenswrapper[4721]: I0128 19:15:00.360414 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d9d24793-50b0-4807-ba64-5ee25bf8e5ff-secret-volume\") pod \"collect-profiles-29493795-gj5pq\" (UID: \"d9d24793-50b0-4807-ba64-5ee25bf8e5ff\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493795-gj5pq" Jan 28 19:15:00 crc kubenswrapper[4721]: I0128 19:15:00.371692 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hpkph\" (UniqueName: \"kubernetes.io/projected/d9d24793-50b0-4807-ba64-5ee25bf8e5ff-kube-api-access-hpkph\") pod \"collect-profiles-29493795-gj5pq\" (UID: \"d9d24793-50b0-4807-ba64-5ee25bf8e5ff\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493795-gj5pq" Jan 28 19:15:00 crc kubenswrapper[4721]: I0128 19:15:00.487436 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29493795-gj5pq" Jan 28 19:15:00 crc kubenswrapper[4721]: I0128 19:15:00.980752 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29493795-gj5pq"] Jan 28 19:15:01 crc kubenswrapper[4721]: I0128 19:15:01.017913 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29493795-gj5pq" event={"ID":"d9d24793-50b0-4807-ba64-5ee25bf8e5ff","Type":"ContainerStarted","Data":"c70396c8fdc0ed7928aa94ac1f653101e2221726797de8fec9979eac030cea41"} Jan 28 19:15:02 crc kubenswrapper[4721]: I0128 19:15:02.046878 4721 generic.go:334] "Generic (PLEG): container finished" podID="d9d24793-50b0-4807-ba64-5ee25bf8e5ff" containerID="ad307e0da1c461a07216b36647690de937bfa8ed6dd2664c5681f65d9e371348" exitCode=0 Jan 28 19:15:02 crc kubenswrapper[4721]: I0128 19:15:02.047772 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29493795-gj5pq" event={"ID":"d9d24793-50b0-4807-ba64-5ee25bf8e5ff","Type":"ContainerDied","Data":"ad307e0da1c461a07216b36647690de937bfa8ed6dd2664c5681f65d9e371348"} Jan 28 19:15:02 crc kubenswrapper[4721]: I0128 19:15:02.486814 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-jbcxn"] Jan 28 19:15:02 crc kubenswrapper[4721]: I0128 19:15:02.489861 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-jbcxn" Jan 28 19:15:02 crc kubenswrapper[4721]: I0128 19:15:02.498746 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-jbcxn"] Jan 28 19:15:02 crc kubenswrapper[4721]: I0128 19:15:02.628866 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/591c9edf-d741-4c18-b5f0-8ceaae46e3ff-utilities\") pod \"community-operators-jbcxn\" (UID: \"591c9edf-d741-4c18-b5f0-8ceaae46e3ff\") " pod="openshift-marketplace/community-operators-jbcxn" Jan 28 19:15:02 crc kubenswrapper[4721]: I0128 19:15:02.628964 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/591c9edf-d741-4c18-b5f0-8ceaae46e3ff-catalog-content\") pod \"community-operators-jbcxn\" (UID: \"591c9edf-d741-4c18-b5f0-8ceaae46e3ff\") " pod="openshift-marketplace/community-operators-jbcxn" Jan 28 19:15:02 crc kubenswrapper[4721]: I0128 19:15:02.629068 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ksxt2\" (UniqueName: \"kubernetes.io/projected/591c9edf-d741-4c18-b5f0-8ceaae46e3ff-kube-api-access-ksxt2\") pod \"community-operators-jbcxn\" (UID: \"591c9edf-d741-4c18-b5f0-8ceaae46e3ff\") " pod="openshift-marketplace/community-operators-jbcxn" Jan 28 19:15:02 crc kubenswrapper[4721]: I0128 19:15:02.731831 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/591c9edf-d741-4c18-b5f0-8ceaae46e3ff-utilities\") pod \"community-operators-jbcxn\" (UID: \"591c9edf-d741-4c18-b5f0-8ceaae46e3ff\") " pod="openshift-marketplace/community-operators-jbcxn" Jan 28 19:15:02 crc kubenswrapper[4721]: I0128 19:15:02.731947 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/591c9edf-d741-4c18-b5f0-8ceaae46e3ff-catalog-content\") pod \"community-operators-jbcxn\" (UID: \"591c9edf-d741-4c18-b5f0-8ceaae46e3ff\") " pod="openshift-marketplace/community-operators-jbcxn" Jan 28 19:15:02 crc kubenswrapper[4721]: I0128 19:15:02.732061 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ksxt2\" (UniqueName: \"kubernetes.io/projected/591c9edf-d741-4c18-b5f0-8ceaae46e3ff-kube-api-access-ksxt2\") pod \"community-operators-jbcxn\" (UID: \"591c9edf-d741-4c18-b5f0-8ceaae46e3ff\") " pod="openshift-marketplace/community-operators-jbcxn" Jan 28 19:15:02 crc kubenswrapper[4721]: I0128 19:15:02.732442 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/591c9edf-d741-4c18-b5f0-8ceaae46e3ff-utilities\") pod \"community-operators-jbcxn\" (UID: \"591c9edf-d741-4c18-b5f0-8ceaae46e3ff\") " pod="openshift-marketplace/community-operators-jbcxn" Jan 28 19:15:02 crc kubenswrapper[4721]: I0128 19:15:02.732442 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/591c9edf-d741-4c18-b5f0-8ceaae46e3ff-catalog-content\") pod \"community-operators-jbcxn\" (UID: \"591c9edf-d741-4c18-b5f0-8ceaae46e3ff\") " pod="openshift-marketplace/community-operators-jbcxn" Jan 28 19:15:02 crc kubenswrapper[4721]: I0128 19:15:02.758771 4721 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-ksxt2\" (UniqueName: \"kubernetes.io/projected/591c9edf-d741-4c18-b5f0-8ceaae46e3ff-kube-api-access-ksxt2\") pod \"community-operators-jbcxn\" (UID: \"591c9edf-d741-4c18-b5f0-8ceaae46e3ff\") " pod="openshift-marketplace/community-operators-jbcxn" Jan 28 19:15:02 crc kubenswrapper[4721]: I0128 19:15:02.820723 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-jbcxn" Jan 28 19:15:03 crc kubenswrapper[4721]: I0128 19:15:03.410884 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-jbcxn"] Jan 28 19:15:03 crc kubenswrapper[4721]: I0128 19:15:03.591008 4721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29493795-gj5pq" Jan 28 19:15:03 crc kubenswrapper[4721]: I0128 19:15:03.654659 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d9d24793-50b0-4807-ba64-5ee25bf8e5ff-config-volume\") pod \"d9d24793-50b0-4807-ba64-5ee25bf8e5ff\" (UID: \"d9d24793-50b0-4807-ba64-5ee25bf8e5ff\") " Jan 28 19:15:03 crc kubenswrapper[4721]: I0128 19:15:03.654781 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d9d24793-50b0-4807-ba64-5ee25bf8e5ff-secret-volume\") pod \"d9d24793-50b0-4807-ba64-5ee25bf8e5ff\" (UID: \"d9d24793-50b0-4807-ba64-5ee25bf8e5ff\") " Jan 28 19:15:03 crc kubenswrapper[4721]: I0128 19:15:03.654876 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hpkph\" (UniqueName: \"kubernetes.io/projected/d9d24793-50b0-4807-ba64-5ee25bf8e5ff-kube-api-access-hpkph\") pod \"d9d24793-50b0-4807-ba64-5ee25bf8e5ff\" (UID: \"d9d24793-50b0-4807-ba64-5ee25bf8e5ff\") " Jan 28 19:15:03 crc kubenswrapper[4721]: I0128 19:15:03.656940 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d9d24793-50b0-4807-ba64-5ee25bf8e5ff-config-volume" (OuterVolumeSpecName: "config-volume") pod "d9d24793-50b0-4807-ba64-5ee25bf8e5ff" (UID: "d9d24793-50b0-4807-ba64-5ee25bf8e5ff"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 19:15:03 crc kubenswrapper[4721]: I0128 19:15:03.665436 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d9d24793-50b0-4807-ba64-5ee25bf8e5ff-kube-api-access-hpkph" (OuterVolumeSpecName: "kube-api-access-hpkph") pod "d9d24793-50b0-4807-ba64-5ee25bf8e5ff" (UID: "d9d24793-50b0-4807-ba64-5ee25bf8e5ff"). InnerVolumeSpecName "kube-api-access-hpkph". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 19:15:03 crc kubenswrapper[4721]: I0128 19:15:03.665470 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d9d24793-50b0-4807-ba64-5ee25bf8e5ff-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "d9d24793-50b0-4807-ba64-5ee25bf8e5ff" (UID: "d9d24793-50b0-4807-ba64-5ee25bf8e5ff"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 19:15:03 crc kubenswrapper[4721]: I0128 19:15:03.758355 4721 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d9d24793-50b0-4807-ba64-5ee25bf8e5ff-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 28 19:15:03 crc kubenswrapper[4721]: I0128 19:15:03.758393 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hpkph\" (UniqueName: \"kubernetes.io/projected/d9d24793-50b0-4807-ba64-5ee25bf8e5ff-kube-api-access-hpkph\") on node \"crc\" DevicePath \"\"" Jan 28 19:15:03 crc kubenswrapper[4721]: I0128 19:15:03.758406 4721 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d9d24793-50b0-4807-ba64-5ee25bf8e5ff-config-volume\") on node \"crc\" DevicePath \"\"" Jan 28 19:15:04 crc kubenswrapper[4721]: I0128 19:15:04.092277 4721 generic.go:334] "Generic (PLEG): container finished" podID="591c9edf-d741-4c18-b5f0-8ceaae46e3ff" containerID="b72a15713786e53e0b913f252ef1e87648a082947da136b5978b4c4a2c985081" exitCode=0 Jan 28 19:15:04 crc kubenswrapper[4721]: I0128 19:15:04.092342 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jbcxn" event={"ID":"591c9edf-d741-4c18-b5f0-8ceaae46e3ff","Type":"ContainerDied","Data":"b72a15713786e53e0b913f252ef1e87648a082947da136b5978b4c4a2c985081"} Jan 28 19:15:04 crc kubenswrapper[4721]: I0128 19:15:04.092683 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jbcxn" event={"ID":"591c9edf-d741-4c18-b5f0-8ceaae46e3ff","Type":"ContainerStarted","Data":"22fe01710b3e6ad55e6577c611c448abfad9b56454bbc891466f8ef2dd08ec03"} Jan 28 19:15:04 crc kubenswrapper[4721]: I0128 19:15:04.095321 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29493795-gj5pq" event={"ID":"d9d24793-50b0-4807-ba64-5ee25bf8e5ff","Type":"ContainerDied","Data":"c70396c8fdc0ed7928aa94ac1f653101e2221726797de8fec9979eac030cea41"} Jan 28 19:15:04 crc kubenswrapper[4721]: I0128 19:15:04.095347 4721 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c70396c8fdc0ed7928aa94ac1f653101e2221726797de8fec9979eac030cea41" Jan 28 19:15:04 crc kubenswrapper[4721]: I0128 19:15:04.095367 4721 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29493795-gj5pq" Jan 28 19:15:04 crc kubenswrapper[4721]: I0128 19:15:04.679472 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29493750-pb8r2"] Jan 28 19:15:04 crc kubenswrapper[4721]: I0128 19:15:04.688582 4721 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29493750-pb8r2"] Jan 28 19:15:05 crc kubenswrapper[4721]: I0128 19:15:05.109940 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jbcxn" event={"ID":"591c9edf-d741-4c18-b5f0-8ceaae46e3ff","Type":"ContainerStarted","Data":"cb792bb3ccf5dcdd466a1b60e59bd761d7efdc36bba139ec5adbbb8ef2a5f1a5"} Jan 28 19:15:05 crc kubenswrapper[4721]: I0128 19:15:05.544340 4721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="db74784c-afbc-482a-8e2d-18c5bb898a9b" path="/var/lib/kubelet/pods/db74784c-afbc-482a-8e2d-18c5bb898a9b/volumes" Jan 28 19:15:07 crc kubenswrapper[4721]: I0128 19:15:07.155793 4721 generic.go:334] "Generic (PLEG): container finished" podID="591c9edf-d741-4c18-b5f0-8ceaae46e3ff" containerID="cb792bb3ccf5dcdd466a1b60e59bd761d7efdc36bba139ec5adbbb8ef2a5f1a5" exitCode=0 Jan 28 19:15:07 crc kubenswrapper[4721]: I0128 19:15:07.155821 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jbcxn" event={"ID":"591c9edf-d741-4c18-b5f0-8ceaae46e3ff","Type":"ContainerDied","Data":"cb792bb3ccf5dcdd466a1b60e59bd761d7efdc36bba139ec5adbbb8ef2a5f1a5"} Jan 28 19:15:08 crc kubenswrapper[4721]: I0128 19:15:08.169715 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jbcxn" event={"ID":"591c9edf-d741-4c18-b5f0-8ceaae46e3ff","Type":"ContainerStarted","Data":"5bcee1277b37d6f1b2cb32fce7b1510caf5715e61f158305222924728ec2ae81"} Jan 28 19:15:08 crc kubenswrapper[4721]: I0128 19:15:08.188354 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-jbcxn" podStartSLOduration=2.732587862 podStartE2EDuration="6.18832605s" podCreationTimestamp="2026-01-28 19:15:02 +0000 UTC" firstStartedPulling="2026-01-28 19:15:04.094800092 +0000 UTC m=+2469.820105652" lastFinishedPulling="2026-01-28 19:15:07.55053829 +0000 UTC m=+2473.275843840" observedRunningTime="2026-01-28 19:15:08.187747161 +0000 UTC m=+2473.913052721" watchObservedRunningTime="2026-01-28 19:15:08.18832605 +0000 UTC m=+2473.913631610" Jan 28 19:15:10 crc kubenswrapper[4721]: I0128 19:15:10.529592 4721 scope.go:117] "RemoveContainer" containerID="4ac60e4a43c972329d52f34a6184c3179be27b5f07c30d84c2acd45b4f1b5d47" Jan 28 19:15:10 crc kubenswrapper[4721]: E0128 19:15:10.530217 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-76rx2_openshift-machine-config-operator(6e3427a4-9a03-4a08-bf7f-7a5e96290ad6)\"" pod="openshift-machine-config-operator/machine-config-daemon-76rx2" podUID="6e3427a4-9a03-4a08-bf7f-7a5e96290ad6" Jan 28 19:15:12 crc kubenswrapper[4721]: I0128 19:15:12.820987 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-jbcxn" Jan 28 19:15:12 crc kubenswrapper[4721]: I0128 19:15:12.821383 4721 kubelet.go:2542] "SyncLoop 
(probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-jbcxn" Jan 28 19:15:12 crc kubenswrapper[4721]: I0128 19:15:12.895271 4721 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-jbcxn" Jan 28 19:15:13 crc kubenswrapper[4721]: I0128 19:15:13.275810 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-jbcxn" Jan 28 19:15:13 crc kubenswrapper[4721]: I0128 19:15:13.328388 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-jbcxn"] Jan 28 19:15:15 crc kubenswrapper[4721]: I0128 19:15:15.244375 4721 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-jbcxn" podUID="591c9edf-d741-4c18-b5f0-8ceaae46e3ff" containerName="registry-server" containerID="cri-o://5bcee1277b37d6f1b2cb32fce7b1510caf5715e61f158305222924728ec2ae81" gracePeriod=2 Jan 28 19:15:15 crc kubenswrapper[4721]: I0128 19:15:15.742902 4721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-jbcxn" Jan 28 19:15:15 crc kubenswrapper[4721]: I0128 19:15:15.870449 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/591c9edf-d741-4c18-b5f0-8ceaae46e3ff-catalog-content\") pod \"591c9edf-d741-4c18-b5f0-8ceaae46e3ff\" (UID: \"591c9edf-d741-4c18-b5f0-8ceaae46e3ff\") " Jan 28 19:15:15 crc kubenswrapper[4721]: I0128 19:15:15.870650 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/591c9edf-d741-4c18-b5f0-8ceaae46e3ff-utilities\") pod \"591c9edf-d741-4c18-b5f0-8ceaae46e3ff\" (UID: \"591c9edf-d741-4c18-b5f0-8ceaae46e3ff\") " Jan 28 19:15:15 crc kubenswrapper[4721]: I0128 19:15:15.871027 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ksxt2\" (UniqueName: \"kubernetes.io/projected/591c9edf-d741-4c18-b5f0-8ceaae46e3ff-kube-api-access-ksxt2\") pod \"591c9edf-d741-4c18-b5f0-8ceaae46e3ff\" (UID: \"591c9edf-d741-4c18-b5f0-8ceaae46e3ff\") " Jan 28 19:15:15 crc kubenswrapper[4721]: I0128 19:15:15.871641 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/591c9edf-d741-4c18-b5f0-8ceaae46e3ff-utilities" (OuterVolumeSpecName: "utilities") pod "591c9edf-d741-4c18-b5f0-8ceaae46e3ff" (UID: "591c9edf-d741-4c18-b5f0-8ceaae46e3ff"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 19:15:15 crc kubenswrapper[4721]: I0128 19:15:15.872784 4721 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/591c9edf-d741-4c18-b5f0-8ceaae46e3ff-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 19:15:15 crc kubenswrapper[4721]: I0128 19:15:15.876937 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/591c9edf-d741-4c18-b5f0-8ceaae46e3ff-kube-api-access-ksxt2" (OuterVolumeSpecName: "kube-api-access-ksxt2") pod "591c9edf-d741-4c18-b5f0-8ceaae46e3ff" (UID: "591c9edf-d741-4c18-b5f0-8ceaae46e3ff"). InnerVolumeSpecName "kube-api-access-ksxt2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 19:15:15 crc kubenswrapper[4721]: I0128 19:15:15.920584 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/591c9edf-d741-4c18-b5f0-8ceaae46e3ff-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "591c9edf-d741-4c18-b5f0-8ceaae46e3ff" (UID: "591c9edf-d741-4c18-b5f0-8ceaae46e3ff"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 19:15:15 crc kubenswrapper[4721]: I0128 19:15:15.975337 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ksxt2\" (UniqueName: \"kubernetes.io/projected/591c9edf-d741-4c18-b5f0-8ceaae46e3ff-kube-api-access-ksxt2\") on node \"crc\" DevicePath \"\"" Jan 28 19:15:15 crc kubenswrapper[4721]: I0128 19:15:15.975393 4721 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/591c9edf-d741-4c18-b5f0-8ceaae46e3ff-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 19:15:16 crc kubenswrapper[4721]: I0128 19:15:16.256576 4721 generic.go:334] "Generic (PLEG): container finished" podID="591c9edf-d741-4c18-b5f0-8ceaae46e3ff" containerID="5bcee1277b37d6f1b2cb32fce7b1510caf5715e61f158305222924728ec2ae81" exitCode=0 Jan 28 19:15:16 crc kubenswrapper[4721]: I0128 19:15:16.256627 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jbcxn" event={"ID":"591c9edf-d741-4c18-b5f0-8ceaae46e3ff","Type":"ContainerDied","Data":"5bcee1277b37d6f1b2cb32fce7b1510caf5715e61f158305222924728ec2ae81"} Jan 28 19:15:16 crc kubenswrapper[4721]: I0128 19:15:16.256666 4721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-jbcxn" Jan 28 19:15:16 crc kubenswrapper[4721]: I0128 19:15:16.256688 4721 scope.go:117] "RemoveContainer" containerID="5bcee1277b37d6f1b2cb32fce7b1510caf5715e61f158305222924728ec2ae81" Jan 28 19:15:16 crc kubenswrapper[4721]: I0128 19:15:16.256674 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jbcxn" event={"ID":"591c9edf-d741-4c18-b5f0-8ceaae46e3ff","Type":"ContainerDied","Data":"22fe01710b3e6ad55e6577c611c448abfad9b56454bbc891466f8ef2dd08ec03"} Jan 28 19:15:16 crc kubenswrapper[4721]: I0128 19:15:16.294996 4721 scope.go:117] "RemoveContainer" containerID="cb792bb3ccf5dcdd466a1b60e59bd761d7efdc36bba139ec5adbbb8ef2a5f1a5" Jan 28 19:15:16 crc kubenswrapper[4721]: I0128 19:15:16.297339 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-jbcxn"] Jan 28 19:15:16 crc kubenswrapper[4721]: I0128 19:15:16.314503 4721 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-jbcxn"] Jan 28 19:15:16 crc kubenswrapper[4721]: I0128 19:15:16.325902 4721 scope.go:117] "RemoveContainer" containerID="b72a15713786e53e0b913f252ef1e87648a082947da136b5978b4c4a2c985081" Jan 28 19:15:16 crc kubenswrapper[4721]: I0128 19:15:16.368674 4721 scope.go:117] "RemoveContainer" containerID="5bcee1277b37d6f1b2cb32fce7b1510caf5715e61f158305222924728ec2ae81" Jan 28 19:15:16 crc kubenswrapper[4721]: E0128 19:15:16.370703 4721 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5bcee1277b37d6f1b2cb32fce7b1510caf5715e61f158305222924728ec2ae81\": container with ID starting with 
5bcee1277b37d6f1b2cb32fce7b1510caf5715e61f158305222924728ec2ae81 not found: ID does not exist" containerID="5bcee1277b37d6f1b2cb32fce7b1510caf5715e61f158305222924728ec2ae81" Jan 28 19:15:16 crc kubenswrapper[4721]: I0128 19:15:16.370784 4721 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5bcee1277b37d6f1b2cb32fce7b1510caf5715e61f158305222924728ec2ae81"} err="failed to get container status \"5bcee1277b37d6f1b2cb32fce7b1510caf5715e61f158305222924728ec2ae81\": rpc error: code = NotFound desc = could not find container \"5bcee1277b37d6f1b2cb32fce7b1510caf5715e61f158305222924728ec2ae81\": container with ID starting with 5bcee1277b37d6f1b2cb32fce7b1510caf5715e61f158305222924728ec2ae81 not found: ID does not exist" Jan 28 19:15:16 crc kubenswrapper[4721]: I0128 19:15:16.370826 4721 scope.go:117] "RemoveContainer" containerID="cb792bb3ccf5dcdd466a1b60e59bd761d7efdc36bba139ec5adbbb8ef2a5f1a5" Jan 28 19:15:16 crc kubenswrapper[4721]: E0128 19:15:16.371384 4721 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cb792bb3ccf5dcdd466a1b60e59bd761d7efdc36bba139ec5adbbb8ef2a5f1a5\": container with ID starting with cb792bb3ccf5dcdd466a1b60e59bd761d7efdc36bba139ec5adbbb8ef2a5f1a5 not found: ID does not exist" containerID="cb792bb3ccf5dcdd466a1b60e59bd761d7efdc36bba139ec5adbbb8ef2a5f1a5" Jan 28 19:15:16 crc kubenswrapper[4721]: I0128 19:15:16.371462 4721 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cb792bb3ccf5dcdd466a1b60e59bd761d7efdc36bba139ec5adbbb8ef2a5f1a5"} err="failed to get container status \"cb792bb3ccf5dcdd466a1b60e59bd761d7efdc36bba139ec5adbbb8ef2a5f1a5\": rpc error: code = NotFound desc = could not find container \"cb792bb3ccf5dcdd466a1b60e59bd761d7efdc36bba139ec5adbbb8ef2a5f1a5\": container with ID starting with cb792bb3ccf5dcdd466a1b60e59bd761d7efdc36bba139ec5adbbb8ef2a5f1a5 not found: ID does not exist" Jan 28 19:15:16 crc kubenswrapper[4721]: I0128 19:15:16.371526 4721 scope.go:117] "RemoveContainer" containerID="b72a15713786e53e0b913f252ef1e87648a082947da136b5978b4c4a2c985081" Jan 28 19:15:16 crc kubenswrapper[4721]: E0128 19:15:16.371913 4721 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b72a15713786e53e0b913f252ef1e87648a082947da136b5978b4c4a2c985081\": container with ID starting with b72a15713786e53e0b913f252ef1e87648a082947da136b5978b4c4a2c985081 not found: ID does not exist" containerID="b72a15713786e53e0b913f252ef1e87648a082947da136b5978b4c4a2c985081" Jan 28 19:15:16 crc kubenswrapper[4721]: I0128 19:15:16.371943 4721 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b72a15713786e53e0b913f252ef1e87648a082947da136b5978b4c4a2c985081"} err="failed to get container status \"b72a15713786e53e0b913f252ef1e87648a082947da136b5978b4c4a2c985081\": rpc error: code = NotFound desc = could not find container \"b72a15713786e53e0b913f252ef1e87648a082947da136b5978b4c4a2c985081\": container with ID starting with b72a15713786e53e0b913f252ef1e87648a082947da136b5978b4c4a2c985081 not found: ID does not exist" Jan 28 19:15:17 crc kubenswrapper[4721]: I0128 19:15:17.539929 4721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="591c9edf-d741-4c18-b5f0-8ceaae46e3ff" path="/var/lib/kubelet/pods/591c9edf-d741-4c18-b5f0-8ceaae46e3ff/volumes" Jan 28 19:15:20 crc kubenswrapper[4721]: I0128 19:15:20.828944 
4721 scope.go:117] "RemoveContainer" containerID="b7dc2c4ad7e11b8d1201093374a25307c75a3b135d8dfa9b07bbafd2a30f0fed" Jan 28 19:15:22 crc kubenswrapper[4721]: I0128 19:15:22.529972 4721 scope.go:117] "RemoveContainer" containerID="4ac60e4a43c972329d52f34a6184c3179be27b5f07c30d84c2acd45b4f1b5d47" Jan 28 19:15:22 crc kubenswrapper[4721]: E0128 19:15:22.530469 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-76rx2_openshift-machine-config-operator(6e3427a4-9a03-4a08-bf7f-7a5e96290ad6)\"" pod="openshift-machine-config-operator/machine-config-daemon-76rx2" podUID="6e3427a4-9a03-4a08-bf7f-7a5e96290ad6" Jan 28 19:15:36 crc kubenswrapper[4721]: I0128 19:15:36.528980 4721 scope.go:117] "RemoveContainer" containerID="4ac60e4a43c972329d52f34a6184c3179be27b5f07c30d84c2acd45b4f1b5d47" Jan 28 19:15:37 crc kubenswrapper[4721]: I0128 19:15:37.474657 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-76rx2" event={"ID":"6e3427a4-9a03-4a08-bf7f-7a5e96290ad6","Type":"ContainerStarted","Data":"c08c37963e8ba05c6aa8626566a97d013c31084d8568750d6fc1a68a1adf0f7c"} Jan 28 19:17:15 crc kubenswrapper[4721]: I0128 19:17:15.554617 4721 generic.go:334] "Generic (PLEG): container finished" podID="349859e1-1716-4304-9352-b9caa4c046be" containerID="85c89277d8eca886b9e0a32e1d0345ee0d3d8e6b6ecc0cc5fe572b02bf7375dc" exitCode=0 Jan 28 19:17:15 crc kubenswrapper[4721]: I0128 19:17:15.554698 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-s49zh" event={"ID":"349859e1-1716-4304-9352-b9caa4c046be","Type":"ContainerDied","Data":"85c89277d8eca886b9e0a32e1d0345ee0d3d8e6b6ecc0cc5fe572b02bf7375dc"} Jan 28 19:17:17 crc kubenswrapper[4721]: I0128 19:17:17.140525 4721 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-s49zh" Jan 28 19:17:17 crc kubenswrapper[4721]: I0128 19:17:17.311800 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/349859e1-1716-4304-9352-b9caa4c046be-libvirt-secret-0\") pod \"349859e1-1716-4304-9352-b9caa4c046be\" (UID: \"349859e1-1716-4304-9352-b9caa4c046be\") " Jan 28 19:17:17 crc kubenswrapper[4721]: I0128 19:17:17.312484 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/349859e1-1716-4304-9352-b9caa4c046be-inventory\") pod \"349859e1-1716-4304-9352-b9caa4c046be\" (UID: \"349859e1-1716-4304-9352-b9caa4c046be\") " Jan 28 19:17:17 crc kubenswrapper[4721]: I0128 19:17:17.312555 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/349859e1-1716-4304-9352-b9caa4c046be-ssh-key-openstack-edpm-ipam\") pod \"349859e1-1716-4304-9352-b9caa4c046be\" (UID: \"349859e1-1716-4304-9352-b9caa4c046be\") " Jan 28 19:17:17 crc kubenswrapper[4721]: I0128 19:17:17.312771 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cq67n\" (UniqueName: \"kubernetes.io/projected/349859e1-1716-4304-9352-b9caa4c046be-kube-api-access-cq67n\") pod \"349859e1-1716-4304-9352-b9caa4c046be\" (UID: \"349859e1-1716-4304-9352-b9caa4c046be\") " Jan 28 19:17:17 crc kubenswrapper[4721]: I0128 19:17:17.312819 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/349859e1-1716-4304-9352-b9caa4c046be-libvirt-combined-ca-bundle\") pod \"349859e1-1716-4304-9352-b9caa4c046be\" (UID: \"349859e1-1716-4304-9352-b9caa4c046be\") " Jan 28 19:17:17 crc kubenswrapper[4721]: I0128 19:17:17.318808 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/349859e1-1716-4304-9352-b9caa4c046be-kube-api-access-cq67n" (OuterVolumeSpecName: "kube-api-access-cq67n") pod "349859e1-1716-4304-9352-b9caa4c046be" (UID: "349859e1-1716-4304-9352-b9caa4c046be"). InnerVolumeSpecName "kube-api-access-cq67n". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 19:17:17 crc kubenswrapper[4721]: I0128 19:17:17.334095 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/349859e1-1716-4304-9352-b9caa4c046be-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "349859e1-1716-4304-9352-b9caa4c046be" (UID: "349859e1-1716-4304-9352-b9caa4c046be"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 19:17:17 crc kubenswrapper[4721]: I0128 19:17:17.348499 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/349859e1-1716-4304-9352-b9caa4c046be-inventory" (OuterVolumeSpecName: "inventory") pod "349859e1-1716-4304-9352-b9caa4c046be" (UID: "349859e1-1716-4304-9352-b9caa4c046be"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 19:17:17 crc kubenswrapper[4721]: I0128 19:17:17.351005 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/349859e1-1716-4304-9352-b9caa4c046be-libvirt-secret-0" (OuterVolumeSpecName: "libvirt-secret-0") pod "349859e1-1716-4304-9352-b9caa4c046be" (UID: "349859e1-1716-4304-9352-b9caa4c046be"). InnerVolumeSpecName "libvirt-secret-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 19:17:17 crc kubenswrapper[4721]: I0128 19:17:17.351569 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/349859e1-1716-4304-9352-b9caa4c046be-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "349859e1-1716-4304-9352-b9caa4c046be" (UID: "349859e1-1716-4304-9352-b9caa4c046be"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 19:17:17 crc kubenswrapper[4721]: I0128 19:17:17.415749 4721 reconciler_common.go:293] "Volume detached for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/349859e1-1716-4304-9352-b9caa4c046be-libvirt-secret-0\") on node \"crc\" DevicePath \"\"" Jan 28 19:17:17 crc kubenswrapper[4721]: I0128 19:17:17.415815 4721 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/349859e1-1716-4304-9352-b9caa4c046be-inventory\") on node \"crc\" DevicePath \"\"" Jan 28 19:17:17 crc kubenswrapper[4721]: I0128 19:17:17.415827 4721 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/349859e1-1716-4304-9352-b9caa4c046be-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 28 19:17:17 crc kubenswrapper[4721]: I0128 19:17:17.415841 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cq67n\" (UniqueName: \"kubernetes.io/projected/349859e1-1716-4304-9352-b9caa4c046be-kube-api-access-cq67n\") on node \"crc\" DevicePath \"\"" Jan 28 19:17:17 crc kubenswrapper[4721]: I0128 19:17:17.415850 4721 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/349859e1-1716-4304-9352-b9caa4c046be-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 19:17:17 crc kubenswrapper[4721]: I0128 19:17:17.578036 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-s49zh" event={"ID":"349859e1-1716-4304-9352-b9caa4c046be","Type":"ContainerDied","Data":"a188fd4bc9c52a87b9c1a30c06cd752df2a6e1b82ac524f67cef08234daa3f36"} Jan 28 19:17:17 crc kubenswrapper[4721]: I0128 19:17:17.578320 4721 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a188fd4bc9c52a87b9c1a30c06cd752df2a6e1b82ac524f67cef08234daa3f36" Jan 28 19:17:17 crc kubenswrapper[4721]: I0128 19:17:17.578412 4721 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-s49zh" Jan 28 19:17:17 crc kubenswrapper[4721]: I0128 19:17:17.676662 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-6fthv"] Jan 28 19:17:17 crc kubenswrapper[4721]: E0128 19:17:17.677310 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="349859e1-1716-4304-9352-b9caa4c046be" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Jan 28 19:17:17 crc kubenswrapper[4721]: I0128 19:17:17.682284 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="349859e1-1716-4304-9352-b9caa4c046be" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Jan 28 19:17:17 crc kubenswrapper[4721]: E0128 19:17:17.682354 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="591c9edf-d741-4c18-b5f0-8ceaae46e3ff" containerName="registry-server" Jan 28 19:17:17 crc kubenswrapper[4721]: I0128 19:17:17.682364 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="591c9edf-d741-4c18-b5f0-8ceaae46e3ff" containerName="registry-server" Jan 28 19:17:17 crc kubenswrapper[4721]: E0128 19:17:17.682402 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d9d24793-50b0-4807-ba64-5ee25bf8e5ff" containerName="collect-profiles" Jan 28 19:17:17 crc kubenswrapper[4721]: I0128 19:17:17.682410 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="d9d24793-50b0-4807-ba64-5ee25bf8e5ff" containerName="collect-profiles" Jan 28 19:17:17 crc kubenswrapper[4721]: E0128 19:17:17.682465 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="591c9edf-d741-4c18-b5f0-8ceaae46e3ff" containerName="extract-content" Jan 28 19:17:17 crc kubenswrapper[4721]: I0128 19:17:17.682474 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="591c9edf-d741-4c18-b5f0-8ceaae46e3ff" containerName="extract-content" Jan 28 19:17:17 crc kubenswrapper[4721]: E0128 19:17:17.682490 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="591c9edf-d741-4c18-b5f0-8ceaae46e3ff" containerName="extract-utilities" Jan 28 19:17:17 crc kubenswrapper[4721]: I0128 19:17:17.682499 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="591c9edf-d741-4c18-b5f0-8ceaae46e3ff" containerName="extract-utilities" Jan 28 19:17:17 crc kubenswrapper[4721]: I0128 19:17:17.682990 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="591c9edf-d741-4c18-b5f0-8ceaae46e3ff" containerName="registry-server" Jan 28 19:17:17 crc kubenswrapper[4721]: I0128 19:17:17.683030 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="349859e1-1716-4304-9352-b9caa4c046be" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Jan 28 19:17:17 crc kubenswrapper[4721]: I0128 19:17:17.683054 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="d9d24793-50b0-4807-ba64-5ee25bf8e5ff" containerName="collect-profiles" Jan 28 19:17:17 crc kubenswrapper[4721]: I0128 19:17:17.684233 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-6fthv" Jan 28 19:17:17 crc kubenswrapper[4721]: I0128 19:17:17.689383 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-migration-ssh-key" Jan 28 19:17:17 crc kubenswrapper[4721]: I0128 19:17:17.689397 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"nova-extra-config" Jan 28 19:17:17 crc kubenswrapper[4721]: I0128 19:17:17.689606 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-7sc4s" Jan 28 19:17:17 crc kubenswrapper[4721]: I0128 19:17:17.689854 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-compute-config" Jan 28 19:17:17 crc kubenswrapper[4721]: I0128 19:17:17.690340 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 28 19:17:17 crc kubenswrapper[4721]: I0128 19:17:17.690446 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 28 19:17:17 crc kubenswrapper[4721]: I0128 19:17:17.691156 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 28 19:17:17 crc kubenswrapper[4721]: I0128 19:17:17.700924 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-6fthv"] Jan 28 19:17:17 crc kubenswrapper[4721]: I0128 19:17:17.824725 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/8dcae945-3742-46b5-b6ac-c8ff95e2946e-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-6fthv\" (UID: \"8dcae945-3742-46b5-b6ac-c8ff95e2946e\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-6fthv" Jan 28 19:17:17 crc kubenswrapper[4721]: I0128 19:17:17.824802 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/8dcae945-3742-46b5-b6ac-c8ff95e2946e-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-6fthv\" (UID: \"8dcae945-3742-46b5-b6ac-c8ff95e2946e\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-6fthv" Jan 28 19:17:17 crc kubenswrapper[4721]: I0128 19:17:17.824882 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/8dcae945-3742-46b5-b6ac-c8ff95e2946e-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-6fthv\" (UID: \"8dcae945-3742-46b5-b6ac-c8ff95e2946e\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-6fthv" Jan 28 19:17:17 crc kubenswrapper[4721]: I0128 19:17:17.824959 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/8dcae945-3742-46b5-b6ac-c8ff95e2946e-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-6fthv\" (UID: \"8dcae945-3742-46b5-b6ac-c8ff95e2946e\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-6fthv" Jan 28 19:17:17 crc kubenswrapper[4721]: I0128 19:17:17.824989 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rpw2n\" (UniqueName: 
\"kubernetes.io/projected/8dcae945-3742-46b5-b6ac-c8ff95e2946e-kube-api-access-rpw2n\") pod \"nova-edpm-deployment-openstack-edpm-ipam-6fthv\" (UID: \"8dcae945-3742-46b5-b6ac-c8ff95e2946e\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-6fthv" Jan 28 19:17:17 crc kubenswrapper[4721]: I0128 19:17:17.825031 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/8dcae945-3742-46b5-b6ac-c8ff95e2946e-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-6fthv\" (UID: \"8dcae945-3742-46b5-b6ac-c8ff95e2946e\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-6fthv" Jan 28 19:17:17 crc kubenswrapper[4721]: I0128 19:17:17.825053 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8dcae945-3742-46b5-b6ac-c8ff95e2946e-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-6fthv\" (UID: \"8dcae945-3742-46b5-b6ac-c8ff95e2946e\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-6fthv" Jan 28 19:17:17 crc kubenswrapper[4721]: I0128 19:17:17.825593 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8dcae945-3742-46b5-b6ac-c8ff95e2946e-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-6fthv\" (UID: \"8dcae945-3742-46b5-b6ac-c8ff95e2946e\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-6fthv" Jan 28 19:17:17 crc kubenswrapper[4721]: I0128 19:17:17.825794 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8dcae945-3742-46b5-b6ac-c8ff95e2946e-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-6fthv\" (UID: \"8dcae945-3742-46b5-b6ac-c8ff95e2946e\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-6fthv" Jan 28 19:17:17 crc kubenswrapper[4721]: I0128 19:17:17.928259 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8dcae945-3742-46b5-b6ac-c8ff95e2946e-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-6fthv\" (UID: \"8dcae945-3742-46b5-b6ac-c8ff95e2946e\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-6fthv" Jan 28 19:17:17 crc kubenswrapper[4721]: I0128 19:17:17.928361 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8dcae945-3742-46b5-b6ac-c8ff95e2946e-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-6fthv\" (UID: \"8dcae945-3742-46b5-b6ac-c8ff95e2946e\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-6fthv" Jan 28 19:17:17 crc kubenswrapper[4721]: I0128 19:17:17.928432 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8dcae945-3742-46b5-b6ac-c8ff95e2946e-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-6fthv\" (UID: \"8dcae945-3742-46b5-b6ac-c8ff95e2946e\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-6fthv" Jan 28 19:17:17 crc kubenswrapper[4721]: I0128 19:17:17.928472 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-0\" 
(UniqueName: \"kubernetes.io/secret/8dcae945-3742-46b5-b6ac-c8ff95e2946e-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-6fthv\" (UID: \"8dcae945-3742-46b5-b6ac-c8ff95e2946e\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-6fthv" Jan 28 19:17:17 crc kubenswrapper[4721]: I0128 19:17:17.928496 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/8dcae945-3742-46b5-b6ac-c8ff95e2946e-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-6fthv\" (UID: \"8dcae945-3742-46b5-b6ac-c8ff95e2946e\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-6fthv" Jan 28 19:17:17 crc kubenswrapper[4721]: I0128 19:17:17.928537 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/8dcae945-3742-46b5-b6ac-c8ff95e2946e-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-6fthv\" (UID: \"8dcae945-3742-46b5-b6ac-c8ff95e2946e\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-6fthv" Jan 28 19:17:17 crc kubenswrapper[4721]: I0128 19:17:17.929251 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/8dcae945-3742-46b5-b6ac-c8ff95e2946e-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-6fthv\" (UID: \"8dcae945-3742-46b5-b6ac-c8ff95e2946e\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-6fthv" Jan 28 19:17:17 crc kubenswrapper[4721]: I0128 19:17:17.929318 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rpw2n\" (UniqueName: \"kubernetes.io/projected/8dcae945-3742-46b5-b6ac-c8ff95e2946e-kube-api-access-rpw2n\") pod \"nova-edpm-deployment-openstack-edpm-ipam-6fthv\" (UID: \"8dcae945-3742-46b5-b6ac-c8ff95e2946e\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-6fthv" Jan 28 19:17:17 crc kubenswrapper[4721]: I0128 19:17:17.929463 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/8dcae945-3742-46b5-b6ac-c8ff95e2946e-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-6fthv\" (UID: \"8dcae945-3742-46b5-b6ac-c8ff95e2946e\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-6fthv" Jan 28 19:17:17 crc kubenswrapper[4721]: I0128 19:17:17.929508 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/8dcae945-3742-46b5-b6ac-c8ff95e2946e-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-6fthv\" (UID: \"8dcae945-3742-46b5-b6ac-c8ff95e2946e\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-6fthv" Jan 28 19:17:17 crc kubenswrapper[4721]: I0128 19:17:17.933149 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/8dcae945-3742-46b5-b6ac-c8ff95e2946e-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-6fthv\" (UID: \"8dcae945-3742-46b5-b6ac-c8ff95e2946e\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-6fthv" Jan 28 19:17:17 crc kubenswrapper[4721]: I0128 19:17:17.933732 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/8dcae945-3742-46b5-b6ac-c8ff95e2946e-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-6fthv\" (UID: \"8dcae945-3742-46b5-b6ac-c8ff95e2946e\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-6fthv" Jan 28 19:17:17 crc kubenswrapper[4721]: I0128 19:17:17.933815 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/8dcae945-3742-46b5-b6ac-c8ff95e2946e-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-6fthv\" (UID: \"8dcae945-3742-46b5-b6ac-c8ff95e2946e\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-6fthv" Jan 28 19:17:17 crc kubenswrapper[4721]: I0128 19:17:17.933896 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8dcae945-3742-46b5-b6ac-c8ff95e2946e-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-6fthv\" (UID: \"8dcae945-3742-46b5-b6ac-c8ff95e2946e\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-6fthv" Jan 28 19:17:17 crc kubenswrapper[4721]: I0128 19:17:17.934162 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/8dcae945-3742-46b5-b6ac-c8ff95e2946e-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-6fthv\" (UID: \"8dcae945-3742-46b5-b6ac-c8ff95e2946e\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-6fthv" Jan 28 19:17:17 crc kubenswrapper[4721]: I0128 19:17:17.934680 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/8dcae945-3742-46b5-b6ac-c8ff95e2946e-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-6fthv\" (UID: \"8dcae945-3742-46b5-b6ac-c8ff95e2946e\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-6fthv" Jan 28 19:17:17 crc kubenswrapper[4721]: I0128 19:17:17.938670 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8dcae945-3742-46b5-b6ac-c8ff95e2946e-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-6fthv\" (UID: \"8dcae945-3742-46b5-b6ac-c8ff95e2946e\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-6fthv" Jan 28 19:17:17 crc kubenswrapper[4721]: I0128 19:17:17.945083 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rpw2n\" (UniqueName: \"kubernetes.io/projected/8dcae945-3742-46b5-b6ac-c8ff95e2946e-kube-api-access-rpw2n\") pod \"nova-edpm-deployment-openstack-edpm-ipam-6fthv\" (UID: \"8dcae945-3742-46b5-b6ac-c8ff95e2946e\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-6fthv" Jan 28 19:17:18 crc kubenswrapper[4721]: I0128 19:17:18.000744 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-6fthv" Jan 28 19:17:18 crc kubenswrapper[4721]: I0128 19:17:18.536556 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-6fthv"] Jan 28 19:17:18 crc kubenswrapper[4721]: I0128 19:17:18.539474 4721 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 28 19:17:18 crc kubenswrapper[4721]: I0128 19:17:18.591591 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-6fthv" event={"ID":"8dcae945-3742-46b5-b6ac-c8ff95e2946e","Type":"ContainerStarted","Data":"fda93ff455f87bffbcc9a75dc6ac55fb665dce900b3f617cadd3eb238b743109"} Jan 28 19:17:19 crc kubenswrapper[4721]: I0128 19:17:19.606343 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-6fthv" event={"ID":"8dcae945-3742-46b5-b6ac-c8ff95e2946e","Type":"ContainerStarted","Data":"965e4302afe63f1b046de7196a74b15d2c89d5564306bce05355185baf0163db"} Jan 28 19:17:19 crc kubenswrapper[4721]: I0128 19:17:19.630844 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-6fthv" podStartSLOduration=2.153902785 podStartE2EDuration="2.630820803s" podCreationTimestamp="2026-01-28 19:17:17 +0000 UTC" firstStartedPulling="2026-01-28 19:17:18.538963523 +0000 UTC m=+2604.264269083" lastFinishedPulling="2026-01-28 19:17:19.015881541 +0000 UTC m=+2604.741187101" observedRunningTime="2026-01-28 19:17:19.626332844 +0000 UTC m=+2605.351638424" watchObservedRunningTime="2026-01-28 19:17:19.630820803 +0000 UTC m=+2605.356126363" Jan 28 19:18:01 crc kubenswrapper[4721]: I0128 19:18:01.224963 4721 patch_prober.go:28] interesting pod/machine-config-daemon-76rx2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 19:18:01 crc kubenswrapper[4721]: I0128 19:18:01.226377 4721 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-76rx2" podUID="6e3427a4-9a03-4a08-bf7f-7a5e96290ad6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 19:18:31 crc kubenswrapper[4721]: I0128 19:18:31.224633 4721 patch_prober.go:28] interesting pod/machine-config-daemon-76rx2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 19:18:31 crc kubenswrapper[4721]: I0128 19:18:31.225161 4721 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-76rx2" podUID="6e3427a4-9a03-4a08-bf7f-7a5e96290ad6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 19:19:01 crc kubenswrapper[4721]: I0128 19:19:01.225223 4721 patch_prober.go:28] interesting pod/machine-config-daemon-76rx2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: 
connect: connection refused" start-of-body= Jan 28 19:19:01 crc kubenswrapper[4721]: I0128 19:19:01.225802 4721 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-76rx2" podUID="6e3427a4-9a03-4a08-bf7f-7a5e96290ad6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 19:19:01 crc kubenswrapper[4721]: I0128 19:19:01.225902 4721 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-76rx2" Jan 28 19:19:01 crc kubenswrapper[4721]: I0128 19:19:01.226976 4721 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"c08c37963e8ba05c6aa8626566a97d013c31084d8568750d6fc1a68a1adf0f7c"} pod="openshift-machine-config-operator/machine-config-daemon-76rx2" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 28 19:19:01 crc kubenswrapper[4721]: I0128 19:19:01.227034 4721 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-76rx2" podUID="6e3427a4-9a03-4a08-bf7f-7a5e96290ad6" containerName="machine-config-daemon" containerID="cri-o://c08c37963e8ba05c6aa8626566a97d013c31084d8568750d6fc1a68a1adf0f7c" gracePeriod=600 Jan 28 19:19:01 crc kubenswrapper[4721]: I0128 19:19:01.625838 4721 generic.go:334] "Generic (PLEG): container finished" podID="6e3427a4-9a03-4a08-bf7f-7a5e96290ad6" containerID="c08c37963e8ba05c6aa8626566a97d013c31084d8568750d6fc1a68a1adf0f7c" exitCode=0 Jan 28 19:19:01 crc kubenswrapper[4721]: I0128 19:19:01.626534 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-76rx2" event={"ID":"6e3427a4-9a03-4a08-bf7f-7a5e96290ad6","Type":"ContainerDied","Data":"c08c37963e8ba05c6aa8626566a97d013c31084d8568750d6fc1a68a1adf0f7c"} Jan 28 19:19:01 crc kubenswrapper[4721]: I0128 19:19:01.626597 4721 scope.go:117] "RemoveContainer" containerID="4ac60e4a43c972329d52f34a6184c3179be27b5f07c30d84c2acd45b4f1b5d47" Jan 28 19:19:02 crc kubenswrapper[4721]: I0128 19:19:02.638948 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-76rx2" event={"ID":"6e3427a4-9a03-4a08-bf7f-7a5e96290ad6","Type":"ContainerStarted","Data":"88d5b07f7a18d2d549f667f34056676d38c608d96350f3168b3b595981c5c63b"} Jan 28 19:19:27 crc kubenswrapper[4721]: I0128 19:19:27.898266 4721 generic.go:334] "Generic (PLEG): container finished" podID="8dcae945-3742-46b5-b6ac-c8ff95e2946e" containerID="965e4302afe63f1b046de7196a74b15d2c89d5564306bce05355185baf0163db" exitCode=0 Jan 28 19:19:27 crc kubenswrapper[4721]: I0128 19:19:27.898357 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-6fthv" event={"ID":"8dcae945-3742-46b5-b6ac-c8ff95e2946e","Type":"ContainerDied","Data":"965e4302afe63f1b046de7196a74b15d2c89d5564306bce05355185baf0163db"} Jan 28 19:19:30 crc kubenswrapper[4721]: I0128 19:19:29.690674 4721 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-6fthv" Jan 28 19:19:30 crc kubenswrapper[4721]: I0128 19:19:29.789852 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/8dcae945-3742-46b5-b6ac-c8ff95e2946e-nova-migration-ssh-key-1\") pod \"8dcae945-3742-46b5-b6ac-c8ff95e2946e\" (UID: \"8dcae945-3742-46b5-b6ac-c8ff95e2946e\") " Jan 28 19:19:30 crc kubenswrapper[4721]: I0128 19:19:29.821560 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8dcae945-3742-46b5-b6ac-c8ff95e2946e-nova-migration-ssh-key-1" (OuterVolumeSpecName: "nova-migration-ssh-key-1") pod "8dcae945-3742-46b5-b6ac-c8ff95e2946e" (UID: "8dcae945-3742-46b5-b6ac-c8ff95e2946e"). InnerVolumeSpecName "nova-migration-ssh-key-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 19:19:30 crc kubenswrapper[4721]: I0128 19:19:29.891754 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/8dcae945-3742-46b5-b6ac-c8ff95e2946e-nova-cell1-compute-config-0\") pod \"8dcae945-3742-46b5-b6ac-c8ff95e2946e\" (UID: \"8dcae945-3742-46b5-b6ac-c8ff95e2946e\") " Jan 28 19:19:30 crc kubenswrapper[4721]: I0128 19:19:29.891812 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/8dcae945-3742-46b5-b6ac-c8ff95e2946e-nova-cell1-compute-config-1\") pod \"8dcae945-3742-46b5-b6ac-c8ff95e2946e\" (UID: \"8dcae945-3742-46b5-b6ac-c8ff95e2946e\") " Jan 28 19:19:30 crc kubenswrapper[4721]: I0128 19:19:29.891832 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8dcae945-3742-46b5-b6ac-c8ff95e2946e-inventory\") pod \"8dcae945-3742-46b5-b6ac-c8ff95e2946e\" (UID: \"8dcae945-3742-46b5-b6ac-c8ff95e2946e\") " Jan 28 19:19:30 crc kubenswrapper[4721]: I0128 19:19:29.891860 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rpw2n\" (UniqueName: \"kubernetes.io/projected/8dcae945-3742-46b5-b6ac-c8ff95e2946e-kube-api-access-rpw2n\") pod \"8dcae945-3742-46b5-b6ac-c8ff95e2946e\" (UID: \"8dcae945-3742-46b5-b6ac-c8ff95e2946e\") " Jan 28 19:19:30 crc kubenswrapper[4721]: I0128 19:19:29.892719 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/8dcae945-3742-46b5-b6ac-c8ff95e2946e-nova-migration-ssh-key-0\") pod \"8dcae945-3742-46b5-b6ac-c8ff95e2946e\" (UID: \"8dcae945-3742-46b5-b6ac-c8ff95e2946e\") " Jan 28 19:19:30 crc kubenswrapper[4721]: I0128 19:19:29.892855 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8dcae945-3742-46b5-b6ac-c8ff95e2946e-ssh-key-openstack-edpm-ipam\") pod \"8dcae945-3742-46b5-b6ac-c8ff95e2946e\" (UID: \"8dcae945-3742-46b5-b6ac-c8ff95e2946e\") " Jan 28 19:19:30 crc kubenswrapper[4721]: I0128 19:19:29.892874 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/8dcae945-3742-46b5-b6ac-c8ff95e2946e-nova-extra-config-0\") pod \"8dcae945-3742-46b5-b6ac-c8ff95e2946e\" (UID: \"8dcae945-3742-46b5-b6ac-c8ff95e2946e\") " Jan 28 19:19:30 crc kubenswrapper[4721]: I0128 
19:19:29.892956 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8dcae945-3742-46b5-b6ac-c8ff95e2946e-nova-combined-ca-bundle\") pod \"8dcae945-3742-46b5-b6ac-c8ff95e2946e\" (UID: \"8dcae945-3742-46b5-b6ac-c8ff95e2946e\") " Jan 28 19:19:30 crc kubenswrapper[4721]: I0128 19:19:29.893793 4721 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/8dcae945-3742-46b5-b6ac-c8ff95e2946e-nova-migration-ssh-key-1\") on node \"crc\" DevicePath \"\"" Jan 28 19:19:30 crc kubenswrapper[4721]: I0128 19:19:29.896072 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8dcae945-3742-46b5-b6ac-c8ff95e2946e-kube-api-access-rpw2n" (OuterVolumeSpecName: "kube-api-access-rpw2n") pod "8dcae945-3742-46b5-b6ac-c8ff95e2946e" (UID: "8dcae945-3742-46b5-b6ac-c8ff95e2946e"). InnerVolumeSpecName "kube-api-access-rpw2n". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 19:19:30 crc kubenswrapper[4721]: I0128 19:19:29.896593 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8dcae945-3742-46b5-b6ac-c8ff95e2946e-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "8dcae945-3742-46b5-b6ac-c8ff95e2946e" (UID: "8dcae945-3742-46b5-b6ac-c8ff95e2946e"). InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 19:19:30 crc kubenswrapper[4721]: I0128 19:19:29.919650 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8dcae945-3742-46b5-b6ac-c8ff95e2946e-nova-extra-config-0" (OuterVolumeSpecName: "nova-extra-config-0") pod "8dcae945-3742-46b5-b6ac-c8ff95e2946e" (UID: "8dcae945-3742-46b5-b6ac-c8ff95e2946e"). InnerVolumeSpecName "nova-extra-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 19:19:30 crc kubenswrapper[4721]: I0128 19:19:29.922593 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8dcae945-3742-46b5-b6ac-c8ff95e2946e-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "8dcae945-3742-46b5-b6ac-c8ff95e2946e" (UID: "8dcae945-3742-46b5-b6ac-c8ff95e2946e"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 19:19:30 crc kubenswrapper[4721]: I0128 19:19:29.924666 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-6fthv" event={"ID":"8dcae945-3742-46b5-b6ac-c8ff95e2946e","Type":"ContainerDied","Data":"fda93ff455f87bffbcc9a75dc6ac55fb665dce900b3f617cadd3eb238b743109"} Jan 28 19:19:30 crc kubenswrapper[4721]: I0128 19:19:29.924721 4721 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fda93ff455f87bffbcc9a75dc6ac55fb665dce900b3f617cadd3eb238b743109" Jan 28 19:19:30 crc kubenswrapper[4721]: I0128 19:19:29.924760 4721 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-6fthv" Jan 28 19:19:30 crc kubenswrapper[4721]: I0128 19:19:29.926591 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8dcae945-3742-46b5-b6ac-c8ff95e2946e-nova-cell1-compute-config-0" (OuterVolumeSpecName: "nova-cell1-compute-config-0") pod "8dcae945-3742-46b5-b6ac-c8ff95e2946e" (UID: "8dcae945-3742-46b5-b6ac-c8ff95e2946e"). InnerVolumeSpecName "nova-cell1-compute-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 19:19:30 crc kubenswrapper[4721]: I0128 19:19:29.928285 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8dcae945-3742-46b5-b6ac-c8ff95e2946e-nova-migration-ssh-key-0" (OuterVolumeSpecName: "nova-migration-ssh-key-0") pod "8dcae945-3742-46b5-b6ac-c8ff95e2946e" (UID: "8dcae945-3742-46b5-b6ac-c8ff95e2946e"). InnerVolumeSpecName "nova-migration-ssh-key-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 19:19:30 crc kubenswrapper[4721]: I0128 19:19:29.929383 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8dcae945-3742-46b5-b6ac-c8ff95e2946e-inventory" (OuterVolumeSpecName: "inventory") pod "8dcae945-3742-46b5-b6ac-c8ff95e2946e" (UID: "8dcae945-3742-46b5-b6ac-c8ff95e2946e"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 19:19:30 crc kubenswrapper[4721]: I0128 19:19:29.948024 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8dcae945-3742-46b5-b6ac-c8ff95e2946e-nova-cell1-compute-config-1" (OuterVolumeSpecName: "nova-cell1-compute-config-1") pod "8dcae945-3742-46b5-b6ac-c8ff95e2946e" (UID: "8dcae945-3742-46b5-b6ac-c8ff95e2946e"). InnerVolumeSpecName "nova-cell1-compute-config-1". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 19:19:30 crc kubenswrapper[4721]: I0128 19:19:29.995782 4721 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/8dcae945-3742-46b5-b6ac-c8ff95e2946e-nova-cell1-compute-config-1\") on node \"crc\" DevicePath \"\"" Jan 28 19:19:30 crc kubenswrapper[4721]: I0128 19:19:29.995818 4721 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8dcae945-3742-46b5-b6ac-c8ff95e2946e-inventory\") on node \"crc\" DevicePath \"\"" Jan 28 19:19:30 crc kubenswrapper[4721]: I0128 19:19:29.995830 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rpw2n\" (UniqueName: \"kubernetes.io/projected/8dcae945-3742-46b5-b6ac-c8ff95e2946e-kube-api-access-rpw2n\") on node \"crc\" DevicePath \"\"" Jan 28 19:19:30 crc kubenswrapper[4721]: I0128 19:19:29.995843 4721 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/8dcae945-3742-46b5-b6ac-c8ff95e2946e-nova-migration-ssh-key-0\") on node \"crc\" DevicePath \"\"" Jan 28 19:19:30 crc kubenswrapper[4721]: I0128 19:19:29.995855 4721 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8dcae945-3742-46b5-b6ac-c8ff95e2946e-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 28 19:19:30 crc kubenswrapper[4721]: I0128 19:19:29.995869 4721 reconciler_common.go:293] "Volume detached for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/8dcae945-3742-46b5-b6ac-c8ff95e2946e-nova-extra-config-0\") on node \"crc\" DevicePath \"\"" Jan 28 19:19:30 crc kubenswrapper[4721]: I0128 19:19:29.995877 4721 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8dcae945-3742-46b5-b6ac-c8ff95e2946e-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 19:19:30 crc kubenswrapper[4721]: I0128 19:19:29.995886 4721 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/8dcae945-3742-46b5-b6ac-c8ff95e2946e-nova-cell1-compute-config-0\") on node \"crc\" DevicePath \"\"" Jan 28 19:19:30 crc kubenswrapper[4721]: I0128 19:19:30.044016 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-28zzx"] Jan 28 19:19:30 crc kubenswrapper[4721]: E0128 19:19:30.045088 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8dcae945-3742-46b5-b6ac-c8ff95e2946e" containerName="nova-edpm-deployment-openstack-edpm-ipam" Jan 28 19:19:30 crc kubenswrapper[4721]: I0128 19:19:30.045113 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="8dcae945-3742-46b5-b6ac-c8ff95e2946e" containerName="nova-edpm-deployment-openstack-edpm-ipam" Jan 28 19:19:30 crc kubenswrapper[4721]: I0128 19:19:30.045405 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="8dcae945-3742-46b5-b6ac-c8ff95e2946e" containerName="nova-edpm-deployment-openstack-edpm-ipam" Jan 28 19:19:30 crc kubenswrapper[4721]: I0128 19:19:30.046628 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-28zzx" Jan 28 19:19:30 crc kubenswrapper[4721]: I0128 19:19:30.050093 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-compute-config-data" Jan 28 19:19:30 crc kubenswrapper[4721]: I0128 19:19:30.058945 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-28zzx"] Jan 28 19:19:30 crc kubenswrapper[4721]: I0128 19:19:30.201197 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/1e117cf9-a997-4596-9334-0edb394b7fed-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-28zzx\" (UID: \"1e117cf9-a997-4596-9334-0edb394b7fed\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-28zzx" Jan 28 19:19:30 crc kubenswrapper[4721]: I0128 19:19:30.201320 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/1e117cf9-a997-4596-9334-0edb394b7fed-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-28zzx\" (UID: \"1e117cf9-a997-4596-9334-0edb394b7fed\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-28zzx" Jan 28 19:19:30 crc kubenswrapper[4721]: I0128 19:19:30.201385 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e117cf9-a997-4596-9334-0edb394b7fed-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-28zzx\" (UID: \"1e117cf9-a997-4596-9334-0edb394b7fed\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-28zzx" Jan 28 19:19:30 crc kubenswrapper[4721]: I0128 19:19:30.201408 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1e117cf9-a997-4596-9334-0edb394b7fed-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-28zzx\" (UID: \"1e117cf9-a997-4596-9334-0edb394b7fed\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-28zzx" Jan 28 19:19:30 crc kubenswrapper[4721]: I0128 19:19:30.201437 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-72k6w\" (UniqueName: \"kubernetes.io/projected/1e117cf9-a997-4596-9334-0edb394b7fed-kube-api-access-72k6w\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-28zzx\" (UID: \"1e117cf9-a997-4596-9334-0edb394b7fed\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-28zzx" Jan 28 19:19:30 crc kubenswrapper[4721]: I0128 19:19:30.201469 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/1e117cf9-a997-4596-9334-0edb394b7fed-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-28zzx\" (UID: \"1e117cf9-a997-4596-9334-0edb394b7fed\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-28zzx" Jan 28 19:19:30 crc kubenswrapper[4721]: I0128 19:19:30.201537 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/1e117cf9-a997-4596-9334-0edb394b7fed-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-28zzx\" (UID: \"1e117cf9-a997-4596-9334-0edb394b7fed\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-28zzx" Jan 28 19:19:30 crc kubenswrapper[4721]: I0128 19:19:30.304243 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/1e117cf9-a997-4596-9334-0edb394b7fed-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-28zzx\" (UID: \"1e117cf9-a997-4596-9334-0edb394b7fed\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-28zzx" Jan 28 19:19:30 crc kubenswrapper[4721]: I0128 19:19:30.304340 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e117cf9-a997-4596-9334-0edb394b7fed-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-28zzx\" (UID: \"1e117cf9-a997-4596-9334-0edb394b7fed\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-28zzx" Jan 28 19:19:30 crc kubenswrapper[4721]: I0128 19:19:30.304368 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1e117cf9-a997-4596-9334-0edb394b7fed-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-28zzx\" (UID: \"1e117cf9-a997-4596-9334-0edb394b7fed\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-28zzx" Jan 28 19:19:30 crc kubenswrapper[4721]: I0128 19:19:30.304397 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-72k6w\" (UniqueName: \"kubernetes.io/projected/1e117cf9-a997-4596-9334-0edb394b7fed-kube-api-access-72k6w\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-28zzx\" (UID: \"1e117cf9-a997-4596-9334-0edb394b7fed\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-28zzx" Jan 28 19:19:30 crc kubenswrapper[4721]: I0128 19:19:30.304432 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/1e117cf9-a997-4596-9334-0edb394b7fed-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-28zzx\" (UID: \"1e117cf9-a997-4596-9334-0edb394b7fed\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-28zzx" Jan 28 19:19:30 crc kubenswrapper[4721]: I0128 19:19:30.304523 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1e117cf9-a997-4596-9334-0edb394b7fed-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-28zzx\" (UID: \"1e117cf9-a997-4596-9334-0edb394b7fed\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-28zzx" Jan 28 19:19:30 crc kubenswrapper[4721]: I0128 19:19:30.304638 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/1e117cf9-a997-4596-9334-0edb394b7fed-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-28zzx\" (UID: \"1e117cf9-a997-4596-9334-0edb394b7fed\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-28zzx" Jan 28 19:19:30 crc kubenswrapper[4721]: I0128 19:19:30.313111 4721 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1e117cf9-a997-4596-9334-0edb394b7fed-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-28zzx\" (UID: \"1e117cf9-a997-4596-9334-0edb394b7fed\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-28zzx" Jan 28 19:19:30 crc kubenswrapper[4721]: I0128 19:19:30.313564 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/1e117cf9-a997-4596-9334-0edb394b7fed-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-28zzx\" (UID: \"1e117cf9-a997-4596-9334-0edb394b7fed\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-28zzx" Jan 28 19:19:30 crc kubenswrapper[4721]: I0128 19:19:30.313737 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1e117cf9-a997-4596-9334-0edb394b7fed-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-28zzx\" (UID: \"1e117cf9-a997-4596-9334-0edb394b7fed\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-28zzx" Jan 28 19:19:30 crc kubenswrapper[4721]: I0128 19:19:30.314639 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e117cf9-a997-4596-9334-0edb394b7fed-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-28zzx\" (UID: \"1e117cf9-a997-4596-9334-0edb394b7fed\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-28zzx" Jan 28 19:19:30 crc kubenswrapper[4721]: I0128 19:19:30.316570 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/1e117cf9-a997-4596-9334-0edb394b7fed-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-28zzx\" (UID: \"1e117cf9-a997-4596-9334-0edb394b7fed\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-28zzx" Jan 28 19:19:30 crc kubenswrapper[4721]: I0128 19:19:30.322462 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/1e117cf9-a997-4596-9334-0edb394b7fed-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-28zzx\" (UID: \"1e117cf9-a997-4596-9334-0edb394b7fed\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-28zzx" Jan 28 19:19:30 crc kubenswrapper[4721]: I0128 19:19:30.324055 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-72k6w\" (UniqueName: \"kubernetes.io/projected/1e117cf9-a997-4596-9334-0edb394b7fed-kube-api-access-72k6w\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-28zzx\" (UID: \"1e117cf9-a997-4596-9334-0edb394b7fed\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-28zzx" Jan 28 19:19:30 crc kubenswrapper[4721]: I0128 19:19:30.369459 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-28zzx" Jan 28 19:19:31 crc kubenswrapper[4721]: I0128 19:19:31.318900 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-28zzx"] Jan 28 19:19:31 crc kubenswrapper[4721]: I0128 19:19:31.954104 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-28zzx" event={"ID":"1e117cf9-a997-4596-9334-0edb394b7fed","Type":"ContainerStarted","Data":"fba0a09359499abae4ef9d94b36b3d5fb1d2f00502675a17909842a176a7824b"} Jan 28 19:19:32 crc kubenswrapper[4721]: I0128 19:19:32.966962 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-28zzx" event={"ID":"1e117cf9-a997-4596-9334-0edb394b7fed","Type":"ContainerStarted","Data":"1d59f21d490dce251689ac00b25c6b8d12f46eda5927d3477e08cdf35544b871"} Jan 28 19:19:33 crc kubenswrapper[4721]: I0128 19:19:33.002450 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-28zzx" podStartSLOduration=2.556761281 podStartE2EDuration="3.002416873s" podCreationTimestamp="2026-01-28 19:19:30 +0000 UTC" firstStartedPulling="2026-01-28 19:19:31.339358155 +0000 UTC m=+2737.064663715" lastFinishedPulling="2026-01-28 19:19:31.785013717 +0000 UTC m=+2737.510319307" observedRunningTime="2026-01-28 19:19:32.992365688 +0000 UTC m=+2738.717671268" watchObservedRunningTime="2026-01-28 19:19:33.002416873 +0000 UTC m=+2738.727722433" Jan 28 19:19:50 crc kubenswrapper[4721]: I0128 19:19:50.308503 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-cqcvw"] Jan 28 19:19:50 crc kubenswrapper[4721]: I0128 19:19:50.312039 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-cqcvw" Jan 28 19:19:50 crc kubenswrapper[4721]: I0128 19:19:50.327769 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-cqcvw"] Jan 28 19:19:50 crc kubenswrapper[4721]: I0128 19:19:50.400901 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7rwn8\" (UniqueName: \"kubernetes.io/projected/1b213987-d550-49d8-93dc-ec3692412725-kube-api-access-7rwn8\") pod \"redhat-operators-cqcvw\" (UID: \"1b213987-d550-49d8-93dc-ec3692412725\") " pod="openshift-marketplace/redhat-operators-cqcvw" Jan 28 19:19:50 crc kubenswrapper[4721]: I0128 19:19:50.400985 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1b213987-d550-49d8-93dc-ec3692412725-utilities\") pod \"redhat-operators-cqcvw\" (UID: \"1b213987-d550-49d8-93dc-ec3692412725\") " pod="openshift-marketplace/redhat-operators-cqcvw" Jan 28 19:19:50 crc kubenswrapper[4721]: I0128 19:19:50.401053 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1b213987-d550-49d8-93dc-ec3692412725-catalog-content\") pod \"redhat-operators-cqcvw\" (UID: \"1b213987-d550-49d8-93dc-ec3692412725\") " pod="openshift-marketplace/redhat-operators-cqcvw" Jan 28 19:19:50 crc kubenswrapper[4721]: I0128 19:19:50.503735 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1b213987-d550-49d8-93dc-ec3692412725-utilities\") pod \"redhat-operators-cqcvw\" (UID: \"1b213987-d550-49d8-93dc-ec3692412725\") " pod="openshift-marketplace/redhat-operators-cqcvw" Jan 28 19:19:50 crc kubenswrapper[4721]: I0128 19:19:50.503829 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1b213987-d550-49d8-93dc-ec3692412725-catalog-content\") pod \"redhat-operators-cqcvw\" (UID: \"1b213987-d550-49d8-93dc-ec3692412725\") " pod="openshift-marketplace/redhat-operators-cqcvw" Jan 28 19:19:50 crc kubenswrapper[4721]: I0128 19:19:50.503987 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7rwn8\" (UniqueName: \"kubernetes.io/projected/1b213987-d550-49d8-93dc-ec3692412725-kube-api-access-7rwn8\") pod \"redhat-operators-cqcvw\" (UID: \"1b213987-d550-49d8-93dc-ec3692412725\") " pod="openshift-marketplace/redhat-operators-cqcvw" Jan 28 19:19:50 crc kubenswrapper[4721]: I0128 19:19:50.504918 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1b213987-d550-49d8-93dc-ec3692412725-utilities\") pod \"redhat-operators-cqcvw\" (UID: \"1b213987-d550-49d8-93dc-ec3692412725\") " pod="openshift-marketplace/redhat-operators-cqcvw" Jan 28 19:19:50 crc kubenswrapper[4721]: I0128 19:19:50.505089 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1b213987-d550-49d8-93dc-ec3692412725-catalog-content\") pod \"redhat-operators-cqcvw\" (UID: \"1b213987-d550-49d8-93dc-ec3692412725\") " pod="openshift-marketplace/redhat-operators-cqcvw" Jan 28 19:19:50 crc kubenswrapper[4721]: I0128 19:19:50.525038 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-7rwn8\" (UniqueName: \"kubernetes.io/projected/1b213987-d550-49d8-93dc-ec3692412725-kube-api-access-7rwn8\") pod \"redhat-operators-cqcvw\" (UID: \"1b213987-d550-49d8-93dc-ec3692412725\") " pod="openshift-marketplace/redhat-operators-cqcvw" Jan 28 19:19:50 crc kubenswrapper[4721]: I0128 19:19:50.634770 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-cqcvw" Jan 28 19:19:51 crc kubenswrapper[4721]: W0128 19:19:51.148832 4721 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1b213987_d550_49d8_93dc_ec3692412725.slice/crio-db204b63b4b7036ba1d5485d1faafe5c89368704a924780a6d462a77e61b2de2 WatchSource:0}: Error finding container db204b63b4b7036ba1d5485d1faafe5c89368704a924780a6d462a77e61b2de2: Status 404 returned error can't find the container with id db204b63b4b7036ba1d5485d1faafe5c89368704a924780a6d462a77e61b2de2 Jan 28 19:19:51 crc kubenswrapper[4721]: I0128 19:19:51.160847 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-cqcvw"] Jan 28 19:19:51 crc kubenswrapper[4721]: I0128 19:19:51.166355 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cqcvw" event={"ID":"1b213987-d550-49d8-93dc-ec3692412725","Type":"ContainerStarted","Data":"db204b63b4b7036ba1d5485d1faafe5c89368704a924780a6d462a77e61b2de2"} Jan 28 19:19:52 crc kubenswrapper[4721]: I0128 19:19:52.178102 4721 generic.go:334] "Generic (PLEG): container finished" podID="1b213987-d550-49d8-93dc-ec3692412725" containerID="64c4e8ba97b724d4cb4494c573f2e7b853e5f2c1e8f955ada0a5ec3bd287a1de" exitCode=0 Jan 28 19:19:52 crc kubenswrapper[4721]: I0128 19:19:52.178218 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cqcvw" event={"ID":"1b213987-d550-49d8-93dc-ec3692412725","Type":"ContainerDied","Data":"64c4e8ba97b724d4cb4494c573f2e7b853e5f2c1e8f955ada0a5ec3bd287a1de"} Jan 28 19:19:53 crc kubenswrapper[4721]: I0128 19:19:53.189905 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cqcvw" event={"ID":"1b213987-d550-49d8-93dc-ec3692412725","Type":"ContainerStarted","Data":"47c3a072a66f2b1aee5c1857fc30fa511e4b41d0365528fdead4ffa6a04557c6"} Jan 28 19:19:58 crc kubenswrapper[4721]: I0128 19:19:58.252943 4721 generic.go:334] "Generic (PLEG): container finished" podID="1b213987-d550-49d8-93dc-ec3692412725" containerID="47c3a072a66f2b1aee5c1857fc30fa511e4b41d0365528fdead4ffa6a04557c6" exitCode=0 Jan 28 19:19:58 crc kubenswrapper[4721]: I0128 19:19:58.253035 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cqcvw" event={"ID":"1b213987-d550-49d8-93dc-ec3692412725","Type":"ContainerDied","Data":"47c3a072a66f2b1aee5c1857fc30fa511e4b41d0365528fdead4ffa6a04557c6"} Jan 28 19:19:59 crc kubenswrapper[4721]: I0128 19:19:59.268780 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cqcvw" event={"ID":"1b213987-d550-49d8-93dc-ec3692412725","Type":"ContainerStarted","Data":"4e8f53456276458c598bbfbc0b6d60a9b1fee5f6ab40f30a1515b6c2066f723e"} Jan 28 19:19:59 crc kubenswrapper[4721]: I0128 19:19:59.318211 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-cqcvw" podStartSLOduration=2.869204554 podStartE2EDuration="9.318164905s" 
podCreationTimestamp="2026-01-28 19:19:50 +0000 UTC" firstStartedPulling="2026-01-28 19:19:52.180308804 +0000 UTC m=+2757.905614364" lastFinishedPulling="2026-01-28 19:19:58.629269135 +0000 UTC m=+2764.354574715" observedRunningTime="2026-01-28 19:19:59.315327416 +0000 UTC m=+2765.040632976" watchObservedRunningTime="2026-01-28 19:19:59.318164905 +0000 UTC m=+2765.043470465" Jan 28 19:20:00 crc kubenswrapper[4721]: I0128 19:20:00.635537 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-cqcvw" Jan 28 19:20:00 crc kubenswrapper[4721]: I0128 19:20:00.636469 4721 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-cqcvw" Jan 28 19:20:01 crc kubenswrapper[4721]: I0128 19:20:01.682571 4721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-cqcvw" podUID="1b213987-d550-49d8-93dc-ec3692412725" containerName="registry-server" probeResult="failure" output=< Jan 28 19:20:01 crc kubenswrapper[4721]: timeout: failed to connect service ":50051" within 1s Jan 28 19:20:01 crc kubenswrapper[4721]: > Jan 28 19:20:10 crc kubenswrapper[4721]: I0128 19:20:10.690635 4721 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-cqcvw" Jan 28 19:20:10 crc kubenswrapper[4721]: I0128 19:20:10.752044 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-cqcvw" Jan 28 19:20:10 crc kubenswrapper[4721]: I0128 19:20:10.934179 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-cqcvw"] Jan 28 19:20:12 crc kubenswrapper[4721]: I0128 19:20:12.409400 4721 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-cqcvw" podUID="1b213987-d550-49d8-93dc-ec3692412725" containerName="registry-server" containerID="cri-o://4e8f53456276458c598bbfbc0b6d60a9b1fee5f6ab40f30a1515b6c2066f723e" gracePeriod=2 Jan 28 19:20:12 crc kubenswrapper[4721]: I0128 19:20:12.999388 4721 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-cqcvw" Jan 28 19:20:13 crc kubenswrapper[4721]: I0128 19:20:13.091355 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1b213987-d550-49d8-93dc-ec3692412725-catalog-content\") pod \"1b213987-d550-49d8-93dc-ec3692412725\" (UID: \"1b213987-d550-49d8-93dc-ec3692412725\") " Jan 28 19:20:13 crc kubenswrapper[4721]: I0128 19:20:13.091767 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1b213987-d550-49d8-93dc-ec3692412725-utilities\") pod \"1b213987-d550-49d8-93dc-ec3692412725\" (UID: \"1b213987-d550-49d8-93dc-ec3692412725\") " Jan 28 19:20:13 crc kubenswrapper[4721]: I0128 19:20:13.091923 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7rwn8\" (UniqueName: \"kubernetes.io/projected/1b213987-d550-49d8-93dc-ec3692412725-kube-api-access-7rwn8\") pod \"1b213987-d550-49d8-93dc-ec3692412725\" (UID: \"1b213987-d550-49d8-93dc-ec3692412725\") " Jan 28 19:20:13 crc kubenswrapper[4721]: I0128 19:20:13.092860 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1b213987-d550-49d8-93dc-ec3692412725-utilities" (OuterVolumeSpecName: "utilities") pod "1b213987-d550-49d8-93dc-ec3692412725" (UID: "1b213987-d550-49d8-93dc-ec3692412725"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 19:20:13 crc kubenswrapper[4721]: I0128 19:20:13.098863 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1b213987-d550-49d8-93dc-ec3692412725-kube-api-access-7rwn8" (OuterVolumeSpecName: "kube-api-access-7rwn8") pod "1b213987-d550-49d8-93dc-ec3692412725" (UID: "1b213987-d550-49d8-93dc-ec3692412725"). InnerVolumeSpecName "kube-api-access-7rwn8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 19:20:13 crc kubenswrapper[4721]: I0128 19:20:13.195578 4721 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1b213987-d550-49d8-93dc-ec3692412725-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 19:20:13 crc kubenswrapper[4721]: I0128 19:20:13.195835 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7rwn8\" (UniqueName: \"kubernetes.io/projected/1b213987-d550-49d8-93dc-ec3692412725-kube-api-access-7rwn8\") on node \"crc\" DevicePath \"\"" Jan 28 19:20:13 crc kubenswrapper[4721]: I0128 19:20:13.253979 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1b213987-d550-49d8-93dc-ec3692412725-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1b213987-d550-49d8-93dc-ec3692412725" (UID: "1b213987-d550-49d8-93dc-ec3692412725"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 19:20:13 crc kubenswrapper[4721]: I0128 19:20:13.297918 4721 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1b213987-d550-49d8-93dc-ec3692412725-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 19:20:13 crc kubenswrapper[4721]: I0128 19:20:13.420503 4721 generic.go:334] "Generic (PLEG): container finished" podID="1b213987-d550-49d8-93dc-ec3692412725" containerID="4e8f53456276458c598bbfbc0b6d60a9b1fee5f6ab40f30a1515b6c2066f723e" exitCode=0 Jan 28 19:20:13 crc kubenswrapper[4721]: I0128 19:20:13.420554 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cqcvw" event={"ID":"1b213987-d550-49d8-93dc-ec3692412725","Type":"ContainerDied","Data":"4e8f53456276458c598bbfbc0b6d60a9b1fee5f6ab40f30a1515b6c2066f723e"} Jan 28 19:20:13 crc kubenswrapper[4721]: I0128 19:20:13.420594 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cqcvw" event={"ID":"1b213987-d550-49d8-93dc-ec3692412725","Type":"ContainerDied","Data":"db204b63b4b7036ba1d5485d1faafe5c89368704a924780a6d462a77e61b2de2"} Jan 28 19:20:13 crc kubenswrapper[4721]: I0128 19:20:13.420619 4721 scope.go:117] "RemoveContainer" containerID="4e8f53456276458c598bbfbc0b6d60a9b1fee5f6ab40f30a1515b6c2066f723e" Jan 28 19:20:13 crc kubenswrapper[4721]: I0128 19:20:13.420664 4721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-cqcvw" Jan 28 19:20:13 crc kubenswrapper[4721]: I0128 19:20:13.447577 4721 scope.go:117] "RemoveContainer" containerID="47c3a072a66f2b1aee5c1857fc30fa511e4b41d0365528fdead4ffa6a04557c6" Jan 28 19:20:13 crc kubenswrapper[4721]: I0128 19:20:13.474006 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-cqcvw"] Jan 28 19:20:13 crc kubenswrapper[4721]: I0128 19:20:13.474129 4721 scope.go:117] "RemoveContainer" containerID="64c4e8ba97b724d4cb4494c573f2e7b853e5f2c1e8f955ada0a5ec3bd287a1de" Jan 28 19:20:13 crc kubenswrapper[4721]: I0128 19:20:13.490415 4721 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-cqcvw"] Jan 28 19:20:13 crc kubenswrapper[4721]: I0128 19:20:13.529107 4721 scope.go:117] "RemoveContainer" containerID="4e8f53456276458c598bbfbc0b6d60a9b1fee5f6ab40f30a1515b6c2066f723e" Jan 28 19:20:13 crc kubenswrapper[4721]: E0128 19:20:13.531837 4721 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4e8f53456276458c598bbfbc0b6d60a9b1fee5f6ab40f30a1515b6c2066f723e\": container with ID starting with 4e8f53456276458c598bbfbc0b6d60a9b1fee5f6ab40f30a1515b6c2066f723e not found: ID does not exist" containerID="4e8f53456276458c598bbfbc0b6d60a9b1fee5f6ab40f30a1515b6c2066f723e" Jan 28 19:20:13 crc kubenswrapper[4721]: I0128 19:20:13.531902 4721 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4e8f53456276458c598bbfbc0b6d60a9b1fee5f6ab40f30a1515b6c2066f723e"} err="failed to get container status \"4e8f53456276458c598bbfbc0b6d60a9b1fee5f6ab40f30a1515b6c2066f723e\": rpc error: code = NotFound desc = could not find container \"4e8f53456276458c598bbfbc0b6d60a9b1fee5f6ab40f30a1515b6c2066f723e\": container with ID starting with 4e8f53456276458c598bbfbc0b6d60a9b1fee5f6ab40f30a1515b6c2066f723e not found: ID does not exist" Jan 28 19:20:13 crc 
Jan 28 19:20:13 crc kubenswrapper[4721]: I0128 19:20:13.531931 4721 scope.go:117] "RemoveContainer" containerID="47c3a072a66f2b1aee5c1857fc30fa511e4b41d0365528fdead4ffa6a04557c6"
Jan 28 19:20:13 crc kubenswrapper[4721]: E0128 19:20:13.532290 4721 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"47c3a072a66f2b1aee5c1857fc30fa511e4b41d0365528fdead4ffa6a04557c6\": container with ID starting with 47c3a072a66f2b1aee5c1857fc30fa511e4b41d0365528fdead4ffa6a04557c6 not found: ID does not exist" containerID="47c3a072a66f2b1aee5c1857fc30fa511e4b41d0365528fdead4ffa6a04557c6"
Jan 28 19:20:13 crc kubenswrapper[4721]: I0128 19:20:13.532352 4721 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"47c3a072a66f2b1aee5c1857fc30fa511e4b41d0365528fdead4ffa6a04557c6"} err="failed to get container status \"47c3a072a66f2b1aee5c1857fc30fa511e4b41d0365528fdead4ffa6a04557c6\": rpc error: code = NotFound desc = could not find container \"47c3a072a66f2b1aee5c1857fc30fa511e4b41d0365528fdead4ffa6a04557c6\": container with ID starting with 47c3a072a66f2b1aee5c1857fc30fa511e4b41d0365528fdead4ffa6a04557c6 not found: ID does not exist"
Jan 28 19:20:13 crc kubenswrapper[4721]: I0128 19:20:13.532393 4721 scope.go:117] "RemoveContainer" containerID="64c4e8ba97b724d4cb4494c573f2e7b853e5f2c1e8f955ada0a5ec3bd287a1de"
Jan 28 19:20:13 crc kubenswrapper[4721]: E0128 19:20:13.532715 4721 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"64c4e8ba97b724d4cb4494c573f2e7b853e5f2c1e8f955ada0a5ec3bd287a1de\": container with ID starting with 64c4e8ba97b724d4cb4494c573f2e7b853e5f2c1e8f955ada0a5ec3bd287a1de not found: ID does not exist" containerID="64c4e8ba97b724d4cb4494c573f2e7b853e5f2c1e8f955ada0a5ec3bd287a1de"
Jan 28 19:20:13 crc kubenswrapper[4721]: I0128 19:20:13.532783 4721 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"64c4e8ba97b724d4cb4494c573f2e7b853e5f2c1e8f955ada0a5ec3bd287a1de"} err="failed to get container status \"64c4e8ba97b724d4cb4494c573f2e7b853e5f2c1e8f955ada0a5ec3bd287a1de\": rpc error: code = NotFound desc = could not find container \"64c4e8ba97b724d4cb4494c573f2e7b853e5f2c1e8f955ada0a5ec3bd287a1de\": container with ID starting with 64c4e8ba97b724d4cb4494c573f2e7b853e5f2c1e8f955ada0a5ec3bd287a1de not found: ID does not exist"
Jan 28 19:20:13 crc kubenswrapper[4721]: I0128 19:20:13.545396 4721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1b213987-d550-49d8-93dc-ec3692412725" path="/var/lib/kubelet/pods/1b213987-d550-49d8-93dc-ec3692412725/volumes"
Jan 28 19:21:01 crc kubenswrapper[4721]: I0128 19:21:01.224807 4721 patch_prober.go:28] interesting pod/machine-config-daemon-76rx2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 28 19:21:01 crc kubenswrapper[4721]: I0128 19:21:01.225368 4721 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-76rx2" podUID="6e3427a4-9a03-4a08-bf7f-7a5e96290ad6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 28 19:21:31 crc kubenswrapper[4721]: I0128 19:21:31.225002 4721 patch_prober.go:28] interesting pod/machine-config-daemon-76rx2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 28 19:21:31 crc kubenswrapper[4721]: I0128 19:21:31.225811 4721 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-76rx2" podUID="6e3427a4-9a03-4a08-bf7f-7a5e96290ad6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 28 19:22:01 crc kubenswrapper[4721]: I0128 19:22:01.225377 4721 patch_prober.go:28] interesting pod/machine-config-daemon-76rx2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 28 19:22:01 crc kubenswrapper[4721]: I0128 19:22:01.225934 4721 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-76rx2" podUID="6e3427a4-9a03-4a08-bf7f-7a5e96290ad6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 28 19:22:01 crc kubenswrapper[4721]: I0128 19:22:01.226013 4721 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-76rx2"
Jan 28 19:22:01 crc kubenswrapper[4721]: I0128 19:22:01.227320 4721 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"88d5b07f7a18d2d549f667f34056676d38c608d96350f3168b3b595981c5c63b"} pod="openshift-machine-config-operator/machine-config-daemon-76rx2" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 28 19:22:01 crc kubenswrapper[4721]: I0128 19:22:01.227396 4721 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-76rx2" podUID="6e3427a4-9a03-4a08-bf7f-7a5e96290ad6" containerName="machine-config-daemon" containerID="cri-o://88d5b07f7a18d2d549f667f34056676d38c608d96350f3168b3b595981c5c63b" gracePeriod=600
Jan 28 19:22:01 crc kubenswrapper[4721]: E0128 19:22:01.348440 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-76rx2_openshift-machine-config-operator(6e3427a4-9a03-4a08-bf7f-7a5e96290ad6)\"" pod="openshift-machine-config-operator/machine-config-daemon-76rx2" podUID="6e3427a4-9a03-4a08-bf7f-7a5e96290ad6"
Jan 28 19:22:01 crc kubenswrapper[4721]: I0128 19:22:01.551338 4721 generic.go:334] "Generic (PLEG): container finished" podID="6e3427a4-9a03-4a08-bf7f-7a5e96290ad6" containerID="88d5b07f7a18d2d549f667f34056676d38c608d96350f3168b3b595981c5c63b" exitCode=0
Jan 28 19:22:01 crc kubenswrapper[4721]: I0128 19:22:01.551534 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-76rx2" event={"ID":"6e3427a4-9a03-4a08-bf7f-7a5e96290ad6","Type":"ContainerDied","Data":"88d5b07f7a18d2d549f667f34056676d38c608d96350f3168b3b595981c5c63b"}
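machine-config-daemon is now restarting often enough that the kubelet refuses to start it immediately: "back-off 5m0s restarting failed container" means the restart delay has reached its ceiling and the pod sits in CrashLoopBackOff between attempts. A sketch of the doubling backoff, assuming the commonly documented kubelet defaults of a 10s initial delay doubling to a 5m cap; only the 5m figure appears in this log.

    // backoff_sketch.go - the doubling restart backoff behind CrashLoopBackOff.
    // initial is an assumed kubelet default; max matches the "back-off 5m0s"
    // message in the entries above.
    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        const (
            initial = 10 * time.Second // assumed kubelet default
            max     = 5 * time.Minute  // the "back-off 5m0s" cap seen above
        )
        delay := initial
        for i := 1; delay < max; i++ {
            fmt.Printf("restart %d: wait %s\n", i, delay)
            delay *= 2
        }
        fmt.Printf("further restarts: wait %s (CrashLoopBackOff)\n", max)
    }

A clean run of the container for long enough resets the backoff, which is why the daemon oscillates between running and backed-off rather than being restarted at full speed.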
Jan 28 19:22:01 crc kubenswrapper[4721]: I0128 19:22:01.551710 4721 scope.go:117] "RemoveContainer" containerID="c08c37963e8ba05c6aa8626566a97d013c31084d8568750d6fc1a68a1adf0f7c"
Jan 28 19:22:01 crc kubenswrapper[4721]: I0128 19:22:01.552728 4721 scope.go:117] "RemoveContainer" containerID="88d5b07f7a18d2d549f667f34056676d38c608d96350f3168b3b595981c5c63b"
Jan 28 19:22:01 crc kubenswrapper[4721]: E0128 19:22:01.553015 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-76rx2_openshift-machine-config-operator(6e3427a4-9a03-4a08-bf7f-7a5e96290ad6)\"" pod="openshift-machine-config-operator/machine-config-daemon-76rx2" podUID="6e3427a4-9a03-4a08-bf7f-7a5e96290ad6"
Jan 28 19:22:07 crc kubenswrapper[4721]: I0128 19:22:07.733424 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-wfz47"]
Jan 28 19:22:07 crc kubenswrapper[4721]: E0128 19:22:07.734779 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1b213987-d550-49d8-93dc-ec3692412725" containerName="extract-content"
Jan 28 19:22:07 crc kubenswrapper[4721]: I0128 19:22:07.734797 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="1b213987-d550-49d8-93dc-ec3692412725" containerName="extract-content"
Jan 28 19:22:07 crc kubenswrapper[4721]: E0128 19:22:07.734831 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1b213987-d550-49d8-93dc-ec3692412725" containerName="registry-server"
Jan 28 19:22:07 crc kubenswrapper[4721]: I0128 19:22:07.734837 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="1b213987-d550-49d8-93dc-ec3692412725" containerName="registry-server"
Jan 28 19:22:07 crc kubenswrapper[4721]: E0128 19:22:07.734854 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1b213987-d550-49d8-93dc-ec3692412725" containerName="extract-utilities"
Jan 28 19:22:07 crc kubenswrapper[4721]: I0128 19:22:07.734859 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="1b213987-d550-49d8-93dc-ec3692412725" containerName="extract-utilities"
Jan 28 19:22:07 crc kubenswrapper[4721]: I0128 19:22:07.735079 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="1b213987-d550-49d8-93dc-ec3692412725" containerName="registry-server"
Jan 28 19:22:07 crc kubenswrapper[4721]: I0128 19:22:07.736883 4721 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wfz47"
Jan 28 19:22:07 crc kubenswrapper[4721]: I0128 19:22:07.750808 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-wfz47"]
Jan 28 19:22:07 crc kubenswrapper[4721]: I0128 19:22:07.825013 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/38bc68f1-576f-41bf-8934-844692057244-catalog-content\") pod \"redhat-marketplace-wfz47\" (UID: \"38bc68f1-576f-41bf-8934-844692057244\") " pod="openshift-marketplace/redhat-marketplace-wfz47"
Jan 28 19:22:07 crc kubenswrapper[4721]: I0128 19:22:07.825246 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/38bc68f1-576f-41bf-8934-844692057244-utilities\") pod \"redhat-marketplace-wfz47\" (UID: \"38bc68f1-576f-41bf-8934-844692057244\") " pod="openshift-marketplace/redhat-marketplace-wfz47"
Jan 28 19:22:07 crc kubenswrapper[4721]: I0128 19:22:07.825306 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lbh8j\" (UniqueName: \"kubernetes.io/projected/38bc68f1-576f-41bf-8934-844692057244-kube-api-access-lbh8j\") pod \"redhat-marketplace-wfz47\" (UID: \"38bc68f1-576f-41bf-8934-844692057244\") " pod="openshift-marketplace/redhat-marketplace-wfz47"
Jan 28 19:22:07 crc kubenswrapper[4721]: I0128 19:22:07.928162 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/38bc68f1-576f-41bf-8934-844692057244-utilities\") pod \"redhat-marketplace-wfz47\" (UID: \"38bc68f1-576f-41bf-8934-844692057244\") " pod="openshift-marketplace/redhat-marketplace-wfz47"
Jan 28 19:22:07 crc kubenswrapper[4721]: I0128 19:22:07.928272 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lbh8j\" (UniqueName: \"kubernetes.io/projected/38bc68f1-576f-41bf-8934-844692057244-kube-api-access-lbh8j\") pod \"redhat-marketplace-wfz47\" (UID: \"38bc68f1-576f-41bf-8934-844692057244\") " pod="openshift-marketplace/redhat-marketplace-wfz47"
Jan 28 19:22:07 crc kubenswrapper[4721]: I0128 19:22:07.928370 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/38bc68f1-576f-41bf-8934-844692057244-catalog-content\") pod \"redhat-marketplace-wfz47\" (UID: \"38bc68f1-576f-41bf-8934-844692057244\") " pod="openshift-marketplace/redhat-marketplace-wfz47"
Jan 28 19:22:07 crc kubenswrapper[4721]: I0128 19:22:07.928657 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/38bc68f1-576f-41bf-8934-844692057244-utilities\") pod \"redhat-marketplace-wfz47\" (UID: \"38bc68f1-576f-41bf-8934-844692057244\") " pod="openshift-marketplace/redhat-marketplace-wfz47"
Jan 28 19:22:07 crc kubenswrapper[4721]: I0128 19:22:07.929235 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/38bc68f1-576f-41bf-8934-844692057244-catalog-content\") pod \"redhat-marketplace-wfz47\" (UID: \"38bc68f1-576f-41bf-8934-844692057244\") " pod="openshift-marketplace/redhat-marketplace-wfz47"
Jan 28 19:22:07 crc kubenswrapper[4721]: I0128 19:22:07.953477 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lbh8j\" (UniqueName: \"kubernetes.io/projected/38bc68f1-576f-41bf-8934-844692057244-kube-api-access-lbh8j\") pod \"redhat-marketplace-wfz47\" (UID: \"38bc68f1-576f-41bf-8934-844692057244\") " pod="openshift-marketplace/redhat-marketplace-wfz47"
Jan 28 19:22:08 crc kubenswrapper[4721]: I0128 19:22:08.065130 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wfz47"
Jan 28 19:22:08 crc kubenswrapper[4721]: I0128 19:22:08.578450 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-wfz47"]
Jan 28 19:22:08 crc kubenswrapper[4721]: I0128 19:22:08.623650 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wfz47" event={"ID":"38bc68f1-576f-41bf-8934-844692057244","Type":"ContainerStarted","Data":"59e78ff1f3d9cc32e6904512bd0590b4dee0682283a594f5f51e00e2048023c7"}
Jan 28 19:22:09 crc kubenswrapper[4721]: I0128 19:22:09.637840 4721 generic.go:334] "Generic (PLEG): container finished" podID="38bc68f1-576f-41bf-8934-844692057244" containerID="78a369cde1019daff158c0f99edbe364130473126c79d1fb51ba607450bfb8ff" exitCode=0
Jan 28 19:22:09 crc kubenswrapper[4721]: I0128 19:22:09.639053 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wfz47" event={"ID":"38bc68f1-576f-41bf-8934-844692057244","Type":"ContainerDied","Data":"78a369cde1019daff158c0f99edbe364130473126c79d1fb51ba607450bfb8ff"}
Jan 28 19:22:10 crc kubenswrapper[4721]: I0128 19:22:10.653149 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wfz47" event={"ID":"38bc68f1-576f-41bf-8934-844692057244","Type":"ContainerStarted","Data":"1028b4b34ccb50ff6be634613b945471fd3b0435c9b7b7133ec4865f3c002059"}
Jan 28 19:22:11 crc kubenswrapper[4721]: I0128 19:22:11.665074 4721 generic.go:334] "Generic (PLEG): container finished" podID="38bc68f1-576f-41bf-8934-844692057244" containerID="1028b4b34ccb50ff6be634613b945471fd3b0435c9b7b7133ec4865f3c002059" exitCode=0
Jan 28 19:22:11 crc kubenswrapper[4721]: I0128 19:22:11.665247 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wfz47" event={"ID":"38bc68f1-576f-41bf-8934-844692057244","Type":"ContainerDied","Data":"1028b4b34ccb50ff6be634613b945471fd3b0435c9b7b7133ec4865f3c002059"}
Jan 28 19:22:12 crc kubenswrapper[4721]: I0128 19:22:12.676015 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wfz47" event={"ID":"38bc68f1-576f-41bf-8934-844692057244","Type":"ContainerStarted","Data":"c9bf8bbc8bc6cefb68dd2c59483e29b738f7e9255025a6d72f454c097a58c0c0"}
Jan 28 19:22:12 crc kubenswrapper[4721]: I0128 19:22:12.694807 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-wfz47" podStartSLOduration=2.940982867 podStartE2EDuration="5.694780584s" podCreationTimestamp="2026-01-28 19:22:07 +0000 UTC" firstStartedPulling="2026-01-28 19:22:09.640648161 +0000 UTC m=+2895.365953721" lastFinishedPulling="2026-01-28 19:22:12.394445878 +0000 UTC m=+2898.119751438" observedRunningTime="2026-01-28 19:22:12.693552666 +0000 UTC m=+2898.418858236" watchObservedRunningTime="2026-01-28 19:22:12.694780584 +0000 UTC m=+2898.420086144"
Jan 28 19:22:15 crc kubenswrapper[4721]: I0128 19:22:15.538940 4721 scope.go:117] "RemoveContainer" containerID="88d5b07f7a18d2d549f667f34056676d38c608d96350f3168b3b595981c5c63b"
Jan 28 19:22:15 crc kubenswrapper[4721]: E0128 19:22:15.540524 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-76rx2_openshift-machine-config-operator(6e3427a4-9a03-4a08-bf7f-7a5e96290ad6)\"" pod="openshift-machine-config-operator/machine-config-daemon-76rx2" podUID="6e3427a4-9a03-4a08-bf7f-7a5e96290ad6"
Jan 28 19:22:18 crc kubenswrapper[4721]: I0128 19:22:18.066697 4721 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-wfz47"
Jan 28 19:22:18 crc kubenswrapper[4721]: I0128 19:22:18.067062 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-wfz47"
Jan 28 19:22:18 crc kubenswrapper[4721]: I0128 19:22:18.126874 4721 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-wfz47"
Jan 28 19:22:18 crc kubenswrapper[4721]: I0128 19:22:18.811876 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-wfz47"
Jan 28 19:22:18 crc kubenswrapper[4721]: I0128 19:22:18.858785 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-wfz47"]
Jan 28 19:22:20 crc kubenswrapper[4721]: I0128 19:22:20.781366 4721 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-wfz47" podUID="38bc68f1-576f-41bf-8934-844692057244" containerName="registry-server" containerID="cri-o://c9bf8bbc8bc6cefb68dd2c59483e29b738f7e9255025a6d72f454c097a58c0c0" gracePeriod=2
Jan 28 19:22:21 crc kubenswrapper[4721]: I0128 19:22:21.606412 4721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wfz47"
Jan 28 19:22:21 crc kubenswrapper[4721]: I0128 19:22:21.664469 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lbh8j\" (UniqueName: \"kubernetes.io/projected/38bc68f1-576f-41bf-8934-844692057244-kube-api-access-lbh8j\") pod \"38bc68f1-576f-41bf-8934-844692057244\" (UID: \"38bc68f1-576f-41bf-8934-844692057244\") "
Jan 28 19:22:21 crc kubenswrapper[4721]: I0128 19:22:21.665381 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/38bc68f1-576f-41bf-8934-844692057244-catalog-content\") pod \"38bc68f1-576f-41bf-8934-844692057244\" (UID: \"38bc68f1-576f-41bf-8934-844692057244\") "
Jan 28 19:22:21 crc kubenswrapper[4721]: I0128 19:22:21.665728 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/38bc68f1-576f-41bf-8934-844692057244-utilities\") pod \"38bc68f1-576f-41bf-8934-844692057244\" (UID: \"38bc68f1-576f-41bf-8934-844692057244\") "
Jan 28 19:22:21 crc kubenswrapper[4721]: I0128 19:22:21.666571 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/38bc68f1-576f-41bf-8934-844692057244-utilities" (OuterVolumeSpecName: "utilities") pod "38bc68f1-576f-41bf-8934-844692057244" (UID: "38bc68f1-576f-41bf-8934-844692057244"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 28 19:22:21 crc kubenswrapper[4721]: I0128 19:22:21.674155 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/38bc68f1-576f-41bf-8934-844692057244-kube-api-access-lbh8j" (OuterVolumeSpecName: "kube-api-access-lbh8j") pod "38bc68f1-576f-41bf-8934-844692057244" (UID: "38bc68f1-576f-41bf-8934-844692057244"). InnerVolumeSpecName "kube-api-access-lbh8j". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 28 19:22:21 crc kubenswrapper[4721]: I0128 19:22:21.693808 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/38bc68f1-576f-41bf-8934-844692057244-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "38bc68f1-576f-41bf-8934-844692057244" (UID: "38bc68f1-576f-41bf-8934-844692057244"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 28 19:22:21 crc kubenswrapper[4721]: I0128 19:22:21.769137 4721 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/38bc68f1-576f-41bf-8934-844692057244-utilities\") on node \"crc\" DevicePath \"\""
Jan 28 19:22:21 crc kubenswrapper[4721]: I0128 19:22:21.769196 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lbh8j\" (UniqueName: \"kubernetes.io/projected/38bc68f1-576f-41bf-8934-844692057244-kube-api-access-lbh8j\") on node \"crc\" DevicePath \"\""
Jan 28 19:22:21 crc kubenswrapper[4721]: I0128 19:22:21.769214 4721 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/38bc68f1-576f-41bf-8934-844692057244-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 28 19:22:21 crc kubenswrapper[4721]: I0128 19:22:21.801945 4721 generic.go:334] "Generic (PLEG): container finished" podID="38bc68f1-576f-41bf-8934-844692057244" containerID="c9bf8bbc8bc6cefb68dd2c59483e29b738f7e9255025a6d72f454c097a58c0c0" exitCode=0
Jan 28 19:22:21 crc kubenswrapper[4721]: I0128 19:22:21.802000 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wfz47" event={"ID":"38bc68f1-576f-41bf-8934-844692057244","Type":"ContainerDied","Data":"c9bf8bbc8bc6cefb68dd2c59483e29b738f7e9255025a6d72f454c097a58c0c0"}
Jan 28 19:22:21 crc kubenswrapper[4721]: I0128 19:22:21.802033 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wfz47" event={"ID":"38bc68f1-576f-41bf-8934-844692057244","Type":"ContainerDied","Data":"59e78ff1f3d9cc32e6904512bd0590b4dee0682283a594f5f51e00e2048023c7"}
Jan 28 19:22:21 crc kubenswrapper[4721]: I0128 19:22:21.802058 4721 scope.go:117] "RemoveContainer" containerID="c9bf8bbc8bc6cefb68dd2c59483e29b738f7e9255025a6d72f454c097a58c0c0"
Jan 28 19:22:21 crc kubenswrapper[4721]: I0128 19:22:21.802280 4721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wfz47"
Jan 28 19:22:21 crc kubenswrapper[4721]: I0128 19:22:21.846933 4721 scope.go:117] "RemoveContainer" containerID="1028b4b34ccb50ff6be634613b945471fd3b0435c9b7b7133ec4865f3c002059"
Jan 28 19:22:21 crc kubenswrapper[4721]: I0128 19:22:21.858426 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-wfz47"]
Jan 28 19:22:21 crc kubenswrapper[4721]: I0128 19:22:21.869011 4721 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-wfz47"]
Jan 28 19:22:21 crc kubenswrapper[4721]: I0128 19:22:21.874785 4721 scope.go:117] "RemoveContainer" containerID="78a369cde1019daff158c0f99edbe364130473126c79d1fb51ba607450bfb8ff"
Jan 28 19:22:21 crc kubenswrapper[4721]: I0128 19:22:21.938376 4721 scope.go:117] "RemoveContainer" containerID="c9bf8bbc8bc6cefb68dd2c59483e29b738f7e9255025a6d72f454c097a58c0c0"
Jan 28 19:22:21 crc kubenswrapper[4721]: E0128 19:22:21.939131 4721 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c9bf8bbc8bc6cefb68dd2c59483e29b738f7e9255025a6d72f454c097a58c0c0\": container with ID starting with c9bf8bbc8bc6cefb68dd2c59483e29b738f7e9255025a6d72f454c097a58c0c0 not found: ID does not exist" containerID="c9bf8bbc8bc6cefb68dd2c59483e29b738f7e9255025a6d72f454c097a58c0c0"
Jan 28 19:22:21 crc kubenswrapper[4721]: I0128 19:22:21.939309 4721 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c9bf8bbc8bc6cefb68dd2c59483e29b738f7e9255025a6d72f454c097a58c0c0"} err="failed to get container status \"c9bf8bbc8bc6cefb68dd2c59483e29b738f7e9255025a6d72f454c097a58c0c0\": rpc error: code = NotFound desc = could not find container \"c9bf8bbc8bc6cefb68dd2c59483e29b738f7e9255025a6d72f454c097a58c0c0\": container with ID starting with c9bf8bbc8bc6cefb68dd2c59483e29b738f7e9255025a6d72f454c097a58c0c0 not found: ID does not exist"
Jan 28 19:22:21 crc kubenswrapper[4721]: I0128 19:22:21.939349 4721 scope.go:117] "RemoveContainer" containerID="1028b4b34ccb50ff6be634613b945471fd3b0435c9b7b7133ec4865f3c002059"
Jan 28 19:22:21 crc kubenswrapper[4721]: E0128 19:22:21.939833 4721 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1028b4b34ccb50ff6be634613b945471fd3b0435c9b7b7133ec4865f3c002059\": container with ID starting with 1028b4b34ccb50ff6be634613b945471fd3b0435c9b7b7133ec4865f3c002059 not found: ID does not exist" containerID="1028b4b34ccb50ff6be634613b945471fd3b0435c9b7b7133ec4865f3c002059"
Jan 28 19:22:21 crc kubenswrapper[4721]: I0128 19:22:21.939885 4721 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1028b4b34ccb50ff6be634613b945471fd3b0435c9b7b7133ec4865f3c002059"} err="failed to get container status \"1028b4b34ccb50ff6be634613b945471fd3b0435c9b7b7133ec4865f3c002059\": rpc error: code = NotFound desc = could not find container \"1028b4b34ccb50ff6be634613b945471fd3b0435c9b7b7133ec4865f3c002059\": container with ID starting with 1028b4b34ccb50ff6be634613b945471fd3b0435c9b7b7133ec4865f3c002059 not found: ID does not exist"
Jan 28 19:22:21 crc kubenswrapper[4721]: I0128 19:22:21.939904 4721 scope.go:117] "RemoveContainer" containerID="78a369cde1019daff158c0f99edbe364130473126c79d1fb51ba607450bfb8ff"
Jan 28 19:22:21 crc kubenswrapper[4721]: E0128 19:22:21.940297 4721 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"78a369cde1019daff158c0f99edbe364130473126c79d1fb51ba607450bfb8ff\": container with ID starting with 78a369cde1019daff158c0f99edbe364130473126c79d1fb51ba607450bfb8ff not found: ID does not exist" containerID="78a369cde1019daff158c0f99edbe364130473126c79d1fb51ba607450bfb8ff"
failed" err="rpc error: code = NotFound desc = could not find container \"78a369cde1019daff158c0f99edbe364130473126c79d1fb51ba607450bfb8ff\": container with ID starting with 78a369cde1019daff158c0f99edbe364130473126c79d1fb51ba607450bfb8ff not found: ID does not exist" containerID="78a369cde1019daff158c0f99edbe364130473126c79d1fb51ba607450bfb8ff" Jan 28 19:22:21 crc kubenswrapper[4721]: I0128 19:22:21.940333 4721 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"78a369cde1019daff158c0f99edbe364130473126c79d1fb51ba607450bfb8ff"} err="failed to get container status \"78a369cde1019daff158c0f99edbe364130473126c79d1fb51ba607450bfb8ff\": rpc error: code = NotFound desc = could not find container \"78a369cde1019daff158c0f99edbe364130473126c79d1fb51ba607450bfb8ff\": container with ID starting with 78a369cde1019daff158c0f99edbe364130473126c79d1fb51ba607450bfb8ff not found: ID does not exist" Jan 28 19:22:23 crc kubenswrapper[4721]: I0128 19:22:23.543582 4721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="38bc68f1-576f-41bf-8934-844692057244" path="/var/lib/kubelet/pods/38bc68f1-576f-41bf-8934-844692057244/volumes" Jan 28 19:22:25 crc kubenswrapper[4721]: I0128 19:22:25.845039 4721 generic.go:334] "Generic (PLEG): container finished" podID="1e117cf9-a997-4596-9334-0edb394b7fed" containerID="1d59f21d490dce251689ac00b25c6b8d12f46eda5927d3477e08cdf35544b871" exitCode=0 Jan 28 19:22:25 crc kubenswrapper[4721]: I0128 19:22:25.845150 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-28zzx" event={"ID":"1e117cf9-a997-4596-9334-0edb394b7fed","Type":"ContainerDied","Data":"1d59f21d490dce251689ac00b25c6b8d12f46eda5927d3477e08cdf35544b871"} Jan 28 19:22:27 crc kubenswrapper[4721]: I0128 19:22:27.395358 4721 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-28zzx" Jan 28 19:22:27 crc kubenswrapper[4721]: I0128 19:22:27.414568 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1e117cf9-a997-4596-9334-0edb394b7fed-inventory\") pod \"1e117cf9-a997-4596-9334-0edb394b7fed\" (UID: \"1e117cf9-a997-4596-9334-0edb394b7fed\") " Jan 28 19:22:27 crc kubenswrapper[4721]: I0128 19:22:27.414618 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/1e117cf9-a997-4596-9334-0edb394b7fed-ceilometer-compute-config-data-1\") pod \"1e117cf9-a997-4596-9334-0edb394b7fed\" (UID: \"1e117cf9-a997-4596-9334-0edb394b7fed\") " Jan 28 19:22:27 crc kubenswrapper[4721]: I0128 19:22:27.414697 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1e117cf9-a997-4596-9334-0edb394b7fed-ssh-key-openstack-edpm-ipam\") pod \"1e117cf9-a997-4596-9334-0edb394b7fed\" (UID: \"1e117cf9-a997-4596-9334-0edb394b7fed\") " Jan 28 19:22:27 crc kubenswrapper[4721]: I0128 19:22:27.414736 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-72k6w\" (UniqueName: \"kubernetes.io/projected/1e117cf9-a997-4596-9334-0edb394b7fed-kube-api-access-72k6w\") pod \"1e117cf9-a997-4596-9334-0edb394b7fed\" (UID: \"1e117cf9-a997-4596-9334-0edb394b7fed\") " Jan 28 19:22:27 crc kubenswrapper[4721]: I0128 19:22:27.414763 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e117cf9-a997-4596-9334-0edb394b7fed-telemetry-combined-ca-bundle\") pod \"1e117cf9-a997-4596-9334-0edb394b7fed\" (UID: \"1e117cf9-a997-4596-9334-0edb394b7fed\") " Jan 28 19:22:27 crc kubenswrapper[4721]: I0128 19:22:27.414920 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/1e117cf9-a997-4596-9334-0edb394b7fed-ceilometer-compute-config-data-0\") pod \"1e117cf9-a997-4596-9334-0edb394b7fed\" (UID: \"1e117cf9-a997-4596-9334-0edb394b7fed\") " Jan 28 19:22:27 crc kubenswrapper[4721]: I0128 19:22:27.414967 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/1e117cf9-a997-4596-9334-0edb394b7fed-ceilometer-compute-config-data-2\") pod \"1e117cf9-a997-4596-9334-0edb394b7fed\" (UID: \"1e117cf9-a997-4596-9334-0edb394b7fed\") " Jan 28 19:22:27 crc kubenswrapper[4721]: I0128 19:22:27.420927 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1e117cf9-a997-4596-9334-0edb394b7fed-kube-api-access-72k6w" (OuterVolumeSpecName: "kube-api-access-72k6w") pod "1e117cf9-a997-4596-9334-0edb394b7fed" (UID: "1e117cf9-a997-4596-9334-0edb394b7fed"). InnerVolumeSpecName "kube-api-access-72k6w". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 19:22:27 crc kubenswrapper[4721]: I0128 19:22:27.421731 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1e117cf9-a997-4596-9334-0edb394b7fed-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "1e117cf9-a997-4596-9334-0edb394b7fed" (UID: "1e117cf9-a997-4596-9334-0edb394b7fed"). 
InnerVolumeSpecName "telemetry-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 19:22:27 crc kubenswrapper[4721]: I0128 19:22:27.449534 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1e117cf9-a997-4596-9334-0edb394b7fed-ceilometer-compute-config-data-1" (OuterVolumeSpecName: "ceilometer-compute-config-data-1") pod "1e117cf9-a997-4596-9334-0edb394b7fed" (UID: "1e117cf9-a997-4596-9334-0edb394b7fed"). InnerVolumeSpecName "ceilometer-compute-config-data-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 19:22:27 crc kubenswrapper[4721]: I0128 19:22:27.451074 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1e117cf9-a997-4596-9334-0edb394b7fed-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "1e117cf9-a997-4596-9334-0edb394b7fed" (UID: "1e117cf9-a997-4596-9334-0edb394b7fed"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 19:22:27 crc kubenswrapper[4721]: I0128 19:22:27.454020 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1e117cf9-a997-4596-9334-0edb394b7fed-inventory" (OuterVolumeSpecName: "inventory") pod "1e117cf9-a997-4596-9334-0edb394b7fed" (UID: "1e117cf9-a997-4596-9334-0edb394b7fed"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 19:22:27 crc kubenswrapper[4721]: I0128 19:22:27.467786 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1e117cf9-a997-4596-9334-0edb394b7fed-ceilometer-compute-config-data-2" (OuterVolumeSpecName: "ceilometer-compute-config-data-2") pod "1e117cf9-a997-4596-9334-0edb394b7fed" (UID: "1e117cf9-a997-4596-9334-0edb394b7fed"). InnerVolumeSpecName "ceilometer-compute-config-data-2". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 19:22:27 crc kubenswrapper[4721]: I0128 19:22:27.471476 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1e117cf9-a997-4596-9334-0edb394b7fed-ceilometer-compute-config-data-0" (OuterVolumeSpecName: "ceilometer-compute-config-data-0") pod "1e117cf9-a997-4596-9334-0edb394b7fed" (UID: "1e117cf9-a997-4596-9334-0edb394b7fed"). InnerVolumeSpecName "ceilometer-compute-config-data-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 19:22:27 crc kubenswrapper[4721]: I0128 19:22:27.516554 4721 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1e117cf9-a997-4596-9334-0edb394b7fed-inventory\") on node \"crc\" DevicePath \"\"" Jan 28 19:22:27 crc kubenswrapper[4721]: I0128 19:22:27.516582 4721 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/1e117cf9-a997-4596-9334-0edb394b7fed-ceilometer-compute-config-data-1\") on node \"crc\" DevicePath \"\"" Jan 28 19:22:27 crc kubenswrapper[4721]: I0128 19:22:27.516594 4721 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1e117cf9-a997-4596-9334-0edb394b7fed-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 28 19:22:27 crc kubenswrapper[4721]: I0128 19:22:27.516606 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-72k6w\" (UniqueName: \"kubernetes.io/projected/1e117cf9-a997-4596-9334-0edb394b7fed-kube-api-access-72k6w\") on node \"crc\" DevicePath \"\"" Jan 28 19:22:27 crc kubenswrapper[4721]: I0128 19:22:27.516615 4721 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e117cf9-a997-4596-9334-0edb394b7fed-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 19:22:27 crc kubenswrapper[4721]: I0128 19:22:27.516623 4721 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/1e117cf9-a997-4596-9334-0edb394b7fed-ceilometer-compute-config-data-0\") on node \"crc\" DevicePath \"\"" Jan 28 19:22:27 crc kubenswrapper[4721]: I0128 19:22:27.516632 4721 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/1e117cf9-a997-4596-9334-0edb394b7fed-ceilometer-compute-config-data-2\") on node \"crc\" DevicePath \"\"" Jan 28 19:22:27 crc kubenswrapper[4721]: I0128 19:22:27.865406 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-28zzx" event={"ID":"1e117cf9-a997-4596-9334-0edb394b7fed","Type":"ContainerDied","Data":"fba0a09359499abae4ef9d94b36b3d5fb1d2f00502675a17909842a176a7824b"} Jan 28 19:22:27 crc kubenswrapper[4721]: I0128 19:22:27.865465 4721 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fba0a09359499abae4ef9d94b36b3d5fb1d2f00502675a17909842a176a7824b" Jan 28 19:22:27 crc kubenswrapper[4721]: I0128 19:22:27.865507 4721 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-28zzx" Jan 28 19:22:30 crc kubenswrapper[4721]: I0128 19:22:30.528847 4721 scope.go:117] "RemoveContainer" containerID="88d5b07f7a18d2d549f667f34056676d38c608d96350f3168b3b595981c5c63b" Jan 28 19:22:30 crc kubenswrapper[4721]: E0128 19:22:30.529463 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-76rx2_openshift-machine-config-operator(6e3427a4-9a03-4a08-bf7f-7a5e96290ad6)\"" pod="openshift-machine-config-operator/machine-config-daemon-76rx2" podUID="6e3427a4-9a03-4a08-bf7f-7a5e96290ad6" Jan 28 19:22:43 crc kubenswrapper[4721]: I0128 19:22:43.528859 4721 scope.go:117] "RemoveContainer" containerID="88d5b07f7a18d2d549f667f34056676d38c608d96350f3168b3b595981c5c63b" Jan 28 19:22:43 crc kubenswrapper[4721]: E0128 19:22:43.529917 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-76rx2_openshift-machine-config-operator(6e3427a4-9a03-4a08-bf7f-7a5e96290ad6)\"" pod="openshift-machine-config-operator/machine-config-daemon-76rx2" podUID="6e3427a4-9a03-4a08-bf7f-7a5e96290ad6" Jan 28 19:22:57 crc kubenswrapper[4721]: I0128 19:22:57.529525 4721 scope.go:117] "RemoveContainer" containerID="88d5b07f7a18d2d549f667f34056676d38c608d96350f3168b3b595981c5c63b" Jan 28 19:22:57 crc kubenswrapper[4721]: E0128 19:22:57.530433 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-76rx2_openshift-machine-config-operator(6e3427a4-9a03-4a08-bf7f-7a5e96290ad6)\"" pod="openshift-machine-config-operator/machine-config-daemon-76rx2" podUID="6e3427a4-9a03-4a08-bf7f-7a5e96290ad6" Jan 28 19:23:08 crc kubenswrapper[4721]: I0128 19:23:08.528856 4721 scope.go:117] "RemoveContainer" containerID="88d5b07f7a18d2d549f667f34056676d38c608d96350f3168b3b595981c5c63b" Jan 28 19:23:08 crc kubenswrapper[4721]: E0128 19:23:08.529845 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-76rx2_openshift-machine-config-operator(6e3427a4-9a03-4a08-bf7f-7a5e96290ad6)\"" pod="openshift-machine-config-operator/machine-config-daemon-76rx2" podUID="6e3427a4-9a03-4a08-bf7f-7a5e96290ad6" Jan 28 19:23:20 crc kubenswrapper[4721]: I0128 19:23:20.855134 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/tempest-tests-tempest"] Jan 28 19:23:20 crc kubenswrapper[4721]: E0128 19:23:20.856373 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1e117cf9-a997-4596-9334-0edb394b7fed" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Jan 28 19:23:20 crc kubenswrapper[4721]: I0128 19:23:20.856397 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="1e117cf9-a997-4596-9334-0edb394b7fed" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Jan 28 19:23:20 crc kubenswrapper[4721]: E0128 19:23:20.856417 4721 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="38bc68f1-576f-41bf-8934-844692057244" containerName="extract-utilities" Jan 28 19:23:20 crc kubenswrapper[4721]: I0128 19:23:20.856424 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="38bc68f1-576f-41bf-8934-844692057244" containerName="extract-utilities" Jan 28 19:23:20 crc kubenswrapper[4721]: E0128 19:23:20.856457 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="38bc68f1-576f-41bf-8934-844692057244" containerName="extract-content" Jan 28 19:23:20 crc kubenswrapper[4721]: I0128 19:23:20.856465 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="38bc68f1-576f-41bf-8934-844692057244" containerName="extract-content" Jan 28 19:23:20 crc kubenswrapper[4721]: E0128 19:23:20.856486 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="38bc68f1-576f-41bf-8934-844692057244" containerName="registry-server" Jan 28 19:23:20 crc kubenswrapper[4721]: I0128 19:23:20.856493 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="38bc68f1-576f-41bf-8934-844692057244" containerName="registry-server" Jan 28 19:23:20 crc kubenswrapper[4721]: I0128 19:23:20.856765 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="38bc68f1-576f-41bf-8934-844692057244" containerName="registry-server" Jan 28 19:23:20 crc kubenswrapper[4721]: I0128 19:23:20.856819 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="1e117cf9-a997-4596-9334-0edb394b7fed" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Jan 28 19:23:20 crc kubenswrapper[4721]: I0128 19:23:20.857955 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest" Jan 28 19:23:20 crc kubenswrapper[4721]: I0128 19:23:20.863110 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0" Jan 28 19:23:20 crc kubenswrapper[4721]: I0128 19:23:20.863133 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-custom-data-s0" Jan 28 19:23:20 crc kubenswrapper[4721]: I0128 19:23:20.863133 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-2pmm2" Jan 28 19:23:20 crc kubenswrapper[4721]: I0128 19:23:20.863134 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"test-operator-controller-priv-key" Jan 28 19:23:20 crc kubenswrapper[4721]: I0128 19:23:20.873167 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"] Jan 28 19:23:20 crc kubenswrapper[4721]: I0128 19:23:20.938432 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-crn8k\" (UniqueName: \"kubernetes.io/projected/5e586424-d1f9-4f72-9dc8-f046e2f235f5-kube-api-access-crn8k\") pod \"tempest-tests-tempest\" (UID: \"5e586424-d1f9-4f72-9dc8-f046e2f235f5\") " pod="openstack/tempest-tests-tempest" Jan 28 19:23:20 crc kubenswrapper[4721]: I0128 19:23:20.938548 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/5e586424-d1f9-4f72-9dc8-f046e2f235f5-config-data\") pod \"tempest-tests-tempest\" (UID: \"5e586424-d1f9-4f72-9dc8-f046e2f235f5\") " pod="openstack/tempest-tests-tempest" Jan 28 19:23:20 crc kubenswrapper[4721]: I0128 19:23:20.938593 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: 
\"kubernetes.io/secret/5e586424-d1f9-4f72-9dc8-f046e2f235f5-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"5e586424-d1f9-4f72-9dc8-f046e2f235f5\") " pod="openstack/tempest-tests-tempest" Jan 28 19:23:20 crc kubenswrapper[4721]: I0128 19:23:20.938613 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/5e586424-d1f9-4f72-9dc8-f046e2f235f5-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"5e586424-d1f9-4f72-9dc8-f046e2f235f5\") " pod="openstack/tempest-tests-tempest" Jan 28 19:23:20 crc kubenswrapper[4721]: I0128 19:23:20.938886 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"tempest-tests-tempest\" (UID: \"5e586424-d1f9-4f72-9dc8-f046e2f235f5\") " pod="openstack/tempest-tests-tempest" Jan 28 19:23:20 crc kubenswrapper[4721]: I0128 19:23:20.938952 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/5e586424-d1f9-4f72-9dc8-f046e2f235f5-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"5e586424-d1f9-4f72-9dc8-f046e2f235f5\") " pod="openstack/tempest-tests-tempest" Jan 28 19:23:20 crc kubenswrapper[4721]: I0128 19:23:20.939055 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/5e586424-d1f9-4f72-9dc8-f046e2f235f5-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"5e586424-d1f9-4f72-9dc8-f046e2f235f5\") " pod="openstack/tempest-tests-tempest" Jan 28 19:23:20 crc kubenswrapper[4721]: I0128 19:23:20.939127 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/5e586424-d1f9-4f72-9dc8-f046e2f235f5-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"5e586424-d1f9-4f72-9dc8-f046e2f235f5\") " pod="openstack/tempest-tests-tempest" Jan 28 19:23:20 crc kubenswrapper[4721]: I0128 19:23:20.939387 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/5e586424-d1f9-4f72-9dc8-f046e2f235f5-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"5e586424-d1f9-4f72-9dc8-f046e2f235f5\") " pod="openstack/tempest-tests-tempest" Jan 28 19:23:21 crc kubenswrapper[4721]: I0128 19:23:21.042006 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/5e586424-d1f9-4f72-9dc8-f046e2f235f5-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"5e586424-d1f9-4f72-9dc8-f046e2f235f5\") " pod="openstack/tempest-tests-tempest" Jan 28 19:23:21 crc kubenswrapper[4721]: I0128 19:23:21.042062 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/5e586424-d1f9-4f72-9dc8-f046e2f235f5-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"5e586424-d1f9-4f72-9dc8-f046e2f235f5\") " pod="openstack/tempest-tests-tempest" Jan 28 19:23:21 crc kubenswrapper[4721]: I0128 19:23:21.042197 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" 
(UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"tempest-tests-tempest\" (UID: \"5e586424-d1f9-4f72-9dc8-f046e2f235f5\") " pod="openstack/tempest-tests-tempest" Jan 28 19:23:21 crc kubenswrapper[4721]: I0128 19:23:21.042229 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/5e586424-d1f9-4f72-9dc8-f046e2f235f5-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"5e586424-d1f9-4f72-9dc8-f046e2f235f5\") " pod="openstack/tempest-tests-tempest" Jan 28 19:23:21 crc kubenswrapper[4721]: I0128 19:23:21.042283 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/5e586424-d1f9-4f72-9dc8-f046e2f235f5-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"5e586424-d1f9-4f72-9dc8-f046e2f235f5\") " pod="openstack/tempest-tests-tempest" Jan 28 19:23:21 crc kubenswrapper[4721]: I0128 19:23:21.042310 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/5e586424-d1f9-4f72-9dc8-f046e2f235f5-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"5e586424-d1f9-4f72-9dc8-f046e2f235f5\") " pod="openstack/tempest-tests-tempest" Jan 28 19:23:21 crc kubenswrapper[4721]: I0128 19:23:21.042337 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/5e586424-d1f9-4f72-9dc8-f046e2f235f5-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"5e586424-d1f9-4f72-9dc8-f046e2f235f5\") " pod="openstack/tempest-tests-tempest" Jan 28 19:23:21 crc kubenswrapper[4721]: I0128 19:23:21.042448 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-crn8k\" (UniqueName: \"kubernetes.io/projected/5e586424-d1f9-4f72-9dc8-f046e2f235f5-kube-api-access-crn8k\") pod \"tempest-tests-tempest\" (UID: \"5e586424-d1f9-4f72-9dc8-f046e2f235f5\") " pod="openstack/tempest-tests-tempest" Jan 28 19:23:21 crc kubenswrapper[4721]: I0128 19:23:21.042519 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/5e586424-d1f9-4f72-9dc8-f046e2f235f5-config-data\") pod \"tempest-tests-tempest\" (UID: \"5e586424-d1f9-4f72-9dc8-f046e2f235f5\") " pod="openstack/tempest-tests-tempest" Jan 28 19:23:21 crc kubenswrapper[4721]: I0128 19:23:21.042640 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/5e586424-d1f9-4f72-9dc8-f046e2f235f5-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"5e586424-d1f9-4f72-9dc8-f046e2f235f5\") " pod="openstack/tempest-tests-tempest" Jan 28 19:23:21 crc kubenswrapper[4721]: I0128 19:23:21.042826 4721 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"tempest-tests-tempest\" (UID: \"5e586424-d1f9-4f72-9dc8-f046e2f235f5\") device mount path \"/mnt/openstack/pv05\"" pod="openstack/tempest-tests-tempest" Jan 28 19:23:21 crc kubenswrapper[4721]: I0128 19:23:21.043028 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-workdir\" (UniqueName: 
\"kubernetes.io/empty-dir/5e586424-d1f9-4f72-9dc8-f046e2f235f5-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"5e586424-d1f9-4f72-9dc8-f046e2f235f5\") " pod="openstack/tempest-tests-tempest" Jan 28 19:23:21 crc kubenswrapper[4721]: I0128 19:23:21.043921 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/5e586424-d1f9-4f72-9dc8-f046e2f235f5-config-data\") pod \"tempest-tests-tempest\" (UID: \"5e586424-d1f9-4f72-9dc8-f046e2f235f5\") " pod="openstack/tempest-tests-tempest" Jan 28 19:23:21 crc kubenswrapper[4721]: I0128 19:23:21.044130 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/5e586424-d1f9-4f72-9dc8-f046e2f235f5-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"5e586424-d1f9-4f72-9dc8-f046e2f235f5\") " pod="openstack/tempest-tests-tempest" Jan 28 19:23:21 crc kubenswrapper[4721]: I0128 19:23:21.050786 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/5e586424-d1f9-4f72-9dc8-f046e2f235f5-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"5e586424-d1f9-4f72-9dc8-f046e2f235f5\") " pod="openstack/tempest-tests-tempest" Jan 28 19:23:21 crc kubenswrapper[4721]: I0128 19:23:21.050933 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/5e586424-d1f9-4f72-9dc8-f046e2f235f5-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"5e586424-d1f9-4f72-9dc8-f046e2f235f5\") " pod="openstack/tempest-tests-tempest" Jan 28 19:23:21 crc kubenswrapper[4721]: I0128 19:23:21.061112 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/5e586424-d1f9-4f72-9dc8-f046e2f235f5-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"5e586424-d1f9-4f72-9dc8-f046e2f235f5\") " pod="openstack/tempest-tests-tempest" Jan 28 19:23:21 crc kubenswrapper[4721]: I0128 19:23:21.061437 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-crn8k\" (UniqueName: \"kubernetes.io/projected/5e586424-d1f9-4f72-9dc8-f046e2f235f5-kube-api-access-crn8k\") pod \"tempest-tests-tempest\" (UID: \"5e586424-d1f9-4f72-9dc8-f046e2f235f5\") " pod="openstack/tempest-tests-tempest" Jan 28 19:23:21 crc kubenswrapper[4721]: I0128 19:23:21.080369 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"tempest-tests-tempest\" (UID: \"5e586424-d1f9-4f72-9dc8-f046e2f235f5\") " pod="openstack/tempest-tests-tempest" Jan 28 19:23:21 crc kubenswrapper[4721]: I0128 19:23:21.178943 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest" Jan 28 19:23:21 crc kubenswrapper[4721]: I0128 19:23:21.675704 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"] Jan 28 19:23:21 crc kubenswrapper[4721]: I0128 19:23:21.680402 4721 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 28 19:23:22 crc kubenswrapper[4721]: I0128 19:23:22.416606 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"5e586424-d1f9-4f72-9dc8-f046e2f235f5","Type":"ContainerStarted","Data":"dcad7dc726a17c8f33b0c2e099d3bef9d1203f0a9acc5cec6147e75237356a93"} Jan 28 19:23:23 crc kubenswrapper[4721]: I0128 19:23:23.528402 4721 scope.go:117] "RemoveContainer" containerID="88d5b07f7a18d2d549f667f34056676d38c608d96350f3168b3b595981c5c63b" Jan 28 19:23:23 crc kubenswrapper[4721]: E0128 19:23:23.528944 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-76rx2_openshift-machine-config-operator(6e3427a4-9a03-4a08-bf7f-7a5e96290ad6)\"" pod="openshift-machine-config-operator/machine-config-daemon-76rx2" podUID="6e3427a4-9a03-4a08-bf7f-7a5e96290ad6" Jan 28 19:23:36 crc kubenswrapper[4721]: I0128 19:23:36.530079 4721 scope.go:117] "RemoveContainer" containerID="88d5b07f7a18d2d549f667f34056676d38c608d96350f3168b3b595981c5c63b" Jan 28 19:23:36 crc kubenswrapper[4721]: E0128 19:23:36.531022 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-76rx2_openshift-machine-config-operator(6e3427a4-9a03-4a08-bf7f-7a5e96290ad6)\"" pod="openshift-machine-config-operator/machine-config-daemon-76rx2" podUID="6e3427a4-9a03-4a08-bf7f-7a5e96290ad6" Jan 28 19:23:48 crc kubenswrapper[4721]: I0128 19:23:48.530228 4721 scope.go:117] "RemoveContainer" containerID="88d5b07f7a18d2d549f667f34056676d38c608d96350f3168b3b595981c5c63b" Jan 28 19:23:48 crc kubenswrapper[4721]: E0128 19:23:48.531101 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-76rx2_openshift-machine-config-operator(6e3427a4-9a03-4a08-bf7f-7a5e96290ad6)\"" pod="openshift-machine-config-operator/machine-config-daemon-76rx2" podUID="6e3427a4-9a03-4a08-bf7f-7a5e96290ad6" Jan 28 19:23:54 crc kubenswrapper[4721]: E0128 19:23:54.065729 4721 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified" Jan 28 19:23:54 crc kubenswrapper[4721]: E0128 19:23:54.066546 4721 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:tempest-tests-tempest-tests-runner,Image:quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/test_operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-ephemeral-workdir,ReadOnly:false,MountPath:/var/lib/tempest,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-ephemeral-temporary,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-logs,ReadOnly:false,MountPath:/var/lib/tempest/external_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/etc/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/var/lib/tempest/.config/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config-secret,ReadOnly:false,MountPath:/etc/openstack/secure.yaml,SubPath:secure.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ca-certs,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ssh-key,ReadOnly:false,MountPath:/var/lib/tempest/id_ecdsa,SubPath:ssh_key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-crn8k,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42480,RunAsNonRoot:*false,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:*true,RunAsGroup:*42480,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-custom-data-s0,},Optional:nil,},SecretRef:nil,},EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-env-vars-s0,},Optional:nil,},SecretRef:nil,},},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod tempest-tests-tempest_openstack(5e586424-d1f9-4f72-9dc8-f046e2f235f5): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 28 19:23:54 crc kubenswrapper[4721]: E0128 19:23:54.067737 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tempest-tests-tempest-tests-runner\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/tempest-tests-tempest" 
podUID="5e586424-d1f9-4f72-9dc8-f046e2f235f5" Jan 28 19:23:54 crc kubenswrapper[4721]: E0128 19:23:54.812622 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tempest-tests-tempest-tests-runner\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified\\\"\"" pod="openstack/tempest-tests-tempest" podUID="5e586424-d1f9-4f72-9dc8-f046e2f235f5" Jan 28 19:24:03 crc kubenswrapper[4721]: I0128 19:24:03.530256 4721 scope.go:117] "RemoveContainer" containerID="88d5b07f7a18d2d549f667f34056676d38c608d96350f3168b3b595981c5c63b" Jan 28 19:24:03 crc kubenswrapper[4721]: E0128 19:24:03.531455 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-76rx2_openshift-machine-config-operator(6e3427a4-9a03-4a08-bf7f-7a5e96290ad6)\"" pod="openshift-machine-config-operator/machine-config-daemon-76rx2" podUID="6e3427a4-9a03-4a08-bf7f-7a5e96290ad6" Jan 28 19:24:07 crc kubenswrapper[4721]: I0128 19:24:07.982052 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0" Jan 28 19:24:09 crc kubenswrapper[4721]: I0128 19:24:09.973880 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"5e586424-d1f9-4f72-9dc8-f046e2f235f5","Type":"ContainerStarted","Data":"d865a9f8155c7c7b985db7878bfbd8f567cd20d8d6f437632b9434c0c74dd8c7"} Jan 28 19:24:10 crc kubenswrapper[4721]: I0128 19:24:10.005558 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/tempest-tests-tempest" podStartSLOduration=4.706361408 podStartE2EDuration="51.00553076s" podCreationTimestamp="2026-01-28 19:23:19 +0000 UTC" firstStartedPulling="2026-01-28 19:23:21.679828406 +0000 UTC m=+2967.405133966" lastFinishedPulling="2026-01-28 19:24:07.978997748 +0000 UTC m=+3013.704303318" observedRunningTime="2026-01-28 19:24:09.994962307 +0000 UTC m=+3015.720267867" watchObservedRunningTime="2026-01-28 19:24:10.00553076 +0000 UTC m=+3015.730836320" Jan 28 19:24:14 crc kubenswrapper[4721]: I0128 19:24:14.529640 4721 scope.go:117] "RemoveContainer" containerID="88d5b07f7a18d2d549f667f34056676d38c608d96350f3168b3b595981c5c63b" Jan 28 19:24:14 crc kubenswrapper[4721]: E0128 19:24:14.530510 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-76rx2_openshift-machine-config-operator(6e3427a4-9a03-4a08-bf7f-7a5e96290ad6)\"" pod="openshift-machine-config-operator/machine-config-daemon-76rx2" podUID="6e3427a4-9a03-4a08-bf7f-7a5e96290ad6" Jan 28 19:24:26 crc kubenswrapper[4721]: I0128 19:24:26.529967 4721 scope.go:117] "RemoveContainer" containerID="88d5b07f7a18d2d549f667f34056676d38c608d96350f3168b3b595981c5c63b" Jan 28 19:24:26 crc kubenswrapper[4721]: E0128 19:24:26.530988 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-76rx2_openshift-machine-config-operator(6e3427a4-9a03-4a08-bf7f-7a5e96290ad6)\"" pod="openshift-machine-config-operator/machine-config-daemon-76rx2" 
podUID="6e3427a4-9a03-4a08-bf7f-7a5e96290ad6" Jan 28 19:24:41 crc kubenswrapper[4721]: I0128 19:24:41.529091 4721 scope.go:117] "RemoveContainer" containerID="88d5b07f7a18d2d549f667f34056676d38c608d96350f3168b3b595981c5c63b" Jan 28 19:24:41 crc kubenswrapper[4721]: E0128 19:24:41.530007 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-76rx2_openshift-machine-config-operator(6e3427a4-9a03-4a08-bf7f-7a5e96290ad6)\"" pod="openshift-machine-config-operator/machine-config-daemon-76rx2" podUID="6e3427a4-9a03-4a08-bf7f-7a5e96290ad6" Jan 28 19:24:55 crc kubenswrapper[4721]: I0128 19:24:55.610782 4721 scope.go:117] "RemoveContainer" containerID="88d5b07f7a18d2d549f667f34056676d38c608d96350f3168b3b595981c5c63b" Jan 28 19:24:55 crc kubenswrapper[4721]: E0128 19:24:55.611971 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-76rx2_openshift-machine-config-operator(6e3427a4-9a03-4a08-bf7f-7a5e96290ad6)\"" pod="openshift-machine-config-operator/machine-config-daemon-76rx2" podUID="6e3427a4-9a03-4a08-bf7f-7a5e96290ad6" Jan 28 19:25:07 crc kubenswrapper[4721]: I0128 19:25:07.529328 4721 scope.go:117] "RemoveContainer" containerID="88d5b07f7a18d2d549f667f34056676d38c608d96350f3168b3b595981c5c63b" Jan 28 19:25:07 crc kubenswrapper[4721]: E0128 19:25:07.530114 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-76rx2_openshift-machine-config-operator(6e3427a4-9a03-4a08-bf7f-7a5e96290ad6)\"" pod="openshift-machine-config-operator/machine-config-daemon-76rx2" podUID="6e3427a4-9a03-4a08-bf7f-7a5e96290ad6" Jan 28 19:25:21 crc kubenswrapper[4721]: I0128 19:25:21.529404 4721 scope.go:117] "RemoveContainer" containerID="88d5b07f7a18d2d549f667f34056676d38c608d96350f3168b3b595981c5c63b" Jan 28 19:25:21 crc kubenswrapper[4721]: E0128 19:25:21.530132 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-76rx2_openshift-machine-config-operator(6e3427a4-9a03-4a08-bf7f-7a5e96290ad6)\"" pod="openshift-machine-config-operator/machine-config-daemon-76rx2" podUID="6e3427a4-9a03-4a08-bf7f-7a5e96290ad6" Jan 28 19:25:33 crc kubenswrapper[4721]: I0128 19:25:33.529696 4721 scope.go:117] "RemoveContainer" containerID="88d5b07f7a18d2d549f667f34056676d38c608d96350f3168b3b595981c5c63b" Jan 28 19:25:33 crc kubenswrapper[4721]: E0128 19:25:33.530639 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-76rx2_openshift-machine-config-operator(6e3427a4-9a03-4a08-bf7f-7a5e96290ad6)\"" pod="openshift-machine-config-operator/machine-config-daemon-76rx2" podUID="6e3427a4-9a03-4a08-bf7f-7a5e96290ad6" Jan 28 19:25:45 crc kubenswrapper[4721]: I0128 19:25:45.542304 4721 scope.go:117] "RemoveContainer" 
containerID="88d5b07f7a18d2d549f667f34056676d38c608d96350f3168b3b595981c5c63b" Jan 28 19:25:45 crc kubenswrapper[4721]: E0128 19:25:45.543102 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-76rx2_openshift-machine-config-operator(6e3427a4-9a03-4a08-bf7f-7a5e96290ad6)\"" pod="openshift-machine-config-operator/machine-config-daemon-76rx2" podUID="6e3427a4-9a03-4a08-bf7f-7a5e96290ad6" Jan 28 19:26:00 crc kubenswrapper[4721]: I0128 19:26:00.528922 4721 scope.go:117] "RemoveContainer" containerID="88d5b07f7a18d2d549f667f34056676d38c608d96350f3168b3b595981c5c63b" Jan 28 19:26:00 crc kubenswrapper[4721]: E0128 19:26:00.529829 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-76rx2_openshift-machine-config-operator(6e3427a4-9a03-4a08-bf7f-7a5e96290ad6)\"" pod="openshift-machine-config-operator/machine-config-daemon-76rx2" podUID="6e3427a4-9a03-4a08-bf7f-7a5e96290ad6" Jan 28 19:26:11 crc kubenswrapper[4721]: I0128 19:26:11.528848 4721 scope.go:117] "RemoveContainer" containerID="88d5b07f7a18d2d549f667f34056676d38c608d96350f3168b3b595981c5c63b" Jan 28 19:26:11 crc kubenswrapper[4721]: E0128 19:26:11.529888 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-76rx2_openshift-machine-config-operator(6e3427a4-9a03-4a08-bf7f-7a5e96290ad6)\"" pod="openshift-machine-config-operator/machine-config-daemon-76rx2" podUID="6e3427a4-9a03-4a08-bf7f-7a5e96290ad6" Jan 28 19:26:22 crc kubenswrapper[4721]: I0128 19:26:22.785206 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-wf254"] Jan 28 19:26:22 crc kubenswrapper[4721]: I0128 19:26:22.788331 4721 util.go:30] "No sandbox for pod can be found. 
Jan 28 19:26:22 crc kubenswrapper[4721]: I0128 19:26:22.785206 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-wf254"] Jan 28 19:26:22 crc kubenswrapper[4721]: I0128 19:26:22.788331 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-wf254" Jan 28 19:26:22 crc kubenswrapper[4721]: I0128 19:26:22.806870 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-wf254"] Jan 28 19:26:22 crc kubenswrapper[4721]: I0128 19:26:22.851037 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dhmzl\" (UniqueName: \"kubernetes.io/projected/d067551d-1d82-4f2f-8263-6d9b75f6cf4f-kube-api-access-dhmzl\") pod \"community-operators-wf254\" (UID: \"d067551d-1d82-4f2f-8263-6d9b75f6cf4f\") " pod="openshift-marketplace/community-operators-wf254" Jan 28 19:26:22 crc kubenswrapper[4721]: I0128 19:26:22.851105 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d067551d-1d82-4f2f-8263-6d9b75f6cf4f-catalog-content\") pod \"community-operators-wf254\" (UID: \"d067551d-1d82-4f2f-8263-6d9b75f6cf4f\") " pod="openshift-marketplace/community-operators-wf254" Jan 28 19:26:22 crc kubenswrapper[4721]: I0128 19:26:22.851166 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d067551d-1d82-4f2f-8263-6d9b75f6cf4f-utilities\") pod \"community-operators-wf254\" (UID: \"d067551d-1d82-4f2f-8263-6d9b75f6cf4f\") " pod="openshift-marketplace/community-operators-wf254" Jan 28 19:26:22 crc kubenswrapper[4721]: I0128 19:26:22.953440 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dhmzl\" (UniqueName: \"kubernetes.io/projected/d067551d-1d82-4f2f-8263-6d9b75f6cf4f-kube-api-access-dhmzl\") pod \"community-operators-wf254\" (UID: \"d067551d-1d82-4f2f-8263-6d9b75f6cf4f\") " pod="openshift-marketplace/community-operators-wf254" Jan 28 19:26:22 crc kubenswrapper[4721]: I0128 19:26:22.953489 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d067551d-1d82-4f2f-8263-6d9b75f6cf4f-catalog-content\") pod \"community-operators-wf254\" (UID: \"d067551d-1d82-4f2f-8263-6d9b75f6cf4f\") " pod="openshift-marketplace/community-operators-wf254" Jan 28 19:26:22 crc kubenswrapper[4721]: I0128 19:26:22.953563 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d067551d-1d82-4f2f-8263-6d9b75f6cf4f-utilities\") pod \"community-operators-wf254\" (UID: \"d067551d-1d82-4f2f-8263-6d9b75f6cf4f\") " pod="openshift-marketplace/community-operators-wf254" Jan 28 19:26:22 crc kubenswrapper[4721]: I0128 19:26:22.954341 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d067551d-1d82-4f2f-8263-6d9b75f6cf4f-utilities\") pod \"community-operators-wf254\" (UID: \"d067551d-1d82-4f2f-8263-6d9b75f6cf4f\") " pod="openshift-marketplace/community-operators-wf254" Jan 28 19:26:22 crc kubenswrapper[4721]: I0128 19:26:22.954336 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d067551d-1d82-4f2f-8263-6d9b75f6cf4f-catalog-content\") pod \"community-operators-wf254\" (UID: \"d067551d-1d82-4f2f-8263-6d9b75f6cf4f\") " pod="openshift-marketplace/community-operators-wf254" Jan 28 19:26:22 crc kubenswrapper[4721]: I0128 19:26:22.978156 4721 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-dhmzl\" (UniqueName: \"kubernetes.io/projected/d067551d-1d82-4f2f-8263-6d9b75f6cf4f-kube-api-access-dhmzl\") pod \"community-operators-wf254\" (UID: \"d067551d-1d82-4f2f-8263-6d9b75f6cf4f\") " pod="openshift-marketplace/community-operators-wf254" Jan 28 19:26:23 crc kubenswrapper[4721]: I0128 19:26:23.108281 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-wf254" Jan 28 19:26:23 crc kubenswrapper[4721]: I0128 19:26:23.862247 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-wf254"] Jan 28 19:26:24 crc kubenswrapper[4721]: I0128 19:26:24.576016 4721 generic.go:334] "Generic (PLEG): container finished" podID="d067551d-1d82-4f2f-8263-6d9b75f6cf4f" containerID="e775eb1f4b568c207e73c36e8cce935a8024974fd5a379f7a302844103eb9f51" exitCode=0 Jan 28 19:26:24 crc kubenswrapper[4721]: I0128 19:26:24.576069 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wf254" event={"ID":"d067551d-1d82-4f2f-8263-6d9b75f6cf4f","Type":"ContainerDied","Data":"e775eb1f4b568c207e73c36e8cce935a8024974fd5a379f7a302844103eb9f51"} Jan 28 19:26:24 crc kubenswrapper[4721]: I0128 19:26:24.576454 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wf254" event={"ID":"d067551d-1d82-4f2f-8263-6d9b75f6cf4f","Type":"ContainerStarted","Data":"d614ccfb42eabed09e8aa4a199579a0ed6afe1a3d4a380bacfd1461dd669ee3a"} Jan 28 19:26:25 crc kubenswrapper[4721]: I0128 19:26:25.592292 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wf254" event={"ID":"d067551d-1d82-4f2f-8263-6d9b75f6cf4f","Type":"ContainerStarted","Data":"619dbf0d5cff0142912d9de21cd762ffb563d3efc2da35faabf51affe1739ecd"} Jan 28 19:26:26 crc kubenswrapper[4721]: I0128 19:26:26.530080 4721 scope.go:117] "RemoveContainer" containerID="88d5b07f7a18d2d549f667f34056676d38c608d96350f3168b3b595981c5c63b" Jan 28 19:26:26 crc kubenswrapper[4721]: E0128 19:26:26.530425 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-76rx2_openshift-machine-config-operator(6e3427a4-9a03-4a08-bf7f-7a5e96290ad6)\"" pod="openshift-machine-config-operator/machine-config-daemon-76rx2" podUID="6e3427a4-9a03-4a08-bf7f-7a5e96290ad6" Jan 28 19:26:27 crc kubenswrapper[4721]: I0128 19:26:27.617452 4721 generic.go:334] "Generic (PLEG): container finished" podID="d067551d-1d82-4f2f-8263-6d9b75f6cf4f" containerID="619dbf0d5cff0142912d9de21cd762ffb563d3efc2da35faabf51affe1739ecd" exitCode=0 Jan 28 19:26:27 crc kubenswrapper[4721]: I0128 19:26:27.617661 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wf254" event={"ID":"d067551d-1d82-4f2f-8263-6d9b75f6cf4f","Type":"ContainerDied","Data":"619dbf0d5cff0142912d9de21cd762ffb563d3efc2da35faabf51affe1739ecd"} Jan 28 19:26:28 crc kubenswrapper[4721]: I0128 19:26:28.631773 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wf254" event={"ID":"d067551d-1d82-4f2f-8263-6d9b75f6cf4f","Type":"ContainerStarted","Data":"d0343257e28a42da6691165a16c187b26974dabfb1cb0f294f72151b7e8e92ca"} Jan 28 19:26:28 crc kubenswrapper[4721]: I0128 19:26:28.657612 4721 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-wf254" podStartSLOduration=3.216895737 podStartE2EDuration="6.657586409s" podCreationTimestamp="2026-01-28 19:26:22 +0000 UTC" firstStartedPulling="2026-01-28 19:26:24.578942113 +0000 UTC m=+3150.304247673" lastFinishedPulling="2026-01-28 19:26:28.019632785 +0000 UTC m=+3153.744938345" observedRunningTime="2026-01-28 19:26:28.655603307 +0000 UTC m=+3154.380908887" watchObservedRunningTime="2026-01-28 19:26:28.657586409 +0000 UTC m=+3154.382891969" Jan 28 19:26:33 crc kubenswrapper[4721]: I0128 19:26:33.109409 4721 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-wf254" Jan 28 19:26:33 crc kubenswrapper[4721]: I0128 19:26:33.110144 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-wf254" Jan 28 19:26:33 crc kubenswrapper[4721]: I0128 19:26:33.164570 4721 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-wf254" Jan 28 19:26:33 crc kubenswrapper[4721]: I0128 19:26:33.755710 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-wf254" Jan 28 19:26:33 crc kubenswrapper[4721]: I0128 19:26:33.813144 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-wf254"] Jan 28 19:26:35 crc kubenswrapper[4721]: I0128 19:26:35.721780 4721 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-wf254" podUID="d067551d-1d82-4f2f-8263-6d9b75f6cf4f" containerName="registry-server" containerID="cri-o://d0343257e28a42da6691165a16c187b26974dabfb1cb0f294f72151b7e8e92ca" gracePeriod=2 Jan 28 19:26:36 crc kubenswrapper[4721]: I0128 19:26:36.749199 4721 generic.go:334] "Generic (PLEG): container finished" podID="d067551d-1d82-4f2f-8263-6d9b75f6cf4f" containerID="d0343257e28a42da6691165a16c187b26974dabfb1cb0f294f72151b7e8e92ca" exitCode=0 Jan 28 19:26:36 crc kubenswrapper[4721]: I0128 19:26:36.749455 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wf254" event={"ID":"d067551d-1d82-4f2f-8263-6d9b75f6cf4f","Type":"ContainerDied","Data":"d0343257e28a42da6691165a16c187b26974dabfb1cb0f294f72151b7e8e92ca"} Jan 28 19:26:36 crc kubenswrapper[4721]: I0128 19:26:36.750706 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wf254" event={"ID":"d067551d-1d82-4f2f-8263-6d9b75f6cf4f","Type":"ContainerDied","Data":"d614ccfb42eabed09e8aa4a199579a0ed6afe1a3d4a380bacfd1461dd669ee3a"} Jan 28 19:26:36 crc kubenswrapper[4721]: I0128 19:26:36.750812 4721 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d614ccfb42eabed09e8aa4a199579a0ed6afe1a3d4a380bacfd1461dd669ee3a" Jan 28 19:26:36 crc kubenswrapper[4721]: I0128 19:26:36.826101 4721 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-wf254" Jan 28 19:26:36 crc kubenswrapper[4721]: I0128 19:26:36.914651 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d067551d-1d82-4f2f-8263-6d9b75f6cf4f-catalog-content\") pod \"d067551d-1d82-4f2f-8263-6d9b75f6cf4f\" (UID: \"d067551d-1d82-4f2f-8263-6d9b75f6cf4f\") " Jan 28 19:26:36 crc kubenswrapper[4721]: I0128 19:26:36.914834 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dhmzl\" (UniqueName: \"kubernetes.io/projected/d067551d-1d82-4f2f-8263-6d9b75f6cf4f-kube-api-access-dhmzl\") pod \"d067551d-1d82-4f2f-8263-6d9b75f6cf4f\" (UID: \"d067551d-1d82-4f2f-8263-6d9b75f6cf4f\") " Jan 28 19:26:36 crc kubenswrapper[4721]: I0128 19:26:36.914964 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d067551d-1d82-4f2f-8263-6d9b75f6cf4f-utilities\") pod \"d067551d-1d82-4f2f-8263-6d9b75f6cf4f\" (UID: \"d067551d-1d82-4f2f-8263-6d9b75f6cf4f\") " Jan 28 19:26:36 crc kubenswrapper[4721]: I0128 19:26:36.917263 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d067551d-1d82-4f2f-8263-6d9b75f6cf4f-utilities" (OuterVolumeSpecName: "utilities") pod "d067551d-1d82-4f2f-8263-6d9b75f6cf4f" (UID: "d067551d-1d82-4f2f-8263-6d9b75f6cf4f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 19:26:36 crc kubenswrapper[4721]: I0128 19:26:36.930606 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d067551d-1d82-4f2f-8263-6d9b75f6cf4f-kube-api-access-dhmzl" (OuterVolumeSpecName: "kube-api-access-dhmzl") pod "d067551d-1d82-4f2f-8263-6d9b75f6cf4f" (UID: "d067551d-1d82-4f2f-8263-6d9b75f6cf4f"). InnerVolumeSpecName "kube-api-access-dhmzl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 19:26:36 crc kubenswrapper[4721]: I0128 19:26:36.986160 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d067551d-1d82-4f2f-8263-6d9b75f6cf4f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d067551d-1d82-4f2f-8263-6d9b75f6cf4f" (UID: "d067551d-1d82-4f2f-8263-6d9b75f6cf4f"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 19:26:37 crc kubenswrapper[4721]: I0128 19:26:37.017915 4721 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d067551d-1d82-4f2f-8263-6d9b75f6cf4f-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 19:26:37 crc kubenswrapper[4721]: I0128 19:26:37.018214 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dhmzl\" (UniqueName: \"kubernetes.io/projected/d067551d-1d82-4f2f-8263-6d9b75f6cf4f-kube-api-access-dhmzl\") on node \"crc\" DevicePath \"\"" Jan 28 19:26:37 crc kubenswrapper[4721]: I0128 19:26:37.018313 4721 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d067551d-1d82-4f2f-8263-6d9b75f6cf4f-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 19:26:37 crc kubenswrapper[4721]: I0128 19:26:37.535778 4721 scope.go:117] "RemoveContainer" containerID="88d5b07f7a18d2d549f667f34056676d38c608d96350f3168b3b595981c5c63b" Jan 28 19:26:37 crc kubenswrapper[4721]: E0128 19:26:37.536416 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-76rx2_openshift-machine-config-operator(6e3427a4-9a03-4a08-bf7f-7a5e96290ad6)\"" pod="openshift-machine-config-operator/machine-config-daemon-76rx2" podUID="6e3427a4-9a03-4a08-bf7f-7a5e96290ad6" Jan 28 19:26:37 crc kubenswrapper[4721]: I0128 19:26:37.760327 4721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-wf254" Jan 28 19:26:37 crc kubenswrapper[4721]: I0128 19:26:37.791345 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-wf254"] Jan 28 19:26:37 crc kubenswrapper[4721]: I0128 19:26:37.805749 4721 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-wf254"] Jan 28 19:26:39 crc kubenswrapper[4721]: I0128 19:26:39.544292 4721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d067551d-1d82-4f2f-8263-6d9b75f6cf4f" path="/var/lib/kubelet/pods/d067551d-1d82-4f2f-8263-6d9b75f6cf4f/volumes" Jan 28 19:26:42 crc kubenswrapper[4721]: I0128 19:26:42.507390 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-gskdg"] Jan 28 19:26:42 crc kubenswrapper[4721]: E0128 19:26:42.508534 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d067551d-1d82-4f2f-8263-6d9b75f6cf4f" containerName="extract-content" Jan 28 19:26:42 crc kubenswrapper[4721]: I0128 19:26:42.508551 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="d067551d-1d82-4f2f-8263-6d9b75f6cf4f" containerName="extract-content" Jan 28 19:26:42 crc kubenswrapper[4721]: E0128 19:26:42.508572 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d067551d-1d82-4f2f-8263-6d9b75f6cf4f" containerName="registry-server" Jan 28 19:26:42 crc kubenswrapper[4721]: I0128 19:26:42.508580 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="d067551d-1d82-4f2f-8263-6d9b75f6cf4f" containerName="registry-server" Jan 28 19:26:42 crc kubenswrapper[4721]: E0128 19:26:42.508604 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d067551d-1d82-4f2f-8263-6d9b75f6cf4f" containerName="extract-utilities" Jan 28 19:26:42 crc kubenswrapper[4721]: I0128 
Jan 28 19:26:42 crc kubenswrapper[4721]: I0128 19:26:42.507390 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-gskdg"] Jan 28 19:26:42 crc kubenswrapper[4721]: E0128 19:26:42.508534 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d067551d-1d82-4f2f-8263-6d9b75f6cf4f" containerName="extract-content" Jan 28 19:26:42 crc kubenswrapper[4721]: I0128 19:26:42.508551 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="d067551d-1d82-4f2f-8263-6d9b75f6cf4f" containerName="extract-content" Jan 28 19:26:42 crc kubenswrapper[4721]: E0128 19:26:42.508572 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d067551d-1d82-4f2f-8263-6d9b75f6cf4f" containerName="registry-server" Jan 28 19:26:42 crc kubenswrapper[4721]: I0128 19:26:42.508580 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="d067551d-1d82-4f2f-8263-6d9b75f6cf4f" containerName="registry-server" Jan 28 19:26:42 crc kubenswrapper[4721]: E0128 19:26:42.508604 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d067551d-1d82-4f2f-8263-6d9b75f6cf4f" containerName="extract-utilities" Jan 28 19:26:42 crc kubenswrapper[4721]: I0128 19:26:42.508612 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="d067551d-1d82-4f2f-8263-6d9b75f6cf4f" containerName="extract-utilities" Jan 28 19:26:42 crc kubenswrapper[4721]: I0128 19:26:42.508927 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="d067551d-1d82-4f2f-8263-6d9b75f6cf4f" containerName="registry-server" Jan 28 19:26:42 crc kubenswrapper[4721]: I0128 19:26:42.511058 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-gskdg" Jan 28 19:26:42 crc kubenswrapper[4721]: I0128 19:26:42.529983 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-gskdg"] Jan 28 19:26:42 crc kubenswrapper[4721]: I0128 19:26:42.553900 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/55192242-ba42-479d-ab93-d44913f58182-catalog-content\") pod \"certified-operators-gskdg\" (UID: \"55192242-ba42-479d-ab93-d44913f58182\") " pod="openshift-marketplace/certified-operators-gskdg" Jan 28 19:26:42 crc kubenswrapper[4721]: I0128 19:26:42.553978 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/55192242-ba42-479d-ab93-d44913f58182-utilities\") pod \"certified-operators-gskdg\" (UID: \"55192242-ba42-479d-ab93-d44913f58182\") " pod="openshift-marketplace/certified-operators-gskdg" Jan 28 19:26:42 crc kubenswrapper[4721]: I0128 19:26:42.554041 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z5qvc\" (UniqueName: \"kubernetes.io/projected/55192242-ba42-479d-ab93-d44913f58182-kube-api-access-z5qvc\") pod \"certified-operators-gskdg\" (UID: \"55192242-ba42-479d-ab93-d44913f58182\") " pod="openshift-marketplace/certified-operators-gskdg" Jan 28 19:26:42 crc kubenswrapper[4721]: I0128 19:26:42.656474 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z5qvc\" (UniqueName: \"kubernetes.io/projected/55192242-ba42-479d-ab93-d44913f58182-kube-api-access-z5qvc\") pod \"certified-operators-gskdg\" (UID: \"55192242-ba42-479d-ab93-d44913f58182\") " pod="openshift-marketplace/certified-operators-gskdg" Jan 28 19:26:42 crc kubenswrapper[4721]: I0128 19:26:42.656806 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/55192242-ba42-479d-ab93-d44913f58182-catalog-content\") pod \"certified-operators-gskdg\" (UID: \"55192242-ba42-479d-ab93-d44913f58182\") " pod="openshift-marketplace/certified-operators-gskdg" Jan 28 19:26:42 crc kubenswrapper[4721]: I0128 19:26:42.656844 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/55192242-ba42-479d-ab93-d44913f58182-utilities\") pod \"certified-operators-gskdg\" (UID: \"55192242-ba42-479d-ab93-d44913f58182\") " pod="openshift-marketplace/certified-operators-gskdg" Jan 28 19:26:42 crc kubenswrapper[4721]: I0128 19:26:42.657347 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/55192242-ba42-479d-ab93-d44913f58182-utilities\") pod \"certified-operators-gskdg\" (UID: \"55192242-ba42-479d-ab93-d44913f58182\") " pod="openshift-marketplace/certified-operators-gskdg" Jan 28 19:26:42 crc 
kubenswrapper[4721]: I0128 19:26:42.657356 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/55192242-ba42-479d-ab93-d44913f58182-catalog-content\") pod \"certified-operators-gskdg\" (UID: \"55192242-ba42-479d-ab93-d44913f58182\") " pod="openshift-marketplace/certified-operators-gskdg" Jan 28 19:26:42 crc kubenswrapper[4721]: I0128 19:26:42.683258 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z5qvc\" (UniqueName: \"kubernetes.io/projected/55192242-ba42-479d-ab93-d44913f58182-kube-api-access-z5qvc\") pod \"certified-operators-gskdg\" (UID: \"55192242-ba42-479d-ab93-d44913f58182\") " pod="openshift-marketplace/certified-operators-gskdg" Jan 28 19:26:42 crc kubenswrapper[4721]: I0128 19:26:42.845764 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-gskdg" Jan 28 19:26:43 crc kubenswrapper[4721]: I0128 19:26:43.501979 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-gskdg"] Jan 28 19:26:43 crc kubenswrapper[4721]: I0128 19:26:43.827093 4721 generic.go:334] "Generic (PLEG): container finished" podID="55192242-ba42-479d-ab93-d44913f58182" containerID="46241175a43f8f913ee2022061c7e487113055e45b9812e2ff195777523b6159" exitCode=0 Jan 28 19:26:43 crc kubenswrapper[4721]: I0128 19:26:43.827159 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gskdg" event={"ID":"55192242-ba42-479d-ab93-d44913f58182","Type":"ContainerDied","Data":"46241175a43f8f913ee2022061c7e487113055e45b9812e2ff195777523b6159"} Jan 28 19:26:43 crc kubenswrapper[4721]: I0128 19:26:43.827506 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gskdg" event={"ID":"55192242-ba42-479d-ab93-d44913f58182","Type":"ContainerStarted","Data":"8dfa21fcbf79b1a5a66926646a24b07629be82150ebd358243267db7f16b99ec"} Jan 28 19:26:44 crc kubenswrapper[4721]: I0128 19:26:44.839645 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gskdg" event={"ID":"55192242-ba42-479d-ab93-d44913f58182","Type":"ContainerStarted","Data":"b9a95739f4447a653cb304c57b3eec66389533701bb63a06810564e206fba48d"} Jan 28 19:26:46 crc kubenswrapper[4721]: I0128 19:26:46.862432 4721 generic.go:334] "Generic (PLEG): container finished" podID="55192242-ba42-479d-ab93-d44913f58182" containerID="b9a95739f4447a653cb304c57b3eec66389533701bb63a06810564e206fba48d" exitCode=0 Jan 28 19:26:46 crc kubenswrapper[4721]: I0128 19:26:46.862536 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gskdg" event={"ID":"55192242-ba42-479d-ab93-d44913f58182","Type":"ContainerDied","Data":"b9a95739f4447a653cb304c57b3eec66389533701bb63a06810564e206fba48d"} Jan 28 19:26:47 crc kubenswrapper[4721]: I0128 19:26:47.875324 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gskdg" event={"ID":"55192242-ba42-479d-ab93-d44913f58182","Type":"ContainerStarted","Data":"b69656d39d10cb2f4bb9a04b25549eeea759e164d1c882730baed821e91cfde7"} Jan 28 19:26:47 crc kubenswrapper[4721]: I0128 19:26:47.899045 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-gskdg" podStartSLOduration=2.445197156 podStartE2EDuration="5.899021081s" podCreationTimestamp="2026-01-28 
19:26:42 +0000 UTC" firstStartedPulling="2026-01-28 19:26:43.829919156 +0000 UTC m=+3169.555224716" lastFinishedPulling="2026-01-28 19:26:47.283743071 +0000 UTC m=+3173.009048641" observedRunningTime="2026-01-28 19:26:47.897413061 +0000 UTC m=+3173.622718641" watchObservedRunningTime="2026-01-28 19:26:47.899021081 +0000 UTC m=+3173.624326641" Jan 28 19:26:51 crc kubenswrapper[4721]: I0128 19:26:51.531405 4721 scope.go:117] "RemoveContainer" containerID="88d5b07f7a18d2d549f667f34056676d38c608d96350f3168b3b595981c5c63b" Jan 28 19:26:51 crc kubenswrapper[4721]: E0128 19:26:51.532456 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-76rx2_openshift-machine-config-operator(6e3427a4-9a03-4a08-bf7f-7a5e96290ad6)\"" pod="openshift-machine-config-operator/machine-config-daemon-76rx2" podUID="6e3427a4-9a03-4a08-bf7f-7a5e96290ad6" Jan 28 19:26:52 crc kubenswrapper[4721]: I0128 19:26:52.846904 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-gskdg" Jan 28 19:26:52 crc kubenswrapper[4721]: I0128 19:26:52.846975 4721 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-gskdg" Jan 28 19:26:52 crc kubenswrapper[4721]: I0128 19:26:52.906161 4721 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-gskdg" Jan 28 19:26:52 crc kubenswrapper[4721]: I0128 19:26:52.983904 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-gskdg" Jan 28 19:26:54 crc kubenswrapper[4721]: I0128 19:26:54.098155 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-gskdg"] Jan 28 19:26:54 crc kubenswrapper[4721]: I0128 19:26:54.948348 4721 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-gskdg" podUID="55192242-ba42-479d-ab93-d44913f58182" containerName="registry-server" containerID="cri-o://b69656d39d10cb2f4bb9a04b25549eeea759e164d1c882730baed821e91cfde7" gracePeriod=2 Jan 28 19:26:55 crc kubenswrapper[4721]: I0128 19:26:55.690458 4721 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-gskdg" Jan 28 19:26:55 crc kubenswrapper[4721]: I0128 19:26:55.829258 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/55192242-ba42-479d-ab93-d44913f58182-utilities\") pod \"55192242-ba42-479d-ab93-d44913f58182\" (UID: \"55192242-ba42-479d-ab93-d44913f58182\") " Jan 28 19:26:55 crc kubenswrapper[4721]: I0128 19:26:55.829835 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/55192242-ba42-479d-ab93-d44913f58182-catalog-content\") pod \"55192242-ba42-479d-ab93-d44913f58182\" (UID: \"55192242-ba42-479d-ab93-d44913f58182\") " Jan 28 19:26:55 crc kubenswrapper[4721]: I0128 19:26:55.830003 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z5qvc\" (UniqueName: \"kubernetes.io/projected/55192242-ba42-479d-ab93-d44913f58182-kube-api-access-z5qvc\") pod \"55192242-ba42-479d-ab93-d44913f58182\" (UID: \"55192242-ba42-479d-ab93-d44913f58182\") " Jan 28 19:26:55 crc kubenswrapper[4721]: I0128 19:26:55.830039 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/55192242-ba42-479d-ab93-d44913f58182-utilities" (OuterVolumeSpecName: "utilities") pod "55192242-ba42-479d-ab93-d44913f58182" (UID: "55192242-ba42-479d-ab93-d44913f58182"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 19:26:55 crc kubenswrapper[4721]: I0128 19:26:55.831433 4721 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/55192242-ba42-479d-ab93-d44913f58182-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 19:26:55 crc kubenswrapper[4721]: I0128 19:26:55.840519 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/55192242-ba42-479d-ab93-d44913f58182-kube-api-access-z5qvc" (OuterVolumeSpecName: "kube-api-access-z5qvc") pod "55192242-ba42-479d-ab93-d44913f58182" (UID: "55192242-ba42-479d-ab93-d44913f58182"). InnerVolumeSpecName "kube-api-access-z5qvc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 19:26:55 crc kubenswrapper[4721]: I0128 19:26:55.887572 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/55192242-ba42-479d-ab93-d44913f58182-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "55192242-ba42-479d-ab93-d44913f58182" (UID: "55192242-ba42-479d-ab93-d44913f58182"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 19:26:55 crc kubenswrapper[4721]: I0128 19:26:55.934815 4721 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/55192242-ba42-479d-ab93-d44913f58182-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 19:26:55 crc kubenswrapper[4721]: I0128 19:26:55.934890 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z5qvc\" (UniqueName: \"kubernetes.io/projected/55192242-ba42-479d-ab93-d44913f58182-kube-api-access-z5qvc\") on node \"crc\" DevicePath \"\"" Jan 28 19:26:55 crc kubenswrapper[4721]: I0128 19:26:55.960978 4721 generic.go:334] "Generic (PLEG): container finished" podID="55192242-ba42-479d-ab93-d44913f58182" containerID="b69656d39d10cb2f4bb9a04b25549eeea759e164d1c882730baed821e91cfde7" exitCode=0 Jan 28 19:26:55 crc kubenswrapper[4721]: I0128 19:26:55.961026 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gskdg" event={"ID":"55192242-ba42-479d-ab93-d44913f58182","Type":"ContainerDied","Data":"b69656d39d10cb2f4bb9a04b25549eeea759e164d1c882730baed821e91cfde7"} Jan 28 19:26:55 crc kubenswrapper[4721]: I0128 19:26:55.961059 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gskdg" event={"ID":"55192242-ba42-479d-ab93-d44913f58182","Type":"ContainerDied","Data":"8dfa21fcbf79b1a5a66926646a24b07629be82150ebd358243267db7f16b99ec"} Jan 28 19:26:55 crc kubenswrapper[4721]: I0128 19:26:55.961081 4721 scope.go:117] "RemoveContainer" containerID="b69656d39d10cb2f4bb9a04b25549eeea759e164d1c882730baed821e91cfde7" Jan 28 19:26:55 crc kubenswrapper[4721]: I0128 19:26:55.961315 4721 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-gskdg" Jan 28 19:26:56 crc kubenswrapper[4721]: I0128 19:26:56.005760 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-gskdg"] Jan 28 19:26:56 crc kubenswrapper[4721]: I0128 19:26:56.013729 4721 scope.go:117] "RemoveContainer" containerID="b9a95739f4447a653cb304c57b3eec66389533701bb63a06810564e206fba48d" Jan 28 19:26:56 crc kubenswrapper[4721]: I0128 19:26:56.020288 4721 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-gskdg"] Jan 28 19:26:56 crc kubenswrapper[4721]: I0128 19:26:56.036263 4721 scope.go:117] "RemoveContainer" containerID="46241175a43f8f913ee2022061c7e487113055e45b9812e2ff195777523b6159" Jan 28 19:26:56 crc kubenswrapper[4721]: I0128 19:26:56.127929 4721 scope.go:117] "RemoveContainer" containerID="b69656d39d10cb2f4bb9a04b25549eeea759e164d1c882730baed821e91cfde7" Jan 28 19:26:56 crc kubenswrapper[4721]: E0128 19:26:56.128891 4721 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b69656d39d10cb2f4bb9a04b25549eeea759e164d1c882730baed821e91cfde7\": container with ID starting with b69656d39d10cb2f4bb9a04b25549eeea759e164d1c882730baed821e91cfde7 not found: ID does not exist" containerID="b69656d39d10cb2f4bb9a04b25549eeea759e164d1c882730baed821e91cfde7" Jan 28 19:26:56 crc kubenswrapper[4721]: I0128 19:26:56.129016 4721 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b69656d39d10cb2f4bb9a04b25549eeea759e164d1c882730baed821e91cfde7"} err="failed to get container status \"b69656d39d10cb2f4bb9a04b25549eeea759e164d1c882730baed821e91cfde7\": rpc error: code = NotFound desc = could not find container \"b69656d39d10cb2f4bb9a04b25549eeea759e164d1c882730baed821e91cfde7\": container with ID starting with b69656d39d10cb2f4bb9a04b25549eeea759e164d1c882730baed821e91cfde7 not found: ID does not exist" Jan 28 19:26:56 crc kubenswrapper[4721]: I0128 19:26:56.129111 4721 scope.go:117] "RemoveContainer" containerID="b9a95739f4447a653cb304c57b3eec66389533701bb63a06810564e206fba48d" Jan 28 19:26:56 crc kubenswrapper[4721]: E0128 19:26:56.130690 4721 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b9a95739f4447a653cb304c57b3eec66389533701bb63a06810564e206fba48d\": container with ID starting with b9a95739f4447a653cb304c57b3eec66389533701bb63a06810564e206fba48d not found: ID does not exist" containerID="b9a95739f4447a653cb304c57b3eec66389533701bb63a06810564e206fba48d" Jan 28 19:26:56 crc kubenswrapper[4721]: I0128 19:26:56.130729 4721 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b9a95739f4447a653cb304c57b3eec66389533701bb63a06810564e206fba48d"} err="failed to get container status \"b9a95739f4447a653cb304c57b3eec66389533701bb63a06810564e206fba48d\": rpc error: code = NotFound desc = could not find container \"b9a95739f4447a653cb304c57b3eec66389533701bb63a06810564e206fba48d\": container with ID starting with b9a95739f4447a653cb304c57b3eec66389533701bb63a06810564e206fba48d not found: ID does not exist" Jan 28 19:26:56 crc kubenswrapper[4721]: I0128 19:26:56.130758 4721 scope.go:117] "RemoveContainer" containerID="46241175a43f8f913ee2022061c7e487113055e45b9812e2ff195777523b6159" Jan 28 19:26:56 crc kubenswrapper[4721]: E0128 19:26:56.131377 4721 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"46241175a43f8f913ee2022061c7e487113055e45b9812e2ff195777523b6159\": container with ID starting with 46241175a43f8f913ee2022061c7e487113055e45b9812e2ff195777523b6159 not found: ID does not exist" containerID="46241175a43f8f913ee2022061c7e487113055e45b9812e2ff195777523b6159" Jan 28 19:26:56 crc kubenswrapper[4721]: I0128 19:26:56.131474 4721 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"46241175a43f8f913ee2022061c7e487113055e45b9812e2ff195777523b6159"} err="failed to get container status \"46241175a43f8f913ee2022061c7e487113055e45b9812e2ff195777523b6159\": rpc error: code = NotFound desc = could not find container \"46241175a43f8f913ee2022061c7e487113055e45b9812e2ff195777523b6159\": container with ID starting with 46241175a43f8f913ee2022061c7e487113055e45b9812e2ff195777523b6159 not found: ID does not exist" Jan 28 19:26:57 crc kubenswrapper[4721]: I0128 19:26:57.541462 4721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="55192242-ba42-479d-ab93-d44913f58182" path="/var/lib/kubelet/pods/55192242-ba42-479d-ab93-d44913f58182/volumes" Jan 28 19:27:02 crc kubenswrapper[4721]: I0128 19:27:02.528745 4721 scope.go:117] "RemoveContainer" containerID="88d5b07f7a18d2d549f667f34056676d38c608d96350f3168b3b595981c5c63b" Jan 28 19:27:03 crc kubenswrapper[4721]: I0128 19:27:03.030215 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-76rx2" event={"ID":"6e3427a4-9a03-4a08-bf7f-7a5e96290ad6","Type":"ContainerStarted","Data":"2a8dbd2103baf01cd2e3c0f22907e06624428687f6924d4dfbf4bcb7ae35fa33"} Jan 28 19:29:31 crc kubenswrapper[4721]: I0128 19:29:31.225030 4721 patch_prober.go:28] interesting pod/machine-config-daemon-76rx2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 19:29:31 crc kubenswrapper[4721]: I0128 19:29:31.226047 4721 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-76rx2" podUID="6e3427a4-9a03-4a08-bf7f-7a5e96290ad6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 19:30:00 crc kubenswrapper[4721]: I0128 19:30:00.162043 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29493810-f5k4r"] Jan 28 19:30:00 crc kubenswrapper[4721]: E0128 19:30:00.163221 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="55192242-ba42-479d-ab93-d44913f58182" containerName="registry-server" Jan 28 19:30:00 crc kubenswrapper[4721]: I0128 19:30:00.163240 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="55192242-ba42-479d-ab93-d44913f58182" containerName="registry-server" Jan 28 19:30:00 crc kubenswrapper[4721]: E0128 19:30:00.163272 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="55192242-ba42-479d-ab93-d44913f58182" containerName="extract-content" Jan 28 19:30:00 crc kubenswrapper[4721]: I0128 19:30:00.163281 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="55192242-ba42-479d-ab93-d44913f58182" containerName="extract-content" Jan 28 19:30:00 crc kubenswrapper[4721]: E0128 19:30:00.163305 4721 cpu_manager.go:410] "RemoveStaleState: 
removing container" podUID="55192242-ba42-479d-ab93-d44913f58182" containerName="extract-utilities" Jan 28 19:30:00 crc kubenswrapper[4721]: I0128 19:30:00.163315 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="55192242-ba42-479d-ab93-d44913f58182" containerName="extract-utilities" Jan 28 19:30:00 crc kubenswrapper[4721]: I0128 19:30:00.163558 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="55192242-ba42-479d-ab93-d44913f58182" containerName="registry-server" Jan 28 19:30:00 crc kubenswrapper[4721]: I0128 19:30:00.164926 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29493810-f5k4r" Jan 28 19:30:00 crc kubenswrapper[4721]: I0128 19:30:00.168138 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 28 19:30:00 crc kubenswrapper[4721]: I0128 19:30:00.175284 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29493810-f5k4r"] Jan 28 19:30:00 crc kubenswrapper[4721]: I0128 19:30:00.183597 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 28 19:30:00 crc kubenswrapper[4721]: I0128 19:30:00.319694 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vfmnw\" (UniqueName: \"kubernetes.io/projected/166c17a5-c7fd-48bf-bbd9-36da41198eb1-kube-api-access-vfmnw\") pod \"collect-profiles-29493810-f5k4r\" (UID: \"166c17a5-c7fd-48bf-bbd9-36da41198eb1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493810-f5k4r" Jan 28 19:30:00 crc kubenswrapper[4721]: I0128 19:30:00.319842 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/166c17a5-c7fd-48bf-bbd9-36da41198eb1-secret-volume\") pod \"collect-profiles-29493810-f5k4r\" (UID: \"166c17a5-c7fd-48bf-bbd9-36da41198eb1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493810-f5k4r" Jan 28 19:30:00 crc kubenswrapper[4721]: I0128 19:30:00.319966 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/166c17a5-c7fd-48bf-bbd9-36da41198eb1-config-volume\") pod \"collect-profiles-29493810-f5k4r\" (UID: \"166c17a5-c7fd-48bf-bbd9-36da41198eb1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493810-f5k4r" Jan 28 19:30:00 crc kubenswrapper[4721]: I0128 19:30:00.422034 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/166c17a5-c7fd-48bf-bbd9-36da41198eb1-secret-volume\") pod \"collect-profiles-29493810-f5k4r\" (UID: \"166c17a5-c7fd-48bf-bbd9-36da41198eb1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493810-f5k4r" Jan 28 19:30:00 crc kubenswrapper[4721]: I0128 19:30:00.422443 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/166c17a5-c7fd-48bf-bbd9-36da41198eb1-config-volume\") pod \"collect-profiles-29493810-f5k4r\" (UID: \"166c17a5-c7fd-48bf-bbd9-36da41198eb1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493810-f5k4r" Jan 28 19:30:00 crc kubenswrapper[4721]: I0128 19:30:00.422706 4721 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vfmnw\" (UniqueName: \"kubernetes.io/projected/166c17a5-c7fd-48bf-bbd9-36da41198eb1-kube-api-access-vfmnw\") pod \"collect-profiles-29493810-f5k4r\" (UID: \"166c17a5-c7fd-48bf-bbd9-36da41198eb1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493810-f5k4r" Jan 28 19:30:00 crc kubenswrapper[4721]: I0128 19:30:00.423446 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/166c17a5-c7fd-48bf-bbd9-36da41198eb1-config-volume\") pod \"collect-profiles-29493810-f5k4r\" (UID: \"166c17a5-c7fd-48bf-bbd9-36da41198eb1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493810-f5k4r" Jan 28 19:30:00 crc kubenswrapper[4721]: I0128 19:30:00.427697 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/166c17a5-c7fd-48bf-bbd9-36da41198eb1-secret-volume\") pod \"collect-profiles-29493810-f5k4r\" (UID: \"166c17a5-c7fd-48bf-bbd9-36da41198eb1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493810-f5k4r" Jan 28 19:30:00 crc kubenswrapper[4721]: I0128 19:30:00.440003 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vfmnw\" (UniqueName: \"kubernetes.io/projected/166c17a5-c7fd-48bf-bbd9-36da41198eb1-kube-api-access-vfmnw\") pod \"collect-profiles-29493810-f5k4r\" (UID: \"166c17a5-c7fd-48bf-bbd9-36da41198eb1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493810-f5k4r" Jan 28 19:30:00 crc kubenswrapper[4721]: I0128 19:30:00.489210 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29493810-f5k4r" Jan 28 19:30:00 crc kubenswrapper[4721]: I0128 19:30:00.998611 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29493810-f5k4r"] Jan 28 19:30:01 crc kubenswrapper[4721]: I0128 19:30:01.224547 4721 patch_prober.go:28] interesting pod/machine-config-daemon-76rx2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 19:30:01 crc kubenswrapper[4721]: I0128 19:30:01.225128 4721 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-76rx2" podUID="6e3427a4-9a03-4a08-bf7f-7a5e96290ad6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 19:30:01 crc kubenswrapper[4721]: I0128 19:30:01.930494 4721 generic.go:334] "Generic (PLEG): container finished" podID="166c17a5-c7fd-48bf-bbd9-36da41198eb1" containerID="3f0250f3480abffb6c3e849a486affaa5875e407c8adf0f3b35ec12c5ef03174" exitCode=0 Jan 28 19:30:01 crc kubenswrapper[4721]: I0128 19:30:01.930566 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29493810-f5k4r" event={"ID":"166c17a5-c7fd-48bf-bbd9-36da41198eb1","Type":"ContainerDied","Data":"3f0250f3480abffb6c3e849a486affaa5875e407c8adf0f3b35ec12c5ef03174"} Jan 28 19:30:01 crc kubenswrapper[4721]: I0128 19:30:01.930831 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-operator-lifecycle-manager/collect-profiles-29493810-f5k4r" event={"ID":"166c17a5-c7fd-48bf-bbd9-36da41198eb1","Type":"ContainerStarted","Data":"6c617b09db098804ff901953b5309e823b27b56c38646db66551c396a9b692ff"} Jan 28 19:30:03 crc kubenswrapper[4721]: I0128 19:30:03.695001 4721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29493810-f5k4r" Jan 28 19:30:03 crc kubenswrapper[4721]: I0128 19:30:03.808954 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/166c17a5-c7fd-48bf-bbd9-36da41198eb1-config-volume\") pod \"166c17a5-c7fd-48bf-bbd9-36da41198eb1\" (UID: \"166c17a5-c7fd-48bf-bbd9-36da41198eb1\") " Jan 28 19:30:03 crc kubenswrapper[4721]: I0128 19:30:03.809131 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vfmnw\" (UniqueName: \"kubernetes.io/projected/166c17a5-c7fd-48bf-bbd9-36da41198eb1-kube-api-access-vfmnw\") pod \"166c17a5-c7fd-48bf-bbd9-36da41198eb1\" (UID: \"166c17a5-c7fd-48bf-bbd9-36da41198eb1\") " Jan 28 19:30:03 crc kubenswrapper[4721]: I0128 19:30:03.809249 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/166c17a5-c7fd-48bf-bbd9-36da41198eb1-secret-volume\") pod \"166c17a5-c7fd-48bf-bbd9-36da41198eb1\" (UID: \"166c17a5-c7fd-48bf-bbd9-36da41198eb1\") " Jan 28 19:30:03 crc kubenswrapper[4721]: I0128 19:30:03.809832 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/166c17a5-c7fd-48bf-bbd9-36da41198eb1-config-volume" (OuterVolumeSpecName: "config-volume") pod "166c17a5-c7fd-48bf-bbd9-36da41198eb1" (UID: "166c17a5-c7fd-48bf-bbd9-36da41198eb1"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 19:30:03 crc kubenswrapper[4721]: I0128 19:30:03.816736 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/166c17a5-c7fd-48bf-bbd9-36da41198eb1-kube-api-access-vfmnw" (OuterVolumeSpecName: "kube-api-access-vfmnw") pod "166c17a5-c7fd-48bf-bbd9-36da41198eb1" (UID: "166c17a5-c7fd-48bf-bbd9-36da41198eb1"). InnerVolumeSpecName "kube-api-access-vfmnw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 19:30:03 crc kubenswrapper[4721]: I0128 19:30:03.817012 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/166c17a5-c7fd-48bf-bbd9-36da41198eb1-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "166c17a5-c7fd-48bf-bbd9-36da41198eb1" (UID: "166c17a5-c7fd-48bf-bbd9-36da41198eb1"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 19:30:03 crc kubenswrapper[4721]: I0128 19:30:03.912526 4721 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/166c17a5-c7fd-48bf-bbd9-36da41198eb1-config-volume\") on node \"crc\" DevicePath \"\"" Jan 28 19:30:03 crc kubenswrapper[4721]: I0128 19:30:03.912571 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vfmnw\" (UniqueName: \"kubernetes.io/projected/166c17a5-c7fd-48bf-bbd9-36da41198eb1-kube-api-access-vfmnw\") on node \"crc\" DevicePath \"\"" Jan 28 19:30:03 crc kubenswrapper[4721]: I0128 19:30:03.912588 4721 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/166c17a5-c7fd-48bf-bbd9-36da41198eb1-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 28 19:30:03 crc kubenswrapper[4721]: I0128 19:30:03.953117 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29493810-f5k4r" event={"ID":"166c17a5-c7fd-48bf-bbd9-36da41198eb1","Type":"ContainerDied","Data":"6c617b09db098804ff901953b5309e823b27b56c38646db66551c396a9b692ff"} Jan 28 19:30:03 crc kubenswrapper[4721]: I0128 19:30:03.953426 4721 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6c617b09db098804ff901953b5309e823b27b56c38646db66551c396a9b692ff" Jan 28 19:30:03 crc kubenswrapper[4721]: I0128 19:30:03.953211 4721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29493810-f5k4r" Jan 28 19:30:04 crc kubenswrapper[4721]: I0128 19:30:04.798241 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29493765-5hjw8"] Jan 28 19:30:04 crc kubenswrapper[4721]: I0128 19:30:04.812006 4721 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29493765-5hjw8"] Jan 28 19:30:05 crc kubenswrapper[4721]: I0128 19:30:05.543870 4721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="16161beb-545f-4539-975b-4b48264e4189" path="/var/lib/kubelet/pods/16161beb-545f-4539-975b-4b48264e4189/volumes" Jan 28 19:30:17 crc kubenswrapper[4721]: I0128 19:30:17.695730 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-669gx"] Jan 28 19:30:17 crc kubenswrapper[4721]: E0128 19:30:17.696993 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="166c17a5-c7fd-48bf-bbd9-36da41198eb1" containerName="collect-profiles" Jan 28 19:30:17 crc kubenswrapper[4721]: I0128 19:30:17.697014 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="166c17a5-c7fd-48bf-bbd9-36da41198eb1" containerName="collect-profiles" Jan 28 19:30:17 crc kubenswrapper[4721]: I0128 19:30:17.697334 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="166c17a5-c7fd-48bf-bbd9-36da41198eb1" containerName="collect-profiles" Jan 28 19:30:17 crc kubenswrapper[4721]: I0128 19:30:17.699471 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-669gx" Jan 28 19:30:17 crc kubenswrapper[4721]: I0128 19:30:17.712983 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-669gx"] Jan 28 19:30:17 crc kubenswrapper[4721]: I0128 19:30:17.752576 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-97n9r\" (UniqueName: \"kubernetes.io/projected/b8dcf794-db61-430e-bad2-704801f71715-kube-api-access-97n9r\") pod \"redhat-operators-669gx\" (UID: \"b8dcf794-db61-430e-bad2-704801f71715\") " pod="openshift-marketplace/redhat-operators-669gx" Jan 28 19:30:17 crc kubenswrapper[4721]: I0128 19:30:17.752675 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b8dcf794-db61-430e-bad2-704801f71715-catalog-content\") pod \"redhat-operators-669gx\" (UID: \"b8dcf794-db61-430e-bad2-704801f71715\") " pod="openshift-marketplace/redhat-operators-669gx" Jan 28 19:30:17 crc kubenswrapper[4721]: I0128 19:30:17.752704 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b8dcf794-db61-430e-bad2-704801f71715-utilities\") pod \"redhat-operators-669gx\" (UID: \"b8dcf794-db61-430e-bad2-704801f71715\") " pod="openshift-marketplace/redhat-operators-669gx" Jan 28 19:30:17 crc kubenswrapper[4721]: I0128 19:30:17.855115 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-97n9r\" (UniqueName: \"kubernetes.io/projected/b8dcf794-db61-430e-bad2-704801f71715-kube-api-access-97n9r\") pod \"redhat-operators-669gx\" (UID: \"b8dcf794-db61-430e-bad2-704801f71715\") " pod="openshift-marketplace/redhat-operators-669gx" Jan 28 19:30:17 crc kubenswrapper[4721]: I0128 19:30:17.855400 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b8dcf794-db61-430e-bad2-704801f71715-catalog-content\") pod \"redhat-operators-669gx\" (UID: \"b8dcf794-db61-430e-bad2-704801f71715\") " pod="openshift-marketplace/redhat-operators-669gx" Jan 28 19:30:17 crc kubenswrapper[4721]: I0128 19:30:17.855441 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b8dcf794-db61-430e-bad2-704801f71715-utilities\") pod \"redhat-operators-669gx\" (UID: \"b8dcf794-db61-430e-bad2-704801f71715\") " pod="openshift-marketplace/redhat-operators-669gx" Jan 28 19:30:17 crc kubenswrapper[4721]: I0128 19:30:17.856056 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b8dcf794-db61-430e-bad2-704801f71715-utilities\") pod \"redhat-operators-669gx\" (UID: \"b8dcf794-db61-430e-bad2-704801f71715\") " pod="openshift-marketplace/redhat-operators-669gx" Jan 28 19:30:17 crc kubenswrapper[4721]: I0128 19:30:17.856056 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b8dcf794-db61-430e-bad2-704801f71715-catalog-content\") pod \"redhat-operators-669gx\" (UID: \"b8dcf794-db61-430e-bad2-704801f71715\") " pod="openshift-marketplace/redhat-operators-669gx" Jan 28 19:30:17 crc kubenswrapper[4721]: I0128 19:30:17.895730 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-97n9r\" (UniqueName: \"kubernetes.io/projected/b8dcf794-db61-430e-bad2-704801f71715-kube-api-access-97n9r\") pod \"redhat-operators-669gx\" (UID: \"b8dcf794-db61-430e-bad2-704801f71715\") " pod="openshift-marketplace/redhat-operators-669gx" Jan 28 19:30:18 crc kubenswrapper[4721]: I0128 19:30:18.021834 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-669gx" Jan 28 19:30:18 crc kubenswrapper[4721]: I0128 19:30:18.638905 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-669gx"] Jan 28 19:30:19 crc kubenswrapper[4721]: I0128 19:30:19.133871 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-669gx" event={"ID":"b8dcf794-db61-430e-bad2-704801f71715","Type":"ContainerStarted","Data":"89b015da5a531623cea62129ef9da04c267359972234337eb1a3ed40b37f84d5"} Jan 28 19:30:20 crc kubenswrapper[4721]: I0128 19:30:20.154731 4721 generic.go:334] "Generic (PLEG): container finished" podID="b8dcf794-db61-430e-bad2-704801f71715" containerID="5d5957623bcc6d2e4f76efebe8dcceff8d0fe10f4a160521c586cea134ab7e5e" exitCode=0 Jan 28 19:30:20 crc kubenswrapper[4721]: I0128 19:30:20.154789 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-669gx" event={"ID":"b8dcf794-db61-430e-bad2-704801f71715","Type":"ContainerDied","Data":"5d5957623bcc6d2e4f76efebe8dcceff8d0fe10f4a160521c586cea134ab7e5e"} Jan 28 19:30:20 crc kubenswrapper[4721]: I0128 19:30:20.157942 4721 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 28 19:30:21 crc kubenswrapper[4721]: I0128 19:30:21.167822 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-669gx" event={"ID":"b8dcf794-db61-430e-bad2-704801f71715","Type":"ContainerStarted","Data":"d24f5c705bb3f07c90b9f9bd00af8d77e677cd48160fe96aad8573c36c006962"} Jan 28 19:30:21 crc kubenswrapper[4721]: I0128 19:30:21.230946 4721 scope.go:117] "RemoveContainer" containerID="892bfb296a65ce9869dc777c199aa356e653584535e7e0ec44f2ff7ba4c24f9b" Jan 28 19:30:27 crc kubenswrapper[4721]: I0128 19:30:27.238010 4721 generic.go:334] "Generic (PLEG): container finished" podID="b8dcf794-db61-430e-bad2-704801f71715" containerID="d24f5c705bb3f07c90b9f9bd00af8d77e677cd48160fe96aad8573c36c006962" exitCode=0 Jan 28 19:30:27 crc kubenswrapper[4721]: I0128 19:30:27.238089 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-669gx" event={"ID":"b8dcf794-db61-430e-bad2-704801f71715","Type":"ContainerDied","Data":"d24f5c705bb3f07c90b9f9bd00af8d77e677cd48160fe96aad8573c36c006962"} Jan 28 19:30:28 crc kubenswrapper[4721]: I0128 19:30:28.251065 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-669gx" event={"ID":"b8dcf794-db61-430e-bad2-704801f71715","Type":"ContainerStarted","Data":"b6f2bfccfe83db7c5c4b60f8e8b373f955e2da8f58b9724282c251497978287e"} Jan 28 19:30:28 crc kubenswrapper[4721]: I0128 19:30:28.299666 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-669gx" podStartSLOduration=3.539050924 podStartE2EDuration="11.299638708s" podCreationTimestamp="2026-01-28 19:30:17 +0000 UTC" firstStartedPulling="2026-01-28 19:30:20.15766163 +0000 UTC m=+3385.882967190" lastFinishedPulling="2026-01-28 19:30:27.918249414 +0000 UTC m=+3393.643554974" 
observedRunningTime="2026-01-28 19:30:28.285508838 +0000 UTC m=+3394.010814398" watchObservedRunningTime="2026-01-28 19:30:28.299638708 +0000 UTC m=+3394.024944268" Jan 28 19:30:31 crc kubenswrapper[4721]: I0128 19:30:31.224722 4721 patch_prober.go:28] interesting pod/machine-config-daemon-76rx2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 19:30:31 crc kubenswrapper[4721]: I0128 19:30:31.226094 4721 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-76rx2" podUID="6e3427a4-9a03-4a08-bf7f-7a5e96290ad6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 19:30:31 crc kubenswrapper[4721]: I0128 19:30:31.226249 4721 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-76rx2" Jan 28 19:30:31 crc kubenswrapper[4721]: I0128 19:30:31.227283 4721 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"2a8dbd2103baf01cd2e3c0f22907e06624428687f6924d4dfbf4bcb7ae35fa33"} pod="openshift-machine-config-operator/machine-config-daemon-76rx2" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 28 19:30:31 crc kubenswrapper[4721]: I0128 19:30:31.227414 4721 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-76rx2" podUID="6e3427a4-9a03-4a08-bf7f-7a5e96290ad6" containerName="machine-config-daemon" containerID="cri-o://2a8dbd2103baf01cd2e3c0f22907e06624428687f6924d4dfbf4bcb7ae35fa33" gracePeriod=600 Jan 28 19:30:32 crc kubenswrapper[4721]: I0128 19:30:32.294543 4721 generic.go:334] "Generic (PLEG): container finished" podID="6e3427a4-9a03-4a08-bf7f-7a5e96290ad6" containerID="2a8dbd2103baf01cd2e3c0f22907e06624428687f6924d4dfbf4bcb7ae35fa33" exitCode=0 Jan 28 19:30:32 crc kubenswrapper[4721]: I0128 19:30:32.294622 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-76rx2" event={"ID":"6e3427a4-9a03-4a08-bf7f-7a5e96290ad6","Type":"ContainerDied","Data":"2a8dbd2103baf01cd2e3c0f22907e06624428687f6924d4dfbf4bcb7ae35fa33"} Jan 28 19:30:32 crc kubenswrapper[4721]: I0128 19:30:32.295088 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-76rx2" event={"ID":"6e3427a4-9a03-4a08-bf7f-7a5e96290ad6","Type":"ContainerStarted","Data":"cbcc37a94abf33c6e4e5c18dd7dea7199e37b457d8f91773d59e3a703a8cd690"} Jan 28 19:30:32 crc kubenswrapper[4721]: I0128 19:30:32.295121 4721 scope.go:117] "RemoveContainer" containerID="88d5b07f7a18d2d549f667f34056676d38c608d96350f3168b3b595981c5c63b" Jan 28 19:30:38 crc kubenswrapper[4721]: I0128 19:30:38.022079 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-669gx" Jan 28 19:30:38 crc kubenswrapper[4721]: I0128 19:30:38.022816 4721 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-669gx" Jan 28 19:30:39 crc kubenswrapper[4721]: I0128 19:30:39.077893 4721 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-marketplace/redhat-operators-669gx" podUID="b8dcf794-db61-430e-bad2-704801f71715" containerName="registry-server" probeResult="failure" output=< Jan 28 19:30:39 crc kubenswrapper[4721]: timeout: failed to connect service ":50051" within 1s Jan 28 19:30:39 crc kubenswrapper[4721]: > Jan 28 19:30:49 crc kubenswrapper[4721]: I0128 19:30:49.079977 4721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-669gx" podUID="b8dcf794-db61-430e-bad2-704801f71715" containerName="registry-server" probeResult="failure" output=< Jan 28 19:30:49 crc kubenswrapper[4721]: timeout: failed to connect service ":50051" within 1s Jan 28 19:30:49 crc kubenswrapper[4721]: > Jan 28 19:30:59 crc kubenswrapper[4721]: I0128 19:30:59.072700 4721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-669gx" podUID="b8dcf794-db61-430e-bad2-704801f71715" containerName="registry-server" probeResult="failure" output=< Jan 28 19:30:59 crc kubenswrapper[4721]: timeout: failed to connect service ":50051" within 1s Jan 28 19:30:59 crc kubenswrapper[4721]: > Jan 28 19:31:04 crc kubenswrapper[4721]: I0128 19:31:04.642276 4721 generic.go:334] "Generic (PLEG): container finished" podID="5e586424-d1f9-4f72-9dc8-f046e2f235f5" containerID="d865a9f8155c7c7b985db7878bfbd8f567cd20d8d6f437632b9434c0c74dd8c7" exitCode=0 Jan 28 19:31:04 crc kubenswrapper[4721]: I0128 19:31:04.642361 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"5e586424-d1f9-4f72-9dc8-f046e2f235f5","Type":"ContainerDied","Data":"d865a9f8155c7c7b985db7878bfbd8f567cd20d8d6f437632b9434c0c74dd8c7"} Jan 28 19:31:06 crc kubenswrapper[4721]: I0128 19:31:06.246785 4721 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest" Jan 28 19:31:06 crc kubenswrapper[4721]: I0128 19:31:06.357965 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/5e586424-d1f9-4f72-9dc8-f046e2f235f5-ca-certs\") pod \"5e586424-d1f9-4f72-9dc8-f046e2f235f5\" (UID: \"5e586424-d1f9-4f72-9dc8-f046e2f235f5\") " Jan 28 19:31:06 crc kubenswrapper[4721]: I0128 19:31:06.358030 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/5e586424-d1f9-4f72-9dc8-f046e2f235f5-ssh-key\") pod \"5e586424-d1f9-4f72-9dc8-f046e2f235f5\" (UID: \"5e586424-d1f9-4f72-9dc8-f046e2f235f5\") " Jan 28 19:31:06 crc kubenswrapper[4721]: I0128 19:31:06.358083 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/5e586424-d1f9-4f72-9dc8-f046e2f235f5-config-data\") pod \"5e586424-d1f9-4f72-9dc8-f046e2f235f5\" (UID: \"5e586424-d1f9-4f72-9dc8-f046e2f235f5\") " Jan 28 19:31:06 crc kubenswrapper[4721]: I0128 19:31:06.358139 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/5e586424-d1f9-4f72-9dc8-f046e2f235f5-openstack-config-secret\") pod \"5e586424-d1f9-4f72-9dc8-f046e2f235f5\" (UID: \"5e586424-d1f9-4f72-9dc8-f046e2f235f5\") " Jan 28 19:31:06 crc kubenswrapper[4721]: I0128 19:31:06.358238 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/5e586424-d1f9-4f72-9dc8-f046e2f235f5-test-operator-ephemeral-workdir\") pod \"5e586424-d1f9-4f72-9dc8-f046e2f235f5\" (UID: \"5e586424-d1f9-4f72-9dc8-f046e2f235f5\") " Jan 28 19:31:06 crc kubenswrapper[4721]: I0128 19:31:06.358293 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/5e586424-d1f9-4f72-9dc8-f046e2f235f5-test-operator-ephemeral-temporary\") pod \"5e586424-d1f9-4f72-9dc8-f046e2f235f5\" (UID: \"5e586424-d1f9-4f72-9dc8-f046e2f235f5\") " Jan 28 19:31:06 crc kubenswrapper[4721]: I0128 19:31:06.358372 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-logs\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"5e586424-d1f9-4f72-9dc8-f046e2f235f5\" (UID: \"5e586424-d1f9-4f72-9dc8-f046e2f235f5\") " Jan 28 19:31:06 crc kubenswrapper[4721]: I0128 19:31:06.358448 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-crn8k\" (UniqueName: \"kubernetes.io/projected/5e586424-d1f9-4f72-9dc8-f046e2f235f5-kube-api-access-crn8k\") pod \"5e586424-d1f9-4f72-9dc8-f046e2f235f5\" (UID: \"5e586424-d1f9-4f72-9dc8-f046e2f235f5\") " Jan 28 19:31:06 crc kubenswrapper[4721]: I0128 19:31:06.358529 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/5e586424-d1f9-4f72-9dc8-f046e2f235f5-openstack-config\") pod \"5e586424-d1f9-4f72-9dc8-f046e2f235f5\" (UID: \"5e586424-d1f9-4f72-9dc8-f046e2f235f5\") " Jan 28 19:31:06 crc kubenswrapper[4721]: I0128 19:31:06.362927 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5e586424-d1f9-4f72-9dc8-f046e2f235f5-config-data" (OuterVolumeSpecName: "config-data") pod 
"5e586424-d1f9-4f72-9dc8-f046e2f235f5" (UID: "5e586424-d1f9-4f72-9dc8-f046e2f235f5"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 19:31:06 crc kubenswrapper[4721]: I0128 19:31:06.365318 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5e586424-d1f9-4f72-9dc8-f046e2f235f5-test-operator-ephemeral-temporary" (OuterVolumeSpecName: "test-operator-ephemeral-temporary") pod "5e586424-d1f9-4f72-9dc8-f046e2f235f5" (UID: "5e586424-d1f9-4f72-9dc8-f046e2f235f5"). InnerVolumeSpecName "test-operator-ephemeral-temporary". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 19:31:06 crc kubenswrapper[4721]: I0128 19:31:06.370429 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5e586424-d1f9-4f72-9dc8-f046e2f235f5-kube-api-access-crn8k" (OuterVolumeSpecName: "kube-api-access-crn8k") pod "5e586424-d1f9-4f72-9dc8-f046e2f235f5" (UID: "5e586424-d1f9-4f72-9dc8-f046e2f235f5"). InnerVolumeSpecName "kube-api-access-crn8k". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 19:31:06 crc kubenswrapper[4721]: I0128 19:31:06.382384 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage05-crc" (OuterVolumeSpecName: "test-operator-logs") pod "5e586424-d1f9-4f72-9dc8-f046e2f235f5" (UID: "5e586424-d1f9-4f72-9dc8-f046e2f235f5"). InnerVolumeSpecName "local-storage05-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 28 19:31:06 crc kubenswrapper[4721]: I0128 19:31:06.392009 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5e586424-d1f9-4f72-9dc8-f046e2f235f5-ca-certs" (OuterVolumeSpecName: "ca-certs") pod "5e586424-d1f9-4f72-9dc8-f046e2f235f5" (UID: "5e586424-d1f9-4f72-9dc8-f046e2f235f5"). InnerVolumeSpecName "ca-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 19:31:06 crc kubenswrapper[4721]: I0128 19:31:06.447018 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5e586424-d1f9-4f72-9dc8-f046e2f235f5-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "5e586424-d1f9-4f72-9dc8-f046e2f235f5" (UID: "5e586424-d1f9-4f72-9dc8-f046e2f235f5"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 19:31:06 crc kubenswrapper[4721]: I0128 19:31:06.454658 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5e586424-d1f9-4f72-9dc8-f046e2f235f5-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "5e586424-d1f9-4f72-9dc8-f046e2f235f5" (UID: "5e586424-d1f9-4f72-9dc8-f046e2f235f5"). InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 19:31:06 crc kubenswrapper[4721]: I0128 19:31:06.457134 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5e586424-d1f9-4f72-9dc8-f046e2f235f5-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "5e586424-d1f9-4f72-9dc8-f046e2f235f5" (UID: "5e586424-d1f9-4f72-9dc8-f046e2f235f5"). InnerVolumeSpecName "openstack-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 19:31:06 crc kubenswrapper[4721]: I0128 19:31:06.461162 4721 reconciler_common.go:293] "Volume detached for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/5e586424-d1f9-4f72-9dc8-f046e2f235f5-ca-certs\") on node \"crc\" DevicePath \"\"" Jan 28 19:31:06 crc kubenswrapper[4721]: I0128 19:31:06.461220 4721 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/5e586424-d1f9-4f72-9dc8-f046e2f235f5-ssh-key\") on node \"crc\" DevicePath \"\"" Jan 28 19:31:06 crc kubenswrapper[4721]: I0128 19:31:06.461230 4721 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/5e586424-d1f9-4f72-9dc8-f046e2f235f5-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 19:31:06 crc kubenswrapper[4721]: I0128 19:31:06.461243 4721 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/5e586424-d1f9-4f72-9dc8-f046e2f235f5-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Jan 28 19:31:06 crc kubenswrapper[4721]: I0128 19:31:06.461252 4721 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/5e586424-d1f9-4f72-9dc8-f046e2f235f5-test-operator-ephemeral-temporary\") on node \"crc\" DevicePath \"\"" Jan 28 19:31:06 crc kubenswrapper[4721]: I0128 19:31:06.461286 4721 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" " Jan 28 19:31:06 crc kubenswrapper[4721]: I0128 19:31:06.461295 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-crn8k\" (UniqueName: \"kubernetes.io/projected/5e586424-d1f9-4f72-9dc8-f046e2f235f5-kube-api-access-crn8k\") on node \"crc\" DevicePath \"\"" Jan 28 19:31:06 crc kubenswrapper[4721]: I0128 19:31:06.461304 4721 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/5e586424-d1f9-4f72-9dc8-f046e2f235f5-openstack-config\") on node \"crc\" DevicePath \"\"" Jan 28 19:31:06 crc kubenswrapper[4721]: I0128 19:31:06.482591 4721 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage05-crc" (UniqueName: "kubernetes.io/local-volume/local-storage05-crc") on node "crc" Jan 28 19:31:06 crc kubenswrapper[4721]: I0128 19:31:06.564771 4721 reconciler_common.go:293] "Volume detached for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" DevicePath \"\"" Jan 28 19:31:06 crc kubenswrapper[4721]: I0128 19:31:06.674130 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"5e586424-d1f9-4f72-9dc8-f046e2f235f5","Type":"ContainerDied","Data":"dcad7dc726a17c8f33b0c2e099d3bef9d1203f0a9acc5cec6147e75237356a93"} Jan 28 19:31:06 crc kubenswrapper[4721]: I0128 19:31:06.674505 4721 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dcad7dc726a17c8f33b0c2e099d3bef9d1203f0a9acc5cec6147e75237356a93" Jan 28 19:31:06 crc kubenswrapper[4721]: I0128 19:31:06.674590 4721 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest" Jan 28 19:31:06 crc kubenswrapper[4721]: I0128 19:31:06.852072 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5e586424-d1f9-4f72-9dc8-f046e2f235f5-test-operator-ephemeral-workdir" (OuterVolumeSpecName: "test-operator-ephemeral-workdir") pod "5e586424-d1f9-4f72-9dc8-f046e2f235f5" (UID: "5e586424-d1f9-4f72-9dc8-f046e2f235f5"). InnerVolumeSpecName "test-operator-ephemeral-workdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 19:31:06 crc kubenswrapper[4721]: I0128 19:31:06.878666 4721 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/5e586424-d1f9-4f72-9dc8-f046e2f235f5-test-operator-ephemeral-workdir\") on node \"crc\" DevicePath \"\"" Jan 28 19:31:08 crc kubenswrapper[4721]: I0128 19:31:08.083975 4721 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-669gx" Jan 28 19:31:08 crc kubenswrapper[4721]: I0128 19:31:08.137981 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-669gx" Jan 28 19:31:08 crc kubenswrapper[4721]: I0128 19:31:08.327975 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-669gx"] Jan 28 19:31:08 crc kubenswrapper[4721]: I0128 19:31:08.449685 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Jan 28 19:31:08 crc kubenswrapper[4721]: E0128 19:31:08.450255 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5e586424-d1f9-4f72-9dc8-f046e2f235f5" containerName="tempest-tests-tempest-tests-runner" Jan 28 19:31:08 crc kubenswrapper[4721]: I0128 19:31:08.450275 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="5e586424-d1f9-4f72-9dc8-f046e2f235f5" containerName="tempest-tests-tempest-tests-runner" Jan 28 19:31:08 crc kubenswrapper[4721]: I0128 19:31:08.450487 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="5e586424-d1f9-4f72-9dc8-f046e2f235f5" containerName="tempest-tests-tempest-tests-runner" Jan 28 19:31:08 crc kubenswrapper[4721]: I0128 19:31:08.451448 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 28 19:31:08 crc kubenswrapper[4721]: I0128 19:31:08.454730 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-2pmm2" Jan 28 19:31:08 crc kubenswrapper[4721]: I0128 19:31:08.467478 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Jan 28 19:31:08 crc kubenswrapper[4721]: I0128 19:31:08.514806 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zgtlj\" (UniqueName: \"kubernetes.io/projected/512eb22d-5ddf-419c-aa72-60dea50ecc6d-kube-api-access-zgtlj\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"512eb22d-5ddf-419c-aa72-60dea50ecc6d\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 28 19:31:08 crc kubenswrapper[4721]: I0128 19:31:08.514871 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"512eb22d-5ddf-419c-aa72-60dea50ecc6d\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 28 19:31:08 crc kubenswrapper[4721]: I0128 19:31:08.618126 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zgtlj\" (UniqueName: \"kubernetes.io/projected/512eb22d-5ddf-419c-aa72-60dea50ecc6d-kube-api-access-zgtlj\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"512eb22d-5ddf-419c-aa72-60dea50ecc6d\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 28 19:31:08 crc kubenswrapper[4721]: I0128 19:31:08.618215 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"512eb22d-5ddf-419c-aa72-60dea50ecc6d\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 28 19:31:08 crc kubenswrapper[4721]: I0128 19:31:08.618771 4721 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"512eb22d-5ddf-419c-aa72-60dea50ecc6d\") device mount path \"/mnt/openstack/pv05\"" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 28 19:31:08 crc kubenswrapper[4721]: I0128 19:31:08.656318 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zgtlj\" (UniqueName: \"kubernetes.io/projected/512eb22d-5ddf-419c-aa72-60dea50ecc6d-kube-api-access-zgtlj\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"512eb22d-5ddf-419c-aa72-60dea50ecc6d\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 28 19:31:08 crc kubenswrapper[4721]: I0128 19:31:08.661312 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"512eb22d-5ddf-419c-aa72-60dea50ecc6d\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 28 19:31:08 crc 
kubenswrapper[4721]: I0128 19:31:08.773673 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 28 19:31:09 crc kubenswrapper[4721]: I0128 19:31:09.278859 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Jan 28 19:31:09 crc kubenswrapper[4721]: I0128 19:31:09.707115 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"512eb22d-5ddf-419c-aa72-60dea50ecc6d","Type":"ContainerStarted","Data":"8220f7288b50747f522135a3bc606a507b78b6f443bf9e1a3c88d22487eed53e"} Jan 28 19:31:09 crc kubenswrapper[4721]: I0128 19:31:09.707833 4721 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-669gx" podUID="b8dcf794-db61-430e-bad2-704801f71715" containerName="registry-server" containerID="cri-o://b6f2bfccfe83db7c5c4b60f8e8b373f955e2da8f58b9724282c251497978287e" gracePeriod=2 Jan 28 19:31:10 crc kubenswrapper[4721]: I0128 19:31:10.673964 4721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-669gx" Jan 28 19:31:10 crc kubenswrapper[4721]: I0128 19:31:10.748364 4721 generic.go:334] "Generic (PLEG): container finished" podID="b8dcf794-db61-430e-bad2-704801f71715" containerID="b6f2bfccfe83db7c5c4b60f8e8b373f955e2da8f58b9724282c251497978287e" exitCode=0 Jan 28 19:31:10 crc kubenswrapper[4721]: I0128 19:31:10.748436 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-669gx" event={"ID":"b8dcf794-db61-430e-bad2-704801f71715","Type":"ContainerDied","Data":"b6f2bfccfe83db7c5c4b60f8e8b373f955e2da8f58b9724282c251497978287e"} Jan 28 19:31:10 crc kubenswrapper[4721]: I0128 19:31:10.748475 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-669gx" event={"ID":"b8dcf794-db61-430e-bad2-704801f71715","Type":"ContainerDied","Data":"89b015da5a531623cea62129ef9da04c267359972234337eb1a3ed40b37f84d5"} Jan 28 19:31:10 crc kubenswrapper[4721]: I0128 19:31:10.748524 4721 scope.go:117] "RemoveContainer" containerID="b6f2bfccfe83db7c5c4b60f8e8b373f955e2da8f58b9724282c251497978287e" Jan 28 19:31:10 crc kubenswrapper[4721]: I0128 19:31:10.748814 4721 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-669gx" Jan 28 19:31:10 crc kubenswrapper[4721]: I0128 19:31:10.766484 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b8dcf794-db61-430e-bad2-704801f71715-utilities\") pod \"b8dcf794-db61-430e-bad2-704801f71715\" (UID: \"b8dcf794-db61-430e-bad2-704801f71715\") " Jan 28 19:31:10 crc kubenswrapper[4721]: I0128 19:31:10.766742 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b8dcf794-db61-430e-bad2-704801f71715-catalog-content\") pod \"b8dcf794-db61-430e-bad2-704801f71715\" (UID: \"b8dcf794-db61-430e-bad2-704801f71715\") " Jan 28 19:31:10 crc kubenswrapper[4721]: I0128 19:31:10.766783 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-97n9r\" (UniqueName: \"kubernetes.io/projected/b8dcf794-db61-430e-bad2-704801f71715-kube-api-access-97n9r\") pod \"b8dcf794-db61-430e-bad2-704801f71715\" (UID: \"b8dcf794-db61-430e-bad2-704801f71715\") " Jan 28 19:31:10 crc kubenswrapper[4721]: I0128 19:31:10.803869 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b8dcf794-db61-430e-bad2-704801f71715-utilities" (OuterVolumeSpecName: "utilities") pod "b8dcf794-db61-430e-bad2-704801f71715" (UID: "b8dcf794-db61-430e-bad2-704801f71715"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 19:31:10 crc kubenswrapper[4721]: I0128 19:31:10.805736 4721 scope.go:117] "RemoveContainer" containerID="d24f5c705bb3f07c90b9f9bd00af8d77e677cd48160fe96aad8573c36c006962" Jan 28 19:31:10 crc kubenswrapper[4721]: I0128 19:31:10.809020 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b8dcf794-db61-430e-bad2-704801f71715-kube-api-access-97n9r" (OuterVolumeSpecName: "kube-api-access-97n9r") pod "b8dcf794-db61-430e-bad2-704801f71715" (UID: "b8dcf794-db61-430e-bad2-704801f71715"). InnerVolumeSpecName "kube-api-access-97n9r". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 19:31:10 crc kubenswrapper[4721]: I0128 19:31:10.871670 4721 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b8dcf794-db61-430e-bad2-704801f71715-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 19:31:10 crc kubenswrapper[4721]: I0128 19:31:10.871818 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-97n9r\" (UniqueName: \"kubernetes.io/projected/b8dcf794-db61-430e-bad2-704801f71715-kube-api-access-97n9r\") on node \"crc\" DevicePath \"\"" Jan 28 19:31:10 crc kubenswrapper[4721]: I0128 19:31:10.926073 4721 scope.go:117] "RemoveContainer" containerID="5d5957623bcc6d2e4f76efebe8dcceff8d0fe10f4a160521c586cea134ab7e5e" Jan 28 19:31:10 crc kubenswrapper[4721]: I0128 19:31:10.978878 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b8dcf794-db61-430e-bad2-704801f71715-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b8dcf794-db61-430e-bad2-704801f71715" (UID: "b8dcf794-db61-430e-bad2-704801f71715"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 19:31:10 crc kubenswrapper[4721]: I0128 19:31:10.996415 4721 scope.go:117] "RemoveContainer" containerID="b6f2bfccfe83db7c5c4b60f8e8b373f955e2da8f58b9724282c251497978287e" Jan 28 19:31:10 crc kubenswrapper[4721]: E0128 19:31:10.997820 4721 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b6f2bfccfe83db7c5c4b60f8e8b373f955e2da8f58b9724282c251497978287e\": container with ID starting with b6f2bfccfe83db7c5c4b60f8e8b373f955e2da8f58b9724282c251497978287e not found: ID does not exist" containerID="b6f2bfccfe83db7c5c4b60f8e8b373f955e2da8f58b9724282c251497978287e" Jan 28 19:31:10 crc kubenswrapper[4721]: I0128 19:31:10.997875 4721 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b6f2bfccfe83db7c5c4b60f8e8b373f955e2da8f58b9724282c251497978287e"} err="failed to get container status \"b6f2bfccfe83db7c5c4b60f8e8b373f955e2da8f58b9724282c251497978287e\": rpc error: code = NotFound desc = could not find container \"b6f2bfccfe83db7c5c4b60f8e8b373f955e2da8f58b9724282c251497978287e\": container with ID starting with b6f2bfccfe83db7c5c4b60f8e8b373f955e2da8f58b9724282c251497978287e not found: ID does not exist" Jan 28 19:31:10 crc kubenswrapper[4721]: I0128 19:31:10.997914 4721 scope.go:117] "RemoveContainer" containerID="d24f5c705bb3f07c90b9f9bd00af8d77e677cd48160fe96aad8573c36c006962" Jan 28 19:31:10 crc kubenswrapper[4721]: E0128 19:31:10.999501 4721 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d24f5c705bb3f07c90b9f9bd00af8d77e677cd48160fe96aad8573c36c006962\": container with ID starting with d24f5c705bb3f07c90b9f9bd00af8d77e677cd48160fe96aad8573c36c006962 not found: ID does not exist" containerID="d24f5c705bb3f07c90b9f9bd00af8d77e677cd48160fe96aad8573c36c006962" Jan 28 19:31:10 crc kubenswrapper[4721]: I0128 19:31:10.999532 4721 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d24f5c705bb3f07c90b9f9bd00af8d77e677cd48160fe96aad8573c36c006962"} err="failed to get container status \"d24f5c705bb3f07c90b9f9bd00af8d77e677cd48160fe96aad8573c36c006962\": rpc error: code = NotFound desc = could not find container \"d24f5c705bb3f07c90b9f9bd00af8d77e677cd48160fe96aad8573c36c006962\": container with ID starting with d24f5c705bb3f07c90b9f9bd00af8d77e677cd48160fe96aad8573c36c006962 not found: ID does not exist" Jan 28 19:31:10 crc kubenswrapper[4721]: I0128 19:31:10.999550 4721 scope.go:117] "RemoveContainer" containerID="5d5957623bcc6d2e4f76efebe8dcceff8d0fe10f4a160521c586cea134ab7e5e" Jan 28 19:31:11 crc kubenswrapper[4721]: E0128 19:31:11.000090 4721 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5d5957623bcc6d2e4f76efebe8dcceff8d0fe10f4a160521c586cea134ab7e5e\": container with ID starting with 5d5957623bcc6d2e4f76efebe8dcceff8d0fe10f4a160521c586cea134ab7e5e not found: ID does not exist" containerID="5d5957623bcc6d2e4f76efebe8dcceff8d0fe10f4a160521c586cea134ab7e5e" Jan 28 19:31:11 crc kubenswrapper[4721]: I0128 19:31:11.000112 4721 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5d5957623bcc6d2e4f76efebe8dcceff8d0fe10f4a160521c586cea134ab7e5e"} err="failed to get container status \"5d5957623bcc6d2e4f76efebe8dcceff8d0fe10f4a160521c586cea134ab7e5e\": rpc error: code = NotFound desc = could not 
find container \"5d5957623bcc6d2e4f76efebe8dcceff8d0fe10f4a160521c586cea134ab7e5e\": container with ID starting with 5d5957623bcc6d2e4f76efebe8dcceff8d0fe10f4a160521c586cea134ab7e5e not found: ID does not exist" Jan 28 19:31:11 crc kubenswrapper[4721]: I0128 19:31:11.076132 4721 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b8dcf794-db61-430e-bad2-704801f71715-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 19:31:11 crc kubenswrapper[4721]: I0128 19:31:11.107919 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-669gx"] Jan 28 19:31:11 crc kubenswrapper[4721]: I0128 19:31:11.123898 4721 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-669gx"] Jan 28 19:31:11 crc kubenswrapper[4721]: I0128 19:31:11.544768 4721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b8dcf794-db61-430e-bad2-704801f71715" path="/var/lib/kubelet/pods/b8dcf794-db61-430e-bad2-704801f71715/volumes" Jan 28 19:31:11 crc kubenswrapper[4721]: I0128 19:31:11.760392 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"512eb22d-5ddf-419c-aa72-60dea50ecc6d","Type":"ContainerStarted","Data":"e76f4a0f233ed524ec1bc08a8b857edf16cbaa8347e2ca63851e5ab34bf3812d"} Jan 28 19:31:11 crc kubenswrapper[4721]: I0128 19:31:11.789103 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" podStartSLOduration=2.584090024 podStartE2EDuration="3.789079411s" podCreationTimestamp="2026-01-28 19:31:08 +0000 UTC" firstStartedPulling="2026-01-28 19:31:09.290997969 +0000 UTC m=+3435.016303529" lastFinishedPulling="2026-01-28 19:31:10.495987356 +0000 UTC m=+3436.221292916" observedRunningTime="2026-01-28 19:31:11.773688322 +0000 UTC m=+3437.498993882" watchObservedRunningTime="2026-01-28 19:31:11.789079411 +0000 UTC m=+3437.514384971" Jan 28 19:31:35 crc kubenswrapper[4721]: I0128 19:31:35.096473 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-ldp74/must-gather-lpvjh"] Jan 28 19:31:35 crc kubenswrapper[4721]: E0128 19:31:35.097498 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b8dcf794-db61-430e-bad2-704801f71715" containerName="registry-server" Jan 28 19:31:35 crc kubenswrapper[4721]: I0128 19:31:35.097514 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="b8dcf794-db61-430e-bad2-704801f71715" containerName="registry-server" Jan 28 19:31:35 crc kubenswrapper[4721]: E0128 19:31:35.097546 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b8dcf794-db61-430e-bad2-704801f71715" containerName="extract-utilities" Jan 28 19:31:35 crc kubenswrapper[4721]: I0128 19:31:35.097554 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="b8dcf794-db61-430e-bad2-704801f71715" containerName="extract-utilities" Jan 28 19:31:35 crc kubenswrapper[4721]: E0128 19:31:35.097567 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b8dcf794-db61-430e-bad2-704801f71715" containerName="extract-content" Jan 28 19:31:35 crc kubenswrapper[4721]: I0128 19:31:35.097575 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="b8dcf794-db61-430e-bad2-704801f71715" containerName="extract-content" Jan 28 19:31:35 crc kubenswrapper[4721]: I0128 19:31:35.097798 4721 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="b8dcf794-db61-430e-bad2-704801f71715" containerName="registry-server" Jan 28 19:31:35 crc kubenswrapper[4721]: I0128 19:31:35.099239 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-ldp74/must-gather-lpvjh" Jan 28 19:31:35 crc kubenswrapper[4721]: I0128 19:31:35.106628 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-ldp74"/"default-dockercfg-64mqw" Jan 28 19:31:35 crc kubenswrapper[4721]: I0128 19:31:35.107037 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-ldp74"/"openshift-service-ca.crt" Jan 28 19:31:35 crc kubenswrapper[4721]: I0128 19:31:35.107266 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-ldp74"/"kube-root-ca.crt" Jan 28 19:31:35 crc kubenswrapper[4721]: I0128 19:31:35.122665 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-ldp74/must-gather-lpvjh"] Jan 28 19:31:35 crc kubenswrapper[4721]: I0128 19:31:35.175298 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cq6lh\" (UniqueName: \"kubernetes.io/projected/a65ab672-e06c-477e-9826-b343a80c16bc-kube-api-access-cq6lh\") pod \"must-gather-lpvjh\" (UID: \"a65ab672-e06c-477e-9826-b343a80c16bc\") " pod="openshift-must-gather-ldp74/must-gather-lpvjh" Jan 28 19:31:35 crc kubenswrapper[4721]: I0128 19:31:35.175749 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/a65ab672-e06c-477e-9826-b343a80c16bc-must-gather-output\") pod \"must-gather-lpvjh\" (UID: \"a65ab672-e06c-477e-9826-b343a80c16bc\") " pod="openshift-must-gather-ldp74/must-gather-lpvjh" Jan 28 19:31:35 crc kubenswrapper[4721]: I0128 19:31:35.278680 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cq6lh\" (UniqueName: \"kubernetes.io/projected/a65ab672-e06c-477e-9826-b343a80c16bc-kube-api-access-cq6lh\") pod \"must-gather-lpvjh\" (UID: \"a65ab672-e06c-477e-9826-b343a80c16bc\") " pod="openshift-must-gather-ldp74/must-gather-lpvjh" Jan 28 19:31:35 crc kubenswrapper[4721]: I0128 19:31:35.279464 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/a65ab672-e06c-477e-9826-b343a80c16bc-must-gather-output\") pod \"must-gather-lpvjh\" (UID: \"a65ab672-e06c-477e-9826-b343a80c16bc\") " pod="openshift-must-gather-ldp74/must-gather-lpvjh" Jan 28 19:31:35 crc kubenswrapper[4721]: I0128 19:31:35.280341 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/a65ab672-e06c-477e-9826-b343a80c16bc-must-gather-output\") pod \"must-gather-lpvjh\" (UID: \"a65ab672-e06c-477e-9826-b343a80c16bc\") " pod="openshift-must-gather-ldp74/must-gather-lpvjh" Jan 28 19:31:35 crc kubenswrapper[4721]: I0128 19:31:35.310127 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cq6lh\" (UniqueName: \"kubernetes.io/projected/a65ab672-e06c-477e-9826-b343a80c16bc-kube-api-access-cq6lh\") pod \"must-gather-lpvjh\" (UID: \"a65ab672-e06c-477e-9826-b343a80c16bc\") " pod="openshift-must-gather-ldp74/must-gather-lpvjh" Jan 28 19:31:35 crc kubenswrapper[4721]: I0128 19:31:35.425712 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-ldp74/must-gather-lpvjh" Jan 28 19:31:36 crc kubenswrapper[4721]: I0128 19:31:36.065901 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-ldp74/must-gather-lpvjh"] Jan 28 19:31:37 crc kubenswrapper[4721]: I0128 19:31:37.044480 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-ldp74/must-gather-lpvjh" event={"ID":"a65ab672-e06c-477e-9826-b343a80c16bc","Type":"ContainerStarted","Data":"ee3c0166a23bd5e5fcbd2a4000917372096ca69f68733b506d448ef9bbf97313"} Jan 28 19:31:46 crc kubenswrapper[4721]: I0128 19:31:46.156722 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-ldp74/must-gather-lpvjh" event={"ID":"a65ab672-e06c-477e-9826-b343a80c16bc","Type":"ContainerStarted","Data":"ee59c4b5bb8c4395de5b82d66551c1049b451144900416f5944c4deaae21eef3"} Jan 28 19:31:46 crc kubenswrapper[4721]: I0128 19:31:46.157337 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-ldp74/must-gather-lpvjh" event={"ID":"a65ab672-e06c-477e-9826-b343a80c16bc","Type":"ContainerStarted","Data":"2d866b390d1b76dec9f4af78eedd58efb6770e34087c692d287a591c60e21133"} Jan 28 19:31:46 crc kubenswrapper[4721]: I0128 19:31:46.188304 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-ldp74/must-gather-lpvjh" podStartSLOduration=2.252147849 podStartE2EDuration="11.188280583s" podCreationTimestamp="2026-01-28 19:31:35 +0000 UTC" firstStartedPulling="2026-01-28 19:31:36.081377448 +0000 UTC m=+3461.806683008" lastFinishedPulling="2026-01-28 19:31:45.017510182 +0000 UTC m=+3470.742815742" observedRunningTime="2026-01-28 19:31:46.182647577 +0000 UTC m=+3471.907953137" watchObservedRunningTime="2026-01-28 19:31:46.188280583 +0000 UTC m=+3471.913586143" Jan 28 19:31:48 crc kubenswrapper[4721]: E0128 19:31:48.752521 4721 upgradeaware.go:441] Error proxying data from backend to client: writeto tcp 38.102.83.66:48852->38.102.83.66:37489: read tcp 38.102.83.66:48852->38.102.83.66:37489: read: connection reset by peer Jan 28 19:31:50 crc kubenswrapper[4721]: I0128 19:31:50.161984 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-ldp74/crc-debug-xczjp"] Jan 28 19:31:50 crc kubenswrapper[4721]: I0128 19:31:50.163760 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-ldp74/crc-debug-xczjp" Jan 28 19:31:50 crc kubenswrapper[4721]: I0128 19:31:50.207134 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5z6k7\" (UniqueName: \"kubernetes.io/projected/b66d50ac-fb49-4fdc-b26d-660273b04ae7-kube-api-access-5z6k7\") pod \"crc-debug-xczjp\" (UID: \"b66d50ac-fb49-4fdc-b26d-660273b04ae7\") " pod="openshift-must-gather-ldp74/crc-debug-xczjp" Jan 28 19:31:50 crc kubenswrapper[4721]: I0128 19:31:50.207240 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/b66d50ac-fb49-4fdc-b26d-660273b04ae7-host\") pod \"crc-debug-xczjp\" (UID: \"b66d50ac-fb49-4fdc-b26d-660273b04ae7\") " pod="openshift-must-gather-ldp74/crc-debug-xczjp" Jan 28 19:31:50 crc kubenswrapper[4721]: I0128 19:31:50.308606 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5z6k7\" (UniqueName: \"kubernetes.io/projected/b66d50ac-fb49-4fdc-b26d-660273b04ae7-kube-api-access-5z6k7\") pod \"crc-debug-xczjp\" (UID: \"b66d50ac-fb49-4fdc-b26d-660273b04ae7\") " pod="openshift-must-gather-ldp74/crc-debug-xczjp" Jan 28 19:31:50 crc kubenswrapper[4721]: I0128 19:31:50.308656 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/b66d50ac-fb49-4fdc-b26d-660273b04ae7-host\") pod \"crc-debug-xczjp\" (UID: \"b66d50ac-fb49-4fdc-b26d-660273b04ae7\") " pod="openshift-must-gather-ldp74/crc-debug-xczjp" Jan 28 19:31:50 crc kubenswrapper[4721]: I0128 19:31:50.308888 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/b66d50ac-fb49-4fdc-b26d-660273b04ae7-host\") pod \"crc-debug-xczjp\" (UID: \"b66d50ac-fb49-4fdc-b26d-660273b04ae7\") " pod="openshift-must-gather-ldp74/crc-debug-xczjp" Jan 28 19:31:50 crc kubenswrapper[4721]: I0128 19:31:50.360292 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5z6k7\" (UniqueName: \"kubernetes.io/projected/b66d50ac-fb49-4fdc-b26d-660273b04ae7-kube-api-access-5z6k7\") pod \"crc-debug-xczjp\" (UID: \"b66d50ac-fb49-4fdc-b26d-660273b04ae7\") " pod="openshift-must-gather-ldp74/crc-debug-xczjp" Jan 28 19:31:50 crc kubenswrapper[4721]: I0128 19:31:50.485697 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-ldp74/crc-debug-xczjp" Jan 28 19:31:51 crc kubenswrapper[4721]: I0128 19:31:51.249402 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-ldp74/crc-debug-xczjp" event={"ID":"b66d50ac-fb49-4fdc-b26d-660273b04ae7","Type":"ContainerStarted","Data":"069cf5100266810fa48a72fb992dc8542c3ef4a5d338761039577be881d43029"} Jan 28 19:32:06 crc kubenswrapper[4721]: E0128 19:32:06.368305 4721 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6ab858aed98e4fe57e6b144da8e90ad5d6698bb4cc5521206f5c05809f0f9296" Jan 28 19:32:06 crc kubenswrapper[4721]: E0128 19:32:06.369069 4721 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:container-00,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6ab858aed98e4fe57e6b144da8e90ad5d6698bb4cc5521206f5c05809f0f9296,Command:[chroot /host bash -c echo 'TOOLBOX_NAME=toolbox-osp' > /root/.toolboxrc ; rm -rf \"/var/tmp/sos-osp\" && mkdir -p \"/var/tmp/sos-osp\" && sudo podman rm --force toolbox-osp; sudo --preserve-env podman pull --authfile /var/lib/kubelet/config.json registry.redhat.io/rhel9/support-tools && toolbox sos report --batch --all-logs --only-plugins block,cifs,crio,devicemapper,devices,firewall_tables,firewalld,iscsi,lvm2,memory,multipath,nfs,nis,nvme,podman,process,processor,selinux,scsi,udev,logs,crypto --tmp-dir=\"/var/tmp/sos-osp\" && if [[ \"$(ls /var/log/pods/*/{*.log.*,*/*.log.*} 2>/dev/null)\" != '' ]]; then tar --ignore-failed-read --warning=no-file-changed -cJf \"/var/tmp/sos-osp/podlogs.tar.xz\" --transform 's,^,podlogs/,' /var/log/pods/*/{*.log.*,*/*.log.*} || true; fi],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:TMOUT,Value:900,ValueFrom:nil,},EnvVar{Name:HOST,Value:/host,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:host,ReadOnly:false,MountPath:/host,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5z6k7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod crc-debug-xczjp_openshift-must-gather-ldp74(b66d50ac-fb49-4fdc-b26d-660273b04ae7): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 28 19:32:06 crc kubenswrapper[4721]: E0128 19:32:06.370502 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"container-00\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openshift-must-gather-ldp74/crc-debug-xczjp" podUID="b66d50ac-fb49-4fdc-b26d-660273b04ae7" Jan 28 19:32:06 crc 
kubenswrapper[4721]: E0128 19:32:06.458993 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"container-00\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6ab858aed98e4fe57e6b144da8e90ad5d6698bb4cc5521206f5c05809f0f9296\\\"\"" pod="openshift-must-gather-ldp74/crc-debug-xczjp" podUID="b66d50ac-fb49-4fdc-b26d-660273b04ae7" Jan 28 19:32:21 crc kubenswrapper[4721]: I0128 19:32:21.611084 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-ldp74/crc-debug-xczjp" event={"ID":"b66d50ac-fb49-4fdc-b26d-660273b04ae7","Type":"ContainerStarted","Data":"9567f8bbf8b9036622a7b779077a53cc1189396a13b4583b2f673046ab01f190"} Jan 28 19:32:21 crc kubenswrapper[4721]: I0128 19:32:21.637150 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-ldp74/crc-debug-xczjp" podStartSLOduration=1.169478422 podStartE2EDuration="31.63712033s" podCreationTimestamp="2026-01-28 19:31:50 +0000 UTC" firstStartedPulling="2026-01-28 19:31:50.534361916 +0000 UTC m=+3476.259667476" lastFinishedPulling="2026-01-28 19:32:21.002003814 +0000 UTC m=+3506.727309384" observedRunningTime="2026-01-28 19:32:21.625411715 +0000 UTC m=+3507.350717275" watchObservedRunningTime="2026-01-28 19:32:21.63712033 +0000 UTC m=+3507.362425890" Jan 28 19:32:31 crc kubenswrapper[4721]: I0128 19:32:31.226044 4721 patch_prober.go:28] interesting pod/machine-config-daemon-76rx2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 19:32:31 crc kubenswrapper[4721]: I0128 19:32:31.226586 4721 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-76rx2" podUID="6e3427a4-9a03-4a08-bf7f-7a5e96290ad6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 19:32:53 crc kubenswrapper[4721]: I0128 19:32:53.755724 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-w7w92"] Jan 28 19:32:53 crc kubenswrapper[4721]: I0128 19:32:53.758854 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-w7w92" Jan 28 19:32:53 crc kubenswrapper[4721]: I0128 19:32:53.778049 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-w7w92"] Jan 28 19:32:53 crc kubenswrapper[4721]: I0128 19:32:53.863917 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c799c84d-7e90-4d40-87ee-b9d0522334a6-catalog-content\") pod \"redhat-marketplace-w7w92\" (UID: \"c799c84d-7e90-4d40-87ee-b9d0522334a6\") " pod="openshift-marketplace/redhat-marketplace-w7w92" Jan 28 19:32:53 crc kubenswrapper[4721]: I0128 19:32:53.864051 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c799c84d-7e90-4d40-87ee-b9d0522334a6-utilities\") pod \"redhat-marketplace-w7w92\" (UID: \"c799c84d-7e90-4d40-87ee-b9d0522334a6\") " pod="openshift-marketplace/redhat-marketplace-w7w92" Jan 28 19:32:53 crc kubenswrapper[4721]: I0128 19:32:53.864096 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b89jm\" (UniqueName: \"kubernetes.io/projected/c799c84d-7e90-4d40-87ee-b9d0522334a6-kube-api-access-b89jm\") pod \"redhat-marketplace-w7w92\" (UID: \"c799c84d-7e90-4d40-87ee-b9d0522334a6\") " pod="openshift-marketplace/redhat-marketplace-w7w92" Jan 28 19:32:53 crc kubenswrapper[4721]: I0128 19:32:53.966246 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c799c84d-7e90-4d40-87ee-b9d0522334a6-catalog-content\") pod \"redhat-marketplace-w7w92\" (UID: \"c799c84d-7e90-4d40-87ee-b9d0522334a6\") " pod="openshift-marketplace/redhat-marketplace-w7w92" Jan 28 19:32:53 crc kubenswrapper[4721]: I0128 19:32:53.966338 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c799c84d-7e90-4d40-87ee-b9d0522334a6-utilities\") pod \"redhat-marketplace-w7w92\" (UID: \"c799c84d-7e90-4d40-87ee-b9d0522334a6\") " pod="openshift-marketplace/redhat-marketplace-w7w92" Jan 28 19:32:53 crc kubenswrapper[4721]: I0128 19:32:53.966377 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b89jm\" (UniqueName: \"kubernetes.io/projected/c799c84d-7e90-4d40-87ee-b9d0522334a6-kube-api-access-b89jm\") pod \"redhat-marketplace-w7w92\" (UID: \"c799c84d-7e90-4d40-87ee-b9d0522334a6\") " pod="openshift-marketplace/redhat-marketplace-w7w92" Jan 28 19:32:53 crc kubenswrapper[4721]: I0128 19:32:53.967385 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c799c84d-7e90-4d40-87ee-b9d0522334a6-utilities\") pod \"redhat-marketplace-w7w92\" (UID: \"c799c84d-7e90-4d40-87ee-b9d0522334a6\") " pod="openshift-marketplace/redhat-marketplace-w7w92" Jan 28 19:32:53 crc kubenswrapper[4721]: I0128 19:32:53.967597 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c799c84d-7e90-4d40-87ee-b9d0522334a6-catalog-content\") pod \"redhat-marketplace-w7w92\" (UID: \"c799c84d-7e90-4d40-87ee-b9d0522334a6\") " pod="openshift-marketplace/redhat-marketplace-w7w92" Jan 28 19:32:54 crc kubenswrapper[4721]: I0128 19:32:54.011121 4721 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-b89jm\" (UniqueName: \"kubernetes.io/projected/c799c84d-7e90-4d40-87ee-b9d0522334a6-kube-api-access-b89jm\") pod \"redhat-marketplace-w7w92\" (UID: \"c799c84d-7e90-4d40-87ee-b9d0522334a6\") " pod="openshift-marketplace/redhat-marketplace-w7w92" Jan 28 19:32:54 crc kubenswrapper[4721]: I0128 19:32:54.087651 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-w7w92" Jan 28 19:32:54 crc kubenswrapper[4721]: I0128 19:32:54.638810 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-w7w92"] Jan 28 19:32:54 crc kubenswrapper[4721]: I0128 19:32:54.989691 4721 generic.go:334] "Generic (PLEG): container finished" podID="c799c84d-7e90-4d40-87ee-b9d0522334a6" containerID="0eae19ad3990d90d518750e0c5ff0d214c707d57b3579f73d570f9459776d32f" exitCode=0 Jan 28 19:32:54 crc kubenswrapper[4721]: I0128 19:32:54.989785 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-w7w92" event={"ID":"c799c84d-7e90-4d40-87ee-b9d0522334a6","Type":"ContainerDied","Data":"0eae19ad3990d90d518750e0c5ff0d214c707d57b3579f73d570f9459776d32f"} Jan 28 19:32:54 crc kubenswrapper[4721]: I0128 19:32:54.990075 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-w7w92" event={"ID":"c799c84d-7e90-4d40-87ee-b9d0522334a6","Type":"ContainerStarted","Data":"de57acdbec680041e43e3c5d8928d5de94fb0edb5c39397a4a9f6fb3181174c9"} Jan 28 19:32:56 crc kubenswrapper[4721]: I0128 19:32:56.010753 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-w7w92" event={"ID":"c799c84d-7e90-4d40-87ee-b9d0522334a6","Type":"ContainerStarted","Data":"1e43f69a206d79dc03bebaff242a710f2a7678fdd4fbac58b7db88c11090d30b"} Jan 28 19:32:57 crc kubenswrapper[4721]: I0128 19:32:57.024163 4721 generic.go:334] "Generic (PLEG): container finished" podID="c799c84d-7e90-4d40-87ee-b9d0522334a6" containerID="1e43f69a206d79dc03bebaff242a710f2a7678fdd4fbac58b7db88c11090d30b" exitCode=0 Jan 28 19:32:57 crc kubenswrapper[4721]: I0128 19:32:57.024274 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-w7w92" event={"ID":"c799c84d-7e90-4d40-87ee-b9d0522334a6","Type":"ContainerDied","Data":"1e43f69a206d79dc03bebaff242a710f2a7678fdd4fbac58b7db88c11090d30b"} Jan 28 19:32:58 crc kubenswrapper[4721]: I0128 19:32:58.041029 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-w7w92" event={"ID":"c799c84d-7e90-4d40-87ee-b9d0522334a6","Type":"ContainerStarted","Data":"b116699b70d0cec3d8c1c4e835d023c770509caf9c7c2ba266db2d304c32291a"} Jan 28 19:32:58 crc kubenswrapper[4721]: I0128 19:32:58.071930 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-w7w92" podStartSLOduration=2.613508098 podStartE2EDuration="5.071907074s" podCreationTimestamp="2026-01-28 19:32:53 +0000 UTC" firstStartedPulling="2026-01-28 19:32:54.992064328 +0000 UTC m=+3540.717369888" lastFinishedPulling="2026-01-28 19:32:57.450463304 +0000 UTC m=+3543.175768864" observedRunningTime="2026-01-28 19:32:58.060452196 +0000 UTC m=+3543.785757756" watchObservedRunningTime="2026-01-28 19:32:58.071907074 +0000 UTC m=+3543.797212624" Jan 28 19:33:01 crc kubenswrapper[4721]: I0128 19:33:01.224679 4721 patch_prober.go:28] interesting pod/machine-config-daemon-76rx2 
container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 19:33:01 crc kubenswrapper[4721]: I0128 19:33:01.225300 4721 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-76rx2" podUID="6e3427a4-9a03-4a08-bf7f-7a5e96290ad6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 19:33:04 crc kubenswrapper[4721]: I0128 19:33:04.087983 4721 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-w7w92" Jan 28 19:33:04 crc kubenswrapper[4721]: I0128 19:33:04.088394 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-w7w92" Jan 28 19:33:04 crc kubenswrapper[4721]: I0128 19:33:04.154782 4721 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-w7w92" Jan 28 19:33:04 crc kubenswrapper[4721]: I0128 19:33:04.210250 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-w7w92" Jan 28 19:33:04 crc kubenswrapper[4721]: I0128 19:33:04.397104 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-w7w92"] Jan 28 19:33:06 crc kubenswrapper[4721]: I0128 19:33:06.122104 4721 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-w7w92" podUID="c799c84d-7e90-4d40-87ee-b9d0522334a6" containerName="registry-server" containerID="cri-o://b116699b70d0cec3d8c1c4e835d023c770509caf9c7c2ba266db2d304c32291a" gracePeriod=2 Jan 28 19:33:07 crc kubenswrapper[4721]: I0128 19:33:07.011843 4721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-w7w92" Jan 28 19:33:07 crc kubenswrapper[4721]: I0128 19:33:07.122436 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c799c84d-7e90-4d40-87ee-b9d0522334a6-catalog-content\") pod \"c799c84d-7e90-4d40-87ee-b9d0522334a6\" (UID: \"c799c84d-7e90-4d40-87ee-b9d0522334a6\") " Jan 28 19:33:07 crc kubenswrapper[4721]: I0128 19:33:07.122568 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b89jm\" (UniqueName: \"kubernetes.io/projected/c799c84d-7e90-4d40-87ee-b9d0522334a6-kube-api-access-b89jm\") pod \"c799c84d-7e90-4d40-87ee-b9d0522334a6\" (UID: \"c799c84d-7e90-4d40-87ee-b9d0522334a6\") " Jan 28 19:33:07 crc kubenswrapper[4721]: I0128 19:33:07.122697 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c799c84d-7e90-4d40-87ee-b9d0522334a6-utilities\") pod \"c799c84d-7e90-4d40-87ee-b9d0522334a6\" (UID: \"c799c84d-7e90-4d40-87ee-b9d0522334a6\") " Jan 28 19:33:07 crc kubenswrapper[4721]: I0128 19:33:07.123694 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c799c84d-7e90-4d40-87ee-b9d0522334a6-utilities" (OuterVolumeSpecName: "utilities") pod "c799c84d-7e90-4d40-87ee-b9d0522334a6" (UID: "c799c84d-7e90-4d40-87ee-b9d0522334a6"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 19:33:07 crc kubenswrapper[4721]: I0128 19:33:07.132574 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c799c84d-7e90-4d40-87ee-b9d0522334a6-kube-api-access-b89jm" (OuterVolumeSpecName: "kube-api-access-b89jm") pod "c799c84d-7e90-4d40-87ee-b9d0522334a6" (UID: "c799c84d-7e90-4d40-87ee-b9d0522334a6"). InnerVolumeSpecName "kube-api-access-b89jm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 19:33:07 crc kubenswrapper[4721]: I0128 19:33:07.135711 4721 generic.go:334] "Generic (PLEG): container finished" podID="c799c84d-7e90-4d40-87ee-b9d0522334a6" containerID="b116699b70d0cec3d8c1c4e835d023c770509caf9c7c2ba266db2d304c32291a" exitCode=0 Jan 28 19:33:07 crc kubenswrapper[4721]: I0128 19:33:07.135846 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-w7w92" event={"ID":"c799c84d-7e90-4d40-87ee-b9d0522334a6","Type":"ContainerDied","Data":"b116699b70d0cec3d8c1c4e835d023c770509caf9c7c2ba266db2d304c32291a"} Jan 28 19:33:07 crc kubenswrapper[4721]: I0128 19:33:07.135885 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-w7w92" event={"ID":"c799c84d-7e90-4d40-87ee-b9d0522334a6","Type":"ContainerDied","Data":"de57acdbec680041e43e3c5d8928d5de94fb0edb5c39397a4a9f6fb3181174c9"} Jan 28 19:33:07 crc kubenswrapper[4721]: I0128 19:33:07.135906 4721 scope.go:117] "RemoveContainer" containerID="b116699b70d0cec3d8c1c4e835d023c770509caf9c7c2ba266db2d304c32291a" Jan 28 19:33:07 crc kubenswrapper[4721]: I0128 19:33:07.136119 4721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-w7w92" Jan 28 19:33:07 crc kubenswrapper[4721]: I0128 19:33:07.151883 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c799c84d-7e90-4d40-87ee-b9d0522334a6-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c799c84d-7e90-4d40-87ee-b9d0522334a6" (UID: "c799c84d-7e90-4d40-87ee-b9d0522334a6"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 19:33:07 crc kubenswrapper[4721]: I0128 19:33:07.199592 4721 scope.go:117] "RemoveContainer" containerID="1e43f69a206d79dc03bebaff242a710f2a7678fdd4fbac58b7db88c11090d30b" Jan 28 19:33:07 crc kubenswrapper[4721]: I0128 19:33:07.225394 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b89jm\" (UniqueName: \"kubernetes.io/projected/c799c84d-7e90-4d40-87ee-b9d0522334a6-kube-api-access-b89jm\") on node \"crc\" DevicePath \"\"" Jan 28 19:33:07 crc kubenswrapper[4721]: I0128 19:33:07.225679 4721 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c799c84d-7e90-4d40-87ee-b9d0522334a6-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 19:33:07 crc kubenswrapper[4721]: I0128 19:33:07.225766 4721 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c799c84d-7e90-4d40-87ee-b9d0522334a6-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 19:33:07 crc kubenswrapper[4721]: I0128 19:33:07.247019 4721 scope.go:117] "RemoveContainer" containerID="0eae19ad3990d90d518750e0c5ff0d214c707d57b3579f73d570f9459776d32f" Jan 28 19:33:07 crc kubenswrapper[4721]: I0128 19:33:07.320860 4721 scope.go:117] "RemoveContainer" containerID="b116699b70d0cec3d8c1c4e835d023c770509caf9c7c2ba266db2d304c32291a" Jan 28 19:33:07 crc kubenswrapper[4721]: E0128 19:33:07.321475 4721 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b116699b70d0cec3d8c1c4e835d023c770509caf9c7c2ba266db2d304c32291a\": container with ID starting with b116699b70d0cec3d8c1c4e835d023c770509caf9c7c2ba266db2d304c32291a not found: ID does not exist" containerID="b116699b70d0cec3d8c1c4e835d023c770509caf9c7c2ba266db2d304c32291a" Jan 28 19:33:07 crc kubenswrapper[4721]: I0128 19:33:07.321586 4721 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b116699b70d0cec3d8c1c4e835d023c770509caf9c7c2ba266db2d304c32291a"} err="failed to get container status \"b116699b70d0cec3d8c1c4e835d023c770509caf9c7c2ba266db2d304c32291a\": rpc error: code = NotFound desc = could not find container \"b116699b70d0cec3d8c1c4e835d023c770509caf9c7c2ba266db2d304c32291a\": container with ID starting with b116699b70d0cec3d8c1c4e835d023c770509caf9c7c2ba266db2d304c32291a not found: ID does not exist" Jan 28 19:33:07 crc kubenswrapper[4721]: I0128 19:33:07.321619 4721 scope.go:117] "RemoveContainer" containerID="1e43f69a206d79dc03bebaff242a710f2a7678fdd4fbac58b7db88c11090d30b" Jan 28 19:33:07 crc kubenswrapper[4721]: E0128 19:33:07.322079 4721 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1e43f69a206d79dc03bebaff242a710f2a7678fdd4fbac58b7db88c11090d30b\": container with ID starting with 1e43f69a206d79dc03bebaff242a710f2a7678fdd4fbac58b7db88c11090d30b not found: ID does not exist" containerID="1e43f69a206d79dc03bebaff242a710f2a7678fdd4fbac58b7db88c11090d30b" Jan 28 19:33:07 crc kubenswrapper[4721]: I0128 19:33:07.322145 4721 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1e43f69a206d79dc03bebaff242a710f2a7678fdd4fbac58b7db88c11090d30b"} err="failed to get container status \"1e43f69a206d79dc03bebaff242a710f2a7678fdd4fbac58b7db88c11090d30b\": rpc error: code = NotFound desc = could not find container 
\"1e43f69a206d79dc03bebaff242a710f2a7678fdd4fbac58b7db88c11090d30b\": container with ID starting with 1e43f69a206d79dc03bebaff242a710f2a7678fdd4fbac58b7db88c11090d30b not found: ID does not exist" Jan 28 19:33:07 crc kubenswrapper[4721]: I0128 19:33:07.322192 4721 scope.go:117] "RemoveContainer" containerID="0eae19ad3990d90d518750e0c5ff0d214c707d57b3579f73d570f9459776d32f" Jan 28 19:33:07 crc kubenswrapper[4721]: E0128 19:33:07.322792 4721 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0eae19ad3990d90d518750e0c5ff0d214c707d57b3579f73d570f9459776d32f\": container with ID starting with 0eae19ad3990d90d518750e0c5ff0d214c707d57b3579f73d570f9459776d32f not found: ID does not exist" containerID="0eae19ad3990d90d518750e0c5ff0d214c707d57b3579f73d570f9459776d32f" Jan 28 19:33:07 crc kubenswrapper[4721]: I0128 19:33:07.322820 4721 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0eae19ad3990d90d518750e0c5ff0d214c707d57b3579f73d570f9459776d32f"} err="failed to get container status \"0eae19ad3990d90d518750e0c5ff0d214c707d57b3579f73d570f9459776d32f\": rpc error: code = NotFound desc = could not find container \"0eae19ad3990d90d518750e0c5ff0d214c707d57b3579f73d570f9459776d32f\": container with ID starting with 0eae19ad3990d90d518750e0c5ff0d214c707d57b3579f73d570f9459776d32f not found: ID does not exist" Jan 28 19:33:07 crc kubenswrapper[4721]: I0128 19:33:07.480996 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-w7w92"] Jan 28 19:33:07 crc kubenswrapper[4721]: I0128 19:33:07.492994 4721 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-w7w92"] Jan 28 19:33:07 crc kubenswrapper[4721]: I0128 19:33:07.541941 4721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c799c84d-7e90-4d40-87ee-b9d0522334a6" path="/var/lib/kubelet/pods/c799c84d-7e90-4d40-87ee-b9d0522334a6/volumes" Jan 28 19:33:21 crc kubenswrapper[4721]: I0128 19:33:21.396296 4721 scope.go:117] "RemoveContainer" containerID="619dbf0d5cff0142912d9de21cd762ffb563d3efc2da35faabf51affe1739ecd" Jan 28 19:33:21 crc kubenswrapper[4721]: I0128 19:33:21.455470 4721 scope.go:117] "RemoveContainer" containerID="d0343257e28a42da6691165a16c187b26974dabfb1cb0f294f72151b7e8e92ca" Jan 28 19:33:21 crc kubenswrapper[4721]: I0128 19:33:21.486715 4721 scope.go:117] "RemoveContainer" containerID="e775eb1f4b568c207e73c36e8cce935a8024974fd5a379f7a302844103eb9f51" Jan 28 19:33:26 crc kubenswrapper[4721]: I0128 19:33:26.797603 4721 generic.go:334] "Generic (PLEG): container finished" podID="b66d50ac-fb49-4fdc-b26d-660273b04ae7" containerID="9567f8bbf8b9036622a7b779077a53cc1189396a13b4583b2f673046ab01f190" exitCode=0 Jan 28 19:33:26 crc kubenswrapper[4721]: I0128 19:33:26.797776 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-ldp74/crc-debug-xczjp" event={"ID":"b66d50ac-fb49-4fdc-b26d-660273b04ae7","Type":"ContainerDied","Data":"9567f8bbf8b9036622a7b779077a53cc1189396a13b4583b2f673046ab01f190"} Jan 28 19:33:27 crc kubenswrapper[4721]: I0128 19:33:27.975064 4721 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-ldp74/crc-debug-xczjp" Jan 28 19:33:28 crc kubenswrapper[4721]: I0128 19:33:28.018049 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-ldp74/crc-debug-xczjp"] Jan 28 19:33:28 crc kubenswrapper[4721]: I0128 19:33:28.028366 4721 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-ldp74/crc-debug-xczjp"] Jan 28 19:33:28 crc kubenswrapper[4721]: I0128 19:33:28.055198 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/b66d50ac-fb49-4fdc-b26d-660273b04ae7-host\") pod \"b66d50ac-fb49-4fdc-b26d-660273b04ae7\" (UID: \"b66d50ac-fb49-4fdc-b26d-660273b04ae7\") " Jan 28 19:33:28 crc kubenswrapper[4721]: I0128 19:33:28.055326 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b66d50ac-fb49-4fdc-b26d-660273b04ae7-host" (OuterVolumeSpecName: "host") pod "b66d50ac-fb49-4fdc-b26d-660273b04ae7" (UID: "b66d50ac-fb49-4fdc-b26d-660273b04ae7"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 19:33:28 crc kubenswrapper[4721]: I0128 19:33:28.055816 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5z6k7\" (UniqueName: \"kubernetes.io/projected/b66d50ac-fb49-4fdc-b26d-660273b04ae7-kube-api-access-5z6k7\") pod \"b66d50ac-fb49-4fdc-b26d-660273b04ae7\" (UID: \"b66d50ac-fb49-4fdc-b26d-660273b04ae7\") " Jan 28 19:33:28 crc kubenswrapper[4721]: I0128 19:33:28.056306 4721 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/b66d50ac-fb49-4fdc-b26d-660273b04ae7-host\") on node \"crc\" DevicePath \"\"" Jan 28 19:33:28 crc kubenswrapper[4721]: I0128 19:33:28.063752 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b66d50ac-fb49-4fdc-b26d-660273b04ae7-kube-api-access-5z6k7" (OuterVolumeSpecName: "kube-api-access-5z6k7") pod "b66d50ac-fb49-4fdc-b26d-660273b04ae7" (UID: "b66d50ac-fb49-4fdc-b26d-660273b04ae7"). InnerVolumeSpecName "kube-api-access-5z6k7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 19:33:28 crc kubenswrapper[4721]: I0128 19:33:28.158800 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5z6k7\" (UniqueName: \"kubernetes.io/projected/b66d50ac-fb49-4fdc-b26d-660273b04ae7-kube-api-access-5z6k7\") on node \"crc\" DevicePath \"\"" Jan 28 19:33:28 crc kubenswrapper[4721]: I0128 19:33:28.822515 4721 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="069cf5100266810fa48a72fb992dc8542c3ef4a5d338761039577be881d43029" Jan 28 19:33:28 crc kubenswrapper[4721]: I0128 19:33:28.822652 4721 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-ldp74/crc-debug-xczjp" Jan 28 19:33:29 crc kubenswrapper[4721]: I0128 19:33:29.214645 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-ldp74/crc-debug-tptjm"] Jan 28 19:33:29 crc kubenswrapper[4721]: E0128 19:33:29.215212 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b66d50ac-fb49-4fdc-b26d-660273b04ae7" containerName="container-00" Jan 28 19:33:29 crc kubenswrapper[4721]: I0128 19:33:29.215227 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="b66d50ac-fb49-4fdc-b26d-660273b04ae7" containerName="container-00" Jan 28 19:33:29 crc kubenswrapper[4721]: E0128 19:33:29.215244 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c799c84d-7e90-4d40-87ee-b9d0522334a6" containerName="registry-server" Jan 28 19:33:29 crc kubenswrapper[4721]: I0128 19:33:29.215252 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="c799c84d-7e90-4d40-87ee-b9d0522334a6" containerName="registry-server" Jan 28 19:33:29 crc kubenswrapper[4721]: E0128 19:33:29.215268 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c799c84d-7e90-4d40-87ee-b9d0522334a6" containerName="extract-utilities" Jan 28 19:33:29 crc kubenswrapper[4721]: I0128 19:33:29.215276 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="c799c84d-7e90-4d40-87ee-b9d0522334a6" containerName="extract-utilities" Jan 28 19:33:29 crc kubenswrapper[4721]: E0128 19:33:29.215315 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c799c84d-7e90-4d40-87ee-b9d0522334a6" containerName="extract-content" Jan 28 19:33:29 crc kubenswrapper[4721]: I0128 19:33:29.215323 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="c799c84d-7e90-4d40-87ee-b9d0522334a6" containerName="extract-content" Jan 28 19:33:29 crc kubenswrapper[4721]: I0128 19:33:29.215582 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="c799c84d-7e90-4d40-87ee-b9d0522334a6" containerName="registry-server" Jan 28 19:33:29 crc kubenswrapper[4721]: I0128 19:33:29.215594 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="b66d50ac-fb49-4fdc-b26d-660273b04ae7" containerName="container-00" Jan 28 19:33:29 crc kubenswrapper[4721]: I0128 19:33:29.216540 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-ldp74/crc-debug-tptjm" Jan 28 19:33:29 crc kubenswrapper[4721]: I0128 19:33:29.287262 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/eda4f7b2-8bab-4745-a1a1-f941bf8cfa1c-host\") pod \"crc-debug-tptjm\" (UID: \"eda4f7b2-8bab-4745-a1a1-f941bf8cfa1c\") " pod="openshift-must-gather-ldp74/crc-debug-tptjm" Jan 28 19:33:29 crc kubenswrapper[4721]: I0128 19:33:29.287385 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5m4tf\" (UniqueName: \"kubernetes.io/projected/eda4f7b2-8bab-4745-a1a1-f941bf8cfa1c-kube-api-access-5m4tf\") pod \"crc-debug-tptjm\" (UID: \"eda4f7b2-8bab-4745-a1a1-f941bf8cfa1c\") " pod="openshift-must-gather-ldp74/crc-debug-tptjm" Jan 28 19:33:29 crc kubenswrapper[4721]: I0128 19:33:29.390128 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5m4tf\" (UniqueName: \"kubernetes.io/projected/eda4f7b2-8bab-4745-a1a1-f941bf8cfa1c-kube-api-access-5m4tf\") pod \"crc-debug-tptjm\" (UID: \"eda4f7b2-8bab-4745-a1a1-f941bf8cfa1c\") " pod="openshift-must-gather-ldp74/crc-debug-tptjm" Jan 28 19:33:29 crc kubenswrapper[4721]: I0128 19:33:29.390367 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/eda4f7b2-8bab-4745-a1a1-f941bf8cfa1c-host\") pod \"crc-debug-tptjm\" (UID: \"eda4f7b2-8bab-4745-a1a1-f941bf8cfa1c\") " pod="openshift-must-gather-ldp74/crc-debug-tptjm" Jan 28 19:33:29 crc kubenswrapper[4721]: I0128 19:33:29.390504 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/eda4f7b2-8bab-4745-a1a1-f941bf8cfa1c-host\") pod \"crc-debug-tptjm\" (UID: \"eda4f7b2-8bab-4745-a1a1-f941bf8cfa1c\") " pod="openshift-must-gather-ldp74/crc-debug-tptjm" Jan 28 19:33:29 crc kubenswrapper[4721]: I0128 19:33:29.411990 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5m4tf\" (UniqueName: \"kubernetes.io/projected/eda4f7b2-8bab-4745-a1a1-f941bf8cfa1c-kube-api-access-5m4tf\") pod \"crc-debug-tptjm\" (UID: \"eda4f7b2-8bab-4745-a1a1-f941bf8cfa1c\") " pod="openshift-must-gather-ldp74/crc-debug-tptjm" Jan 28 19:33:29 crc kubenswrapper[4721]: I0128 19:33:29.536915 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-ldp74/crc-debug-tptjm" Jan 28 19:33:29 crc kubenswrapper[4721]: I0128 19:33:29.542123 4721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b66d50ac-fb49-4fdc-b26d-660273b04ae7" path="/var/lib/kubelet/pods/b66d50ac-fb49-4fdc-b26d-660273b04ae7/volumes" Jan 28 19:33:29 crc kubenswrapper[4721]: I0128 19:33:29.834302 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-ldp74/crc-debug-tptjm" event={"ID":"eda4f7b2-8bab-4745-a1a1-f941bf8cfa1c","Type":"ContainerStarted","Data":"177774e43f6416b14f54687b16529399fd184a51ddd62397e116228956d7b027"} Jan 28 19:33:30 crc kubenswrapper[4721]: I0128 19:33:30.847868 4721 generic.go:334] "Generic (PLEG): container finished" podID="eda4f7b2-8bab-4745-a1a1-f941bf8cfa1c" containerID="356fb23de40512c1217a70147972dd7b68667df4af3816691da87c9f8e56e99f" exitCode=0 Jan 28 19:33:30 crc kubenswrapper[4721]: I0128 19:33:30.848056 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-ldp74/crc-debug-tptjm" event={"ID":"eda4f7b2-8bab-4745-a1a1-f941bf8cfa1c","Type":"ContainerDied","Data":"356fb23de40512c1217a70147972dd7b68667df4af3816691da87c9f8e56e99f"} Jan 28 19:33:31 crc kubenswrapper[4721]: I0128 19:33:31.224702 4721 patch_prober.go:28] interesting pod/machine-config-daemon-76rx2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 19:33:31 crc kubenswrapper[4721]: I0128 19:33:31.224770 4721 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-76rx2" podUID="6e3427a4-9a03-4a08-bf7f-7a5e96290ad6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 19:33:31 crc kubenswrapper[4721]: I0128 19:33:31.224837 4721 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-76rx2" Jan 28 19:33:31 crc kubenswrapper[4721]: I0128 19:33:31.225940 4721 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"cbcc37a94abf33c6e4e5c18dd7dea7199e37b457d8f91773d59e3a703a8cd690"} pod="openshift-machine-config-operator/machine-config-daemon-76rx2" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 28 19:33:31 crc kubenswrapper[4721]: I0128 19:33:31.225997 4721 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-76rx2" podUID="6e3427a4-9a03-4a08-bf7f-7a5e96290ad6" containerName="machine-config-daemon" containerID="cri-o://cbcc37a94abf33c6e4e5c18dd7dea7199e37b457d8f91773d59e3a703a8cd690" gracePeriod=600 Jan 28 19:33:31 crc kubenswrapper[4721]: E0128 19:33:31.381904 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-76rx2_openshift-machine-config-operator(6e3427a4-9a03-4a08-bf7f-7a5e96290ad6)\"" pod="openshift-machine-config-operator/machine-config-daemon-76rx2" podUID="6e3427a4-9a03-4a08-bf7f-7a5e96290ad6" Jan 28 19:33:31 crc kubenswrapper[4721]: I0128 19:33:31.862065 
4721 generic.go:334] "Generic (PLEG): container finished" podID="6e3427a4-9a03-4a08-bf7f-7a5e96290ad6" containerID="cbcc37a94abf33c6e4e5c18dd7dea7199e37b457d8f91773d59e3a703a8cd690" exitCode=0 Jan 28 19:33:31 crc kubenswrapper[4721]: I0128 19:33:31.862140 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-76rx2" event={"ID":"6e3427a4-9a03-4a08-bf7f-7a5e96290ad6","Type":"ContainerDied","Data":"cbcc37a94abf33c6e4e5c18dd7dea7199e37b457d8f91773d59e3a703a8cd690"} Jan 28 19:33:31 crc kubenswrapper[4721]: I0128 19:33:31.862565 4721 scope.go:117] "RemoveContainer" containerID="2a8dbd2103baf01cd2e3c0f22907e06624428687f6924d4dfbf4bcb7ae35fa33" Jan 28 19:33:31 crc kubenswrapper[4721]: I0128 19:33:31.863223 4721 scope.go:117] "RemoveContainer" containerID="cbcc37a94abf33c6e4e5c18dd7dea7199e37b457d8f91773d59e3a703a8cd690" Jan 28 19:33:31 crc kubenswrapper[4721]: E0128 19:33:31.863589 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-76rx2_openshift-machine-config-operator(6e3427a4-9a03-4a08-bf7f-7a5e96290ad6)\"" pod="openshift-machine-config-operator/machine-config-daemon-76rx2" podUID="6e3427a4-9a03-4a08-bf7f-7a5e96290ad6" Jan 28 19:33:32 crc kubenswrapper[4721]: I0128 19:33:32.011011 4721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-ldp74/crc-debug-tptjm" Jan 28 19:33:32 crc kubenswrapper[4721]: I0128 19:33:32.175220 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/eda4f7b2-8bab-4745-a1a1-f941bf8cfa1c-host\") pod \"eda4f7b2-8bab-4745-a1a1-f941bf8cfa1c\" (UID: \"eda4f7b2-8bab-4745-a1a1-f941bf8cfa1c\") " Jan 28 19:33:32 crc kubenswrapper[4721]: I0128 19:33:32.175711 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5m4tf\" (UniqueName: \"kubernetes.io/projected/eda4f7b2-8bab-4745-a1a1-f941bf8cfa1c-kube-api-access-5m4tf\") pod \"eda4f7b2-8bab-4745-a1a1-f941bf8cfa1c\" (UID: \"eda4f7b2-8bab-4745-a1a1-f941bf8cfa1c\") " Jan 28 19:33:32 crc kubenswrapper[4721]: I0128 19:33:32.176224 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/eda4f7b2-8bab-4745-a1a1-f941bf8cfa1c-host" (OuterVolumeSpecName: "host") pod "eda4f7b2-8bab-4745-a1a1-f941bf8cfa1c" (UID: "eda4f7b2-8bab-4745-a1a1-f941bf8cfa1c"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 19:33:32 crc kubenswrapper[4721]: I0128 19:33:32.217289 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eda4f7b2-8bab-4745-a1a1-f941bf8cfa1c-kube-api-access-5m4tf" (OuterVolumeSpecName: "kube-api-access-5m4tf") pod "eda4f7b2-8bab-4745-a1a1-f941bf8cfa1c" (UID: "eda4f7b2-8bab-4745-a1a1-f941bf8cfa1c"). InnerVolumeSpecName "kube-api-access-5m4tf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 19:33:32 crc kubenswrapper[4721]: I0128 19:33:32.278426 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5m4tf\" (UniqueName: \"kubernetes.io/projected/eda4f7b2-8bab-4745-a1a1-f941bf8cfa1c-kube-api-access-5m4tf\") on node \"crc\" DevicePath \"\"" Jan 28 19:33:32 crc kubenswrapper[4721]: I0128 19:33:32.278465 4721 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/eda4f7b2-8bab-4745-a1a1-f941bf8cfa1c-host\") on node \"crc\" DevicePath \"\"" Jan 28 19:33:32 crc kubenswrapper[4721]: I0128 19:33:32.602888 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-ldp74/crc-debug-tptjm"] Jan 28 19:33:32 crc kubenswrapper[4721]: I0128 19:33:32.612046 4721 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-ldp74/crc-debug-tptjm"] Jan 28 19:33:32 crc kubenswrapper[4721]: I0128 19:33:32.887697 4721 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="177774e43f6416b14f54687b16529399fd184a51ddd62397e116228956d7b027" Jan 28 19:33:32 crc kubenswrapper[4721]: I0128 19:33:32.887949 4721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-ldp74/crc-debug-tptjm" Jan 28 19:33:33 crc kubenswrapper[4721]: I0128 19:33:33.543462 4721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eda4f7b2-8bab-4745-a1a1-f941bf8cfa1c" path="/var/lib/kubelet/pods/eda4f7b2-8bab-4745-a1a1-f941bf8cfa1c/volumes" Jan 28 19:33:33 crc kubenswrapper[4721]: I0128 19:33:33.805437 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-ldp74/crc-debug-2wpzw"] Jan 28 19:33:33 crc kubenswrapper[4721]: E0128 19:33:33.805978 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eda4f7b2-8bab-4745-a1a1-f941bf8cfa1c" containerName="container-00" Jan 28 19:33:33 crc kubenswrapper[4721]: I0128 19:33:33.806001 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="eda4f7b2-8bab-4745-a1a1-f941bf8cfa1c" containerName="container-00" Jan 28 19:33:33 crc kubenswrapper[4721]: I0128 19:33:33.806238 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="eda4f7b2-8bab-4745-a1a1-f941bf8cfa1c" containerName="container-00" Jan 28 19:33:33 crc kubenswrapper[4721]: I0128 19:33:33.807284 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-ldp74/crc-debug-2wpzw" Jan 28 19:33:33 crc kubenswrapper[4721]: I0128 19:33:33.923966 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zc82p\" (UniqueName: \"kubernetes.io/projected/a39045a0-f9cc-4182-8a65-6041b3fa4f25-kube-api-access-zc82p\") pod \"crc-debug-2wpzw\" (UID: \"a39045a0-f9cc-4182-8a65-6041b3fa4f25\") " pod="openshift-must-gather-ldp74/crc-debug-2wpzw" Jan 28 19:33:33 crc kubenswrapper[4721]: I0128 19:33:33.924023 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/a39045a0-f9cc-4182-8a65-6041b3fa4f25-host\") pod \"crc-debug-2wpzw\" (UID: \"a39045a0-f9cc-4182-8a65-6041b3fa4f25\") " pod="openshift-must-gather-ldp74/crc-debug-2wpzw" Jan 28 19:33:34 crc kubenswrapper[4721]: I0128 19:33:34.026527 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zc82p\" (UniqueName: \"kubernetes.io/projected/a39045a0-f9cc-4182-8a65-6041b3fa4f25-kube-api-access-zc82p\") pod \"crc-debug-2wpzw\" (UID: \"a39045a0-f9cc-4182-8a65-6041b3fa4f25\") " pod="openshift-must-gather-ldp74/crc-debug-2wpzw" Jan 28 19:33:34 crc kubenswrapper[4721]: I0128 19:33:34.026589 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/a39045a0-f9cc-4182-8a65-6041b3fa4f25-host\") pod \"crc-debug-2wpzw\" (UID: \"a39045a0-f9cc-4182-8a65-6041b3fa4f25\") " pod="openshift-must-gather-ldp74/crc-debug-2wpzw" Jan 28 19:33:34 crc kubenswrapper[4721]: I0128 19:33:34.026889 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/a39045a0-f9cc-4182-8a65-6041b3fa4f25-host\") pod \"crc-debug-2wpzw\" (UID: \"a39045a0-f9cc-4182-8a65-6041b3fa4f25\") " pod="openshift-must-gather-ldp74/crc-debug-2wpzw" Jan 28 19:33:34 crc kubenswrapper[4721]: I0128 19:33:34.050811 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zc82p\" (UniqueName: \"kubernetes.io/projected/a39045a0-f9cc-4182-8a65-6041b3fa4f25-kube-api-access-zc82p\") pod \"crc-debug-2wpzw\" (UID: \"a39045a0-f9cc-4182-8a65-6041b3fa4f25\") " pod="openshift-must-gather-ldp74/crc-debug-2wpzw" Jan 28 19:33:34 crc kubenswrapper[4721]: I0128 19:33:34.131087 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-ldp74/crc-debug-2wpzw" Jan 28 19:33:34 crc kubenswrapper[4721]: W0128 19:33:34.163052 4721 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda39045a0_f9cc_4182_8a65_6041b3fa4f25.slice/crio-ff35e44efc337a086763f7491959e8d0709fd4cff87d5a1560821b680834c9a6 WatchSource:0}: Error finding container ff35e44efc337a086763f7491959e8d0709fd4cff87d5a1560821b680834c9a6: Status 404 returned error can't find the container with id ff35e44efc337a086763f7491959e8d0709fd4cff87d5a1560821b680834c9a6 Jan 28 19:33:34 crc kubenswrapper[4721]: I0128 19:33:34.909786 4721 generic.go:334] "Generic (PLEG): container finished" podID="a39045a0-f9cc-4182-8a65-6041b3fa4f25" containerID="ea78bdd5b45a7d91ed5fcd9614d1deab6ae5913b3acdd638bf3a1b0fc1836ce3" exitCode=0 Jan 28 19:33:34 crc kubenswrapper[4721]: I0128 19:33:34.909875 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-ldp74/crc-debug-2wpzw" event={"ID":"a39045a0-f9cc-4182-8a65-6041b3fa4f25","Type":"ContainerDied","Data":"ea78bdd5b45a7d91ed5fcd9614d1deab6ae5913b3acdd638bf3a1b0fc1836ce3"} Jan 28 19:33:34 crc kubenswrapper[4721]: I0128 19:33:34.910211 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-ldp74/crc-debug-2wpzw" event={"ID":"a39045a0-f9cc-4182-8a65-6041b3fa4f25","Type":"ContainerStarted","Data":"ff35e44efc337a086763f7491959e8d0709fd4cff87d5a1560821b680834c9a6"} Jan 28 19:33:34 crc kubenswrapper[4721]: I0128 19:33:34.954103 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-ldp74/crc-debug-2wpzw"] Jan 28 19:33:34 crc kubenswrapper[4721]: I0128 19:33:34.964506 4721 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-ldp74/crc-debug-2wpzw"] Jan 28 19:33:36 crc kubenswrapper[4721]: I0128 19:33:36.055595 4721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-ldp74/crc-debug-2wpzw" Jan 28 19:33:36 crc kubenswrapper[4721]: I0128 19:33:36.180017 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zc82p\" (UniqueName: \"kubernetes.io/projected/a39045a0-f9cc-4182-8a65-6041b3fa4f25-kube-api-access-zc82p\") pod \"a39045a0-f9cc-4182-8a65-6041b3fa4f25\" (UID: \"a39045a0-f9cc-4182-8a65-6041b3fa4f25\") " Jan 28 19:33:36 crc kubenswrapper[4721]: I0128 19:33:36.180296 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/a39045a0-f9cc-4182-8a65-6041b3fa4f25-host\") pod \"a39045a0-f9cc-4182-8a65-6041b3fa4f25\" (UID: \"a39045a0-f9cc-4182-8a65-6041b3fa4f25\") " Jan 28 19:33:36 crc kubenswrapper[4721]: I0128 19:33:36.180453 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a39045a0-f9cc-4182-8a65-6041b3fa4f25-host" (OuterVolumeSpecName: "host") pod "a39045a0-f9cc-4182-8a65-6041b3fa4f25" (UID: "a39045a0-f9cc-4182-8a65-6041b3fa4f25"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 19:33:36 crc kubenswrapper[4721]: I0128 19:33:36.181079 4721 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/a39045a0-f9cc-4182-8a65-6041b3fa4f25-host\") on node \"crc\" DevicePath \"\"" Jan 28 19:33:36 crc kubenswrapper[4721]: I0128 19:33:36.186793 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a39045a0-f9cc-4182-8a65-6041b3fa4f25-kube-api-access-zc82p" (OuterVolumeSpecName: "kube-api-access-zc82p") pod "a39045a0-f9cc-4182-8a65-6041b3fa4f25" (UID: "a39045a0-f9cc-4182-8a65-6041b3fa4f25"). InnerVolumeSpecName "kube-api-access-zc82p". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 19:33:36 crc kubenswrapper[4721]: I0128 19:33:36.283326 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zc82p\" (UniqueName: \"kubernetes.io/projected/a39045a0-f9cc-4182-8a65-6041b3fa4f25-kube-api-access-zc82p\") on node \"crc\" DevicePath \"\"" Jan 28 19:33:36 crc kubenswrapper[4721]: I0128 19:33:36.930622 4721 scope.go:117] "RemoveContainer" containerID="ea78bdd5b45a7d91ed5fcd9614d1deab6ae5913b3acdd638bf3a1b0fc1836ce3" Jan 28 19:33:36 crc kubenswrapper[4721]: I0128 19:33:36.930702 4721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-ldp74/crc-debug-2wpzw" Jan 28 19:33:37 crc kubenswrapper[4721]: I0128 19:33:37.543098 4721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a39045a0-f9cc-4182-8a65-6041b3fa4f25" path="/var/lib/kubelet/pods/a39045a0-f9cc-4182-8a65-6041b3fa4f25/volumes" Jan 28 19:33:44 crc kubenswrapper[4721]: I0128 19:33:44.530483 4721 scope.go:117] "RemoveContainer" containerID="cbcc37a94abf33c6e4e5c18dd7dea7199e37b457d8f91773d59e3a703a8cd690" Jan 28 19:33:44 crc kubenswrapper[4721]: E0128 19:33:44.531507 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-76rx2_openshift-machine-config-operator(6e3427a4-9a03-4a08-bf7f-7a5e96290ad6)\"" pod="openshift-machine-config-operator/machine-config-daemon-76rx2" podUID="6e3427a4-9a03-4a08-bf7f-7a5e96290ad6" Jan 28 19:33:59 crc kubenswrapper[4721]: I0128 19:33:59.529116 4721 scope.go:117] "RemoveContainer" containerID="cbcc37a94abf33c6e4e5c18dd7dea7199e37b457d8f91773d59e3a703a8cd690" Jan 28 19:33:59 crc kubenswrapper[4721]: E0128 19:33:59.530105 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-76rx2_openshift-machine-config-operator(6e3427a4-9a03-4a08-bf7f-7a5e96290ad6)\"" pod="openshift-machine-config-operator/machine-config-daemon-76rx2" podUID="6e3427a4-9a03-4a08-bf7f-7a5e96290ad6" Jan 28 19:34:04 crc kubenswrapper[4721]: I0128 19:34:04.610757 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_alertmanager-metric-storage-0_95a1b67a-adb0-42f1-9fb8-32b01c443ede/init-config-reloader/0.log" Jan 28 19:34:04 crc kubenswrapper[4721]: I0128 19:34:04.907746 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_alertmanager-metric-storage-0_95a1b67a-adb0-42f1-9fb8-32b01c443ede/config-reloader/0.log" Jan 28 19:34:04 crc kubenswrapper[4721]: I0128 19:34:04.934214 4721 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openstack_alertmanager-metric-storage-0_95a1b67a-adb0-42f1-9fb8-32b01c443ede/init-config-reloader/0.log" Jan 28 19:34:04 crc kubenswrapper[4721]: I0128 19:34:04.978840 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_alertmanager-metric-storage-0_95a1b67a-adb0-42f1-9fb8-32b01c443ede/alertmanager/0.log" Jan 28 19:34:05 crc kubenswrapper[4721]: I0128 19:34:05.180483 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-cfc4cd674-j5vfc_f8eb94ee-887b-48f2-808c-2b634928d62e/barbican-api-log/0.log" Jan 28 19:34:05 crc kubenswrapper[4721]: I0128 19:34:05.193452 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-cfc4cd674-j5vfc_f8eb94ee-887b-48f2-808c-2b634928d62e/barbican-api/0.log" Jan 28 19:34:05 crc kubenswrapper[4721]: I0128 19:34:05.261444 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-5f8b48b786-fcdpx_b950ce3b-33ce-40a9-9b76-45470b0917ec/barbican-keystone-listener/0.log" Jan 28 19:34:05 crc kubenswrapper[4721]: I0128 19:34:05.504065 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-7855694cbf-6fbkc_7ae24f09-1a88-4cd4-8959-76b14602141d/barbican-worker-log/0.log" Jan 28 19:34:05 crc kubenswrapper[4721]: I0128 19:34:05.540806 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-7855694cbf-6fbkc_7ae24f09-1a88-4cd4-8959-76b14602141d/barbican-worker/0.log" Jan 28 19:34:05 crc kubenswrapper[4721]: I0128 19:34:05.564332 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-5f8b48b786-fcdpx_b950ce3b-33ce-40a9-9b76-45470b0917ec/barbican-keystone-listener-log/0.log" Jan 28 19:34:05 crc kubenswrapper[4721]: I0128 19:34:05.769722 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-sw887_aaf9f122-a7d0-4f7f-b5d1-ee0333954fa1/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log" Jan 28 19:34:05 crc kubenswrapper[4721]: I0128 19:34:05.918061 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_92164365-9f87-4c26-b4c9-9d212e4aa1e1/ceilometer-central-agent/0.log" Jan 28 19:34:06 crc kubenswrapper[4721]: I0128 19:34:06.070076 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_92164365-9f87-4c26-b4c9-9d212e4aa1e1/ceilometer-notification-agent/0.log" Jan 28 19:34:06 crc kubenswrapper[4721]: I0128 19:34:06.110994 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_92164365-9f87-4c26-b4c9-9d212e4aa1e1/sg-core/0.log" Jan 28 19:34:06 crc kubenswrapper[4721]: I0128 19:34:06.125679 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_92164365-9f87-4c26-b4c9-9d212e4aa1e1/proxy-httpd/0.log" Jan 28 19:34:06 crc kubenswrapper[4721]: I0128 19:34:06.390899 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_a5090535-3282-4e69-988d-be91fd8908a2/cinder-api/0.log" Jan 28 19:34:06 crc kubenswrapper[4721]: I0128 19:34:06.401281 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_a5090535-3282-4e69-988d-be91fd8908a2/cinder-api-log/0.log" Jan 28 19:34:06 crc kubenswrapper[4721]: I0128 19:34:06.636476 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_a3d49781-0039-466d-b00e-1d7f28598b88/probe/0.log" Jan 
28 19:34:06 crc kubenswrapper[4721]: I0128 19:34:06.637579 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_a3d49781-0039-466d-b00e-1d7f28598b88/cinder-scheduler/0.log" Jan 28 19:34:06 crc kubenswrapper[4721]: I0128 19:34:06.756270 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cloudkitty-api-0_b3b4c3c6-7e93-4ea6-878c-7c7bce6768fd/cloudkitty-api/0.log" Jan 28 19:34:06 crc kubenswrapper[4721]: I0128 19:34:06.923513 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cloudkitty-api-0_b3b4c3c6-7e93-4ea6-878c-7c7bce6768fd/cloudkitty-api-log/0.log" Jan 28 19:34:06 crc kubenswrapper[4721]: I0128 19:34:06.992957 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cloudkitty-lokistack-compactor-0_22863ebc-7f06-4697-a494-1e854030c803/loki-compactor/0.log" Jan 28 19:34:07 crc kubenswrapper[4721]: I0128 19:34:07.441127 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cloudkitty-lokistack-distributor-66dfd9bb-gzhlc_600f989b-3ac6-4fe8-9848-6b80319e8c66/loki-distributor/0.log" Jan 28 19:34:07 crc kubenswrapper[4721]: I0128 19:34:07.535928 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cloudkitty-lokistack-gateway-7db4f4db8c-b6984_dffa61ba-c98d-446a-a4d0-34e1e15a093b/gateway/0.log" Jan 28 19:34:07 crc kubenswrapper[4721]: I0128 19:34:07.736239 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cloudkitty-lokistack-gateway-7db4f4db8c-t9249_ded95a77-cbf2-4db7-b6b4-56fdf518717c/gateway/0.log" Jan 28 19:34:07 crc kubenswrapper[4721]: I0128 19:34:07.910718 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cloudkitty-lokistack-index-gateway-0_e06ee4ac-7688-41ae-b0f0-13e7cfc042e7/loki-index-gateway/0.log" Jan 28 19:34:08 crc kubenswrapper[4721]: I0128 19:34:08.172334 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cloudkitty-lokistack-ingester-0_742e65f6-66eb-4334-9328-b77d47d420d0/loki-ingester/0.log" Jan 28 19:34:08 crc kubenswrapper[4721]: I0128 19:34:08.361475 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cloudkitty-lokistack-query-frontend-5cd44666df-cd79j_6be2127c-76cf-41fb-99d2-28a4e10a2b03/loki-query-frontend/0.log" Jan 28 19:34:08 crc kubenswrapper[4721]: I0128 19:34:08.448657 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cloudkitty-lokistack-querier-795fd8f8cc-4gfwq_cd76eab6-6d1b-4d6b-9c42-3e667e081ce6/loki-querier/0.log" Jan 28 19:34:08 crc kubenswrapper[4721]: I0128 19:34:08.916585 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-network-edpm-deployment-openstack-edpm-ipam-jqhr7_b9946ce2-5895-4b1a-ad88-c80a26d23265/configure-network-edpm-deployment-openstack-edpm-ipam/0.log" Jan 28 19:34:09 crc kubenswrapper[4721]: I0128 19:34:09.129781 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-os-edpm-deployment-openstack-edpm-ipam-jhb4n_4d206415-b580-4e09-a6f5-715ea9c2ff06/configure-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 28 19:34:09 crc kubenswrapper[4721]: I0128 19:34:09.386437 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-85f64749dc-zg2ch_a2de6f20-e053-456e-860d-c85c1ae57874/init/0.log" Jan 28 19:34:09 crc kubenswrapper[4721]: I0128 19:34:09.606321 4721 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_dnsmasq-dns-85f64749dc-zg2ch_a2de6f20-e053-456e-860d-c85c1ae57874/init/0.log" Jan 28 19:34:09 crc kubenswrapper[4721]: I0128 19:34:09.732509 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-85f64749dc-zg2ch_a2de6f20-e053-456e-860d-c85c1ae57874/dnsmasq-dns/0.log" Jan 28 19:34:09 crc kubenswrapper[4721]: I0128 19:34:09.882423 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-89xnq_df3fe0a6-94e7-4233-9fb8-cecad5bc5266/download-cache-edpm-deployment-openstack-edpm-ipam/0.log" Jan 28 19:34:10 crc kubenswrapper[4721]: I0128 19:34:10.037916 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_08521a5c-e90f-4cf1-a64d-f0bc0bdf7b3d/glance-httpd/0.log" Jan 28 19:34:10 crc kubenswrapper[4721]: I0128 19:34:10.116597 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_08521a5c-e90f-4cf1-a64d-f0bc0bdf7b3d/glance-log/0.log" Jan 28 19:34:10 crc kubenswrapper[4721]: I0128 19:34:10.333044 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_dd8b9a50-f75d-4b67-8b19-4f5d1e702cd9/glance-httpd/0.log" Jan 28 19:34:10 crc kubenswrapper[4721]: I0128 19:34:10.429572 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_dd8b9a50-f75d-4b67-8b19-4f5d1e702cd9/glance-log/0.log" Jan 28 19:34:10 crc kubenswrapper[4721]: I0128 19:34:10.503616 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-certs-edpm-deployment-openstack-edpm-ipam-px4d2_e6d48255-8474-4c70-afc7-ddda7df2ff65/install-certs-edpm-deployment-openstack-edpm-ipam/0.log" Jan 28 19:34:10 crc kubenswrapper[4721]: I0128 19:34:10.697524 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-os-edpm-deployment-openstack-edpm-ipam-pqbq8_240f3ed6-78d3-4839-9d63-71e54d447a8a/install-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 28 19:34:11 crc kubenswrapper[4721]: I0128 19:34:11.025145 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29493781-lgwjg_16b77be6-6887-4534-a5e9-fc53746e8bde/keystone-cron/0.log" Jan 28 19:34:11 crc kubenswrapper[4721]: I0128 19:34:11.329537 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-7fccf8d9d-jqxpt_b596f4de-be4e-4c2a-8524-fca9afc03775/keystone-api/0.log" Jan 28 19:34:11 crc kubenswrapper[4721]: I0128 19:34:11.340234 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_7cb3ca8e-a112-4fa7-a165-f987728ac08f/kube-state-metrics/0.log" Jan 28 19:34:11 crc kubenswrapper[4721]: I0128 19:34:11.631861 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_libvirt-edpm-deployment-openstack-edpm-ipam-s49zh_349859e1-1716-4304-9352-b9caa4c046be/libvirt-edpm-deployment-openstack-edpm-ipam/0.log" Jan 28 19:34:12 crc kubenswrapper[4721]: I0128 19:34:12.133809 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-787c88cc7-8262p_778b4bd0-5ac3-4a89-b5c8-07f3f52e5804/neutron-httpd/0.log" Jan 28 19:34:12 crc kubenswrapper[4721]: I0128 19:34:12.309374 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-787c88cc7-8262p_778b4bd0-5ac3-4a89-b5c8-07f3f52e5804/neutron-api/0.log" Jan 28 19:34:12 crc kubenswrapper[4721]: I0128 19:34:12.467617 4721 log.go:25] "Finished parsing log 
file" path="/var/log/pods/openstack_neutron-metadata-edpm-deployment-openstack-edpm-ipam-xvlk4_7004522f-8584-4fca-851b-1d9f9195cb0d/neutron-metadata-edpm-deployment-openstack-edpm-ipam/0.log" Jan 28 19:34:12 crc kubenswrapper[4721]: I0128 19:34:12.529952 4721 scope.go:117] "RemoveContainer" containerID="cbcc37a94abf33c6e4e5c18dd7dea7199e37b457d8f91773d59e3a703a8cd690" Jan 28 19:34:12 crc kubenswrapper[4721]: E0128 19:34:12.530370 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-76rx2_openshift-machine-config-operator(6e3427a4-9a03-4a08-bf7f-7a5e96290ad6)\"" pod="openshift-machine-config-operator/machine-config-daemon-76rx2" podUID="6e3427a4-9a03-4a08-bf7f-7a5e96290ad6" Jan 28 19:34:13 crc kubenswrapper[4721]: I0128 19:34:13.167782 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_4898ad56-ee48-4c94-846a-cb0c2af32da7/nova-api-log/0.log" Jan 28 19:34:13 crc kubenswrapper[4721]: I0128 19:34:13.388924 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_977400c1-f351-4271-b494-25c1bd6dd31f/nova-cell0-conductor-conductor/0.log" Jan 28 19:34:13 crc kubenswrapper[4721]: I0128 19:34:13.424768 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_4898ad56-ee48-4c94-846a-cb0c2af32da7/nova-api-api/0.log" Jan 28 19:34:13 crc kubenswrapper[4721]: I0128 19:34:13.805230 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_d175789e-d718-4022-86ac-b8b1f9f1d40c/nova-cell1-conductor-conductor/0.log" Jan 28 19:34:13 crc kubenswrapper[4721]: I0128 19:34:13.855578 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_623ce0b7-2228-4d75-a8c3-48a837fccf46/nova-cell1-novncproxy-novncproxy/0.log" Jan 28 19:34:14 crc kubenswrapper[4721]: I0128 19:34:14.115558 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-edpm-deployment-openstack-edpm-ipam-6fthv_8dcae945-3742-46b5-b6ac-c8ff95e2946e/nova-edpm-deployment-openstack-edpm-ipam/0.log" Jan 28 19:34:14 crc kubenswrapper[4721]: I0128 19:34:14.426836 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_f5877169-6d6b-4a83-a58d-b885ede23ffb/nova-metadata-log/0.log" Jan 28 19:34:14 crc kubenswrapper[4721]: I0128 19:34:14.888677 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_ac328e3e-730d-4617-bf12-8ad6a4c5e9bf/nova-scheduler-scheduler/0.log" Jan 28 19:34:15 crc kubenswrapper[4721]: I0128 19:34:15.125598 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_00b26873-8c7a-4ea7-b334-873b01cc5d84/mysql-bootstrap/0.log" Jan 28 19:34:15 crc kubenswrapper[4721]: I0128 19:34:15.312456 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_00b26873-8c7a-4ea7-b334-873b01cc5d84/mysql-bootstrap/0.log" Jan 28 19:34:15 crc kubenswrapper[4721]: I0128 19:34:15.416442 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_00b26873-8c7a-4ea7-b334-873b01cc5d84/galera/0.log" Jan 28 19:34:15 crc kubenswrapper[4721]: I0128 19:34:15.794609 4721 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_openstack-galera-0_0e740af0-cd0c-4f3e-8be1-facce1656583/mysql-bootstrap/0.log" Jan 28 19:34:16 crc kubenswrapper[4721]: I0128 19:34:16.036013 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_0e740af0-cd0c-4f3e-8be1-facce1656583/galera/0.log" Jan 28 19:34:16 crc kubenswrapper[4721]: I0128 19:34:16.082758 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_f5877169-6d6b-4a83-a58d-b885ede23ffb/nova-metadata-metadata/0.log" Jan 28 19:34:16 crc kubenswrapper[4721]: I0128 19:34:16.086637 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_0e740af0-cd0c-4f3e-8be1-facce1656583/mysql-bootstrap/0.log" Jan 28 19:34:16 crc kubenswrapper[4721]: I0128 19:34:16.369968 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_85f51b69-4069-4da4-895c-0f92ad51506c/openstackclient/0.log" Jan 28 19:34:16 crc kubenswrapper[4721]: I0128 19:34:16.591544 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-dmttf_bacb5ba4-39a7-4774-818d-67453153a34f/openstack-network-exporter/0.log" Jan 28 19:34:16 crc kubenswrapper[4721]: I0128 19:34:16.755423 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-djsj9_88eb1b46-3d78-4f1f-b822-aa8562237980/ovsdb-server-init/0.log" Jan 28 19:34:17 crc kubenswrapper[4721]: I0128 19:34:17.041135 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-djsj9_88eb1b46-3d78-4f1f-b822-aa8562237980/ovs-vswitchd/0.log" Jan 28 19:34:17 crc kubenswrapper[4721]: I0128 19:34:17.041878 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-djsj9_88eb1b46-3d78-4f1f-b822-aa8562237980/ovsdb-server-init/0.log" Jan 28 19:34:17 crc kubenswrapper[4721]: I0128 19:34:17.142008 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-djsj9_88eb1b46-3d78-4f1f-b822-aa8562237980/ovsdb-server/0.log" Jan 28 19:34:17 crc kubenswrapper[4721]: I0128 19:34:17.352137 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-sbclw_c391bae1-d3a9-4ccd-a868-d7263d9b0a28/ovn-controller/0.log" Jan 28 19:34:17 crc kubenswrapper[4721]: I0128 19:34:17.662835 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-edpm-deployment-openstack-edpm-ipam-lrdl9_445fc577-89a5-4f74-b7a4-65979c88af6b/ovn-edpm-deployment-openstack-edpm-ipam/0.log" Jan 28 19:34:17 crc kubenswrapper[4721]: I0128 19:34:17.801560 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_5296300e-265b-4671-a299-e023295c6981/openstack-network-exporter/0.log" Jan 28 19:34:17 crc kubenswrapper[4721]: I0128 19:34:17.904908 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_5296300e-265b-4671-a299-e023295c6981/ovn-northd/0.log" Jan 28 19:34:18 crc kubenswrapper[4721]: I0128 19:34:18.099582 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_f4e58913-334f-484a-8e7d-e1ac86753dbe/openstack-network-exporter/0.log" Jan 28 19:34:18 crc kubenswrapper[4721]: I0128 19:34:18.243916 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_f4e58913-334f-484a-8e7d-e1ac86753dbe/ovsdbserver-nb/0.log" Jan 28 19:34:18 crc kubenswrapper[4721]: I0128 19:34:18.419570 4721 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_ovsdbserver-sb-0_284cf569-7d31-465c-9189-05f80f168989/openstack-network-exporter/0.log" Jan 28 19:34:19 crc kubenswrapper[4721]: I0128 19:34:19.089862 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_284cf569-7d31-465c-9189-05f80f168989/ovsdbserver-sb/0.log" Jan 28 19:34:19 crc kubenswrapper[4721]: I0128 19:34:19.231884 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-7b5b4f6d96-q5gf8_7bc6f4fc-8f67-4a04-83f7-551efe61e4fe/placement-api/0.log" Jan 28 19:34:19 crc kubenswrapper[4721]: I0128 19:34:19.703953 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_8ac81a5a-78b3-43c6-964f-300e126ba4ca/init-config-reloader/0.log" Jan 28 19:34:19 crc kubenswrapper[4721]: I0128 19:34:19.721597 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-7b5b4f6d96-q5gf8_7bc6f4fc-8f67-4a04-83f7-551efe61e4fe/placement-log/0.log" Jan 28 19:34:19 crc kubenswrapper[4721]: I0128 19:34:19.981042 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_8ac81a5a-78b3-43c6-964f-300e126ba4ca/prometheus/0.log" Jan 28 19:34:20 crc kubenswrapper[4721]: I0128 19:34:20.008957 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_8ac81a5a-78b3-43c6-964f-300e126ba4ca/init-config-reloader/0.log" Jan 28 19:34:20 crc kubenswrapper[4721]: I0128 19:34:20.032131 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_8ac81a5a-78b3-43c6-964f-300e126ba4ca/config-reloader/0.log" Jan 28 19:34:20 crc kubenswrapper[4721]: I0128 19:34:20.123861 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cloudkitty-proc-0_52682601-9d4b-4b45-a1e0-7143e9a31e7a/cloudkitty-proc/0.log" Jan 28 19:34:20 crc kubenswrapper[4721]: I0128 19:34:20.272464 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_8ac81a5a-78b3-43c6-964f-300e126ba4ca/thanos-sidecar/0.log" Jan 28 19:34:20 crc kubenswrapper[4721]: I0128 19:34:20.277683 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_a493b27e-e634-4b09-ae05-2a69c5ad0d68/setup-container/0.log" Jan 28 19:34:20 crc kubenswrapper[4721]: I0128 19:34:20.562040 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_a493b27e-e634-4b09-ae05-2a69c5ad0d68/rabbitmq/0.log" Jan 28 19:34:20 crc kubenswrapper[4721]: I0128 19:34:20.623139 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_a493b27e-e634-4b09-ae05-2a69c5ad0d68/setup-container/0.log" Jan 28 19:34:20 crc kubenswrapper[4721]: I0128 19:34:20.824766 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_88f1129c-54fc-423a-993d-560aecdd75eb/setup-container/0.log" Jan 28 19:34:20 crc kubenswrapper[4721]: I0128 19:34:20.977098 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_88f1129c-54fc-423a-993d-560aecdd75eb/setup-container/0.log" Jan 28 19:34:20 crc kubenswrapper[4721]: I0128 19:34:20.999906 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_88f1129c-54fc-423a-993d-560aecdd75eb/rabbitmq/0.log" Jan 28 19:34:21 crc kubenswrapper[4721]: I0128 19:34:21.057917 4721 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_reboot-os-edpm-deployment-openstack-edpm-ipam-5tdrc_5dc69ebb-35f6-4a5f-ac8a-58747df158a1/reboot-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 28 19:34:21 crc kubenswrapper[4721]: I0128 19:34:21.295080 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_repo-setup-edpm-deployment-openstack-edpm-ipam-lmpdp_6962dcfe-fe79-48fd-af49-7b4c644856d9/repo-setup-edpm-deployment-openstack-edpm-ipam/0.log" Jan 28 19:34:21 crc kubenswrapper[4721]: I0128 19:34:21.349071 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_redhat-edpm-deployment-openstack-edpm-ipam-fbxbh_2a9cb018-b8e2-4f14-b146-2ad0b8c6f997/redhat-edpm-deployment-openstack-edpm-ipam/0.log" Jan 28 19:34:21 crc kubenswrapper[4721]: I0128 19:34:21.501903 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_run-os-edpm-deployment-openstack-edpm-ipam-hsczp_547e59b7-a7b0-4db5-b05c-cb2ed4d0ad67/run-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 28 19:34:21 crc kubenswrapper[4721]: I0128 19:34:21.635870 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ssh-known-hosts-edpm-deployment-4647t_7481db6a-22d8-4e79-a0fc-8dc696d5d209/ssh-known-hosts-edpm-deployment/0.log" Jan 28 19:34:22 crc kubenswrapper[4721]: I0128 19:34:22.136888 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-6895f7fb8c-vmmw7_078d9149-2986-4e6e-a8f4-c7535613a91d/proxy-httpd/0.log" Jan 28 19:34:22 crc kubenswrapper[4721]: I0128 19:34:22.370594 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-6895f7fb8c-vmmw7_078d9149-2986-4e6e-a8f4-c7535613a91d/proxy-server/0.log" Jan 28 19:34:22 crc kubenswrapper[4721]: I0128 19:34:22.488983 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-ring-rebalance-7bhzw_d06bcf83-999f-419a-9f4f-4e6544576897/swift-ring-rebalance/0.log" Jan 28 19:34:22 crc kubenswrapper[4721]: I0128 19:34:22.596390 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_aa657a81-842e-4292-a71e-e208b4c0bd69/account-auditor/0.log" Jan 28 19:34:22 crc kubenswrapper[4721]: I0128 19:34:22.678919 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_aa657a81-842e-4292-a71e-e208b4c0bd69/account-reaper/0.log" Jan 28 19:34:22 crc kubenswrapper[4721]: I0128 19:34:22.786340 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_aa657a81-842e-4292-a71e-e208b4c0bd69/account-replicator/0.log" Jan 28 19:34:22 crc kubenswrapper[4721]: I0128 19:34:22.840424 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_aa657a81-842e-4292-a71e-e208b4c0bd69/account-server/0.log" Jan 28 19:34:22 crc kubenswrapper[4721]: I0128 19:34:22.896302 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_aa657a81-842e-4292-a71e-e208b4c0bd69/container-auditor/0.log" Jan 28 19:34:22 crc kubenswrapper[4721]: I0128 19:34:22.990651 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_aa657a81-842e-4292-a71e-e208b4c0bd69/container-replicator/0.log" Jan 28 19:34:23 crc kubenswrapper[4721]: I0128 19:34:23.011968 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_aa657a81-842e-4292-a71e-e208b4c0bd69/container-server/0.log" Jan 28 19:34:23 crc kubenswrapper[4721]: I0128 19:34:23.142897 4721 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_swift-storage-0_aa657a81-842e-4292-a71e-e208b4c0bd69/container-updater/0.log" Jan 28 19:34:23 crc kubenswrapper[4721]: I0128 19:34:23.172411 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_aa657a81-842e-4292-a71e-e208b4c0bd69/object-auditor/0.log" Jan 28 19:34:23 crc kubenswrapper[4721]: I0128 19:34:23.274721 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_aa657a81-842e-4292-a71e-e208b4c0bd69/object-expirer/0.log" Jan 28 19:34:23 crc kubenswrapper[4721]: I0128 19:34:23.357600 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_aa657a81-842e-4292-a71e-e208b4c0bd69/object-replicator/0.log" Jan 28 19:34:23 crc kubenswrapper[4721]: I0128 19:34:23.496460 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_aa657a81-842e-4292-a71e-e208b4c0bd69/object-server/0.log" Jan 28 19:34:23 crc kubenswrapper[4721]: I0128 19:34:23.530015 4721 scope.go:117] "RemoveContainer" containerID="cbcc37a94abf33c6e4e5c18dd7dea7199e37b457d8f91773d59e3a703a8cd690" Jan 28 19:34:23 crc kubenswrapper[4721]: E0128 19:34:23.530485 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-76rx2_openshift-machine-config-operator(6e3427a4-9a03-4a08-bf7f-7a5e96290ad6)\"" pod="openshift-machine-config-operator/machine-config-daemon-76rx2" podUID="6e3427a4-9a03-4a08-bf7f-7a5e96290ad6" Jan 28 19:34:23 crc kubenswrapper[4721]: I0128 19:34:23.578528 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_aa657a81-842e-4292-a71e-e208b4c0bd69/object-updater/0.log" Jan 28 19:34:23 crc kubenswrapper[4721]: I0128 19:34:23.725324 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_aa657a81-842e-4292-a71e-e208b4c0bd69/swift-recon-cron/0.log" Jan 28 19:34:23 crc kubenswrapper[4721]: I0128 19:34:23.743930 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_aa657a81-842e-4292-a71e-e208b4c0bd69/rsync/0.log" Jan 28 19:34:23 crc kubenswrapper[4721]: I0128 19:34:23.913455 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_telemetry-edpm-deployment-openstack-edpm-ipam-28zzx_1e117cf9-a997-4596-9334-0edb394b7fed/telemetry-edpm-deployment-openstack-edpm-ipam/0.log" Jan 28 19:34:23 crc kubenswrapper[4721]: I0128 19:34:23.977118 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tempest-tests-tempest_5e586424-d1f9-4f72-9dc8-f046e2f235f5/tempest-tests-tempest-tests-runner/0.log" Jan 28 19:34:24 crc kubenswrapper[4721]: I0128 19:34:24.131328 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_test-operator-logs-pod-tempest-tempest-tests-tempest_512eb22d-5ddf-419c-aa72-60dea50ecc6d/test-operator-logs-container/0.log" Jan 28 19:34:24 crc kubenswrapper[4721]: I0128 19:34:24.393094 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_validate-network-edpm-deployment-openstack-edpm-ipam-pkwbl_e3cd0640-8d09-4743-8e9e-cc3914803f8c/validate-network-edpm-deployment-openstack-edpm-ipam/0.log" Jan 28 19:34:25 crc kubenswrapper[4721]: I0128 19:34:25.994177 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_7be00819-ddfd-47d6-a7fc-430607636883/memcached/0.log" Jan 28 19:34:38 crc 
kubenswrapper[4721]: I0128 19:34:38.529216 4721 scope.go:117] "RemoveContainer" containerID="cbcc37a94abf33c6e4e5c18dd7dea7199e37b457d8f91773d59e3a703a8cd690" Jan 28 19:34:38 crc kubenswrapper[4721]: E0128 19:34:38.530354 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-76rx2_openshift-machine-config-operator(6e3427a4-9a03-4a08-bf7f-7a5e96290ad6)\"" pod="openshift-machine-config-operator/machine-config-daemon-76rx2" podUID="6e3427a4-9a03-4a08-bf7f-7a5e96290ad6" Jan 28 19:34:49 crc kubenswrapper[4721]: I0128 19:34:49.529836 4721 scope.go:117] "RemoveContainer" containerID="cbcc37a94abf33c6e4e5c18dd7dea7199e37b457d8f91773d59e3a703a8cd690" Jan 28 19:34:49 crc kubenswrapper[4721]: E0128 19:34:49.530810 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-76rx2_openshift-machine-config-operator(6e3427a4-9a03-4a08-bf7f-7a5e96290ad6)\"" pod="openshift-machine-config-operator/machine-config-daemon-76rx2" podUID="6e3427a4-9a03-4a08-bf7f-7a5e96290ad6" Jan 28 19:34:54 crc kubenswrapper[4721]: I0128 19:34:54.318684 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_5ff8d4f1d1a2b9f947c76cb0859a1bcccc65b2094fb24a26aaef048927k7bl2_ab608a64-70fd-498e-9aa6-d2dd87a017b9/util/0.log" Jan 28 19:34:54 crc kubenswrapper[4721]: I0128 19:34:54.526256 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_5ff8d4f1d1a2b9f947c76cb0859a1bcccc65b2094fb24a26aaef048927k7bl2_ab608a64-70fd-498e-9aa6-d2dd87a017b9/util/0.log" Jan 28 19:34:54 crc kubenswrapper[4721]: I0128 19:34:54.566354 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_5ff8d4f1d1a2b9f947c76cb0859a1bcccc65b2094fb24a26aaef048927k7bl2_ab608a64-70fd-498e-9aa6-d2dd87a017b9/pull/0.log" Jan 28 19:34:54 crc kubenswrapper[4721]: I0128 19:34:54.627847 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_5ff8d4f1d1a2b9f947c76cb0859a1bcccc65b2094fb24a26aaef048927k7bl2_ab608a64-70fd-498e-9aa6-d2dd87a017b9/pull/0.log" Jan 28 19:34:54 crc kubenswrapper[4721]: I0128 19:34:54.838542 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_5ff8d4f1d1a2b9f947c76cb0859a1bcccc65b2094fb24a26aaef048927k7bl2_ab608a64-70fd-498e-9aa6-d2dd87a017b9/extract/0.log" Jan 28 19:34:54 crc kubenswrapper[4721]: I0128 19:34:54.850425 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_5ff8d4f1d1a2b9f947c76cb0859a1bcccc65b2094fb24a26aaef048927k7bl2_ab608a64-70fd-498e-9aa6-d2dd87a017b9/util/0.log" Jan 28 19:34:54 crc kubenswrapper[4721]: I0128 19:34:54.857806 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_5ff8d4f1d1a2b9f947c76cb0859a1bcccc65b2094fb24a26aaef048927k7bl2_ab608a64-70fd-498e-9aa6-d2dd87a017b9/pull/0.log" Jan 28 19:34:55 crc kubenswrapper[4721]: I0128 19:34:55.122719 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-6bc7f4f4cf-pv6ph_99e08199-2cc8-4f41-8310-f63c0a021a98/manager/0.log" Jan 28 19:34:55 crc kubenswrapper[4721]: I0128 19:34:55.186715 4721 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-f6487bd57-c9pmg_d258bf47-a441-49ad-a3ad-d5c04c615c9c/manager/0.log" Jan 28 19:34:55 crc kubenswrapper[4721]: I0128 19:34:55.449055 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-66dfbd6f5d-dbf9z_5f5dbe82-6a18-47da-98e6-00d10a32d1eb/manager/0.log" Jan 28 19:34:55 crc kubenswrapper[4721]: I0128 19:34:55.548864 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-6db5dbd896-7brt7_6e4d4bd0-d6ac-4268-bc08-86d74adfc33b/manager/0.log" Jan 28 19:34:55 crc kubenswrapper[4721]: I0128 19:34:55.733119 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-587c6bfdcf-r46mm_6ec8e4f3-a711-43af-81da-91be5695e927/manager/0.log" Jan 28 19:34:55 crc kubenswrapper[4721]: I0128 19:34:55.795847 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-5fb775575f-6m2fr_18c18118-f643-4590-9e07-87bffdb4195b/manager/0.log" Jan 28 19:34:56 crc kubenswrapper[4721]: I0128 19:34:56.089514 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-958664b5-wrzbl_7650ad3f-87f7-4c9a-b795-678ebc7edc7d/manager/0.log" Jan 28 19:34:56 crc kubenswrapper[4721]: I0128 19:34:56.215878 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-79955696d6-fd75h_66d34dd5-6c67-40ec-8fc8-16320a5aef1d/manager/0.log" Jan 28 19:34:56 crc kubenswrapper[4721]: I0128 19:34:56.291782 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-6978b79747-vc75z_e8f6f9a2-7886-4896-baac-268e88869bb2/manager/0.log" Jan 28 19:34:56 crc kubenswrapper[4721]: I0128 19:34:56.364597 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-765668569f-mjxvn_835d5df3-4ea1-40ce-9bad-325396bfd41f/manager/0.log" Jan 28 19:34:56 crc kubenswrapper[4721]: I0128 19:34:56.667146 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-67bf948998-pt757_f901f512-8af4-4e6c-abc8-0fd7d0f26ef3/manager/0.log" Jan 28 19:34:56 crc kubenswrapper[4721]: I0128 19:34:56.704577 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-694c5bfc85-hv7r4_b102209d-5846-40f2-bb20-7022d18b9a28/manager/0.log" Jan 28 19:34:56 crc kubenswrapper[4721]: I0128 19:34:56.967591 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-ddcbfd695-ghpgf_8e4e395a-5b06-45ea-a2af-8a7a1180fc80/manager/0.log" Jan 28 19:34:57 crc kubenswrapper[4721]: I0128 19:34:57.029577 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-5c765b4558-r996h_073e6433-4ca4-499a-8c82-0fda8211ecd3/manager/0.log" Jan 28 19:34:57 crc kubenswrapper[4721]: I0128 19:34:57.205068 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-59c4b45c4dxmjl8_4bc4914a-125f-48f5-a7df-dbc170eaddd9/manager/0.log" Jan 28 19:34:57 crc kubenswrapper[4721]: I0128 19:34:57.436650 4721 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_openstack-operator-controller-init-858cbdb9cd-v7bpd_d2642d34-9e91-460a-a889-42776f2201cc/operator/0.log" Jan 28 19:34:57 crc kubenswrapper[4721]: I0128 19:34:57.916403 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-ckq4p_7e87d639-6eae-44a0-9005-9e5fb2b60b0c/registry-server/0.log" Jan 28 19:34:58 crc kubenswrapper[4721]: I0128 19:34:58.169156 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-788c46999f-js7f2_2cea4626-d7bc-4166-9c63-8aa4e6358bd3/manager/0.log" Jan 28 19:34:58 crc kubenswrapper[4721]: I0128 19:34:58.436031 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-5b964cf4cd-gdb9m_9c28be52-26d0-4dd5-a3ca-ba3d9888dae8/manager/0.log" Jan 28 19:34:58 crc kubenswrapper[4721]: I0128 19:34:58.590562 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-vprhw_a39fc394-2b18-4c7c-a780-0147ddb3a77a/operator/0.log" Jan 28 19:34:58 crc kubenswrapper[4721]: I0128 19:34:58.762223 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-798d8549d8-ztjwv_23d3546b-cba0-4c15-a8b0-de9cced9fdf8/manager/0.log" Jan 28 19:34:58 crc kubenswrapper[4721]: I0128 19:34:58.879689 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-68fc8c869-9sqtl_021232bf-9e53-4907-80a0-702807db3f23/manager/0.log" Jan 28 19:34:59 crc kubenswrapper[4721]: I0128 19:34:59.175984 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-56f8bfcd9f-f56rw_066c13ce-1239-494e-bbc6-d175c62c501c/manager/0.log" Jan 28 19:34:59 crc kubenswrapper[4721]: I0128 19:34:59.412982 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-877d65859-2rn2n_83f4e7da-0144-44a8-886e-7f8c60f56014/manager/0.log" Jan 28 19:34:59 crc kubenswrapper[4721]: I0128 19:34:59.467082 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-767b8bc766-tkgcv_b9bc0b6e-0f12-46b4-86c3-c9f56dcfa5d6/manager/0.log" Jan 28 19:35:01 crc kubenswrapper[4721]: I0128 19:35:01.530434 4721 scope.go:117] "RemoveContainer" containerID="cbcc37a94abf33c6e4e5c18dd7dea7199e37b457d8f91773d59e3a703a8cd690" Jan 28 19:35:01 crc kubenswrapper[4721]: E0128 19:35:01.531138 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-76rx2_openshift-machine-config-operator(6e3427a4-9a03-4a08-bf7f-7a5e96290ad6)\"" pod="openshift-machine-config-operator/machine-config-daemon-76rx2" podUID="6e3427a4-9a03-4a08-bf7f-7a5e96290ad6" Jan 28 19:35:15 crc kubenswrapper[4721]: I0128 19:35:15.537928 4721 scope.go:117] "RemoveContainer" containerID="cbcc37a94abf33c6e4e5c18dd7dea7199e37b457d8f91773d59e3a703a8cd690" Jan 28 19:35:15 crc kubenswrapper[4721]: E0128 19:35:15.540959 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-76rx2_openshift-machine-config-operator(6e3427a4-9a03-4a08-bf7f-7a5e96290ad6)\"" pod="openshift-machine-config-operator/machine-config-daemon-76rx2" podUID="6e3427a4-9a03-4a08-bf7f-7a5e96290ad6" Jan 28 19:35:22 crc kubenswrapper[4721]: I0128 19:35:22.395084 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-jrjx5_8355e616-674b-4bc2-a727-76609df63630/control-plane-machine-set-operator/0.log" Jan 28 19:35:22 crc kubenswrapper[4721]: I0128 19:35:22.730628 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-g474w_49007c72-1df2-49db-9bbb-c90ee8207149/machine-api-operator/0.log" Jan 28 19:35:22 crc kubenswrapper[4721]: I0128 19:35:22.734053 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-g474w_49007c72-1df2-49db-9bbb-c90ee8207149/kube-rbac-proxy/0.log" Jan 28 19:35:27 crc kubenswrapper[4721]: I0128 19:35:27.528962 4721 scope.go:117] "RemoveContainer" containerID="cbcc37a94abf33c6e4e5c18dd7dea7199e37b457d8f91773d59e3a703a8cd690" Jan 28 19:35:27 crc kubenswrapper[4721]: E0128 19:35:27.529793 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-76rx2_openshift-machine-config-operator(6e3427a4-9a03-4a08-bf7f-7a5e96290ad6)\"" pod="openshift-machine-config-operator/machine-config-daemon-76rx2" podUID="6e3427a4-9a03-4a08-bf7f-7a5e96290ad6" Jan 28 19:35:37 crc kubenswrapper[4721]: I0128 19:35:37.802008 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-858654f9db-xxzt6_c68c41d8-39c1-417b-a4ba-dafeb3762c32/cert-manager-controller/0.log" Jan 28 19:35:38 crc kubenswrapper[4721]: I0128 19:35:38.081733 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-687f57d79b-l5dj9_12d309c4-9049-41c8-be1f-8f0e422ab186/cert-manager-webhook/0.log" Jan 28 19:35:38 crc kubenswrapper[4721]: I0128 19:35:38.102732 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-cf98fcc89-f66kh_f637f152-a40b-45ff-989f-f82ad65b2066/cert-manager-cainjector/0.log" Jan 28 19:35:42 crc kubenswrapper[4721]: I0128 19:35:42.529496 4721 scope.go:117] "RemoveContainer" containerID="cbcc37a94abf33c6e4e5c18dd7dea7199e37b457d8f91773d59e3a703a8cd690" Jan 28 19:35:42 crc kubenswrapper[4721]: E0128 19:35:42.530421 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-76rx2_openshift-machine-config-operator(6e3427a4-9a03-4a08-bf7f-7a5e96290ad6)\"" pod="openshift-machine-config-operator/machine-config-daemon-76rx2" podUID="6e3427a4-9a03-4a08-bf7f-7a5e96290ad6" Jan 28 19:35:53 crc kubenswrapper[4721]: I0128 19:35:53.536411 4721 scope.go:117] "RemoveContainer" containerID="cbcc37a94abf33c6e4e5c18dd7dea7199e37b457d8f91773d59e3a703a8cd690" Jan 28 19:35:53 crc kubenswrapper[4721]: E0128 19:35:53.537270 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-76rx2_openshift-machine-config-operator(6e3427a4-9a03-4a08-bf7f-7a5e96290ad6)\"" pod="openshift-machine-config-operator/machine-config-daemon-76rx2" podUID="6e3427a4-9a03-4a08-bf7f-7a5e96290ad6" Jan 28 19:35:54 crc kubenswrapper[4721]: I0128 19:35:54.137067 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-7754f76f8b-qxhd9_c7b54106-b20d-4911-a9e2-90d5539bb4d7/nmstate-console-plugin/0.log" Jan 28 19:35:54 crc kubenswrapper[4721]: I0128 19:35:54.400554 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-4wqcf_cf95e16e-0533-4d53-a185-3c62adb9e573/nmstate-handler/0.log" Jan 28 19:35:54 crc kubenswrapper[4721]: I0128 19:35:54.462823 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-rwjnr_fda999b5-6a00-4137-817e-b7d5417a2d2e/kube-rbac-proxy/0.log" Jan 28 19:35:54 crc kubenswrapper[4721]: I0128 19:35:54.656808 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-rwjnr_fda999b5-6a00-4137-817e-b7d5417a2d2e/nmstate-metrics/0.log" Jan 28 19:35:54 crc kubenswrapper[4721]: I0128 19:35:54.693980 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-646758c888-26llr_2498df5a-d126-45bd-b53b-9beeedc256b7/nmstate-operator/0.log" Jan 28 19:35:54 crc kubenswrapper[4721]: I0128 19:35:54.938696 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-8474b5b9d8-9rp4b_a8abeaa3-e685-4caa-b32c-cc0a40dfdb8b/nmstate-webhook/0.log" Jan 28 19:36:04 crc kubenswrapper[4721]: I0128 19:36:04.528881 4721 scope.go:117] "RemoveContainer" containerID="cbcc37a94abf33c6e4e5c18dd7dea7199e37b457d8f91773d59e3a703a8cd690" Jan 28 19:36:04 crc kubenswrapper[4721]: E0128 19:36:04.529903 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-76rx2_openshift-machine-config-operator(6e3427a4-9a03-4a08-bf7f-7a5e96290ad6)\"" pod="openshift-machine-config-operator/machine-config-daemon-76rx2" podUID="6e3427a4-9a03-4a08-bf7f-7a5e96290ad6" Jan 28 19:36:13 crc kubenswrapper[4721]: I0128 19:36:13.420671 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-5bfcb79b6d-cd47c_8d99024b-2cf7-4372-98d3-2c282e9d7530/kube-rbac-proxy/0.log" Jan 28 19:36:13 crc kubenswrapper[4721]: I0128 19:36:13.762642 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-5bfcb79b6d-cd47c_8d99024b-2cf7-4372-98d3-2c282e9d7530/manager/0.log" Jan 28 19:36:18 crc kubenswrapper[4721]: I0128 19:36:18.529820 4721 scope.go:117] "RemoveContainer" containerID="cbcc37a94abf33c6e4e5c18dd7dea7199e37b457d8f91773d59e3a703a8cd690" Jan 28 19:36:18 crc kubenswrapper[4721]: E0128 19:36:18.530845 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-76rx2_openshift-machine-config-operator(6e3427a4-9a03-4a08-bf7f-7a5e96290ad6)\"" pod="openshift-machine-config-operator/machine-config-daemon-76rx2" podUID="6e3427a4-9a03-4a08-bf7f-7a5e96290ad6" Jan 28 19:36:23 crc 
kubenswrapper[4721]: I0128 19:36:23.594625 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-648g5"] Jan 28 19:36:23 crc kubenswrapper[4721]: E0128 19:36:23.596847 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a39045a0-f9cc-4182-8a65-6041b3fa4f25" containerName="container-00" Jan 28 19:36:23 crc kubenswrapper[4721]: I0128 19:36:23.597226 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="a39045a0-f9cc-4182-8a65-6041b3fa4f25" containerName="container-00" Jan 28 19:36:23 crc kubenswrapper[4721]: I0128 19:36:23.597507 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="a39045a0-f9cc-4182-8a65-6041b3fa4f25" containerName="container-00" Jan 28 19:36:23 crc kubenswrapper[4721]: I0128 19:36:23.599162 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-648g5" Jan 28 19:36:23 crc kubenswrapper[4721]: I0128 19:36:23.612390 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-648g5"] Jan 28 19:36:23 crc kubenswrapper[4721]: I0128 19:36:23.727190 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/81da0240-bdf0-4b06-becd-8a8af1b648d5-utilities\") pod \"community-operators-648g5\" (UID: \"81da0240-bdf0-4b06-becd-8a8af1b648d5\") " pod="openshift-marketplace/community-operators-648g5" Jan 28 19:36:23 crc kubenswrapper[4721]: I0128 19:36:23.727559 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mhx9c\" (UniqueName: \"kubernetes.io/projected/81da0240-bdf0-4b06-becd-8a8af1b648d5-kube-api-access-mhx9c\") pod \"community-operators-648g5\" (UID: \"81da0240-bdf0-4b06-becd-8a8af1b648d5\") " pod="openshift-marketplace/community-operators-648g5" Jan 28 19:36:23 crc kubenswrapper[4721]: I0128 19:36:23.727909 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/81da0240-bdf0-4b06-becd-8a8af1b648d5-catalog-content\") pod \"community-operators-648g5\" (UID: \"81da0240-bdf0-4b06-becd-8a8af1b648d5\") " pod="openshift-marketplace/community-operators-648g5" Jan 28 19:36:23 crc kubenswrapper[4721]: I0128 19:36:23.830862 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/81da0240-bdf0-4b06-becd-8a8af1b648d5-utilities\") pod \"community-operators-648g5\" (UID: \"81da0240-bdf0-4b06-becd-8a8af1b648d5\") " pod="openshift-marketplace/community-operators-648g5" Jan 28 19:36:23 crc kubenswrapper[4721]: I0128 19:36:23.830946 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mhx9c\" (UniqueName: \"kubernetes.io/projected/81da0240-bdf0-4b06-becd-8a8af1b648d5-kube-api-access-mhx9c\") pod \"community-operators-648g5\" (UID: \"81da0240-bdf0-4b06-becd-8a8af1b648d5\") " pod="openshift-marketplace/community-operators-648g5" Jan 28 19:36:23 crc kubenswrapper[4721]: I0128 19:36:23.831139 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/81da0240-bdf0-4b06-becd-8a8af1b648d5-catalog-content\") pod \"community-operators-648g5\" (UID: \"81da0240-bdf0-4b06-becd-8a8af1b648d5\") " pod="openshift-marketplace/community-operators-648g5" Jan 28 
19:36:23 crc kubenswrapper[4721]: I0128 19:36:23.831507 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/81da0240-bdf0-4b06-becd-8a8af1b648d5-utilities\") pod \"community-operators-648g5\" (UID: \"81da0240-bdf0-4b06-becd-8a8af1b648d5\") " pod="openshift-marketplace/community-operators-648g5" Jan 28 19:36:23 crc kubenswrapper[4721]: I0128 19:36:23.831621 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/81da0240-bdf0-4b06-becd-8a8af1b648d5-catalog-content\") pod \"community-operators-648g5\" (UID: \"81da0240-bdf0-4b06-becd-8a8af1b648d5\") " pod="openshift-marketplace/community-operators-648g5" Jan 28 19:36:23 crc kubenswrapper[4721]: I0128 19:36:23.870111 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mhx9c\" (UniqueName: \"kubernetes.io/projected/81da0240-bdf0-4b06-becd-8a8af1b648d5-kube-api-access-mhx9c\") pod \"community-operators-648g5\" (UID: \"81da0240-bdf0-4b06-becd-8a8af1b648d5\") " pod="openshift-marketplace/community-operators-648g5" Jan 28 19:36:23 crc kubenswrapper[4721]: I0128 19:36:23.920674 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-648g5" Jan 28 19:36:24 crc kubenswrapper[4721]: I0128 19:36:24.568694 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-648g5"] Jan 28 19:36:24 crc kubenswrapper[4721]: W0128 19:36:24.886687 4721 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod81da0240_bdf0_4b06_becd_8a8af1b648d5.slice/crio-24ff9f7d5b80628352b4c3103c9cd7494f6f4c7f1aab2531e3856480ac08531d WatchSource:0}: Error finding container 24ff9f7d5b80628352b4c3103c9cd7494f6f4c7f1aab2531e3856480ac08531d: Status 404 returned error can't find the container with id 24ff9f7d5b80628352b4c3103c9cd7494f6f4c7f1aab2531e3856480ac08531d Jan 28 19:36:25 crc kubenswrapper[4721]: I0128 19:36:25.810539 4721 generic.go:334] "Generic (PLEG): container finished" podID="81da0240-bdf0-4b06-becd-8a8af1b648d5" containerID="5fe2e5db997aec0f75fdbd1936de67d8f21ac5e8db2364a3b59366bc89b5e712" exitCode=0 Jan 28 19:36:25 crc kubenswrapper[4721]: I0128 19:36:25.810663 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-648g5" event={"ID":"81da0240-bdf0-4b06-becd-8a8af1b648d5","Type":"ContainerDied","Data":"5fe2e5db997aec0f75fdbd1936de67d8f21ac5e8db2364a3b59366bc89b5e712"} Jan 28 19:36:25 crc kubenswrapper[4721]: I0128 19:36:25.810856 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-648g5" event={"ID":"81da0240-bdf0-4b06-becd-8a8af1b648d5","Type":"ContainerStarted","Data":"24ff9f7d5b80628352b4c3103c9cd7494f6f4c7f1aab2531e3856480ac08531d"} Jan 28 19:36:25 crc kubenswrapper[4721]: I0128 19:36:25.819506 4721 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 28 19:36:27 crc kubenswrapper[4721]: I0128 19:36:27.833742 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-648g5" event={"ID":"81da0240-bdf0-4b06-becd-8a8af1b648d5","Type":"ContainerStarted","Data":"77b571de722c71ec4fc7f6251902a6b92a32ebac8820cd663959e28d13076680"} Jan 28 19:36:28 crc kubenswrapper[4721]: I0128 19:36:28.847572 4721 generic.go:334] "Generic (PLEG): 
container finished" podID="81da0240-bdf0-4b06-becd-8a8af1b648d5" containerID="77b571de722c71ec4fc7f6251902a6b92a32ebac8820cd663959e28d13076680" exitCode=0 Jan 28 19:36:28 crc kubenswrapper[4721]: I0128 19:36:28.847636 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-648g5" event={"ID":"81da0240-bdf0-4b06-becd-8a8af1b648d5","Type":"ContainerDied","Data":"77b571de722c71ec4fc7f6251902a6b92a32ebac8820cd663959e28d13076680"} Jan 28 19:36:29 crc kubenswrapper[4721]: I0128 19:36:29.538051 4721 scope.go:117] "RemoveContainer" containerID="cbcc37a94abf33c6e4e5c18dd7dea7199e37b457d8f91773d59e3a703a8cd690" Jan 28 19:36:29 crc kubenswrapper[4721]: E0128 19:36:29.538974 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-76rx2_openshift-machine-config-operator(6e3427a4-9a03-4a08-bf7f-7a5e96290ad6)\"" pod="openshift-machine-config-operator/machine-config-daemon-76rx2" podUID="6e3427a4-9a03-4a08-bf7f-7a5e96290ad6" Jan 28 19:36:29 crc kubenswrapper[4721]: I0128 19:36:29.862226 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-648g5" event={"ID":"81da0240-bdf0-4b06-becd-8a8af1b648d5","Type":"ContainerStarted","Data":"fba05c6b1d88924ebbd30e508458593eeb0d86d719e124e7309dac4ba1ac35b2"} Jan 28 19:36:32 crc kubenswrapper[4721]: I0128 19:36:32.248893 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-68bc856cb9-424xn_cd50289b-aa27-438d-89a2-405552dbadf7/prometheus-operator/0.log" Jan 28 19:36:32 crc kubenswrapper[4721]: I0128 19:36:32.531903 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-65d8d94d66-h2rfr_8b291a65-1dc7-4312-a429-60bb0a86800d/prometheus-operator-admission-webhook/0.log" Jan 28 19:36:32 crc kubenswrapper[4721]: I0128 19:36:32.634697 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-65d8d94d66-sbgj9_e3cb407f-4a19-4f81-b388-4db383b55701/prometheus-operator-admission-webhook/0.log" Jan 28 19:36:32 crc kubenswrapper[4721]: I0128 19:36:32.969057 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-5bf474d74f-fqs7q_ed6d2e9b-bba7-4ef6-9f36-f9b77dd19117/perses-operator/0.log" Jan 28 19:36:33 crc kubenswrapper[4721]: I0128 19:36:33.037962 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-59bdc8b94-bdm2v_ab955356-2884-4e1b-9dfc-966a662c4095/operator/0.log" Jan 28 19:36:33 crc kubenswrapper[4721]: I0128 19:36:33.921201 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-648g5" Jan 28 19:36:33 crc kubenswrapper[4721]: I0128 19:36:33.921271 4721 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-648g5" Jan 28 19:36:33 crc kubenswrapper[4721]: I0128 19:36:33.997632 4721 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-648g5" Jan 28 19:36:34 crc kubenswrapper[4721]: I0128 19:36:34.026244 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-648g5" 
podStartSLOduration=7.570558462 podStartE2EDuration="11.026214916s" podCreationTimestamp="2026-01-28 19:36:23 +0000 UTC" firstStartedPulling="2026-01-28 19:36:25.814468419 +0000 UTC m=+3751.539773979" lastFinishedPulling="2026-01-28 19:36:29.270124873 +0000 UTC m=+3754.995430433" observedRunningTime="2026-01-28 19:36:29.889970626 +0000 UTC m=+3755.615276186" watchObservedRunningTime="2026-01-28 19:36:34.026214916 +0000 UTC m=+3759.751520476" Jan 28 19:36:34 crc kubenswrapper[4721]: I0128 19:36:34.966456 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-648g5" Jan 28 19:36:35 crc kubenswrapper[4721]: I0128 19:36:35.041745 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-648g5"] Jan 28 19:36:36 crc kubenswrapper[4721]: I0128 19:36:36.932639 4721 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-648g5" podUID="81da0240-bdf0-4b06-becd-8a8af1b648d5" containerName="registry-server" containerID="cri-o://fba05c6b1d88924ebbd30e508458593eeb0d86d719e124e7309dac4ba1ac35b2" gracePeriod=2 Jan 28 19:36:37 crc kubenswrapper[4721]: I0128 19:36:37.958601 4721 generic.go:334] "Generic (PLEG): container finished" podID="81da0240-bdf0-4b06-becd-8a8af1b648d5" containerID="fba05c6b1d88924ebbd30e508458593eeb0d86d719e124e7309dac4ba1ac35b2" exitCode=0 Jan 28 19:36:37 crc kubenswrapper[4721]: I0128 19:36:37.958640 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-648g5" event={"ID":"81da0240-bdf0-4b06-becd-8a8af1b648d5","Type":"ContainerDied","Data":"fba05c6b1d88924ebbd30e508458593eeb0d86d719e124e7309dac4ba1ac35b2"} Jan 28 19:36:37 crc kubenswrapper[4721]: I0128 19:36:37.959011 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-648g5" event={"ID":"81da0240-bdf0-4b06-becd-8a8af1b648d5","Type":"ContainerDied","Data":"24ff9f7d5b80628352b4c3103c9cd7494f6f4c7f1aab2531e3856480ac08531d"} Jan 28 19:36:37 crc kubenswrapper[4721]: I0128 19:36:37.959041 4721 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="24ff9f7d5b80628352b4c3103c9cd7494f6f4c7f1aab2531e3856480ac08531d" Jan 28 19:36:38 crc kubenswrapper[4721]: I0128 19:36:38.447923 4721 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-648g5" Jan 28 19:36:38 crc kubenswrapper[4721]: I0128 19:36:38.523723 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/81da0240-bdf0-4b06-becd-8a8af1b648d5-utilities\") pod \"81da0240-bdf0-4b06-becd-8a8af1b648d5\" (UID: \"81da0240-bdf0-4b06-becd-8a8af1b648d5\") " Jan 28 19:36:38 crc kubenswrapper[4721]: I0128 19:36:38.523770 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/81da0240-bdf0-4b06-becd-8a8af1b648d5-catalog-content\") pod \"81da0240-bdf0-4b06-becd-8a8af1b648d5\" (UID: \"81da0240-bdf0-4b06-becd-8a8af1b648d5\") " Jan 28 19:36:38 crc kubenswrapper[4721]: I0128 19:36:38.523793 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mhx9c\" (UniqueName: \"kubernetes.io/projected/81da0240-bdf0-4b06-becd-8a8af1b648d5-kube-api-access-mhx9c\") pod \"81da0240-bdf0-4b06-becd-8a8af1b648d5\" (UID: \"81da0240-bdf0-4b06-becd-8a8af1b648d5\") " Jan 28 19:36:38 crc kubenswrapper[4721]: I0128 19:36:38.525628 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/81da0240-bdf0-4b06-becd-8a8af1b648d5-utilities" (OuterVolumeSpecName: "utilities") pod "81da0240-bdf0-4b06-becd-8a8af1b648d5" (UID: "81da0240-bdf0-4b06-becd-8a8af1b648d5"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 19:36:38 crc kubenswrapper[4721]: I0128 19:36:38.531966 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/81da0240-bdf0-4b06-becd-8a8af1b648d5-kube-api-access-mhx9c" (OuterVolumeSpecName: "kube-api-access-mhx9c") pod "81da0240-bdf0-4b06-becd-8a8af1b648d5" (UID: "81da0240-bdf0-4b06-becd-8a8af1b648d5"). InnerVolumeSpecName "kube-api-access-mhx9c". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 19:36:38 crc kubenswrapper[4721]: I0128 19:36:38.587671 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/81da0240-bdf0-4b06-becd-8a8af1b648d5-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "81da0240-bdf0-4b06-becd-8a8af1b648d5" (UID: "81da0240-bdf0-4b06-becd-8a8af1b648d5"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 19:36:38 crc kubenswrapper[4721]: I0128 19:36:38.626966 4721 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/81da0240-bdf0-4b06-becd-8a8af1b648d5-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 19:36:38 crc kubenswrapper[4721]: I0128 19:36:38.627018 4721 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/81da0240-bdf0-4b06-becd-8a8af1b648d5-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 19:36:38 crc kubenswrapper[4721]: I0128 19:36:38.627035 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mhx9c\" (UniqueName: \"kubernetes.io/projected/81da0240-bdf0-4b06-becd-8a8af1b648d5-kube-api-access-mhx9c\") on node \"crc\" DevicePath \"\"" Jan 28 19:36:38 crc kubenswrapper[4721]: I0128 19:36:38.968738 4721 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-648g5" Jan 28 19:36:39 crc kubenswrapper[4721]: I0128 19:36:39.024653 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-648g5"] Jan 28 19:36:39 crc kubenswrapper[4721]: I0128 19:36:39.041730 4721 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-648g5"] Jan 28 19:36:39 crc kubenswrapper[4721]: I0128 19:36:39.547949 4721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="81da0240-bdf0-4b06-becd-8a8af1b648d5" path="/var/lib/kubelet/pods/81da0240-bdf0-4b06-becd-8a8af1b648d5/volumes" Jan 28 19:36:43 crc kubenswrapper[4721]: I0128 19:36:43.530192 4721 scope.go:117] "RemoveContainer" containerID="cbcc37a94abf33c6e4e5c18dd7dea7199e37b457d8f91773d59e3a703a8cd690" Jan 28 19:36:43 crc kubenswrapper[4721]: E0128 19:36:43.532493 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-76rx2_openshift-machine-config-operator(6e3427a4-9a03-4a08-bf7f-7a5e96290ad6)\"" pod="openshift-machine-config-operator/machine-config-daemon-76rx2" podUID="6e3427a4-9a03-4a08-bf7f-7a5e96290ad6" Jan 28 19:36:54 crc kubenswrapper[4721]: I0128 19:36:54.165395 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-7rcs7_c251c48b-fe6b-484b-9ff7-60faab8d13b5/kube-rbac-proxy/0.log" Jan 28 19:36:54 crc kubenswrapper[4721]: I0128 19:36:54.306666 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-7rcs7_c251c48b-fe6b-484b-9ff7-60faab8d13b5/controller/0.log" Jan 28 19:36:54 crc kubenswrapper[4721]: I0128 19:36:54.513475 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-94kms_ef7fdb4c-b45c-44c4-bde8-d1bd9ae3c6cb/cp-frr-files/0.log" Jan 28 19:36:54 crc kubenswrapper[4721]: I0128 19:36:54.768793 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-94kms_ef7fdb4c-b45c-44c4-bde8-d1bd9ae3c6cb/cp-reloader/0.log" Jan 28 19:36:54 crc kubenswrapper[4721]: I0128 19:36:54.865886 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-94kms_ef7fdb4c-b45c-44c4-bde8-d1bd9ae3c6cb/cp-frr-files/0.log" Jan 28 19:36:54 crc kubenswrapper[4721]: I0128 19:36:54.877859 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-94kms_ef7fdb4c-b45c-44c4-bde8-d1bd9ae3c6cb/cp-reloader/0.log" Jan 28 19:36:54 crc kubenswrapper[4721]: I0128 19:36:54.908451 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-94kms_ef7fdb4c-b45c-44c4-bde8-d1bd9ae3c6cb/cp-metrics/0.log" Jan 28 19:36:55 crc kubenswrapper[4721]: I0128 19:36:55.167266 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-94kms_ef7fdb4c-b45c-44c4-bde8-d1bd9ae3c6cb/cp-frr-files/0.log" Jan 28 19:36:55 crc kubenswrapper[4721]: I0128 19:36:55.514604 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-94kms_ef7fdb4c-b45c-44c4-bde8-d1bd9ae3c6cb/cp-metrics/0.log" Jan 28 19:36:55 crc kubenswrapper[4721]: I0128 19:36:55.537937 4721 scope.go:117] "RemoveContainer" containerID="cbcc37a94abf33c6e4e5c18dd7dea7199e37b457d8f91773d59e3a703a8cd690" Jan 28 19:36:55 crc kubenswrapper[4721]: I0128 19:36:55.538043 4721 log.go:25] "Finished 
parsing log file" path="/var/log/pods/metallb-system_frr-k8s-94kms_ef7fdb4c-b45c-44c4-bde8-d1bd9ae3c6cb/cp-reloader/0.log" Jan 28 19:36:55 crc kubenswrapper[4721]: E0128 19:36:55.538304 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-76rx2_openshift-machine-config-operator(6e3427a4-9a03-4a08-bf7f-7a5e96290ad6)\"" pod="openshift-machine-config-operator/machine-config-daemon-76rx2" podUID="6e3427a4-9a03-4a08-bf7f-7a5e96290ad6" Jan 28 19:36:55 crc kubenswrapper[4721]: I0128 19:36:55.630024 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-94kms_ef7fdb4c-b45c-44c4-bde8-d1bd9ae3c6cb/cp-metrics/0.log" Jan 28 19:36:55 crc kubenswrapper[4721]: I0128 19:36:55.863393 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-94kms_ef7fdb4c-b45c-44c4-bde8-d1bd9ae3c6cb/cp-frr-files/0.log" Jan 28 19:36:55 crc kubenswrapper[4721]: I0128 19:36:55.866251 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-94kms_ef7fdb4c-b45c-44c4-bde8-d1bd9ae3c6cb/cp-reloader/0.log" Jan 28 19:36:55 crc kubenswrapper[4721]: I0128 19:36:55.947004 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-94kms_ef7fdb4c-b45c-44c4-bde8-d1bd9ae3c6cb/cp-metrics/0.log" Jan 28 19:36:55 crc kubenswrapper[4721]: I0128 19:36:55.975891 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-94kms_ef7fdb4c-b45c-44c4-bde8-d1bd9ae3c6cb/controller/0.log" Jan 28 19:36:56 crc kubenswrapper[4721]: I0128 19:36:56.220359 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-94kms_ef7fdb4c-b45c-44c4-bde8-d1bd9ae3c6cb/frr-metrics/0.log" Jan 28 19:36:56 crc kubenswrapper[4721]: I0128 19:36:56.314971 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-94kms_ef7fdb4c-b45c-44c4-bde8-d1bd9ae3c6cb/kube-rbac-proxy-frr/0.log" Jan 28 19:36:56 crc kubenswrapper[4721]: I0128 19:36:56.343015 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-94kms_ef7fdb4c-b45c-44c4-bde8-d1bd9ae3c6cb/kube-rbac-proxy/0.log" Jan 28 19:36:56 crc kubenswrapper[4721]: I0128 19:36:56.526591 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-94kms_ef7fdb4c-b45c-44c4-bde8-d1bd9ae3c6cb/reloader/0.log" Jan 28 19:36:56 crc kubenswrapper[4721]: I0128 19:36:56.654855 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7df86c4f6c-9xvzd_514e6881-7399-4848-bb65-7851e1e3b079/frr-k8s-webhook-server/0.log" Jan 28 19:36:56 crc kubenswrapper[4721]: I0128 19:36:56.923276 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-79d44b6d7b-q852t_fbde7afa-5af9-462b-b402-352513fb9655/manager/0.log" Jan 28 19:36:57 crc kubenswrapper[4721]: I0128 19:36:57.325347 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-7689b8f645-b5mcc_690709f2-5507-45e6-8897-380890c19e6f/webhook-server/0.log" Jan 28 19:36:57 crc kubenswrapper[4721]: I0128 19:36:57.396397 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-94kms_ef7fdb4c-b45c-44c4-bde8-d1bd9ae3c6cb/frr/0.log" Jan 28 19:36:57 crc kubenswrapper[4721]: I0128 19:36:57.401241 4721 
log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-k5dbx_4d13a423-7c09-4fae-b239-e376e8487d85/kube-rbac-proxy/0.log" Jan 28 19:36:57 crc kubenswrapper[4721]: I0128 19:36:57.908388 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-k5dbx_4d13a423-7c09-4fae-b239-e376e8487d85/speaker/0.log" Jan 28 19:37:07 crc kubenswrapper[4721]: I0128 19:37:07.528558 4721 scope.go:117] "RemoveContainer" containerID="cbcc37a94abf33c6e4e5c18dd7dea7199e37b457d8f91773d59e3a703a8cd690" Jan 28 19:37:07 crc kubenswrapper[4721]: E0128 19:37:07.529365 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-76rx2_openshift-machine-config-operator(6e3427a4-9a03-4a08-bf7f-7a5e96290ad6)\"" pod="openshift-machine-config-operator/machine-config-daemon-76rx2" podUID="6e3427a4-9a03-4a08-bf7f-7a5e96290ad6" Jan 28 19:37:12 crc kubenswrapper[4721]: I0128 19:37:12.691023 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dczk9b6_2c37d643-cddf-40c7-ad82-e999634e0151/util/0.log" Jan 28 19:37:12 crc kubenswrapper[4721]: I0128 19:37:12.718148 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-prqcr"] Jan 28 19:37:12 crc kubenswrapper[4721]: E0128 19:37:12.718715 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="81da0240-bdf0-4b06-becd-8a8af1b648d5" containerName="extract-content" Jan 28 19:37:12 crc kubenswrapper[4721]: I0128 19:37:12.718735 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="81da0240-bdf0-4b06-becd-8a8af1b648d5" containerName="extract-content" Jan 28 19:37:12 crc kubenswrapper[4721]: E0128 19:37:12.718754 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="81da0240-bdf0-4b06-becd-8a8af1b648d5" containerName="registry-server" Jan 28 19:37:12 crc kubenswrapper[4721]: I0128 19:37:12.718762 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="81da0240-bdf0-4b06-becd-8a8af1b648d5" containerName="registry-server" Jan 28 19:37:12 crc kubenswrapper[4721]: E0128 19:37:12.718802 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="81da0240-bdf0-4b06-becd-8a8af1b648d5" containerName="extract-utilities" Jan 28 19:37:12 crc kubenswrapper[4721]: I0128 19:37:12.718809 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="81da0240-bdf0-4b06-becd-8a8af1b648d5" containerName="extract-utilities" Jan 28 19:37:12 crc kubenswrapper[4721]: I0128 19:37:12.719043 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="81da0240-bdf0-4b06-becd-8a8af1b648d5" containerName="registry-server" Jan 28 19:37:12 crc kubenswrapper[4721]: I0128 19:37:12.720809 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-prqcr" Jan 28 19:37:12 crc kubenswrapper[4721]: I0128 19:37:12.730489 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-prqcr"] Jan 28 19:37:12 crc kubenswrapper[4721]: I0128 19:37:12.902107 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b9ba602a-f6ca-4fb6-a1cc-7e5c90ff00ec-catalog-content\") pod \"certified-operators-prqcr\" (UID: \"b9ba602a-f6ca-4fb6-a1cc-7e5c90ff00ec\") " pod="openshift-marketplace/certified-operators-prqcr" Jan 28 19:37:12 crc kubenswrapper[4721]: I0128 19:37:12.902184 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tkx6t\" (UniqueName: \"kubernetes.io/projected/b9ba602a-f6ca-4fb6-a1cc-7e5c90ff00ec-kube-api-access-tkx6t\") pod \"certified-operators-prqcr\" (UID: \"b9ba602a-f6ca-4fb6-a1cc-7e5c90ff00ec\") " pod="openshift-marketplace/certified-operators-prqcr" Jan 28 19:37:12 crc kubenswrapper[4721]: I0128 19:37:12.902278 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b9ba602a-f6ca-4fb6-a1cc-7e5c90ff00ec-utilities\") pod \"certified-operators-prqcr\" (UID: \"b9ba602a-f6ca-4fb6-a1cc-7e5c90ff00ec\") " pod="openshift-marketplace/certified-operators-prqcr" Jan 28 19:37:13 crc kubenswrapper[4721]: I0128 19:37:13.004767 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b9ba602a-f6ca-4fb6-a1cc-7e5c90ff00ec-catalog-content\") pod \"certified-operators-prqcr\" (UID: \"b9ba602a-f6ca-4fb6-a1cc-7e5c90ff00ec\") " pod="openshift-marketplace/certified-operators-prqcr" Jan 28 19:37:13 crc kubenswrapper[4721]: I0128 19:37:13.004828 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tkx6t\" (UniqueName: \"kubernetes.io/projected/b9ba602a-f6ca-4fb6-a1cc-7e5c90ff00ec-kube-api-access-tkx6t\") pod \"certified-operators-prqcr\" (UID: \"b9ba602a-f6ca-4fb6-a1cc-7e5c90ff00ec\") " pod="openshift-marketplace/certified-operators-prqcr" Jan 28 19:37:13 crc kubenswrapper[4721]: I0128 19:37:13.005003 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b9ba602a-f6ca-4fb6-a1cc-7e5c90ff00ec-utilities\") pod \"certified-operators-prqcr\" (UID: \"b9ba602a-f6ca-4fb6-a1cc-7e5c90ff00ec\") " pod="openshift-marketplace/certified-operators-prqcr" Jan 28 19:37:13 crc kubenswrapper[4721]: I0128 19:37:13.005635 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b9ba602a-f6ca-4fb6-a1cc-7e5c90ff00ec-utilities\") pod \"certified-operators-prqcr\" (UID: \"b9ba602a-f6ca-4fb6-a1cc-7e5c90ff00ec\") " pod="openshift-marketplace/certified-operators-prqcr" Jan 28 19:37:13 crc kubenswrapper[4721]: I0128 19:37:13.005928 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b9ba602a-f6ca-4fb6-a1cc-7e5c90ff00ec-catalog-content\") pod \"certified-operators-prqcr\" (UID: \"b9ba602a-f6ca-4fb6-a1cc-7e5c90ff00ec\") " pod="openshift-marketplace/certified-operators-prqcr" Jan 28 19:37:13 crc kubenswrapper[4721]: I0128 19:37:13.032142 4721 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-tkx6t\" (UniqueName: \"kubernetes.io/projected/b9ba602a-f6ca-4fb6-a1cc-7e5c90ff00ec-kube-api-access-tkx6t\") pod \"certified-operators-prqcr\" (UID: \"b9ba602a-f6ca-4fb6-a1cc-7e5c90ff00ec\") " pod="openshift-marketplace/certified-operators-prqcr" Jan 28 19:37:13 crc kubenswrapper[4721]: I0128 19:37:13.044698 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dczk9b6_2c37d643-cddf-40c7-ad82-e999634e0151/pull/0.log" Jan 28 19:37:13 crc kubenswrapper[4721]: I0128 19:37:13.051827 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dczk9b6_2c37d643-cddf-40c7-ad82-e999634e0151/util/0.log" Jan 28 19:37:13 crc kubenswrapper[4721]: I0128 19:37:13.058567 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-prqcr" Jan 28 19:37:13 crc kubenswrapper[4721]: I0128 19:37:13.107270 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dczk9b6_2c37d643-cddf-40c7-ad82-e999634e0151/pull/0.log" Jan 28 19:37:13 crc kubenswrapper[4721]: I0128 19:37:13.781834 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-prqcr"] Jan 28 19:37:13 crc kubenswrapper[4721]: I0128 19:37:13.842099 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dczk9b6_2c37d643-cddf-40c7-ad82-e999634e0151/util/0.log" Jan 28 19:37:13 crc kubenswrapper[4721]: I0128 19:37:13.847405 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dczk9b6_2c37d643-cddf-40c7-ad82-e999634e0151/extract/0.log" Jan 28 19:37:13 crc kubenswrapper[4721]: I0128 19:37:13.874498 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dczk9b6_2c37d643-cddf-40c7-ad82-e999634e0151/pull/0.log" Jan 28 19:37:14 crc kubenswrapper[4721]: I0128 19:37:14.083530 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_3e572a74f8b8ca2bcfe04329d4f26bd9689911be5d166a7403bd6ae773w2zgc_28e082c3-f662-4caa-be33-4bf2cc234ca7/util/0.log" Jan 28 19:37:14 crc kubenswrapper[4721]: I0128 19:37:14.249106 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_3e572a74f8b8ca2bcfe04329d4f26bd9689911be5d166a7403bd6ae773w2zgc_28e082c3-f662-4caa-be33-4bf2cc234ca7/pull/0.log" Jan 28 19:37:14 crc kubenswrapper[4721]: I0128 19:37:14.258341 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_3e572a74f8b8ca2bcfe04329d4f26bd9689911be5d166a7403bd6ae773w2zgc_28e082c3-f662-4caa-be33-4bf2cc234ca7/pull/0.log" Jan 28 19:37:14 crc kubenswrapper[4721]: I0128 19:37:14.320271 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_3e572a74f8b8ca2bcfe04329d4f26bd9689911be5d166a7403bd6ae773w2zgc_28e082c3-f662-4caa-be33-4bf2cc234ca7/util/0.log" Jan 28 19:37:14 crc kubenswrapper[4721]: I0128 19:37:14.391155 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-prqcr" 
event={"ID":"b9ba602a-f6ca-4fb6-a1cc-7e5c90ff00ec","Type":"ContainerDied","Data":"f31094f3dba518f62e283141bb1cd0f13e158a21929d34b6cfd406fb4637c336"} Jan 28 19:37:14 crc kubenswrapper[4721]: I0128 19:37:14.391013 4721 generic.go:334] "Generic (PLEG): container finished" podID="b9ba602a-f6ca-4fb6-a1cc-7e5c90ff00ec" containerID="f31094f3dba518f62e283141bb1cd0f13e158a21929d34b6cfd406fb4637c336" exitCode=0 Jan 28 19:37:14 crc kubenswrapper[4721]: I0128 19:37:14.392147 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-prqcr" event={"ID":"b9ba602a-f6ca-4fb6-a1cc-7e5c90ff00ec","Type":"ContainerStarted","Data":"d14de63e82a3d85c567aa0c433235a94980109c958072e24035c9f04c6e956be"} Jan 28 19:37:14 crc kubenswrapper[4721]: I0128 19:37:14.447272 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_3e572a74f8b8ca2bcfe04329d4f26bd9689911be5d166a7403bd6ae773w2zgc_28e082c3-f662-4caa-be33-4bf2cc234ca7/util/0.log" Jan 28 19:37:14 crc kubenswrapper[4721]: I0128 19:37:14.468786 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_3e572a74f8b8ca2bcfe04329d4f26bd9689911be5d166a7403bd6ae773w2zgc_28e082c3-f662-4caa-be33-4bf2cc234ca7/pull/0.log" Jan 28 19:37:14 crc kubenswrapper[4721]: I0128 19:37:14.496052 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_3e572a74f8b8ca2bcfe04329d4f26bd9689911be5d166a7403bd6ae773w2zgc_28e082c3-f662-4caa-be33-4bf2cc234ca7/extract/0.log" Jan 28 19:37:14 crc kubenswrapper[4721]: I0128 19:37:14.707433 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713gz9z8_f0d234c7-c326-453d-aef0-f50829390a73/util/0.log" Jan 28 19:37:14 crc kubenswrapper[4721]: I0128 19:37:14.867658 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713gz9z8_f0d234c7-c326-453d-aef0-f50829390a73/pull/0.log" Jan 28 19:37:14 crc kubenswrapper[4721]: I0128 19:37:14.960874 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713gz9z8_f0d234c7-c326-453d-aef0-f50829390a73/pull/0.log" Jan 28 19:37:14 crc kubenswrapper[4721]: I0128 19:37:14.988998 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713gz9z8_f0d234c7-c326-453d-aef0-f50829390a73/util/0.log" Jan 28 19:37:15 crc kubenswrapper[4721]: I0128 19:37:15.159380 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713gz9z8_f0d234c7-c326-453d-aef0-f50829390a73/pull/0.log" Jan 28 19:37:15 crc kubenswrapper[4721]: I0128 19:37:15.202197 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713gz9z8_f0d234c7-c326-453d-aef0-f50829390a73/util/0.log" Jan 28 19:37:15 crc kubenswrapper[4721]: I0128 19:37:15.265783 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713gz9z8_f0d234c7-c326-453d-aef0-f50829390a73/extract/0.log" Jan 28 19:37:15 crc kubenswrapper[4721]: I0128 19:37:15.403924 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-prqcr" 
event={"ID":"b9ba602a-f6ca-4fb6-a1cc-7e5c90ff00ec","Type":"ContainerStarted","Data":"0b56d3d7b84a5797142b8a3f77cf5e64c4e74bad0a2f07f465a43e02a48c9174"} Jan 28 19:37:15 crc kubenswrapper[4721]: I0128 19:37:15.451185 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0852wht_e3e10f04-ed38-4461-a28c-b53f458cd84d/util/0.log" Jan 28 19:37:15 crc kubenswrapper[4721]: I0128 19:37:15.591271 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0852wht_e3e10f04-ed38-4461-a28c-b53f458cd84d/util/0.log" Jan 28 19:37:15 crc kubenswrapper[4721]: I0128 19:37:15.627995 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0852wht_e3e10f04-ed38-4461-a28c-b53f458cd84d/pull/0.log" Jan 28 19:37:15 crc kubenswrapper[4721]: I0128 19:37:15.674373 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0852wht_e3e10f04-ed38-4461-a28c-b53f458cd84d/pull/0.log" Jan 28 19:37:15 crc kubenswrapper[4721]: I0128 19:37:15.892368 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0852wht_e3e10f04-ed38-4461-a28c-b53f458cd84d/pull/0.log" Jan 28 19:37:16 crc kubenswrapper[4721]: I0128 19:37:16.003252 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0852wht_e3e10f04-ed38-4461-a28c-b53f458cd84d/util/0.log" Jan 28 19:37:16 crc kubenswrapper[4721]: I0128 19:37:16.028360 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0852wht_e3e10f04-ed38-4461-a28c-b53f458cd84d/extract/0.log" Jan 28 19:37:16 crc kubenswrapper[4721]: I0128 19:37:16.201981 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-6zmqq_73a3f613-b50c-4873-b63e-78983b1c60af/extract-utilities/0.log" Jan 28 19:37:16 crc kubenswrapper[4721]: I0128 19:37:16.478821 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-6zmqq_73a3f613-b50c-4873-b63e-78983b1c60af/extract-content/0.log" Jan 28 19:37:16 crc kubenswrapper[4721]: I0128 19:37:16.501961 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-6zmqq_73a3f613-b50c-4873-b63e-78983b1c60af/extract-utilities/0.log" Jan 28 19:37:16 crc kubenswrapper[4721]: I0128 19:37:16.551256 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-6zmqq_73a3f613-b50c-4873-b63e-78983b1c60af/extract-content/0.log" Jan 28 19:37:16 crc kubenswrapper[4721]: I0128 19:37:16.858080 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-6zmqq_73a3f613-b50c-4873-b63e-78983b1c60af/extract-utilities/0.log" Jan 28 19:37:16 crc kubenswrapper[4721]: I0128 19:37:16.881796 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-6zmqq_73a3f613-b50c-4873-b63e-78983b1c60af/extract-content/0.log" Jan 28 19:37:17 crc kubenswrapper[4721]: I0128 19:37:17.204253 4721 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_certified-operators-6zmqq_73a3f613-b50c-4873-b63e-78983b1c60af/registry-server/0.log" Jan 28 19:37:17 crc kubenswrapper[4721]: I0128 19:37:17.228573 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-nm7c2_53ad6fb5-bf3c-4da3-af1c-72c1d1fa0bfe/extract-utilities/0.log" Jan 28 19:37:17 crc kubenswrapper[4721]: I0128 19:37:17.375824 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-nm7c2_53ad6fb5-bf3c-4da3-af1c-72c1d1fa0bfe/extract-content/0.log" Jan 28 19:37:17 crc kubenswrapper[4721]: I0128 19:37:17.375872 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-nm7c2_53ad6fb5-bf3c-4da3-af1c-72c1d1fa0bfe/extract-utilities/0.log" Jan 28 19:37:17 crc kubenswrapper[4721]: I0128 19:37:17.437588 4721 generic.go:334] "Generic (PLEG): container finished" podID="b9ba602a-f6ca-4fb6-a1cc-7e5c90ff00ec" containerID="0b56d3d7b84a5797142b8a3f77cf5e64c4e74bad0a2f07f465a43e02a48c9174" exitCode=0 Jan 28 19:37:17 crc kubenswrapper[4721]: I0128 19:37:17.437643 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-prqcr" event={"ID":"b9ba602a-f6ca-4fb6-a1cc-7e5c90ff00ec","Type":"ContainerDied","Data":"0b56d3d7b84a5797142b8a3f77cf5e64c4e74bad0a2f07f465a43e02a48c9174"} Jan 28 19:37:17 crc kubenswrapper[4721]: I0128 19:37:17.480300 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-nm7c2_53ad6fb5-bf3c-4da3-af1c-72c1d1fa0bfe/extract-content/0.log" Jan 28 19:37:17 crc kubenswrapper[4721]: I0128 19:37:17.640958 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-nm7c2_53ad6fb5-bf3c-4da3-af1c-72c1d1fa0bfe/extract-utilities/0.log" Jan 28 19:37:17 crc kubenswrapper[4721]: I0128 19:37:17.656624 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-nm7c2_53ad6fb5-bf3c-4da3-af1c-72c1d1fa0bfe/extract-content/0.log" Jan 28 19:37:17 crc kubenswrapper[4721]: I0128 19:37:17.788931 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-dk9tw_c24ece18-1c22-49c3-ae82-e63bdc44ab1f/marketplace-operator/0.log" Jan 28 19:37:18 crc kubenswrapper[4721]: I0128 19:37:18.024593 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-7trfs_13c22ad4-c5a1-4e52-accb-81598f08a144/extract-utilities/0.log" Jan 28 19:37:18 crc kubenswrapper[4721]: I0128 19:37:18.239264 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-7trfs_13c22ad4-c5a1-4e52-accb-81598f08a144/extract-content/0.log" Jan 28 19:37:18 crc kubenswrapper[4721]: I0128 19:37:18.258553 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-7trfs_13c22ad4-c5a1-4e52-accb-81598f08a144/extract-utilities/0.log" Jan 28 19:37:18 crc kubenswrapper[4721]: I0128 19:37:18.282894 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-7trfs_13c22ad4-c5a1-4e52-accb-81598f08a144/extract-content/0.log" Jan 28 19:37:18 crc kubenswrapper[4721]: I0128 19:37:18.461364 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-prqcr" 
event={"ID":"b9ba602a-f6ca-4fb6-a1cc-7e5c90ff00ec","Type":"ContainerStarted","Data":"9cd9d39a57102091a87f9320118cfe36f74f8a4f13800d3022103f35c0b35eb3"} Jan 28 19:37:18 crc kubenswrapper[4721]: I0128 19:37:18.499197 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-prqcr" podStartSLOduration=3.036107269 podStartE2EDuration="6.499151513s" podCreationTimestamp="2026-01-28 19:37:12 +0000 UTC" firstStartedPulling="2026-01-28 19:37:14.392856702 +0000 UTC m=+3800.118162262" lastFinishedPulling="2026-01-28 19:37:17.855900946 +0000 UTC m=+3803.581206506" observedRunningTime="2026-01-28 19:37:18.484854465 +0000 UTC m=+3804.210160045" watchObservedRunningTime="2026-01-28 19:37:18.499151513 +0000 UTC m=+3804.224457083" Jan 28 19:37:18 crc kubenswrapper[4721]: I0128 19:37:18.550078 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-nm7c2_53ad6fb5-bf3c-4da3-af1c-72c1d1fa0bfe/registry-server/0.log" Jan 28 19:37:18 crc kubenswrapper[4721]: I0128 19:37:18.731563 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-7trfs_13c22ad4-c5a1-4e52-accb-81598f08a144/extract-utilities/0.log" Jan 28 19:37:18 crc kubenswrapper[4721]: I0128 19:37:18.768337 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-7trfs_13c22ad4-c5a1-4e52-accb-81598f08a144/extract-content/0.log" Jan 28 19:37:18 crc kubenswrapper[4721]: I0128 19:37:18.853801 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-mdtqb_025f6d5f-7086-4108-823a-10ef1b8b608d/extract-utilities/0.log" Jan 28 19:37:18 crc kubenswrapper[4721]: I0128 19:37:18.902727 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-7trfs_13c22ad4-c5a1-4e52-accb-81598f08a144/registry-server/0.log" Jan 28 19:37:19 crc kubenswrapper[4721]: I0128 19:37:19.060114 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-mdtqb_025f6d5f-7086-4108-823a-10ef1b8b608d/extract-utilities/0.log" Jan 28 19:37:19 crc kubenswrapper[4721]: I0128 19:37:19.138123 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-mdtqb_025f6d5f-7086-4108-823a-10ef1b8b608d/extract-content/0.log" Jan 28 19:37:19 crc kubenswrapper[4721]: I0128 19:37:19.153713 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-mdtqb_025f6d5f-7086-4108-823a-10ef1b8b608d/extract-content/0.log" Jan 28 19:37:19 crc kubenswrapper[4721]: I0128 19:37:19.432229 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-mdtqb_025f6d5f-7086-4108-823a-10ef1b8b608d/extract-utilities/0.log" Jan 28 19:37:19 crc kubenswrapper[4721]: I0128 19:37:19.504936 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-mdtqb_025f6d5f-7086-4108-823a-10ef1b8b608d/extract-content/0.log" Jan 28 19:37:19 crc kubenswrapper[4721]: I0128 19:37:19.528783 4721 scope.go:117] "RemoveContainer" containerID="cbcc37a94abf33c6e4e5c18dd7dea7199e37b457d8f91773d59e3a703a8cd690" Jan 28 19:37:19 crc kubenswrapper[4721]: E0128 19:37:19.529032 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-76rx2_openshift-machine-config-operator(6e3427a4-9a03-4a08-bf7f-7a5e96290ad6)\"" pod="openshift-machine-config-operator/machine-config-daemon-76rx2" podUID="6e3427a4-9a03-4a08-bf7f-7a5e96290ad6" Jan 28 19:37:19 crc kubenswrapper[4721]: I0128 19:37:19.872957 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-mdtqb_025f6d5f-7086-4108-823a-10ef1b8b608d/registry-server/0.log" Jan 28 19:37:23 crc kubenswrapper[4721]: I0128 19:37:23.058890 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-prqcr" Jan 28 19:37:23 crc kubenswrapper[4721]: I0128 19:37:23.059478 4721 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-prqcr" Jan 28 19:37:24 crc kubenswrapper[4721]: I0128 19:37:24.117902 4721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-prqcr" podUID="b9ba602a-f6ca-4fb6-a1cc-7e5c90ff00ec" containerName="registry-server" probeResult="failure" output=< Jan 28 19:37:24 crc kubenswrapper[4721]: timeout: failed to connect service ":50051" within 1s Jan 28 19:37:24 crc kubenswrapper[4721]: > Jan 28 19:37:32 crc kubenswrapper[4721]: I0128 19:37:32.528681 4721 scope.go:117] "RemoveContainer" containerID="cbcc37a94abf33c6e4e5c18dd7dea7199e37b457d8f91773d59e3a703a8cd690" Jan 28 19:37:32 crc kubenswrapper[4721]: E0128 19:37:32.529511 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-76rx2_openshift-machine-config-operator(6e3427a4-9a03-4a08-bf7f-7a5e96290ad6)\"" pod="openshift-machine-config-operator/machine-config-daemon-76rx2" podUID="6e3427a4-9a03-4a08-bf7f-7a5e96290ad6" Jan 28 19:37:33 crc kubenswrapper[4721]: I0128 19:37:33.115802 4721 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-prqcr" Jan 28 19:37:33 crc kubenswrapper[4721]: I0128 19:37:33.165980 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-prqcr" Jan 28 19:37:33 crc kubenswrapper[4721]: I0128 19:37:33.364622 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-prqcr"] Jan 28 19:37:34 crc kubenswrapper[4721]: I0128 19:37:34.639529 4721 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-prqcr" podUID="b9ba602a-f6ca-4fb6-a1cc-7e5c90ff00ec" containerName="registry-server" containerID="cri-o://9cd9d39a57102091a87f9320118cfe36f74f8a4f13800d3022103f35c0b35eb3" gracePeriod=2 Jan 28 19:37:34 crc kubenswrapper[4721]: I0128 19:37:34.804989 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-65d8d94d66-h2rfr_8b291a65-1dc7-4312-a429-60bb0a86800d/prometheus-operator-admission-webhook/0.log" Jan 28 19:37:34 crc kubenswrapper[4721]: I0128 19:37:34.885145 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-68bc856cb9-424xn_cd50289b-aa27-438d-89a2-405552dbadf7/prometheus-operator/0.log" Jan 28 19:37:34 crc kubenswrapper[4721]: I0128 19:37:34.889331 4721 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-65d8d94d66-sbgj9_e3cb407f-4a19-4f81-b388-4db383b55701/prometheus-operator-admission-webhook/0.log" Jan 28 19:37:35 crc kubenswrapper[4721]: I0128 19:37:35.275143 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-5bf474d74f-fqs7q_ed6d2e9b-bba7-4ef6-9f36-f9b77dd19117/perses-operator/0.log" Jan 28 19:37:35 crc kubenswrapper[4721]: I0128 19:37:35.377391 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-59bdc8b94-bdm2v_ab955356-2884-4e1b-9dfc-966a662c4095/operator/0.log" Jan 28 19:37:35 crc kubenswrapper[4721]: I0128 19:37:35.621745 4721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-prqcr" Jan 28 19:37:35 crc kubenswrapper[4721]: I0128 19:37:35.655043 4721 generic.go:334] "Generic (PLEG): container finished" podID="b9ba602a-f6ca-4fb6-a1cc-7e5c90ff00ec" containerID="9cd9d39a57102091a87f9320118cfe36f74f8a4f13800d3022103f35c0b35eb3" exitCode=0 Jan 28 19:37:35 crc kubenswrapper[4721]: I0128 19:37:35.655105 4721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-prqcr" Jan 28 19:37:35 crc kubenswrapper[4721]: I0128 19:37:35.655104 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-prqcr" event={"ID":"b9ba602a-f6ca-4fb6-a1cc-7e5c90ff00ec","Type":"ContainerDied","Data":"9cd9d39a57102091a87f9320118cfe36f74f8a4f13800d3022103f35c0b35eb3"} Jan 28 19:37:35 crc kubenswrapper[4721]: I0128 19:37:35.655151 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-prqcr" event={"ID":"b9ba602a-f6ca-4fb6-a1cc-7e5c90ff00ec","Type":"ContainerDied","Data":"d14de63e82a3d85c567aa0c433235a94980109c958072e24035c9f04c6e956be"} Jan 28 19:37:35 crc kubenswrapper[4721]: I0128 19:37:35.655193 4721 scope.go:117] "RemoveContainer" containerID="9cd9d39a57102091a87f9320118cfe36f74f8a4f13800d3022103f35c0b35eb3" Jan 28 19:37:35 crc kubenswrapper[4721]: I0128 19:37:35.714506 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b9ba602a-f6ca-4fb6-a1cc-7e5c90ff00ec-catalog-content\") pod \"b9ba602a-f6ca-4fb6-a1cc-7e5c90ff00ec\" (UID: \"b9ba602a-f6ca-4fb6-a1cc-7e5c90ff00ec\") " Jan 28 19:37:35 crc kubenswrapper[4721]: I0128 19:37:35.714637 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b9ba602a-f6ca-4fb6-a1cc-7e5c90ff00ec-utilities\") pod \"b9ba602a-f6ca-4fb6-a1cc-7e5c90ff00ec\" (UID: \"b9ba602a-f6ca-4fb6-a1cc-7e5c90ff00ec\") " Jan 28 19:37:35 crc kubenswrapper[4721]: I0128 19:37:35.714769 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tkx6t\" (UniqueName: \"kubernetes.io/projected/b9ba602a-f6ca-4fb6-a1cc-7e5c90ff00ec-kube-api-access-tkx6t\") pod \"b9ba602a-f6ca-4fb6-a1cc-7e5c90ff00ec\" (UID: \"b9ba602a-f6ca-4fb6-a1cc-7e5c90ff00ec\") " Jan 28 19:37:35 crc kubenswrapper[4721]: I0128 19:37:35.725456 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b9ba602a-f6ca-4fb6-a1cc-7e5c90ff00ec-utilities" (OuterVolumeSpecName: "utilities") pod "b9ba602a-f6ca-4fb6-a1cc-7e5c90ff00ec" (UID: "b9ba602a-f6ca-4fb6-a1cc-7e5c90ff00ec"). 
InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 19:37:35 crc kubenswrapper[4721]: I0128 19:37:35.732387 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b9ba602a-f6ca-4fb6-a1cc-7e5c90ff00ec-kube-api-access-tkx6t" (OuterVolumeSpecName: "kube-api-access-tkx6t") pod "b9ba602a-f6ca-4fb6-a1cc-7e5c90ff00ec" (UID: "b9ba602a-f6ca-4fb6-a1cc-7e5c90ff00ec"). InnerVolumeSpecName "kube-api-access-tkx6t". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 19:37:35 crc kubenswrapper[4721]: I0128 19:37:35.774834 4721 scope.go:117] "RemoveContainer" containerID="0b56d3d7b84a5797142b8a3f77cf5e64c4e74bad0a2f07f465a43e02a48c9174" Jan 28 19:37:35 crc kubenswrapper[4721]: I0128 19:37:35.823010 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b9ba602a-f6ca-4fb6-a1cc-7e5c90ff00ec-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b9ba602a-f6ca-4fb6-a1cc-7e5c90ff00ec" (UID: "b9ba602a-f6ca-4fb6-a1cc-7e5c90ff00ec"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 19:37:35 crc kubenswrapper[4721]: I0128 19:37:35.823294 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b9ba602a-f6ca-4fb6-a1cc-7e5c90ff00ec-catalog-content\") pod \"b9ba602a-f6ca-4fb6-a1cc-7e5c90ff00ec\" (UID: \"b9ba602a-f6ca-4fb6-a1cc-7e5c90ff00ec\") " Jan 28 19:37:35 crc kubenswrapper[4721]: I0128 19:37:35.824139 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tkx6t\" (UniqueName: \"kubernetes.io/projected/b9ba602a-f6ca-4fb6-a1cc-7e5c90ff00ec-kube-api-access-tkx6t\") on node \"crc\" DevicePath \"\"" Jan 28 19:37:35 crc kubenswrapper[4721]: I0128 19:37:35.824179 4721 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b9ba602a-f6ca-4fb6-a1cc-7e5c90ff00ec-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 19:37:35 crc kubenswrapper[4721]: W0128 19:37:35.824281 4721 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/b9ba602a-f6ca-4fb6-a1cc-7e5c90ff00ec/volumes/kubernetes.io~empty-dir/catalog-content Jan 28 19:37:35 crc kubenswrapper[4721]: I0128 19:37:35.824297 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b9ba602a-f6ca-4fb6-a1cc-7e5c90ff00ec-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b9ba602a-f6ca-4fb6-a1cc-7e5c90ff00ec" (UID: "b9ba602a-f6ca-4fb6-a1cc-7e5c90ff00ec"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 19:37:35 crc kubenswrapper[4721]: I0128 19:37:35.882440 4721 scope.go:117] "RemoveContainer" containerID="f31094f3dba518f62e283141bb1cd0f13e158a21929d34b6cfd406fb4637c336" Jan 28 19:37:35 crc kubenswrapper[4721]: I0128 19:37:35.927707 4721 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b9ba602a-f6ca-4fb6-a1cc-7e5c90ff00ec-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 19:37:35 crc kubenswrapper[4721]: I0128 19:37:35.959964 4721 scope.go:117] "RemoveContainer" containerID="9cd9d39a57102091a87f9320118cfe36f74f8a4f13800d3022103f35c0b35eb3" Jan 28 19:37:35 crc kubenswrapper[4721]: E0128 19:37:35.970346 4721 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9cd9d39a57102091a87f9320118cfe36f74f8a4f13800d3022103f35c0b35eb3\": container with ID starting with 9cd9d39a57102091a87f9320118cfe36f74f8a4f13800d3022103f35c0b35eb3 not found: ID does not exist" containerID="9cd9d39a57102091a87f9320118cfe36f74f8a4f13800d3022103f35c0b35eb3" Jan 28 19:37:35 crc kubenswrapper[4721]: I0128 19:37:35.970405 4721 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9cd9d39a57102091a87f9320118cfe36f74f8a4f13800d3022103f35c0b35eb3"} err="failed to get container status \"9cd9d39a57102091a87f9320118cfe36f74f8a4f13800d3022103f35c0b35eb3\": rpc error: code = NotFound desc = could not find container \"9cd9d39a57102091a87f9320118cfe36f74f8a4f13800d3022103f35c0b35eb3\": container with ID starting with 9cd9d39a57102091a87f9320118cfe36f74f8a4f13800d3022103f35c0b35eb3 not found: ID does not exist" Jan 28 19:37:35 crc kubenswrapper[4721]: I0128 19:37:35.970436 4721 scope.go:117] "RemoveContainer" containerID="0b56d3d7b84a5797142b8a3f77cf5e64c4e74bad0a2f07f465a43e02a48c9174" Jan 28 19:37:35 crc kubenswrapper[4721]: E0128 19:37:35.973345 4721 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0b56d3d7b84a5797142b8a3f77cf5e64c4e74bad0a2f07f465a43e02a48c9174\": container with ID starting with 0b56d3d7b84a5797142b8a3f77cf5e64c4e74bad0a2f07f465a43e02a48c9174 not found: ID does not exist" containerID="0b56d3d7b84a5797142b8a3f77cf5e64c4e74bad0a2f07f465a43e02a48c9174" Jan 28 19:37:35 crc kubenswrapper[4721]: I0128 19:37:35.973403 4721 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0b56d3d7b84a5797142b8a3f77cf5e64c4e74bad0a2f07f465a43e02a48c9174"} err="failed to get container status \"0b56d3d7b84a5797142b8a3f77cf5e64c4e74bad0a2f07f465a43e02a48c9174\": rpc error: code = NotFound desc = could not find container \"0b56d3d7b84a5797142b8a3f77cf5e64c4e74bad0a2f07f465a43e02a48c9174\": container with ID starting with 0b56d3d7b84a5797142b8a3f77cf5e64c4e74bad0a2f07f465a43e02a48c9174 not found: ID does not exist" Jan 28 19:37:35 crc kubenswrapper[4721]: I0128 19:37:35.973425 4721 scope.go:117] "RemoveContainer" containerID="f31094f3dba518f62e283141bb1cd0f13e158a21929d34b6cfd406fb4637c336" Jan 28 19:37:35 crc kubenswrapper[4721]: E0128 19:37:35.979521 4721 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f31094f3dba518f62e283141bb1cd0f13e158a21929d34b6cfd406fb4637c336\": container with ID starting with f31094f3dba518f62e283141bb1cd0f13e158a21929d34b6cfd406fb4637c336 not found: ID does not exist" 
containerID="f31094f3dba518f62e283141bb1cd0f13e158a21929d34b6cfd406fb4637c336" Jan 28 19:37:35 crc kubenswrapper[4721]: I0128 19:37:35.979575 4721 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f31094f3dba518f62e283141bb1cd0f13e158a21929d34b6cfd406fb4637c336"} err="failed to get container status \"f31094f3dba518f62e283141bb1cd0f13e158a21929d34b6cfd406fb4637c336\": rpc error: code = NotFound desc = could not find container \"f31094f3dba518f62e283141bb1cd0f13e158a21929d34b6cfd406fb4637c336\": container with ID starting with f31094f3dba518f62e283141bb1cd0f13e158a21929d34b6cfd406fb4637c336 not found: ID does not exist" Jan 28 19:37:36 crc kubenswrapper[4721]: I0128 19:37:36.013227 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-prqcr"] Jan 28 19:37:36 crc kubenswrapper[4721]: I0128 19:37:36.032423 4721 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-prqcr"] Jan 28 19:37:37 crc kubenswrapper[4721]: I0128 19:37:37.541496 4721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b9ba602a-f6ca-4fb6-a1cc-7e5c90ff00ec" path="/var/lib/kubelet/pods/b9ba602a-f6ca-4fb6-a1cc-7e5c90ff00ec/volumes" Jan 28 19:37:45 crc kubenswrapper[4721]: I0128 19:37:45.536859 4721 scope.go:117] "RemoveContainer" containerID="cbcc37a94abf33c6e4e5c18dd7dea7199e37b457d8f91773d59e3a703a8cd690" Jan 28 19:37:45 crc kubenswrapper[4721]: E0128 19:37:45.537805 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-76rx2_openshift-machine-config-operator(6e3427a4-9a03-4a08-bf7f-7a5e96290ad6)\"" pod="openshift-machine-config-operator/machine-config-daemon-76rx2" podUID="6e3427a4-9a03-4a08-bf7f-7a5e96290ad6" Jan 28 19:37:54 crc kubenswrapper[4721]: I0128 19:37:54.219145 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-5bfcb79b6d-cd47c_8d99024b-2cf7-4372-98d3-2c282e9d7530/kube-rbac-proxy/0.log" Jan 28 19:37:54 crc kubenswrapper[4721]: I0128 19:37:54.274879 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-5bfcb79b6d-cd47c_8d99024b-2cf7-4372-98d3-2c282e9d7530/manager/0.log" Jan 28 19:37:59 crc kubenswrapper[4721]: I0128 19:37:59.529661 4721 scope.go:117] "RemoveContainer" containerID="cbcc37a94abf33c6e4e5c18dd7dea7199e37b457d8f91773d59e3a703a8cd690" Jan 28 19:37:59 crc kubenswrapper[4721]: E0128 19:37:59.530686 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-76rx2_openshift-machine-config-operator(6e3427a4-9a03-4a08-bf7f-7a5e96290ad6)\"" pod="openshift-machine-config-operator/machine-config-daemon-76rx2" podUID="6e3427a4-9a03-4a08-bf7f-7a5e96290ad6" Jan 28 19:38:12 crc kubenswrapper[4721]: I0128 19:38:12.529641 4721 scope.go:117] "RemoveContainer" containerID="cbcc37a94abf33c6e4e5c18dd7dea7199e37b457d8f91773d59e3a703a8cd690" Jan 28 19:38:12 crc kubenswrapper[4721]: E0128 19:38:12.530353 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-76rx2_openshift-machine-config-operator(6e3427a4-9a03-4a08-bf7f-7a5e96290ad6)\"" pod="openshift-machine-config-operator/machine-config-daemon-76rx2" podUID="6e3427a4-9a03-4a08-bf7f-7a5e96290ad6" Jan 28 19:38:21 crc kubenswrapper[4721]: I0128 19:38:21.727767 4721 scope.go:117] "RemoveContainer" containerID="9567f8bbf8b9036622a7b779077a53cc1189396a13b4583b2f673046ab01f190" Jan 28 19:38:23 crc kubenswrapper[4721]: I0128 19:38:23.529541 4721 scope.go:117] "RemoveContainer" containerID="cbcc37a94abf33c6e4e5c18dd7dea7199e37b457d8f91773d59e3a703a8cd690" Jan 28 19:38:23 crc kubenswrapper[4721]: E0128 19:38:23.530206 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-76rx2_openshift-machine-config-operator(6e3427a4-9a03-4a08-bf7f-7a5e96290ad6)\"" pod="openshift-machine-config-operator/machine-config-daemon-76rx2" podUID="6e3427a4-9a03-4a08-bf7f-7a5e96290ad6" Jan 28 19:38:36 crc kubenswrapper[4721]: I0128 19:38:36.529421 4721 scope.go:117] "RemoveContainer" containerID="cbcc37a94abf33c6e4e5c18dd7dea7199e37b457d8f91773d59e3a703a8cd690" Jan 28 19:38:37 crc kubenswrapper[4721]: I0128 19:38:37.308702 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-76rx2" event={"ID":"6e3427a4-9a03-4a08-bf7f-7a5e96290ad6","Type":"ContainerStarted","Data":"6fc19c281b122451304f84c70331e9669501c7cd9edf67a7e62e191bd85b6357"} Jan 28 19:39:29 crc kubenswrapper[4721]: I0128 19:39:29.710454 4721 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/swift-proxy-6895f7fb8c-vmmw7" podUID="078d9149-2986-4e6e-a8f4-c7535613a91d" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 502" Jan 28 19:40:04 crc kubenswrapper[4721]: I0128 19:40:04.343111 4721 generic.go:334] "Generic (PLEG): container finished" podID="a65ab672-e06c-477e-9826-b343a80c16bc" containerID="2d866b390d1b76dec9f4af78eedd58efb6770e34087c692d287a591c60e21133" exitCode=0 Jan 28 19:40:04 crc kubenswrapper[4721]: I0128 19:40:04.343226 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-ldp74/must-gather-lpvjh" event={"ID":"a65ab672-e06c-477e-9826-b343a80c16bc","Type":"ContainerDied","Data":"2d866b390d1b76dec9f4af78eedd58efb6770e34087c692d287a591c60e21133"} Jan 28 19:40:04 crc kubenswrapper[4721]: I0128 19:40:04.344707 4721 scope.go:117] "RemoveContainer" containerID="2d866b390d1b76dec9f4af78eedd58efb6770e34087c692d287a591c60e21133" Jan 28 19:40:05 crc kubenswrapper[4721]: I0128 19:40:05.363070 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-ldp74_must-gather-lpvjh_a65ab672-e06c-477e-9826-b343a80c16bc/gather/0.log" Jan 28 19:40:14 crc kubenswrapper[4721]: I0128 19:40:14.098234 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-ldp74/must-gather-lpvjh"] Jan 28 19:40:14 crc kubenswrapper[4721]: I0128 19:40:14.099102 4721 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-ldp74/must-gather-lpvjh" podUID="a65ab672-e06c-477e-9826-b343a80c16bc" containerName="copy" containerID="cri-o://ee59c4b5bb8c4395de5b82d66551c1049b451144900416f5944c4deaae21eef3" gracePeriod=2 Jan 28 19:40:14 crc kubenswrapper[4721]: I0128 19:40:14.125627 4721 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openshift-must-gather-ldp74/must-gather-lpvjh"] Jan 28 19:40:14 crc kubenswrapper[4721]: I0128 19:40:14.450511 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-ldp74_must-gather-lpvjh_a65ab672-e06c-477e-9826-b343a80c16bc/copy/0.log" Jan 28 19:40:14 crc kubenswrapper[4721]: I0128 19:40:14.450889 4721 generic.go:334] "Generic (PLEG): container finished" podID="a65ab672-e06c-477e-9826-b343a80c16bc" containerID="ee59c4b5bb8c4395de5b82d66551c1049b451144900416f5944c4deaae21eef3" exitCode=143 Jan 28 19:40:14 crc kubenswrapper[4721]: I0128 19:40:14.951226 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-ldp74_must-gather-lpvjh_a65ab672-e06c-477e-9826-b343a80c16bc/copy/0.log" Jan 28 19:40:14 crc kubenswrapper[4721]: I0128 19:40:14.952022 4721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-ldp74/must-gather-lpvjh" Jan 28 19:40:15 crc kubenswrapper[4721]: I0128 19:40:15.044933 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/a65ab672-e06c-477e-9826-b343a80c16bc-must-gather-output\") pod \"a65ab672-e06c-477e-9826-b343a80c16bc\" (UID: \"a65ab672-e06c-477e-9826-b343a80c16bc\") " Jan 28 19:40:15 crc kubenswrapper[4721]: I0128 19:40:15.045058 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cq6lh\" (UniqueName: \"kubernetes.io/projected/a65ab672-e06c-477e-9826-b343a80c16bc-kube-api-access-cq6lh\") pod \"a65ab672-e06c-477e-9826-b343a80c16bc\" (UID: \"a65ab672-e06c-477e-9826-b343a80c16bc\") " Jan 28 19:40:15 crc kubenswrapper[4721]: I0128 19:40:15.053313 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a65ab672-e06c-477e-9826-b343a80c16bc-kube-api-access-cq6lh" (OuterVolumeSpecName: "kube-api-access-cq6lh") pod "a65ab672-e06c-477e-9826-b343a80c16bc" (UID: "a65ab672-e06c-477e-9826-b343a80c16bc"). InnerVolumeSpecName "kube-api-access-cq6lh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 19:40:15 crc kubenswrapper[4721]: I0128 19:40:15.148786 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cq6lh\" (UniqueName: \"kubernetes.io/projected/a65ab672-e06c-477e-9826-b343a80c16bc-kube-api-access-cq6lh\") on node \"crc\" DevicePath \"\"" Jan 28 19:40:15 crc kubenswrapper[4721]: I0128 19:40:15.282137 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a65ab672-e06c-477e-9826-b343a80c16bc-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "a65ab672-e06c-477e-9826-b343a80c16bc" (UID: "a65ab672-e06c-477e-9826-b343a80c16bc"). InnerVolumeSpecName "must-gather-output". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 19:40:15 crc kubenswrapper[4721]: I0128 19:40:15.355130 4721 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/a65ab672-e06c-477e-9826-b343a80c16bc-must-gather-output\") on node \"crc\" DevicePath \"\"" Jan 28 19:40:15 crc kubenswrapper[4721]: I0128 19:40:15.466699 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-ldp74_must-gather-lpvjh_a65ab672-e06c-477e-9826-b343a80c16bc/copy/0.log" Jan 28 19:40:15 crc kubenswrapper[4721]: I0128 19:40:15.468654 4721 scope.go:117] "RemoveContainer" containerID="ee59c4b5bb8c4395de5b82d66551c1049b451144900416f5944c4deaae21eef3" Jan 28 19:40:15 crc kubenswrapper[4721]: I0128 19:40:15.468956 4721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-ldp74/must-gather-lpvjh" Jan 28 19:40:15 crc kubenswrapper[4721]: I0128 19:40:15.529964 4721 scope.go:117] "RemoveContainer" containerID="2d866b390d1b76dec9f4af78eedd58efb6770e34087c692d287a591c60e21133" Jan 28 19:40:15 crc kubenswrapper[4721]: I0128 19:40:15.551012 4721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a65ab672-e06c-477e-9826-b343a80c16bc" path="/var/lib/kubelet/pods/a65ab672-e06c-477e-9826-b343a80c16bc/volumes" Jan 28 19:40:21 crc kubenswrapper[4721]: I0128 19:40:21.899644 4721 scope.go:117] "RemoveContainer" containerID="356fb23de40512c1217a70147972dd7b68667df4af3816691da87c9f8e56e99f" Jan 28 19:41:01 crc kubenswrapper[4721]: I0128 19:41:01.224991 4721 patch_prober.go:28] interesting pod/machine-config-daemon-76rx2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 19:41:01 crc kubenswrapper[4721]: I0128 19:41:01.225706 4721 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-76rx2" podUID="6e3427a4-9a03-4a08-bf7f-7a5e96290ad6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 19:41:29 crc kubenswrapper[4721]: I0128 19:41:29.620429 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-62dvj"] Jan 28 19:41:29 crc kubenswrapper[4721]: E0128 19:41:29.621632 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a65ab672-e06c-477e-9826-b343a80c16bc" containerName="gather" Jan 28 19:41:29 crc kubenswrapper[4721]: I0128 19:41:29.621650 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="a65ab672-e06c-477e-9826-b343a80c16bc" containerName="gather" Jan 28 19:41:29 crc kubenswrapper[4721]: E0128 19:41:29.621690 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a65ab672-e06c-477e-9826-b343a80c16bc" containerName="copy" Jan 28 19:41:29 crc kubenswrapper[4721]: I0128 19:41:29.621697 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="a65ab672-e06c-477e-9826-b343a80c16bc" containerName="copy" Jan 28 19:41:29 crc kubenswrapper[4721]: E0128 19:41:29.621710 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b9ba602a-f6ca-4fb6-a1cc-7e5c90ff00ec" containerName="registry-server" Jan 28 19:41:29 crc kubenswrapper[4721]: I0128 19:41:29.621718 4721 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="b9ba602a-f6ca-4fb6-a1cc-7e5c90ff00ec" containerName="registry-server" Jan 28 19:41:29 crc kubenswrapper[4721]: E0128 19:41:29.621729 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b9ba602a-f6ca-4fb6-a1cc-7e5c90ff00ec" containerName="extract-content" Jan 28 19:41:29 crc kubenswrapper[4721]: I0128 19:41:29.621734 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="b9ba602a-f6ca-4fb6-a1cc-7e5c90ff00ec" containerName="extract-content" Jan 28 19:41:29 crc kubenswrapper[4721]: E0128 19:41:29.621748 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b9ba602a-f6ca-4fb6-a1cc-7e5c90ff00ec" containerName="extract-utilities" Jan 28 19:41:29 crc kubenswrapper[4721]: I0128 19:41:29.621753 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="b9ba602a-f6ca-4fb6-a1cc-7e5c90ff00ec" containerName="extract-utilities" Jan 28 19:41:29 crc kubenswrapper[4721]: I0128 19:41:29.621953 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="b9ba602a-f6ca-4fb6-a1cc-7e5c90ff00ec" containerName="registry-server" Jan 28 19:41:29 crc kubenswrapper[4721]: I0128 19:41:29.621988 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="a65ab672-e06c-477e-9826-b343a80c16bc" containerName="gather" Jan 28 19:41:29 crc kubenswrapper[4721]: I0128 19:41:29.622000 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="a65ab672-e06c-477e-9826-b343a80c16bc" containerName="copy" Jan 28 19:41:29 crc kubenswrapper[4721]: I0128 19:41:29.625177 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-62dvj" Jan 28 19:41:29 crc kubenswrapper[4721]: I0128 19:41:29.635609 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-62dvj"] Jan 28 19:41:29 crc kubenswrapper[4721]: I0128 19:41:29.802105 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2l7x2\" (UniqueName: \"kubernetes.io/projected/cf62c706-aaf8-4826-b2ab-30f3a943f546-kube-api-access-2l7x2\") pod \"redhat-operators-62dvj\" (UID: \"cf62c706-aaf8-4826-b2ab-30f3a943f546\") " pod="openshift-marketplace/redhat-operators-62dvj" Jan 28 19:41:29 crc kubenswrapper[4721]: I0128 19:41:29.802200 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cf62c706-aaf8-4826-b2ab-30f3a943f546-utilities\") pod \"redhat-operators-62dvj\" (UID: \"cf62c706-aaf8-4826-b2ab-30f3a943f546\") " pod="openshift-marketplace/redhat-operators-62dvj" Jan 28 19:41:29 crc kubenswrapper[4721]: I0128 19:41:29.803130 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cf62c706-aaf8-4826-b2ab-30f3a943f546-catalog-content\") pod \"redhat-operators-62dvj\" (UID: \"cf62c706-aaf8-4826-b2ab-30f3a943f546\") " pod="openshift-marketplace/redhat-operators-62dvj" Jan 28 19:41:29 crc kubenswrapper[4721]: I0128 19:41:29.905955 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cf62c706-aaf8-4826-b2ab-30f3a943f546-catalog-content\") pod \"redhat-operators-62dvj\" (UID: \"cf62c706-aaf8-4826-b2ab-30f3a943f546\") " pod="openshift-marketplace/redhat-operators-62dvj" Jan 28 19:41:29 crc kubenswrapper[4721]: I0128 19:41:29.906229 4721 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-2l7x2\" (UniqueName: \"kubernetes.io/projected/cf62c706-aaf8-4826-b2ab-30f3a943f546-kube-api-access-2l7x2\") pod \"redhat-operators-62dvj\" (UID: \"cf62c706-aaf8-4826-b2ab-30f3a943f546\") " pod="openshift-marketplace/redhat-operators-62dvj" Jan 28 19:41:29 crc kubenswrapper[4721]: I0128 19:41:29.906271 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cf62c706-aaf8-4826-b2ab-30f3a943f546-utilities\") pod \"redhat-operators-62dvj\" (UID: \"cf62c706-aaf8-4826-b2ab-30f3a943f546\") " pod="openshift-marketplace/redhat-operators-62dvj" Jan 28 19:41:29 crc kubenswrapper[4721]: I0128 19:41:29.906844 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cf62c706-aaf8-4826-b2ab-30f3a943f546-catalog-content\") pod \"redhat-operators-62dvj\" (UID: \"cf62c706-aaf8-4826-b2ab-30f3a943f546\") " pod="openshift-marketplace/redhat-operators-62dvj" Jan 28 19:41:29 crc kubenswrapper[4721]: I0128 19:41:29.906936 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cf62c706-aaf8-4826-b2ab-30f3a943f546-utilities\") pod \"redhat-operators-62dvj\" (UID: \"cf62c706-aaf8-4826-b2ab-30f3a943f546\") " pod="openshift-marketplace/redhat-operators-62dvj" Jan 28 19:41:29 crc kubenswrapper[4721]: I0128 19:41:29.934183 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2l7x2\" (UniqueName: \"kubernetes.io/projected/cf62c706-aaf8-4826-b2ab-30f3a943f546-kube-api-access-2l7x2\") pod \"redhat-operators-62dvj\" (UID: \"cf62c706-aaf8-4826-b2ab-30f3a943f546\") " pod="openshift-marketplace/redhat-operators-62dvj" Jan 28 19:41:29 crc kubenswrapper[4721]: I0128 19:41:29.949353 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-62dvj" Jan 28 19:41:30 crc kubenswrapper[4721]: I0128 19:41:30.505066 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-62dvj"] Jan 28 19:41:31 crc kubenswrapper[4721]: I0128 19:41:31.225118 4721 patch_prober.go:28] interesting pod/machine-config-daemon-76rx2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 19:41:31 crc kubenswrapper[4721]: I0128 19:41:31.225505 4721 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-76rx2" podUID="6e3427a4-9a03-4a08-bf7f-7a5e96290ad6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 19:41:31 crc kubenswrapper[4721]: I0128 19:41:31.261003 4721 generic.go:334] "Generic (PLEG): container finished" podID="cf62c706-aaf8-4826-b2ab-30f3a943f546" containerID="d3a0c10dee60430384fe4b22600d3d4f3ed94ac20f3429d789b65343174fb34d" exitCode=0 Jan 28 19:41:31 crc kubenswrapper[4721]: I0128 19:41:31.261063 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-62dvj" event={"ID":"cf62c706-aaf8-4826-b2ab-30f3a943f546","Type":"ContainerDied","Data":"d3a0c10dee60430384fe4b22600d3d4f3ed94ac20f3429d789b65343174fb34d"} Jan 28 19:41:31 crc kubenswrapper[4721]: I0128 19:41:31.261099 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-62dvj" event={"ID":"cf62c706-aaf8-4826-b2ab-30f3a943f546","Type":"ContainerStarted","Data":"52e4e11fc6fff911a1a1ecbc7a79d62b165b15fed384de23642be90455fb400e"} Jan 28 19:41:31 crc kubenswrapper[4721]: I0128 19:41:31.264647 4721 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 28 19:41:33 crc kubenswrapper[4721]: I0128 19:41:33.285107 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-62dvj" event={"ID":"cf62c706-aaf8-4826-b2ab-30f3a943f546","Type":"ContainerStarted","Data":"b786bd991066002c338ff8997f86b690d5a583ef37f732e12a1172f3041a86ba"} Jan 28 19:41:38 crc kubenswrapper[4721]: I0128 19:41:38.337688 4721 generic.go:334] "Generic (PLEG): container finished" podID="cf62c706-aaf8-4826-b2ab-30f3a943f546" containerID="b786bd991066002c338ff8997f86b690d5a583ef37f732e12a1172f3041a86ba" exitCode=0 Jan 28 19:41:38 crc kubenswrapper[4721]: I0128 19:41:38.337782 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-62dvj" event={"ID":"cf62c706-aaf8-4826-b2ab-30f3a943f546","Type":"ContainerDied","Data":"b786bd991066002c338ff8997f86b690d5a583ef37f732e12a1172f3041a86ba"} Jan 28 19:41:39 crc kubenswrapper[4721]: I0128 19:41:39.355901 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-62dvj" event={"ID":"cf62c706-aaf8-4826-b2ab-30f3a943f546","Type":"ContainerStarted","Data":"84b3f70c4fafb8abb3eaebbb69e2272b5106530d7dc8b9cf19c88d35ebbf58a6"} Jan 28 19:41:39 crc kubenswrapper[4721]: I0128 19:41:39.421654 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-62dvj" podStartSLOduration=2.620918011 podStartE2EDuration="10.421623467s" 
podCreationTimestamp="2026-01-28 19:41:29 +0000 UTC" firstStartedPulling="2026-01-28 19:41:31.264383478 +0000 UTC m=+4056.989689038" lastFinishedPulling="2026-01-28 19:41:39.065088934 +0000 UTC m=+4064.790394494" observedRunningTime="2026-01-28 19:41:39.397438764 +0000 UTC m=+4065.122744324" watchObservedRunningTime="2026-01-28 19:41:39.421623467 +0000 UTC m=+4065.146929027" Jan 28 19:41:39 crc kubenswrapper[4721]: I0128 19:41:39.950379 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-62dvj" Jan 28 19:41:39 crc kubenswrapper[4721]: I0128 19:41:39.950491 4721 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-62dvj" Jan 28 19:41:41 crc kubenswrapper[4721]: I0128 19:41:41.014681 4721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-62dvj" podUID="cf62c706-aaf8-4826-b2ab-30f3a943f546" containerName="registry-server" probeResult="failure" output=< Jan 28 19:41:41 crc kubenswrapper[4721]: timeout: failed to connect service ":50051" within 1s Jan 28 19:41:41 crc kubenswrapper[4721]: > Jan 28 19:41:49 crc kubenswrapper[4721]: I0128 19:41:49.996795 4721 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-62dvj" Jan 28 19:41:50 crc kubenswrapper[4721]: I0128 19:41:50.048358 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-62dvj" Jan 28 19:41:51 crc kubenswrapper[4721]: I0128 19:41:51.793702 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-62dvj"] Jan 28 19:41:51 crc kubenswrapper[4721]: I0128 19:41:51.794329 4721 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-62dvj" podUID="cf62c706-aaf8-4826-b2ab-30f3a943f546" containerName="registry-server" containerID="cri-o://84b3f70c4fafb8abb3eaebbb69e2272b5106530d7dc8b9cf19c88d35ebbf58a6" gracePeriod=2 Jan 28 19:41:52 crc kubenswrapper[4721]: I0128 19:41:52.486259 4721 generic.go:334] "Generic (PLEG): container finished" podID="cf62c706-aaf8-4826-b2ab-30f3a943f546" containerID="84b3f70c4fafb8abb3eaebbb69e2272b5106530d7dc8b9cf19c88d35ebbf58a6" exitCode=0 Jan 28 19:41:52 crc kubenswrapper[4721]: I0128 19:41:52.486357 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-62dvj" event={"ID":"cf62c706-aaf8-4826-b2ab-30f3a943f546","Type":"ContainerDied","Data":"84b3f70c4fafb8abb3eaebbb69e2272b5106530d7dc8b9cf19c88d35ebbf58a6"} Jan 28 19:41:52 crc kubenswrapper[4721]: I0128 19:41:52.653668 4721 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-62dvj" Jan 28 19:41:52 crc kubenswrapper[4721]: I0128 19:41:52.747845 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cf62c706-aaf8-4826-b2ab-30f3a943f546-catalog-content\") pod \"cf62c706-aaf8-4826-b2ab-30f3a943f546\" (UID: \"cf62c706-aaf8-4826-b2ab-30f3a943f546\") " Jan 28 19:41:52 crc kubenswrapper[4721]: I0128 19:41:52.748076 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2l7x2\" (UniqueName: \"kubernetes.io/projected/cf62c706-aaf8-4826-b2ab-30f3a943f546-kube-api-access-2l7x2\") pod \"cf62c706-aaf8-4826-b2ab-30f3a943f546\" (UID: \"cf62c706-aaf8-4826-b2ab-30f3a943f546\") " Jan 28 19:41:52 crc kubenswrapper[4721]: I0128 19:41:52.748186 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cf62c706-aaf8-4826-b2ab-30f3a943f546-utilities\") pod \"cf62c706-aaf8-4826-b2ab-30f3a943f546\" (UID: \"cf62c706-aaf8-4826-b2ab-30f3a943f546\") " Jan 28 19:41:52 crc kubenswrapper[4721]: I0128 19:41:52.749571 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cf62c706-aaf8-4826-b2ab-30f3a943f546-utilities" (OuterVolumeSpecName: "utilities") pod "cf62c706-aaf8-4826-b2ab-30f3a943f546" (UID: "cf62c706-aaf8-4826-b2ab-30f3a943f546"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 19:41:52 crc kubenswrapper[4721]: I0128 19:41:52.760932 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cf62c706-aaf8-4826-b2ab-30f3a943f546-kube-api-access-2l7x2" (OuterVolumeSpecName: "kube-api-access-2l7x2") pod "cf62c706-aaf8-4826-b2ab-30f3a943f546" (UID: "cf62c706-aaf8-4826-b2ab-30f3a943f546"). InnerVolumeSpecName "kube-api-access-2l7x2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 19:41:52 crc kubenswrapper[4721]: I0128 19:41:52.851243 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2l7x2\" (UniqueName: \"kubernetes.io/projected/cf62c706-aaf8-4826-b2ab-30f3a943f546-kube-api-access-2l7x2\") on node \"crc\" DevicePath \"\"" Jan 28 19:41:52 crc kubenswrapper[4721]: I0128 19:41:52.851281 4721 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cf62c706-aaf8-4826-b2ab-30f3a943f546-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 19:41:52 crc kubenswrapper[4721]: I0128 19:41:52.860710 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cf62c706-aaf8-4826-b2ab-30f3a943f546-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "cf62c706-aaf8-4826-b2ab-30f3a943f546" (UID: "cf62c706-aaf8-4826-b2ab-30f3a943f546"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 19:41:52 crc kubenswrapper[4721]: I0128 19:41:52.952959 4721 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cf62c706-aaf8-4826-b2ab-30f3a943f546-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 19:41:53 crc kubenswrapper[4721]: I0128 19:41:53.499864 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-62dvj" event={"ID":"cf62c706-aaf8-4826-b2ab-30f3a943f546","Type":"ContainerDied","Data":"52e4e11fc6fff911a1a1ecbc7a79d62b165b15fed384de23642be90455fb400e"} Jan 28 19:41:53 crc kubenswrapper[4721]: I0128 19:41:53.499941 4721 scope.go:117] "RemoveContainer" containerID="84b3f70c4fafb8abb3eaebbb69e2272b5106530d7dc8b9cf19c88d35ebbf58a6" Jan 28 19:41:53 crc kubenswrapper[4721]: I0128 19:41:53.500014 4721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-62dvj" Jan 28 19:41:53 crc kubenswrapper[4721]: I0128 19:41:53.521012 4721 scope.go:117] "RemoveContainer" containerID="b786bd991066002c338ff8997f86b690d5a583ef37f732e12a1172f3041a86ba" Jan 28 19:41:53 crc kubenswrapper[4721]: I0128 19:41:53.547318 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-62dvj"] Jan 28 19:41:53 crc kubenswrapper[4721]: I0128 19:41:53.550043 4721 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-62dvj"] Jan 28 19:41:53 crc kubenswrapper[4721]: I0128 19:41:53.691315 4721 scope.go:117] "RemoveContainer" containerID="d3a0c10dee60430384fe4b22600d3d4f3ed94ac20f3429d789b65343174fb34d" Jan 28 19:41:55 crc kubenswrapper[4721]: I0128 19:41:55.566984 4721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cf62c706-aaf8-4826-b2ab-30f3a943f546" path="/var/lib/kubelet/pods/cf62c706-aaf8-4826-b2ab-30f3a943f546/volumes" Jan 28 19:42:01 crc kubenswrapper[4721]: I0128 19:42:01.224745 4721 patch_prober.go:28] interesting pod/machine-config-daemon-76rx2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 19:42:01 crc kubenswrapper[4721]: I0128 19:42:01.225291 4721 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-76rx2" podUID="6e3427a4-9a03-4a08-bf7f-7a5e96290ad6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 19:42:01 crc kubenswrapper[4721]: I0128 19:42:01.225332 4721 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-76rx2" Jan 28 19:42:01 crc kubenswrapper[4721]: I0128 19:42:01.226029 4721 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"6fc19c281b122451304f84c70331e9669501c7cd9edf67a7e62e191bd85b6357"} pod="openshift-machine-config-operator/machine-config-daemon-76rx2" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 28 19:42:01 crc kubenswrapper[4721]: I0128 19:42:01.226077 4721 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-machine-config-operator/machine-config-daemon-76rx2" podUID="6e3427a4-9a03-4a08-bf7f-7a5e96290ad6" containerName="machine-config-daemon" containerID="cri-o://6fc19c281b122451304f84c70331e9669501c7cd9edf67a7e62e191bd85b6357" gracePeriod=600 Jan 28 19:42:01 crc kubenswrapper[4721]: I0128 19:42:01.580962 4721 generic.go:334] "Generic (PLEG): container finished" podID="6e3427a4-9a03-4a08-bf7f-7a5e96290ad6" containerID="6fc19c281b122451304f84c70331e9669501c7cd9edf67a7e62e191bd85b6357" exitCode=0 Jan 28 19:42:01 crc kubenswrapper[4721]: I0128 19:42:01.581577 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-76rx2" event={"ID":"6e3427a4-9a03-4a08-bf7f-7a5e96290ad6","Type":"ContainerDied","Data":"6fc19c281b122451304f84c70331e9669501c7cd9edf67a7e62e191bd85b6357"} Jan 28 19:42:01 crc kubenswrapper[4721]: I0128 19:42:01.581712 4721 scope.go:117] "RemoveContainer" containerID="cbcc37a94abf33c6e4e5c18dd7dea7199e37b457d8f91773d59e3a703a8cd690" Jan 28 19:42:02 crc kubenswrapper[4721]: I0128 19:42:02.592294 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-76rx2" event={"ID":"6e3427a4-9a03-4a08-bf7f-7a5e96290ad6","Type":"ContainerStarted","Data":"e25f338d1e1f7e9538daec152b64af6fcba9ff0b91b9ae135d4931beae2a0f97"} Jan 28 19:43:22 crc kubenswrapper[4721]: I0128 19:43:22.345834 4721 scope.go:117] "RemoveContainer" containerID="5fe2e5db997aec0f75fdbd1936de67d8f21ac5e8db2364a3b59366bc89b5e712" Jan 28 19:43:22 crc kubenswrapper[4721]: I0128 19:43:22.370959 4721 scope.go:117] "RemoveContainer" containerID="77b571de722c71ec4fc7f6251902a6b92a32ebac8820cd663959e28d13076680" Jan 28 19:43:22 crc kubenswrapper[4721]: I0128 19:43:22.433590 4721 scope.go:117] "RemoveContainer" containerID="fba05c6b1d88924ebbd30e508458593eeb0d86d719e124e7309dac4ba1ac35b2" Jan 28 19:43:40 crc kubenswrapper[4721]: I0128 19:43:40.203143 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-v5bj4/must-gather-cb9xb"] Jan 28 19:43:40 crc kubenswrapper[4721]: E0128 19:43:40.204430 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cf62c706-aaf8-4826-b2ab-30f3a943f546" containerName="extract-content" Jan 28 19:43:40 crc kubenswrapper[4721]: I0128 19:43:40.204452 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="cf62c706-aaf8-4826-b2ab-30f3a943f546" containerName="extract-content" Jan 28 19:43:40 crc kubenswrapper[4721]: E0128 19:43:40.204483 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cf62c706-aaf8-4826-b2ab-30f3a943f546" containerName="registry-server" Jan 28 19:43:40 crc kubenswrapper[4721]: I0128 19:43:40.204491 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="cf62c706-aaf8-4826-b2ab-30f3a943f546" containerName="registry-server" Jan 28 19:43:40 crc kubenswrapper[4721]: E0128 19:43:40.204507 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cf62c706-aaf8-4826-b2ab-30f3a943f546" containerName="extract-utilities" Jan 28 19:43:40 crc kubenswrapper[4721]: I0128 19:43:40.204518 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="cf62c706-aaf8-4826-b2ab-30f3a943f546" containerName="extract-utilities" Jan 28 19:43:40 crc kubenswrapper[4721]: I0128 19:43:40.204836 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="cf62c706-aaf8-4826-b2ab-30f3a943f546" containerName="registry-server" Jan 28 19:43:40 crc kubenswrapper[4721]: I0128 19:43:40.206362 4721 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-v5bj4/must-gather-cb9xb" Jan 28 19:43:40 crc kubenswrapper[4721]: I0128 19:43:40.215846 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-v5bj4"/"openshift-service-ca.crt" Jan 28 19:43:40 crc kubenswrapper[4721]: I0128 19:43:40.216129 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-v5bj4"/"default-dockercfg-pjcxp" Jan 28 19:43:40 crc kubenswrapper[4721]: I0128 19:43:40.216360 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-v5bj4"/"kube-root-ca.crt" Jan 28 19:43:40 crc kubenswrapper[4721]: I0128 19:43:40.257290 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-v5bj4/must-gather-cb9xb"] Jan 28 19:43:40 crc kubenswrapper[4721]: I0128 19:43:40.308833 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/cffa932a-996d-42ca-8f63-54e570ca5410-must-gather-output\") pod \"must-gather-cb9xb\" (UID: \"cffa932a-996d-42ca-8f63-54e570ca5410\") " pod="openshift-must-gather-v5bj4/must-gather-cb9xb" Jan 28 19:43:40 crc kubenswrapper[4721]: I0128 19:43:40.308919 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bl5m5\" (UniqueName: \"kubernetes.io/projected/cffa932a-996d-42ca-8f63-54e570ca5410-kube-api-access-bl5m5\") pod \"must-gather-cb9xb\" (UID: \"cffa932a-996d-42ca-8f63-54e570ca5410\") " pod="openshift-must-gather-v5bj4/must-gather-cb9xb" Jan 28 19:43:40 crc kubenswrapper[4721]: I0128 19:43:40.411110 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/cffa932a-996d-42ca-8f63-54e570ca5410-must-gather-output\") pod \"must-gather-cb9xb\" (UID: \"cffa932a-996d-42ca-8f63-54e570ca5410\") " pod="openshift-must-gather-v5bj4/must-gather-cb9xb" Jan 28 19:43:40 crc kubenswrapper[4721]: I0128 19:43:40.411233 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bl5m5\" (UniqueName: \"kubernetes.io/projected/cffa932a-996d-42ca-8f63-54e570ca5410-kube-api-access-bl5m5\") pod \"must-gather-cb9xb\" (UID: \"cffa932a-996d-42ca-8f63-54e570ca5410\") " pod="openshift-must-gather-v5bj4/must-gather-cb9xb" Jan 28 19:43:40 crc kubenswrapper[4721]: I0128 19:43:40.411641 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/cffa932a-996d-42ca-8f63-54e570ca5410-must-gather-output\") pod \"must-gather-cb9xb\" (UID: \"cffa932a-996d-42ca-8f63-54e570ca5410\") " pod="openshift-must-gather-v5bj4/must-gather-cb9xb" Jan 28 19:43:40 crc kubenswrapper[4721]: I0128 19:43:40.452415 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bl5m5\" (UniqueName: \"kubernetes.io/projected/cffa932a-996d-42ca-8f63-54e570ca5410-kube-api-access-bl5m5\") pod \"must-gather-cb9xb\" (UID: \"cffa932a-996d-42ca-8f63-54e570ca5410\") " pod="openshift-must-gather-v5bj4/must-gather-cb9xb" Jan 28 19:43:40 crc kubenswrapper[4721]: I0128 19:43:40.553593 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-v5bj4/must-gather-cb9xb" Jan 28 19:43:41 crc kubenswrapper[4721]: I0128 19:43:41.199845 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-v5bj4/must-gather-cb9xb"] Jan 28 19:43:41 crc kubenswrapper[4721]: I0128 19:43:41.734091 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-v5bj4/must-gather-cb9xb" event={"ID":"cffa932a-996d-42ca-8f63-54e570ca5410","Type":"ContainerStarted","Data":"a60a204575db6c284186dcda04f19157b335532310754011a270c45a65ec1db8"} Jan 28 19:43:41 crc kubenswrapper[4721]: I0128 19:43:41.734737 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-v5bj4/must-gather-cb9xb" event={"ID":"cffa932a-996d-42ca-8f63-54e570ca5410","Type":"ContainerStarted","Data":"8ef907124095e9fe78a3d65cfd3c1f66ef5506c6934bb40e9a92d5291b30271b"} Jan 28 19:43:42 crc kubenswrapper[4721]: I0128 19:43:42.748217 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-v5bj4/must-gather-cb9xb" event={"ID":"cffa932a-996d-42ca-8f63-54e570ca5410","Type":"ContainerStarted","Data":"440d57d2d6c173e59ef541ac66425624886f55defa73229a19a6369c6f97650b"} Jan 28 19:43:42 crc kubenswrapper[4721]: I0128 19:43:42.771874 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-v5bj4/must-gather-cb9xb" podStartSLOduration=2.77184274 podStartE2EDuration="2.77184274s" podCreationTimestamp="2026-01-28 19:43:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 19:43:42.760776623 +0000 UTC m=+4188.486082193" watchObservedRunningTime="2026-01-28 19:43:42.77184274 +0000 UTC m=+4188.497148300" Jan 28 19:43:47 crc kubenswrapper[4721]: I0128 19:43:47.619866 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-v5bj4/crc-debug-rcmnr"] Jan 28 19:43:47 crc kubenswrapper[4721]: I0128 19:43:47.622556 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-v5bj4/crc-debug-rcmnr" Jan 28 19:43:47 crc kubenswrapper[4721]: I0128 19:43:47.702754 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/1670edd1-98ca-42c3-843c-9da12aa74c27-host\") pod \"crc-debug-rcmnr\" (UID: \"1670edd1-98ca-42c3-843c-9da12aa74c27\") " pod="openshift-must-gather-v5bj4/crc-debug-rcmnr" Jan 28 19:43:47 crc kubenswrapper[4721]: I0128 19:43:47.703065 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qqfw8\" (UniqueName: \"kubernetes.io/projected/1670edd1-98ca-42c3-843c-9da12aa74c27-kube-api-access-qqfw8\") pod \"crc-debug-rcmnr\" (UID: \"1670edd1-98ca-42c3-843c-9da12aa74c27\") " pod="openshift-must-gather-v5bj4/crc-debug-rcmnr" Jan 28 19:43:47 crc kubenswrapper[4721]: I0128 19:43:47.805539 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qqfw8\" (UniqueName: \"kubernetes.io/projected/1670edd1-98ca-42c3-843c-9da12aa74c27-kube-api-access-qqfw8\") pod \"crc-debug-rcmnr\" (UID: \"1670edd1-98ca-42c3-843c-9da12aa74c27\") " pod="openshift-must-gather-v5bj4/crc-debug-rcmnr" Jan 28 19:43:47 crc kubenswrapper[4721]: I0128 19:43:47.805706 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/1670edd1-98ca-42c3-843c-9da12aa74c27-host\") pod \"crc-debug-rcmnr\" (UID: \"1670edd1-98ca-42c3-843c-9da12aa74c27\") " pod="openshift-must-gather-v5bj4/crc-debug-rcmnr" Jan 28 19:43:47 crc kubenswrapper[4721]: I0128 19:43:47.805825 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/1670edd1-98ca-42c3-843c-9da12aa74c27-host\") pod \"crc-debug-rcmnr\" (UID: \"1670edd1-98ca-42c3-843c-9da12aa74c27\") " pod="openshift-must-gather-v5bj4/crc-debug-rcmnr" Jan 28 19:43:47 crc kubenswrapper[4721]: I0128 19:43:47.827645 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qqfw8\" (UniqueName: \"kubernetes.io/projected/1670edd1-98ca-42c3-843c-9da12aa74c27-kube-api-access-qqfw8\") pod \"crc-debug-rcmnr\" (UID: \"1670edd1-98ca-42c3-843c-9da12aa74c27\") " pod="openshift-must-gather-v5bj4/crc-debug-rcmnr" Jan 28 19:43:47 crc kubenswrapper[4721]: I0128 19:43:47.944649 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-v5bj4/crc-debug-rcmnr" Jan 28 19:43:48 crc kubenswrapper[4721]: I0128 19:43:48.834720 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-v5bj4/crc-debug-rcmnr" event={"ID":"1670edd1-98ca-42c3-843c-9da12aa74c27","Type":"ContainerStarted","Data":"99eccd6daae0c40ef7f6a930d6ca38b6eb0370ade1af6e062ecff979e4629691"} Jan 28 19:43:48 crc kubenswrapper[4721]: I0128 19:43:48.835410 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-v5bj4/crc-debug-rcmnr" event={"ID":"1670edd1-98ca-42c3-843c-9da12aa74c27","Type":"ContainerStarted","Data":"2b02e0b3330c98b5044c498fc707f72a4fa6b428e925fb8e548fa6d497abae73"} Jan 28 19:43:48 crc kubenswrapper[4721]: I0128 19:43:48.857358 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-v5bj4/crc-debug-rcmnr" podStartSLOduration=1.857308267 podStartE2EDuration="1.857308267s" podCreationTimestamp="2026-01-28 19:43:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 19:43:48.851820615 +0000 UTC m=+4194.577126185" watchObservedRunningTime="2026-01-28 19:43:48.857308267 +0000 UTC m=+4194.582613847" Jan 28 19:44:01 crc kubenswrapper[4721]: I0128 19:44:01.224877 4721 patch_prober.go:28] interesting pod/machine-config-daemon-76rx2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 19:44:01 crc kubenswrapper[4721]: I0128 19:44:01.225588 4721 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-76rx2" podUID="6e3427a4-9a03-4a08-bf7f-7a5e96290ad6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 19:44:31 crc kubenswrapper[4721]: I0128 19:44:31.225104 4721 patch_prober.go:28] interesting pod/machine-config-daemon-76rx2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 19:44:31 crc kubenswrapper[4721]: I0128 19:44:31.225787 4721 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-76rx2" podUID="6e3427a4-9a03-4a08-bf7f-7a5e96290ad6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 19:44:50 crc kubenswrapper[4721]: I0128 19:44:50.524503 4721 generic.go:334] "Generic (PLEG): container finished" podID="1670edd1-98ca-42c3-843c-9da12aa74c27" containerID="99eccd6daae0c40ef7f6a930d6ca38b6eb0370ade1af6e062ecff979e4629691" exitCode=0 Jan 28 19:44:50 crc kubenswrapper[4721]: I0128 19:44:50.524589 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-v5bj4/crc-debug-rcmnr" event={"ID":"1670edd1-98ca-42c3-843c-9da12aa74c27","Type":"ContainerDied","Data":"99eccd6daae0c40ef7f6a930d6ca38b6eb0370ade1af6e062ecff979e4629691"} Jan 28 19:44:51 crc kubenswrapper[4721]: I0128 19:44:51.678668 4721 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-v5bj4/crc-debug-rcmnr" Jan 28 19:44:51 crc kubenswrapper[4721]: I0128 19:44:51.723375 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-v5bj4/crc-debug-rcmnr"] Jan 28 19:44:51 crc kubenswrapper[4721]: I0128 19:44:51.735329 4721 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-v5bj4/crc-debug-rcmnr"] Jan 28 19:44:51 crc kubenswrapper[4721]: I0128 19:44:51.847620 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/1670edd1-98ca-42c3-843c-9da12aa74c27-host\") pod \"1670edd1-98ca-42c3-843c-9da12aa74c27\" (UID: \"1670edd1-98ca-42c3-843c-9da12aa74c27\") " Jan 28 19:44:51 crc kubenswrapper[4721]: I0128 19:44:51.847980 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qqfw8\" (UniqueName: \"kubernetes.io/projected/1670edd1-98ca-42c3-843c-9da12aa74c27-kube-api-access-qqfw8\") pod \"1670edd1-98ca-42c3-843c-9da12aa74c27\" (UID: \"1670edd1-98ca-42c3-843c-9da12aa74c27\") " Jan 28 19:44:51 crc kubenswrapper[4721]: I0128 19:44:51.847784 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1670edd1-98ca-42c3-843c-9da12aa74c27-host" (OuterVolumeSpecName: "host") pod "1670edd1-98ca-42c3-843c-9da12aa74c27" (UID: "1670edd1-98ca-42c3-843c-9da12aa74c27"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 19:44:51 crc kubenswrapper[4721]: I0128 19:44:51.848926 4721 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/1670edd1-98ca-42c3-843c-9da12aa74c27-host\") on node \"crc\" DevicePath \"\"" Jan 28 19:44:51 crc kubenswrapper[4721]: I0128 19:44:51.864338 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1670edd1-98ca-42c3-843c-9da12aa74c27-kube-api-access-qqfw8" (OuterVolumeSpecName: "kube-api-access-qqfw8") pod "1670edd1-98ca-42c3-843c-9da12aa74c27" (UID: "1670edd1-98ca-42c3-843c-9da12aa74c27"). InnerVolumeSpecName "kube-api-access-qqfw8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 19:44:51 crc kubenswrapper[4721]: I0128 19:44:51.957873 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qqfw8\" (UniqueName: \"kubernetes.io/projected/1670edd1-98ca-42c3-843c-9da12aa74c27-kube-api-access-qqfw8\") on node \"crc\" DevicePath \"\"" Jan 28 19:44:52 crc kubenswrapper[4721]: I0128 19:44:52.554922 4721 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2b02e0b3330c98b5044c498fc707f72a4fa6b428e925fb8e548fa6d497abae73" Jan 28 19:44:52 crc kubenswrapper[4721]: I0128 19:44:52.555014 4721 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-v5bj4/crc-debug-rcmnr" Jan 28 19:44:53 crc kubenswrapper[4721]: I0128 19:44:53.008604 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-v5bj4/crc-debug-pclzd"] Jan 28 19:44:53 crc kubenswrapper[4721]: E0128 19:44:53.009210 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1670edd1-98ca-42c3-843c-9da12aa74c27" containerName="container-00" Jan 28 19:44:53 crc kubenswrapper[4721]: I0128 19:44:53.009230 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="1670edd1-98ca-42c3-843c-9da12aa74c27" containerName="container-00" Jan 28 19:44:53 crc kubenswrapper[4721]: I0128 19:44:53.009674 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="1670edd1-98ca-42c3-843c-9da12aa74c27" containerName="container-00" Jan 28 19:44:53 crc kubenswrapper[4721]: I0128 19:44:53.010737 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-v5bj4/crc-debug-pclzd" Jan 28 19:44:53 crc kubenswrapper[4721]: I0128 19:44:53.185293 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hbbzh\" (UniqueName: \"kubernetes.io/projected/bef8cf65-2d43-4339-b38a-2b523b8a182f-kube-api-access-hbbzh\") pod \"crc-debug-pclzd\" (UID: \"bef8cf65-2d43-4339-b38a-2b523b8a182f\") " pod="openshift-must-gather-v5bj4/crc-debug-pclzd" Jan 28 19:44:53 crc kubenswrapper[4721]: I0128 19:44:53.185340 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/bef8cf65-2d43-4339-b38a-2b523b8a182f-host\") pod \"crc-debug-pclzd\" (UID: \"bef8cf65-2d43-4339-b38a-2b523b8a182f\") " pod="openshift-must-gather-v5bj4/crc-debug-pclzd" Jan 28 19:44:53 crc kubenswrapper[4721]: I0128 19:44:53.288142 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hbbzh\" (UniqueName: \"kubernetes.io/projected/bef8cf65-2d43-4339-b38a-2b523b8a182f-kube-api-access-hbbzh\") pod \"crc-debug-pclzd\" (UID: \"bef8cf65-2d43-4339-b38a-2b523b8a182f\") " pod="openshift-must-gather-v5bj4/crc-debug-pclzd" Jan 28 19:44:53 crc kubenswrapper[4721]: I0128 19:44:53.288224 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/bef8cf65-2d43-4339-b38a-2b523b8a182f-host\") pod \"crc-debug-pclzd\" (UID: \"bef8cf65-2d43-4339-b38a-2b523b8a182f\") " pod="openshift-must-gather-v5bj4/crc-debug-pclzd" Jan 28 19:44:53 crc kubenswrapper[4721]: I0128 19:44:53.288448 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/bef8cf65-2d43-4339-b38a-2b523b8a182f-host\") pod \"crc-debug-pclzd\" (UID: \"bef8cf65-2d43-4339-b38a-2b523b8a182f\") " pod="openshift-must-gather-v5bj4/crc-debug-pclzd" Jan 28 19:44:53 crc kubenswrapper[4721]: I0128 19:44:53.321876 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hbbzh\" (UniqueName: \"kubernetes.io/projected/bef8cf65-2d43-4339-b38a-2b523b8a182f-kube-api-access-hbbzh\") pod \"crc-debug-pclzd\" (UID: \"bef8cf65-2d43-4339-b38a-2b523b8a182f\") " pod="openshift-must-gather-v5bj4/crc-debug-pclzd" Jan 28 19:44:53 crc kubenswrapper[4721]: I0128 19:44:53.333974 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-v5bj4/crc-debug-pclzd" Jan 28 19:44:53 crc kubenswrapper[4721]: I0128 19:44:53.541386 4721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1670edd1-98ca-42c3-843c-9da12aa74c27" path="/var/lib/kubelet/pods/1670edd1-98ca-42c3-843c-9da12aa74c27/volumes" Jan 28 19:44:53 crc kubenswrapper[4721]: I0128 19:44:53.566080 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-v5bj4/crc-debug-pclzd" event={"ID":"bef8cf65-2d43-4339-b38a-2b523b8a182f","Type":"ContainerStarted","Data":"a5945c778debedc02876bd1e4b0d824beb1953f0e141802fbb68b5d9fe9bd8d4"} Jan 28 19:44:54 crc kubenswrapper[4721]: I0128 19:44:54.578493 4721 generic.go:334] "Generic (PLEG): container finished" podID="bef8cf65-2d43-4339-b38a-2b523b8a182f" containerID="460ab02faf9bbfba1bdcd77781963e9e460b3fd00435e7802459e469e5c85df2" exitCode=0 Jan 28 19:44:54 crc kubenswrapper[4721]: I0128 19:44:54.578572 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-v5bj4/crc-debug-pclzd" event={"ID":"bef8cf65-2d43-4339-b38a-2b523b8a182f","Type":"ContainerDied","Data":"460ab02faf9bbfba1bdcd77781963e9e460b3fd00435e7802459e469e5c85df2"} Jan 28 19:44:55 crc kubenswrapper[4721]: I0128 19:44:55.741328 4721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-v5bj4/crc-debug-pclzd" Jan 28 19:44:55 crc kubenswrapper[4721]: I0128 19:44:55.854898 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hbbzh\" (UniqueName: \"kubernetes.io/projected/bef8cf65-2d43-4339-b38a-2b523b8a182f-kube-api-access-hbbzh\") pod \"bef8cf65-2d43-4339-b38a-2b523b8a182f\" (UID: \"bef8cf65-2d43-4339-b38a-2b523b8a182f\") " Jan 28 19:44:55 crc kubenswrapper[4721]: I0128 19:44:55.855118 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/bef8cf65-2d43-4339-b38a-2b523b8a182f-host\") pod \"bef8cf65-2d43-4339-b38a-2b523b8a182f\" (UID: \"bef8cf65-2d43-4339-b38a-2b523b8a182f\") " Jan 28 19:44:55 crc kubenswrapper[4721]: I0128 19:44:55.855763 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bef8cf65-2d43-4339-b38a-2b523b8a182f-host" (OuterVolumeSpecName: "host") pod "bef8cf65-2d43-4339-b38a-2b523b8a182f" (UID: "bef8cf65-2d43-4339-b38a-2b523b8a182f"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 19:44:55 crc kubenswrapper[4721]: I0128 19:44:55.864454 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bef8cf65-2d43-4339-b38a-2b523b8a182f-kube-api-access-hbbzh" (OuterVolumeSpecName: "kube-api-access-hbbzh") pod "bef8cf65-2d43-4339-b38a-2b523b8a182f" (UID: "bef8cf65-2d43-4339-b38a-2b523b8a182f"). InnerVolumeSpecName "kube-api-access-hbbzh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 19:44:55 crc kubenswrapper[4721]: I0128 19:44:55.957500 4721 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/bef8cf65-2d43-4339-b38a-2b523b8a182f-host\") on node \"crc\" DevicePath \"\"" Jan 28 19:44:55 crc kubenswrapper[4721]: I0128 19:44:55.957537 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hbbzh\" (UniqueName: \"kubernetes.io/projected/bef8cf65-2d43-4339-b38a-2b523b8a182f-kube-api-access-hbbzh\") on node \"crc\" DevicePath \"\"" Jan 28 19:44:56 crc kubenswrapper[4721]: I0128 19:44:56.608234 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-v5bj4/crc-debug-pclzd" event={"ID":"bef8cf65-2d43-4339-b38a-2b523b8a182f","Type":"ContainerDied","Data":"a5945c778debedc02876bd1e4b0d824beb1953f0e141802fbb68b5d9fe9bd8d4"} Jan 28 19:44:56 crc kubenswrapper[4721]: I0128 19:44:56.608290 4721 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a5945c778debedc02876bd1e4b0d824beb1953f0e141802fbb68b5d9fe9bd8d4" Jan 28 19:44:56 crc kubenswrapper[4721]: I0128 19:44:56.608367 4721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-v5bj4/crc-debug-pclzd" Jan 28 19:44:57 crc kubenswrapper[4721]: I0128 19:44:57.174830 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-v5bj4/crc-debug-pclzd"] Jan 28 19:44:57 crc kubenswrapper[4721]: I0128 19:44:57.193303 4721 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-v5bj4/crc-debug-pclzd"] Jan 28 19:44:57 crc kubenswrapper[4721]: I0128 19:44:57.541714 4721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bef8cf65-2d43-4339-b38a-2b523b8a182f" path="/var/lib/kubelet/pods/bef8cf65-2d43-4339-b38a-2b523b8a182f/volumes" Jan 28 19:44:59 crc kubenswrapper[4721]: I0128 19:44:59.299064 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-v5bj4/crc-debug-gstb8"] Jan 28 19:44:59 crc kubenswrapper[4721]: E0128 19:44:59.299892 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bef8cf65-2d43-4339-b38a-2b523b8a182f" containerName="container-00" Jan 28 19:44:59 crc kubenswrapper[4721]: I0128 19:44:59.299910 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="bef8cf65-2d43-4339-b38a-2b523b8a182f" containerName="container-00" Jan 28 19:44:59 crc kubenswrapper[4721]: I0128 19:44:59.300228 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="bef8cf65-2d43-4339-b38a-2b523b8a182f" containerName="container-00" Jan 28 19:44:59 crc kubenswrapper[4721]: I0128 19:44:59.301288 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-v5bj4/crc-debug-gstb8" Jan 28 19:44:59 crc kubenswrapper[4721]: I0128 19:44:59.441525 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/4b502569-ba43-46c0-95a5-aace66c7cdd0-host\") pod \"crc-debug-gstb8\" (UID: \"4b502569-ba43-46c0-95a5-aace66c7cdd0\") " pod="openshift-must-gather-v5bj4/crc-debug-gstb8" Jan 28 19:44:59 crc kubenswrapper[4721]: I0128 19:44:59.441596 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-chsvg\" (UniqueName: \"kubernetes.io/projected/4b502569-ba43-46c0-95a5-aace66c7cdd0-kube-api-access-chsvg\") pod \"crc-debug-gstb8\" (UID: \"4b502569-ba43-46c0-95a5-aace66c7cdd0\") " pod="openshift-must-gather-v5bj4/crc-debug-gstb8" Jan 28 19:44:59 crc kubenswrapper[4721]: I0128 19:44:59.543772 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/4b502569-ba43-46c0-95a5-aace66c7cdd0-host\") pod \"crc-debug-gstb8\" (UID: \"4b502569-ba43-46c0-95a5-aace66c7cdd0\") " pod="openshift-must-gather-v5bj4/crc-debug-gstb8" Jan 28 19:44:59 crc kubenswrapper[4721]: I0128 19:44:59.543848 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-chsvg\" (UniqueName: \"kubernetes.io/projected/4b502569-ba43-46c0-95a5-aace66c7cdd0-kube-api-access-chsvg\") pod \"crc-debug-gstb8\" (UID: \"4b502569-ba43-46c0-95a5-aace66c7cdd0\") " pod="openshift-must-gather-v5bj4/crc-debug-gstb8" Jan 28 19:44:59 crc kubenswrapper[4721]: I0128 19:44:59.543920 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/4b502569-ba43-46c0-95a5-aace66c7cdd0-host\") pod \"crc-debug-gstb8\" (UID: \"4b502569-ba43-46c0-95a5-aace66c7cdd0\") " pod="openshift-must-gather-v5bj4/crc-debug-gstb8" Jan 28 19:44:59 crc kubenswrapper[4721]: I0128 19:44:59.570428 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-chsvg\" (UniqueName: \"kubernetes.io/projected/4b502569-ba43-46c0-95a5-aace66c7cdd0-kube-api-access-chsvg\") pod \"crc-debug-gstb8\" (UID: \"4b502569-ba43-46c0-95a5-aace66c7cdd0\") " pod="openshift-must-gather-v5bj4/crc-debug-gstb8" Jan 28 19:44:59 crc kubenswrapper[4721]: I0128 19:44:59.627684 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-v5bj4/crc-debug-gstb8" Jan 28 19:45:00 crc kubenswrapper[4721]: I0128 19:45:00.202420 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29493825-jq6jf"] Jan 28 19:45:00 crc kubenswrapper[4721]: I0128 19:45:00.205224 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29493825-jq6jf" Jan 28 19:45:00 crc kubenswrapper[4721]: I0128 19:45:00.207960 4721 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 28 19:45:00 crc kubenswrapper[4721]: I0128 19:45:00.210389 4721 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 28 19:45:00 crc kubenswrapper[4721]: I0128 19:45:00.216023 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29493825-jq6jf"] Jan 28 19:45:00 crc kubenswrapper[4721]: I0128 19:45:00.270109 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nkchx\" (UniqueName: \"kubernetes.io/projected/6917331e-4d06-44fe-89be-58526a8f9b6d-kube-api-access-nkchx\") pod \"collect-profiles-29493825-jq6jf\" (UID: \"6917331e-4d06-44fe-89be-58526a8f9b6d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493825-jq6jf" Jan 28 19:45:00 crc kubenswrapper[4721]: I0128 19:45:00.270626 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6917331e-4d06-44fe-89be-58526a8f9b6d-secret-volume\") pod \"collect-profiles-29493825-jq6jf\" (UID: \"6917331e-4d06-44fe-89be-58526a8f9b6d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493825-jq6jf" Jan 28 19:45:00 crc kubenswrapper[4721]: I0128 19:45:00.271011 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6917331e-4d06-44fe-89be-58526a8f9b6d-config-volume\") pod \"collect-profiles-29493825-jq6jf\" (UID: \"6917331e-4d06-44fe-89be-58526a8f9b6d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493825-jq6jf" Jan 28 19:45:00 crc kubenswrapper[4721]: I0128 19:45:00.373857 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6917331e-4d06-44fe-89be-58526a8f9b6d-config-volume\") pod \"collect-profiles-29493825-jq6jf\" (UID: \"6917331e-4d06-44fe-89be-58526a8f9b6d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493825-jq6jf" Jan 28 19:45:00 crc kubenswrapper[4721]: I0128 19:45:00.374738 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nkchx\" (UniqueName: \"kubernetes.io/projected/6917331e-4d06-44fe-89be-58526a8f9b6d-kube-api-access-nkchx\") pod \"collect-profiles-29493825-jq6jf\" (UID: \"6917331e-4d06-44fe-89be-58526a8f9b6d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493825-jq6jf" Jan 28 19:45:00 crc kubenswrapper[4721]: I0128 19:45:00.374850 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6917331e-4d06-44fe-89be-58526a8f9b6d-secret-volume\") pod \"collect-profiles-29493825-jq6jf\" (UID: \"6917331e-4d06-44fe-89be-58526a8f9b6d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493825-jq6jf" Jan 28 19:45:00 crc kubenswrapper[4721]: I0128 19:45:00.375076 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6917331e-4d06-44fe-89be-58526a8f9b6d-config-volume\") pod 
\"collect-profiles-29493825-jq6jf\" (UID: \"6917331e-4d06-44fe-89be-58526a8f9b6d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493825-jq6jf" Jan 28 19:45:00 crc kubenswrapper[4721]: I0128 19:45:00.381653 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6917331e-4d06-44fe-89be-58526a8f9b6d-secret-volume\") pod \"collect-profiles-29493825-jq6jf\" (UID: \"6917331e-4d06-44fe-89be-58526a8f9b6d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493825-jq6jf" Jan 28 19:45:00 crc kubenswrapper[4721]: I0128 19:45:00.395952 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nkchx\" (UniqueName: \"kubernetes.io/projected/6917331e-4d06-44fe-89be-58526a8f9b6d-kube-api-access-nkchx\") pod \"collect-profiles-29493825-jq6jf\" (UID: \"6917331e-4d06-44fe-89be-58526a8f9b6d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493825-jq6jf" Jan 28 19:45:00 crc kubenswrapper[4721]: I0128 19:45:00.527983 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29493825-jq6jf" Jan 28 19:45:00 crc kubenswrapper[4721]: I0128 19:45:00.699423 4721 generic.go:334] "Generic (PLEG): container finished" podID="4b502569-ba43-46c0-95a5-aace66c7cdd0" containerID="5a8f6b8296e64ed57e93d3cb4046ec44856b697d89cc5510c1cb3f66e2ba4525" exitCode=0 Jan 28 19:45:00 crc kubenswrapper[4721]: I0128 19:45:00.699803 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-v5bj4/crc-debug-gstb8" event={"ID":"4b502569-ba43-46c0-95a5-aace66c7cdd0","Type":"ContainerDied","Data":"5a8f6b8296e64ed57e93d3cb4046ec44856b697d89cc5510c1cb3f66e2ba4525"} Jan 28 19:45:00 crc kubenswrapper[4721]: I0128 19:45:00.699839 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-v5bj4/crc-debug-gstb8" event={"ID":"4b502569-ba43-46c0-95a5-aace66c7cdd0","Type":"ContainerStarted","Data":"285129a5370fb6b2396b0a501b3de25e54d6c32ee0f84a7dc3e0e9dbc79f0ffe"} Jan 28 19:45:00 crc kubenswrapper[4721]: I0128 19:45:00.784115 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-v5bj4/crc-debug-gstb8"] Jan 28 19:45:00 crc kubenswrapper[4721]: I0128 19:45:00.802545 4721 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-v5bj4/crc-debug-gstb8"] Jan 28 19:45:01 crc kubenswrapper[4721]: W0128 19:45:01.050030 4721 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6917331e_4d06_44fe_89be_58526a8f9b6d.slice/crio-99deb0f4a5b8af31a5bade786cafaea9b9d96fe3e59ffd17f8fe50ceb5fe6e43 WatchSource:0}: Error finding container 99deb0f4a5b8af31a5bade786cafaea9b9d96fe3e59ffd17f8fe50ceb5fe6e43: Status 404 returned error can't find the container with id 99deb0f4a5b8af31a5bade786cafaea9b9d96fe3e59ffd17f8fe50ceb5fe6e43 Jan 28 19:45:01 crc kubenswrapper[4721]: I0128 19:45:01.055585 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29493825-jq6jf"] Jan 28 19:45:01 crc kubenswrapper[4721]: I0128 19:45:01.224832 4721 patch_prober.go:28] interesting pod/machine-config-daemon-76rx2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 19:45:01 crc 
kubenswrapper[4721]: I0128 19:45:01.225252 4721 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-76rx2" podUID="6e3427a4-9a03-4a08-bf7f-7a5e96290ad6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 19:45:01 crc kubenswrapper[4721]: I0128 19:45:01.225309 4721 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-76rx2" Jan 28 19:45:01 crc kubenswrapper[4721]: I0128 19:45:01.226697 4721 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"e25f338d1e1f7e9538daec152b64af6fcba9ff0b91b9ae135d4931beae2a0f97"} pod="openshift-machine-config-operator/machine-config-daemon-76rx2" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 28 19:45:01 crc kubenswrapper[4721]: I0128 19:45:01.226769 4721 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-76rx2" podUID="6e3427a4-9a03-4a08-bf7f-7a5e96290ad6" containerName="machine-config-daemon" containerID="cri-o://e25f338d1e1f7e9538daec152b64af6fcba9ff0b91b9ae135d4931beae2a0f97" gracePeriod=600 Jan 28 19:45:01 crc kubenswrapper[4721]: E0128 19:45:01.355789 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-76rx2_openshift-machine-config-operator(6e3427a4-9a03-4a08-bf7f-7a5e96290ad6)\"" pod="openshift-machine-config-operator/machine-config-daemon-76rx2" podUID="6e3427a4-9a03-4a08-bf7f-7a5e96290ad6" Jan 28 19:45:01 crc kubenswrapper[4721]: E0128 19:45:01.421751 4721 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6e3427a4_9a03_4a08_bf7f_7a5e96290ad6.slice/crio-e25f338d1e1f7e9538daec152b64af6fcba9ff0b91b9ae135d4931beae2a0f97.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6e3427a4_9a03_4a08_bf7f_7a5e96290ad6.slice/crio-conmon-e25f338d1e1f7e9538daec152b64af6fcba9ff0b91b9ae135d4931beae2a0f97.scope\": RecentStats: unable to find data in memory cache]" Jan 28 19:45:01 crc kubenswrapper[4721]: I0128 19:45:01.716547 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29493825-jq6jf" event={"ID":"6917331e-4d06-44fe-89be-58526a8f9b6d","Type":"ContainerStarted","Data":"59332ca57292d483824ca33c6513722fb15c182c0ee29e0712340a4ed88c6dba"} Jan 28 19:45:01 crc kubenswrapper[4721]: I0128 19:45:01.716874 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29493825-jq6jf" event={"ID":"6917331e-4d06-44fe-89be-58526a8f9b6d","Type":"ContainerStarted","Data":"99deb0f4a5b8af31a5bade786cafaea9b9d96fe3e59ffd17f8fe50ceb5fe6e43"} Jan 28 19:45:01 crc kubenswrapper[4721]: I0128 19:45:01.719274 4721 generic.go:334] "Generic (PLEG): container finished" podID="6e3427a4-9a03-4a08-bf7f-7a5e96290ad6" containerID="e25f338d1e1f7e9538daec152b64af6fcba9ff0b91b9ae135d4931beae2a0f97" exitCode=0 Jan 28 19:45:01 crc kubenswrapper[4721]: I0128 
19:45:01.719395 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-76rx2" event={"ID":"6e3427a4-9a03-4a08-bf7f-7a5e96290ad6","Type":"ContainerDied","Data":"e25f338d1e1f7e9538daec152b64af6fcba9ff0b91b9ae135d4931beae2a0f97"} Jan 28 19:45:01 crc kubenswrapper[4721]: I0128 19:45:01.719516 4721 scope.go:117] "RemoveContainer" containerID="6fc19c281b122451304f84c70331e9669501c7cd9edf67a7e62e191bd85b6357" Jan 28 19:45:01 crc kubenswrapper[4721]: I0128 19:45:01.720034 4721 scope.go:117] "RemoveContainer" containerID="e25f338d1e1f7e9538daec152b64af6fcba9ff0b91b9ae135d4931beae2a0f97" Jan 28 19:45:01 crc kubenswrapper[4721]: E0128 19:45:01.720398 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-76rx2_openshift-machine-config-operator(6e3427a4-9a03-4a08-bf7f-7a5e96290ad6)\"" pod="openshift-machine-config-operator/machine-config-daemon-76rx2" podUID="6e3427a4-9a03-4a08-bf7f-7a5e96290ad6" Jan 28 19:45:01 crc kubenswrapper[4721]: I0128 19:45:01.744907 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29493825-jq6jf" podStartSLOduration=1.744885457 podStartE2EDuration="1.744885457s" podCreationTimestamp="2026-01-28 19:45:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 19:45:01.736723801 +0000 UTC m=+4267.462029381" watchObservedRunningTime="2026-01-28 19:45:01.744885457 +0000 UTC m=+4267.470191017" Jan 28 19:45:01 crc kubenswrapper[4721]: I0128 19:45:01.917991 4721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-v5bj4/crc-debug-gstb8" Jan 28 19:45:02 crc kubenswrapper[4721]: I0128 19:45:02.021607 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/4b502569-ba43-46c0-95a5-aace66c7cdd0-host\") pod \"4b502569-ba43-46c0-95a5-aace66c7cdd0\" (UID: \"4b502569-ba43-46c0-95a5-aace66c7cdd0\") " Jan 28 19:45:02 crc kubenswrapper[4721]: I0128 19:45:02.021746 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4b502569-ba43-46c0-95a5-aace66c7cdd0-host" (OuterVolumeSpecName: "host") pod "4b502569-ba43-46c0-95a5-aace66c7cdd0" (UID: "4b502569-ba43-46c0-95a5-aace66c7cdd0"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 19:45:02 crc kubenswrapper[4721]: I0128 19:45:02.022221 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-chsvg\" (UniqueName: \"kubernetes.io/projected/4b502569-ba43-46c0-95a5-aace66c7cdd0-kube-api-access-chsvg\") pod \"4b502569-ba43-46c0-95a5-aace66c7cdd0\" (UID: \"4b502569-ba43-46c0-95a5-aace66c7cdd0\") " Jan 28 19:45:02 crc kubenswrapper[4721]: I0128 19:45:02.022870 4721 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/4b502569-ba43-46c0-95a5-aace66c7cdd0-host\") on node \"crc\" DevicePath \"\"" Jan 28 19:45:02 crc kubenswrapper[4721]: I0128 19:45:02.047586 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4b502569-ba43-46c0-95a5-aace66c7cdd0-kube-api-access-chsvg" (OuterVolumeSpecName: "kube-api-access-chsvg") pod "4b502569-ba43-46c0-95a5-aace66c7cdd0" (UID: "4b502569-ba43-46c0-95a5-aace66c7cdd0"). InnerVolumeSpecName "kube-api-access-chsvg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 19:45:02 crc kubenswrapper[4721]: I0128 19:45:02.124781 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-chsvg\" (UniqueName: \"kubernetes.io/projected/4b502569-ba43-46c0-95a5-aace66c7cdd0-kube-api-access-chsvg\") on node \"crc\" DevicePath \"\"" Jan 28 19:45:02 crc kubenswrapper[4721]: I0128 19:45:02.730535 4721 generic.go:334] "Generic (PLEG): container finished" podID="6917331e-4d06-44fe-89be-58526a8f9b6d" containerID="59332ca57292d483824ca33c6513722fb15c182c0ee29e0712340a4ed88c6dba" exitCode=0 Jan 28 19:45:02 crc kubenswrapper[4721]: I0128 19:45:02.730614 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29493825-jq6jf" event={"ID":"6917331e-4d06-44fe-89be-58526a8f9b6d","Type":"ContainerDied","Data":"59332ca57292d483824ca33c6513722fb15c182c0ee29e0712340a4ed88c6dba"} Jan 28 19:45:02 crc kubenswrapper[4721]: I0128 19:45:02.736472 4721 scope.go:117] "RemoveContainer" containerID="5a8f6b8296e64ed57e93d3cb4046ec44856b697d89cc5510c1cb3f66e2ba4525" Jan 28 19:45:02 crc kubenswrapper[4721]: I0128 19:45:02.736510 4721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-v5bj4/crc-debug-gstb8" Jan 28 19:45:03 crc kubenswrapper[4721]: I0128 19:45:03.541912 4721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4b502569-ba43-46c0-95a5-aace66c7cdd0" path="/var/lib/kubelet/pods/4b502569-ba43-46c0-95a5-aace66c7cdd0/volumes" Jan 28 19:45:04 crc kubenswrapper[4721]: I0128 19:45:04.394924 4721 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29493825-jq6jf" Jan 28 19:45:04 crc kubenswrapper[4721]: I0128 19:45:04.479334 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6917331e-4d06-44fe-89be-58526a8f9b6d-config-volume\") pod \"6917331e-4d06-44fe-89be-58526a8f9b6d\" (UID: \"6917331e-4d06-44fe-89be-58526a8f9b6d\") " Jan 28 19:45:04 crc kubenswrapper[4721]: I0128 19:45:04.479654 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6917331e-4d06-44fe-89be-58526a8f9b6d-secret-volume\") pod \"6917331e-4d06-44fe-89be-58526a8f9b6d\" (UID: \"6917331e-4d06-44fe-89be-58526a8f9b6d\") " Jan 28 19:45:04 crc kubenswrapper[4721]: I0128 19:45:04.479711 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nkchx\" (UniqueName: \"kubernetes.io/projected/6917331e-4d06-44fe-89be-58526a8f9b6d-kube-api-access-nkchx\") pod \"6917331e-4d06-44fe-89be-58526a8f9b6d\" (UID: \"6917331e-4d06-44fe-89be-58526a8f9b6d\") " Jan 28 19:45:04 crc kubenswrapper[4721]: I0128 19:45:04.480597 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6917331e-4d06-44fe-89be-58526a8f9b6d-config-volume" (OuterVolumeSpecName: "config-volume") pod "6917331e-4d06-44fe-89be-58526a8f9b6d" (UID: "6917331e-4d06-44fe-89be-58526a8f9b6d"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 19:45:04 crc kubenswrapper[4721]: I0128 19:45:04.480802 4721 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6917331e-4d06-44fe-89be-58526a8f9b6d-config-volume\") on node \"crc\" DevicePath \"\"" Jan 28 19:45:04 crc kubenswrapper[4721]: I0128 19:45:04.488161 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6917331e-4d06-44fe-89be-58526a8f9b6d-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "6917331e-4d06-44fe-89be-58526a8f9b6d" (UID: "6917331e-4d06-44fe-89be-58526a8f9b6d"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 19:45:04 crc kubenswrapper[4721]: I0128 19:45:04.503283 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6917331e-4d06-44fe-89be-58526a8f9b6d-kube-api-access-nkchx" (OuterVolumeSpecName: "kube-api-access-nkchx") pod "6917331e-4d06-44fe-89be-58526a8f9b6d" (UID: "6917331e-4d06-44fe-89be-58526a8f9b6d"). InnerVolumeSpecName "kube-api-access-nkchx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 19:45:04 crc kubenswrapper[4721]: I0128 19:45:04.582964 4721 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6917331e-4d06-44fe-89be-58526a8f9b6d-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 28 19:45:04 crc kubenswrapper[4721]: I0128 19:45:04.583024 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nkchx\" (UniqueName: \"kubernetes.io/projected/6917331e-4d06-44fe-89be-58526a8f9b6d-kube-api-access-nkchx\") on node \"crc\" DevicePath \"\"" Jan 28 19:45:04 crc kubenswrapper[4721]: I0128 19:45:04.764290 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29493825-jq6jf" event={"ID":"6917331e-4d06-44fe-89be-58526a8f9b6d","Type":"ContainerDied","Data":"99deb0f4a5b8af31a5bade786cafaea9b9d96fe3e59ffd17f8fe50ceb5fe6e43"} Jan 28 19:45:04 crc kubenswrapper[4721]: I0128 19:45:04.764338 4721 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="99deb0f4a5b8af31a5bade786cafaea9b9d96fe3e59ffd17f8fe50ceb5fe6e43" Jan 28 19:45:04 crc kubenswrapper[4721]: I0128 19:45:04.764402 4721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29493825-jq6jf" Jan 28 19:45:04 crc kubenswrapper[4721]: I0128 19:45:04.825424 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29493780-8zrf7"] Jan 28 19:45:04 crc kubenswrapper[4721]: I0128 19:45:04.835477 4721 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29493780-8zrf7"] Jan 28 19:45:05 crc kubenswrapper[4721]: I0128 19:45:05.545020 4721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="70dadca2-0f02-42fd-be5f-0af5dec85996" path="/var/lib/kubelet/pods/70dadca2-0f02-42fd-be5f-0af5dec85996/volumes" Jan 28 19:45:15 crc kubenswrapper[4721]: I0128 19:45:15.539643 4721 scope.go:117] "RemoveContainer" containerID="e25f338d1e1f7e9538daec152b64af6fcba9ff0b91b9ae135d4931beae2a0f97" Jan 28 19:45:15 crc kubenswrapper[4721]: E0128 19:45:15.540714 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-76rx2_openshift-machine-config-operator(6e3427a4-9a03-4a08-bf7f-7a5e96290ad6)\"" pod="openshift-machine-config-operator/machine-config-daemon-76rx2" podUID="6e3427a4-9a03-4a08-bf7f-7a5e96290ad6" Jan 28 19:45:22 crc kubenswrapper[4721]: I0128 19:45:22.551877 4721 scope.go:117] "RemoveContainer" containerID="2d549265a5e25e919a442b2597285571a2872abce9e354c926d45f6f8864973d" Jan 28 19:45:27 crc kubenswrapper[4721]: I0128 19:45:27.529151 4721 scope.go:117] "RemoveContainer" containerID="e25f338d1e1f7e9538daec152b64af6fcba9ff0b91b9ae135d4931beae2a0f97" Jan 28 19:45:27 crc kubenswrapper[4721]: E0128 19:45:27.530001 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-76rx2_openshift-machine-config-operator(6e3427a4-9a03-4a08-bf7f-7a5e96290ad6)\"" pod="openshift-machine-config-operator/machine-config-daemon-76rx2" podUID="6e3427a4-9a03-4a08-bf7f-7a5e96290ad6" Jan 28 
19:45:40 crc kubenswrapper[4721]: I0128 19:45:40.528870 4721 scope.go:117] "RemoveContainer" containerID="e25f338d1e1f7e9538daec152b64af6fcba9ff0b91b9ae135d4931beae2a0f97" Jan 28 19:45:40 crc kubenswrapper[4721]: E0128 19:45:40.529792 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-76rx2_openshift-machine-config-operator(6e3427a4-9a03-4a08-bf7f-7a5e96290ad6)\"" pod="openshift-machine-config-operator/machine-config-daemon-76rx2" podUID="6e3427a4-9a03-4a08-bf7f-7a5e96290ad6" Jan 28 19:45:49 crc kubenswrapper[4721]: I0128 19:45:49.251401 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_alertmanager-metric-storage-0_95a1b67a-adb0-42f1-9fb8-32b01c443ede/init-config-reloader/0.log" Jan 28 19:45:49 crc kubenswrapper[4721]: I0128 19:45:49.949145 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_alertmanager-metric-storage-0_95a1b67a-adb0-42f1-9fb8-32b01c443ede/init-config-reloader/0.log" Jan 28 19:45:49 crc kubenswrapper[4721]: I0128 19:45:49.956456 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_alertmanager-metric-storage-0_95a1b67a-adb0-42f1-9fb8-32b01c443ede/alertmanager/0.log" Jan 28 19:45:50 crc kubenswrapper[4721]: I0128 19:45:50.033634 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_alertmanager-metric-storage-0_95a1b67a-adb0-42f1-9fb8-32b01c443ede/config-reloader/0.log" Jan 28 19:45:50 crc kubenswrapper[4721]: I0128 19:45:50.200368 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-cfc4cd674-j5vfc_f8eb94ee-887b-48f2-808c-2b634928d62e/barbican-api-log/0.log" Jan 28 19:45:50 crc kubenswrapper[4721]: I0128 19:45:50.311781 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-cfc4cd674-j5vfc_f8eb94ee-887b-48f2-808c-2b634928d62e/barbican-api/0.log" Jan 28 19:45:50 crc kubenswrapper[4721]: I0128 19:45:50.446162 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-5f8b48b786-fcdpx_b950ce3b-33ce-40a9-9b76-45470b0917ec/barbican-keystone-listener/0.log" Jan 28 19:45:50 crc kubenswrapper[4721]: I0128 19:45:50.624486 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-5f8b48b786-fcdpx_b950ce3b-33ce-40a9-9b76-45470b0917ec/barbican-keystone-listener-log/0.log" Jan 28 19:45:50 crc kubenswrapper[4721]: I0128 19:45:50.649688 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-7855694cbf-6fbkc_7ae24f09-1a88-4cd4-8959-76b14602141d/barbican-worker/0.log" Jan 28 19:45:50 crc kubenswrapper[4721]: I0128 19:45:50.719761 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-7855694cbf-6fbkc_7ae24f09-1a88-4cd4-8959-76b14602141d/barbican-worker-log/0.log" Jan 28 19:45:50 crc kubenswrapper[4721]: I0128 19:45:50.861666 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-sw887_aaf9f122-a7d0-4f7f-b5d1-ee0333954fa1/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log" Jan 28 19:45:51 crc kubenswrapper[4721]: I0128 19:45:51.070858 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_92164365-9f87-4c26-b4c9-9d212e4aa1e1/ceilometer-central-agent/0.log" Jan 28 19:45:51 crc kubenswrapper[4721]: I0128 
19:45:51.129453 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_92164365-9f87-4c26-b4c9-9d212e4aa1e1/ceilometer-notification-agent/0.log" Jan 28 19:45:51 crc kubenswrapper[4721]: I0128 19:45:51.206016 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_92164365-9f87-4c26-b4c9-9d212e4aa1e1/proxy-httpd/0.log" Jan 28 19:45:51 crc kubenswrapper[4721]: I0128 19:45:51.227556 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_92164365-9f87-4c26-b4c9-9d212e4aa1e1/sg-core/0.log" Jan 28 19:45:51 crc kubenswrapper[4721]: I0128 19:45:51.637748 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_a5090535-3282-4e69-988d-be91fd8908a2/cinder-api/0.log" Jan 28 19:45:51 crc kubenswrapper[4721]: I0128 19:45:51.751932 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_a5090535-3282-4e69-988d-be91fd8908a2/cinder-api-log/0.log" Jan 28 19:45:52 crc kubenswrapper[4721]: I0128 19:45:52.035039 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_a3d49781-0039-466d-b00e-1d7f28598b88/probe/0.log" Jan 28 19:45:52 crc kubenswrapper[4721]: I0128 19:45:52.072773 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_a3d49781-0039-466d-b00e-1d7f28598b88/cinder-scheduler/0.log" Jan 28 19:45:52 crc kubenswrapper[4721]: I0128 19:45:52.258387 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cloudkitty-api-0_b3b4c3c6-7e93-4ea6-878c-7c7bce6768fd/cloudkitty-api/0.log" Jan 28 19:45:52 crc kubenswrapper[4721]: I0128 19:45:52.315420 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cloudkitty-api-0_b3b4c3c6-7e93-4ea6-878c-7c7bce6768fd/cloudkitty-api-log/0.log" Jan 28 19:45:52 crc kubenswrapper[4721]: I0128 19:45:52.436639 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cloudkitty-lokistack-compactor-0_22863ebc-7f06-4697-a494-1e854030c803/loki-compactor/0.log" Jan 28 19:45:52 crc kubenswrapper[4721]: I0128 19:45:52.586374 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cloudkitty-lokistack-distributor-66dfd9bb-gzhlc_600f989b-3ac6-4fe8-9848-6b80319e8c66/loki-distributor/0.log" Jan 28 19:45:52 crc kubenswrapper[4721]: I0128 19:45:52.671633 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cloudkitty-lokistack-gateway-7db4f4db8c-b6984_dffa61ba-c98d-446a-a4d0-34e1e15a093b/gateway/0.log" Jan 28 19:45:52 crc kubenswrapper[4721]: I0128 19:45:52.912397 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cloudkitty-lokistack-gateway-7db4f4db8c-t9249_ded95a77-cbf2-4db7-b6b4-56fdf518717c/gateway/0.log" Jan 28 19:45:53 crc kubenswrapper[4721]: I0128 19:45:53.122989 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cloudkitty-lokistack-index-gateway-0_e06ee4ac-7688-41ae-b0f0-13e7cfc042e7/loki-index-gateway/0.log" Jan 28 19:45:53 crc kubenswrapper[4721]: I0128 19:45:53.319775 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cloudkitty-lokistack-ingester-0_742e65f6-66eb-4334-9328-b77d47d420d0/loki-ingester/0.log" Jan 28 19:45:53 crc kubenswrapper[4721]: I0128 19:45:53.530662 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cloudkitty-lokistack-query-frontend-5cd44666df-cd79j_6be2127c-76cf-41fb-99d2-28a4e10a2b03/loki-query-frontend/0.log" Jan 28 19:45:53 crc 
kubenswrapper[4721]: I0128 19:45:53.550304 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cloudkitty-lokistack-querier-795fd8f8cc-4gfwq_cd76eab6-6d1b-4d6b-9c42-3e667e081ce6/loki-querier/0.log" Jan 28 19:45:53 crc kubenswrapper[4721]: I0128 19:45:53.808276 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-network-edpm-deployment-openstack-edpm-ipam-jqhr7_b9946ce2-5895-4b1a-ad88-c80a26d23265/configure-network-edpm-deployment-openstack-edpm-ipam/0.log" Jan 28 19:45:53 crc kubenswrapper[4721]: I0128 19:45:53.931861 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-os-edpm-deployment-openstack-edpm-ipam-jhb4n_4d206415-b580-4e09-a6f5-715ea9c2ff06/configure-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 28 19:45:54 crc kubenswrapper[4721]: I0128 19:45:54.200530 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-85f64749dc-zg2ch_a2de6f20-e053-456e-860d-c85c1ae57874/init/0.log" Jan 28 19:45:54 crc kubenswrapper[4721]: I0128 19:45:54.497637 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-85f64749dc-zg2ch_a2de6f20-e053-456e-860d-c85c1ae57874/init/0.log" Jan 28 19:45:54 crc kubenswrapper[4721]: I0128 19:45:54.565401 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-89xnq_df3fe0a6-94e7-4233-9fb8-cecad5bc5266/download-cache-edpm-deployment-openstack-edpm-ipam/0.log" Jan 28 19:45:54 crc kubenswrapper[4721]: I0128 19:45:54.572281 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-85f64749dc-zg2ch_a2de6f20-e053-456e-860d-c85c1ae57874/dnsmasq-dns/0.log" Jan 28 19:45:54 crc kubenswrapper[4721]: I0128 19:45:54.871923 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_08521a5c-e90f-4cf1-a64d-f0bc0bdf7b3d/glance-httpd/0.log" Jan 28 19:45:54 crc kubenswrapper[4721]: I0128 19:45:54.897976 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_08521a5c-e90f-4cf1-a64d-f0bc0bdf7b3d/glance-log/0.log" Jan 28 19:45:55 crc kubenswrapper[4721]: I0128 19:45:55.076560 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_dd8b9a50-f75d-4b67-8b19-4f5d1e702cd9/glance-log/0.log" Jan 28 19:45:55 crc kubenswrapper[4721]: I0128 19:45:55.229278 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-certs-edpm-deployment-openstack-edpm-ipam-px4d2_e6d48255-8474-4c70-afc7-ddda7df2ff65/install-certs-edpm-deployment-openstack-edpm-ipam/0.log" Jan 28 19:45:55 crc kubenswrapper[4721]: I0128 19:45:55.230465 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_dd8b9a50-f75d-4b67-8b19-4f5d1e702cd9/glance-httpd/0.log" Jan 28 19:45:55 crc kubenswrapper[4721]: I0128 19:45:55.541838 4721 scope.go:117] "RemoveContainer" containerID="e25f338d1e1f7e9538daec152b64af6fcba9ff0b91b9ae135d4931beae2a0f97" Jan 28 19:45:55 crc kubenswrapper[4721]: E0128 19:45:55.542769 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-76rx2_openshift-machine-config-operator(6e3427a4-9a03-4a08-bf7f-7a5e96290ad6)\"" pod="openshift-machine-config-operator/machine-config-daemon-76rx2" 
podUID="6e3427a4-9a03-4a08-bf7f-7a5e96290ad6" Jan 28 19:45:55 crc kubenswrapper[4721]: I0128 19:45:55.581844 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-os-edpm-deployment-openstack-edpm-ipam-pqbq8_240f3ed6-78d3-4839-9d63-71e54d447a8a/install-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 28 19:45:55 crc kubenswrapper[4721]: I0128 19:45:55.823393 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29493781-lgwjg_16b77be6-6887-4534-a5e9-fc53746e8bde/keystone-cron/0.log" Jan 28 19:45:56 crc kubenswrapper[4721]: I0128 19:45:56.054370 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_7cb3ca8e-a112-4fa7-a165-f987728ac08f/kube-state-metrics/0.log" Jan 28 19:45:56 crc kubenswrapper[4721]: I0128 19:45:56.055279 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-7fccf8d9d-jqxpt_b596f4de-be4e-4c2a-8524-fca9afc03775/keystone-api/0.log" Jan 28 19:45:56 crc kubenswrapper[4721]: I0128 19:45:56.340363 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_libvirt-edpm-deployment-openstack-edpm-ipam-s49zh_349859e1-1716-4304-9352-b9caa4c046be/libvirt-edpm-deployment-openstack-edpm-ipam/0.log" Jan 28 19:45:56 crc kubenswrapper[4721]: I0128 19:45:56.915373 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-787c88cc7-8262p_778b4bd0-5ac3-4a89-b5c8-07f3f52e5804/neutron-httpd/0.log" Jan 28 19:45:57 crc kubenswrapper[4721]: I0128 19:45:57.067625 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-787c88cc7-8262p_778b4bd0-5ac3-4a89-b5c8-07f3f52e5804/neutron-api/0.log" Jan 28 19:45:57 crc kubenswrapper[4721]: I0128 19:45:57.176314 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-metadata-edpm-deployment-openstack-edpm-ipam-xvlk4_7004522f-8584-4fca-851b-1d9f9195cb0d/neutron-metadata-edpm-deployment-openstack-edpm-ipam/0.log" Jan 28 19:45:57 crc kubenswrapper[4721]: I0128 19:45:57.895582 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_4898ad56-ee48-4c94-846a-cb0c2af32da7/nova-api-log/0.log" Jan 28 19:45:58 crc kubenswrapper[4721]: I0128 19:45:58.326612 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_977400c1-f351-4271-b494-25c1bd6dd31f/nova-cell0-conductor-conductor/0.log" Jan 28 19:45:58 crc kubenswrapper[4721]: I0128 19:45:58.458905 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_4898ad56-ee48-4c94-846a-cb0c2af32da7/nova-api-api/0.log" Jan 28 19:45:58 crc kubenswrapper[4721]: I0128 19:45:58.690771 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_d175789e-d718-4022-86ac-b8b1f9f1d40c/nova-cell1-conductor-conductor/0.log" Jan 28 19:45:58 crc kubenswrapper[4721]: I0128 19:45:58.938071 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_623ce0b7-2228-4d75-a8c3-48a837fccf46/nova-cell1-novncproxy-novncproxy/0.log" Jan 28 19:45:59 crc kubenswrapper[4721]: I0128 19:45:59.139637 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-edpm-deployment-openstack-edpm-ipam-6fthv_8dcae945-3742-46b5-b6ac-c8ff95e2946e/nova-edpm-deployment-openstack-edpm-ipam/0.log" Jan 28 19:45:59 crc kubenswrapper[4721]: I0128 19:45:59.349309 4721 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_nova-metadata-0_f5877169-6d6b-4a83-a58d-b885ede23ffb/nova-metadata-log/0.log" Jan 28 19:45:59 crc kubenswrapper[4721]: I0128 19:45:59.947832 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_ac328e3e-730d-4617-bf12-8ad6a4c5e9bf/nova-scheduler-scheduler/0.log" Jan 28 19:46:00 crc kubenswrapper[4721]: I0128 19:46:00.523453 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_00b26873-8c7a-4ea7-b334-873b01cc5d84/mysql-bootstrap/0.log" Jan 28 19:46:00 crc kubenswrapper[4721]: I0128 19:46:00.712900 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_00b26873-8c7a-4ea7-b334-873b01cc5d84/mysql-bootstrap/0.log" Jan 28 19:46:00 crc kubenswrapper[4721]: I0128 19:46:00.764700 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_00b26873-8c7a-4ea7-b334-873b01cc5d84/galera/0.log" Jan 28 19:46:01 crc kubenswrapper[4721]: I0128 19:46:01.104502 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_0e740af0-cd0c-4f3e-8be1-facce1656583/mysql-bootstrap/0.log" Jan 28 19:46:01 crc kubenswrapper[4721]: I0128 19:46:01.449231 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_f5877169-6d6b-4a83-a58d-b885ede23ffb/nova-metadata-metadata/0.log" Jan 28 19:46:01 crc kubenswrapper[4721]: I0128 19:46:01.471790 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_0e740af0-cd0c-4f3e-8be1-facce1656583/mysql-bootstrap/0.log" Jan 28 19:46:01 crc kubenswrapper[4721]: I0128 19:46:01.518813 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_0e740af0-cd0c-4f3e-8be1-facce1656583/galera/0.log" Jan 28 19:46:01 crc kubenswrapper[4721]: I0128 19:46:01.732021 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_85f51b69-4069-4da4-895c-0f92ad51506c/openstackclient/0.log" Jan 28 19:46:01 crc kubenswrapper[4721]: I0128 19:46:01.927259 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-dmttf_bacb5ba4-39a7-4774-818d-67453153a34f/openstack-network-exporter/0.log" Jan 28 19:46:02 crc kubenswrapper[4721]: I0128 19:46:02.663198 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-djsj9_88eb1b46-3d78-4f1f-b822-aa8562237980/ovsdb-server-init/0.log" Jan 28 19:46:02 crc kubenswrapper[4721]: I0128 19:46:02.896963 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-djsj9_88eb1b46-3d78-4f1f-b822-aa8562237980/ovsdb-server/0.log" Jan 28 19:46:02 crc kubenswrapper[4721]: I0128 19:46:02.897285 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-djsj9_88eb1b46-3d78-4f1f-b822-aa8562237980/ovs-vswitchd/0.log" Jan 28 19:46:02 crc kubenswrapper[4721]: I0128 19:46:02.899154 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-djsj9_88eb1b46-3d78-4f1f-b822-aa8562237980/ovsdb-server-init/0.log" Jan 28 19:46:03 crc kubenswrapper[4721]: I0128 19:46:03.114773 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-sbclw_c391bae1-d3a9-4ccd-a868-d7263d9b0a28/ovn-controller/0.log" Jan 28 19:46:03 crc kubenswrapper[4721]: I0128 19:46:03.505576 4721 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_ovn-edpm-deployment-openstack-edpm-ipam-lrdl9_445fc577-89a5-4f74-b7a4-65979c88af6b/ovn-edpm-deployment-openstack-edpm-ipam/0.log" Jan 28 19:46:03 crc kubenswrapper[4721]: I0128 19:46:03.553770 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_5296300e-265b-4671-a299-e023295c6981/openstack-network-exporter/0.log" Jan 28 19:46:03 crc kubenswrapper[4721]: I0128 19:46:03.797806 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_5296300e-265b-4671-a299-e023295c6981/ovn-northd/0.log" Jan 28 19:46:03 crc kubenswrapper[4721]: I0128 19:46:03.857494 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_f4e58913-334f-484a-8e7d-e1ac86753dbe/openstack-network-exporter/0.log" Jan 28 19:46:04 crc kubenswrapper[4721]: I0128 19:46:04.040545 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_f4e58913-334f-484a-8e7d-e1ac86753dbe/ovsdbserver-nb/0.log" Jan 28 19:46:04 crc kubenswrapper[4721]: I0128 19:46:04.117859 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_284cf569-7d31-465c-9189-05f80f168989/openstack-network-exporter/0.log" Jan 28 19:46:04 crc kubenswrapper[4721]: I0128 19:46:04.327975 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_284cf569-7d31-465c-9189-05f80f168989/ovsdbserver-sb/0.log" Jan 28 19:46:04 crc kubenswrapper[4721]: I0128 19:46:04.457195 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-7b5b4f6d96-q5gf8_7bc6f4fc-8f67-4a04-83f7-551efe61e4fe/placement-api/0.log" Jan 28 19:46:04 crc kubenswrapper[4721]: I0128 19:46:04.806818 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-7b5b4f6d96-q5gf8_7bc6f4fc-8f67-4a04-83f7-551efe61e4fe/placement-log/0.log" Jan 28 19:46:04 crc kubenswrapper[4721]: I0128 19:46:04.872635 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_8ac81a5a-78b3-43c6-964f-300e126ba4ca/init-config-reloader/0.log" Jan 28 19:46:05 crc kubenswrapper[4721]: I0128 19:46:05.115383 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_8ac81a5a-78b3-43c6-964f-300e126ba4ca/config-reloader/0.log" Jan 28 19:46:05 crc kubenswrapper[4721]: I0128 19:46:05.178504 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_8ac81a5a-78b3-43c6-964f-300e126ba4ca/init-config-reloader/0.log" Jan 28 19:46:05 crc kubenswrapper[4721]: I0128 19:46:05.192927 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_8ac81a5a-78b3-43c6-964f-300e126ba4ca/prometheus/0.log" Jan 28 19:46:05 crc kubenswrapper[4721]: I0128 19:46:05.357047 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_8ac81a5a-78b3-43c6-964f-300e126ba4ca/thanos-sidecar/0.log" Jan 28 19:46:05 crc kubenswrapper[4721]: I0128 19:46:05.444091 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_a493b27e-e634-4b09-ae05-2a69c5ad0d68/setup-container/0.log" Jan 28 19:46:05 crc kubenswrapper[4721]: I0128 19:46:05.780700 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_a493b27e-e634-4b09-ae05-2a69c5ad0d68/setup-container/0.log" Jan 28 19:46:05 crc kubenswrapper[4721]: I0128 19:46:05.821028 4721 
log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_a493b27e-e634-4b09-ae05-2a69c5ad0d68/rabbitmq/0.log" Jan 28 19:46:06 crc kubenswrapper[4721]: I0128 19:46:06.117099 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_88f1129c-54fc-423a-993d-560aecdd75eb/setup-container/0.log" Jan 28 19:46:06 crc kubenswrapper[4721]: I0128 19:46:06.366922 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_88f1129c-54fc-423a-993d-560aecdd75eb/setup-container/0.log" Jan 28 19:46:06 crc kubenswrapper[4721]: I0128 19:46:06.402495 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_88f1129c-54fc-423a-993d-560aecdd75eb/rabbitmq/0.log" Jan 28 19:46:06 crc kubenswrapper[4721]: I0128 19:46:06.633522 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_reboot-os-edpm-deployment-openstack-edpm-ipam-5tdrc_5dc69ebb-35f6-4a5f-ac8a-58747df158a1/reboot-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 28 19:46:06 crc kubenswrapper[4721]: I0128 19:46:06.973699 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_redhat-edpm-deployment-openstack-edpm-ipam-fbxbh_2a9cb018-b8e2-4f14-b146-2ad0b8c6f997/redhat-edpm-deployment-openstack-edpm-ipam/0.log" Jan 28 19:46:07 crc kubenswrapper[4721]: I0128 19:46:07.273247 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_repo-setup-edpm-deployment-openstack-edpm-ipam-lmpdp_6962dcfe-fe79-48fd-af49-7b4c644856d9/repo-setup-edpm-deployment-openstack-edpm-ipam/0.log" Jan 28 19:46:07 crc kubenswrapper[4721]: I0128 19:46:07.596712 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_run-os-edpm-deployment-openstack-edpm-ipam-hsczp_547e59b7-a7b0-4db5-b05c-cb2ed4d0ad67/run-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 28 19:46:07 crc kubenswrapper[4721]: I0128 19:46:07.702144 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ssh-known-hosts-edpm-deployment-4647t_7481db6a-22d8-4e79-a0fc-8dc696d5d209/ssh-known-hosts-edpm-deployment/0.log" Jan 28 19:46:08 crc kubenswrapper[4721]: I0128 19:46:08.063597 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-6895f7fb8c-vmmw7_078d9149-2986-4e6e-a8f4-c7535613a91d/proxy-server/0.log" Jan 28 19:46:08 crc kubenswrapper[4721]: I0128 19:46:08.136331 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-6895f7fb8c-vmmw7_078d9149-2986-4e6e-a8f4-c7535613a91d/proxy-httpd/0.log" Jan 28 19:46:08 crc kubenswrapper[4721]: I0128 19:46:08.377935 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-ring-rebalance-7bhzw_d06bcf83-999f-419a-9f4f-4e6544576897/swift-ring-rebalance/0.log" Jan 28 19:46:08 crc kubenswrapper[4721]: I0128 19:46:08.559522 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_aa657a81-842e-4292-a71e-e208b4c0bd69/account-auditor/0.log" Jan 28 19:46:08 crc kubenswrapper[4721]: I0128 19:46:08.683966 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_aa657a81-842e-4292-a71e-e208b4c0bd69/account-reaper/0.log" Jan 28 19:46:08 crc kubenswrapper[4721]: I0128 19:46:08.819982 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_aa657a81-842e-4292-a71e-e208b4c0bd69/account-server/0.log" Jan 28 19:46:08 crc kubenswrapper[4721]: I0128 19:46:08.824921 4721 log.go:25] "Finished parsing log 
file" path="/var/log/pods/openstack_swift-storage-0_aa657a81-842e-4292-a71e-e208b4c0bd69/account-replicator/0.log" Jan 28 19:46:08 crc kubenswrapper[4721]: I0128 19:46:08.986209 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_aa657a81-842e-4292-a71e-e208b4c0bd69/container-auditor/0.log" Jan 28 19:46:09 crc kubenswrapper[4721]: I0128 19:46:09.071598 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_aa657a81-842e-4292-a71e-e208b4c0bd69/container-server/0.log" Jan 28 19:46:09 crc kubenswrapper[4721]: I0128 19:46:09.116183 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_aa657a81-842e-4292-a71e-e208b4c0bd69/container-replicator/0.log" Jan 28 19:46:09 crc kubenswrapper[4721]: I0128 19:46:09.297036 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_aa657a81-842e-4292-a71e-e208b4c0bd69/container-updater/0.log" Jan 28 19:46:09 crc kubenswrapper[4721]: I0128 19:46:09.346514 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_aa657a81-842e-4292-a71e-e208b4c0bd69/object-expirer/0.log" Jan 28 19:46:09 crc kubenswrapper[4721]: I0128 19:46:09.370221 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_aa657a81-842e-4292-a71e-e208b4c0bd69/object-auditor/0.log" Jan 28 19:46:09 crc kubenswrapper[4721]: I0128 19:46:09.529572 4721 scope.go:117] "RemoveContainer" containerID="e25f338d1e1f7e9538daec152b64af6fcba9ff0b91b9ae135d4931beae2a0f97" Jan 28 19:46:09 crc kubenswrapper[4721]: E0128 19:46:09.529977 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-76rx2_openshift-machine-config-operator(6e3427a4-9a03-4a08-bf7f-7a5e96290ad6)\"" pod="openshift-machine-config-operator/machine-config-daemon-76rx2" podUID="6e3427a4-9a03-4a08-bf7f-7a5e96290ad6" Jan 28 19:46:09 crc kubenswrapper[4721]: I0128 19:46:09.744416 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_aa657a81-842e-4292-a71e-e208b4c0bd69/object-updater/0.log" Jan 28 19:46:09 crc kubenswrapper[4721]: I0128 19:46:09.756835 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_aa657a81-842e-4292-a71e-e208b4c0bd69/object-replicator/0.log" Jan 28 19:46:09 crc kubenswrapper[4721]: I0128 19:46:09.789368 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_aa657a81-842e-4292-a71e-e208b4c0bd69/object-server/0.log" Jan 28 19:46:09 crc kubenswrapper[4721]: I0128 19:46:09.967141 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_aa657a81-842e-4292-a71e-e208b4c0bd69/rsync/0.log" Jan 28 19:46:09 crc kubenswrapper[4721]: I0128 19:46:09.997092 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_aa657a81-842e-4292-a71e-e208b4c0bd69/swift-recon-cron/0.log" Jan 28 19:46:11 crc kubenswrapper[4721]: I0128 19:46:11.038053 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_telemetry-edpm-deployment-openstack-edpm-ipam-28zzx_1e117cf9-a997-4596-9334-0edb394b7fed/telemetry-edpm-deployment-openstack-edpm-ipam/0.log" Jan 28 19:46:11 crc kubenswrapper[4721]: I0128 19:46:11.049263 4721 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_tempest-tests-tempest_5e586424-d1f9-4f72-9dc8-f046e2f235f5/tempest-tests-tempest-tests-runner/0.log" Jan 28 19:46:11 crc kubenswrapper[4721]: I0128 19:46:11.267078 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_test-operator-logs-pod-tempest-tempest-tests-tempest_512eb22d-5ddf-419c-aa72-60dea50ecc6d/test-operator-logs-container/0.log" Jan 28 19:46:11 crc kubenswrapper[4721]: I0128 19:46:11.546570 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_validate-network-edpm-deployment-openstack-edpm-ipam-pkwbl_e3cd0640-8d09-4743-8e9e-cc3914803f8c/validate-network-edpm-deployment-openstack-edpm-ipam/0.log" Jan 28 19:46:15 crc kubenswrapper[4721]: I0128 19:46:15.110149 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cloudkitty-proc-0_52682601-9d4b-4b45-a1e0-7143e9a31e7a/cloudkitty-proc/0.log" Jan 28 19:46:16 crc kubenswrapper[4721]: I0128 19:46:16.615645 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_7be00819-ddfd-47d6-a7fc-430607636883/memcached/0.log" Jan 28 19:46:23 crc kubenswrapper[4721]: I0128 19:46:23.529527 4721 scope.go:117] "RemoveContainer" containerID="e25f338d1e1f7e9538daec152b64af6fcba9ff0b91b9ae135d4931beae2a0f97" Jan 28 19:46:23 crc kubenswrapper[4721]: E0128 19:46:23.530439 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-76rx2_openshift-machine-config-operator(6e3427a4-9a03-4a08-bf7f-7a5e96290ad6)\"" pod="openshift-machine-config-operator/machine-config-daemon-76rx2" podUID="6e3427a4-9a03-4a08-bf7f-7a5e96290ad6" Jan 28 19:46:35 crc kubenswrapper[4721]: I0128 19:46:35.537789 4721 scope.go:117] "RemoveContainer" containerID="e25f338d1e1f7e9538daec152b64af6fcba9ff0b91b9ae135d4931beae2a0f97" Jan 28 19:46:35 crc kubenswrapper[4721]: E0128 19:46:35.538885 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-76rx2_openshift-machine-config-operator(6e3427a4-9a03-4a08-bf7f-7a5e96290ad6)\"" pod="openshift-machine-config-operator/machine-config-daemon-76rx2" podUID="6e3427a4-9a03-4a08-bf7f-7a5e96290ad6" Jan 28 19:46:47 crc kubenswrapper[4721]: I0128 19:46:47.528486 4721 scope.go:117] "RemoveContainer" containerID="e25f338d1e1f7e9538daec152b64af6fcba9ff0b91b9ae135d4931beae2a0f97" Jan 28 19:46:47 crc kubenswrapper[4721]: E0128 19:46:47.529210 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-76rx2_openshift-machine-config-operator(6e3427a4-9a03-4a08-bf7f-7a5e96290ad6)\"" pod="openshift-machine-config-operator/machine-config-daemon-76rx2" podUID="6e3427a4-9a03-4a08-bf7f-7a5e96290ad6" Jan 28 19:46:47 crc kubenswrapper[4721]: I0128 19:46:47.938662 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_5ff8d4f1d1a2b9f947c76cb0859a1bcccc65b2094fb24a26aaef048927k7bl2_ab608a64-70fd-498e-9aa6-d2dd87a017b9/util/0.log" Jan 28 19:46:48 crc kubenswrapper[4721]: I0128 19:46:48.183797 4721 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_5ff8d4f1d1a2b9f947c76cb0859a1bcccc65b2094fb24a26aaef048927k7bl2_ab608a64-70fd-498e-9aa6-d2dd87a017b9/util/0.log" Jan 28 19:46:48 crc kubenswrapper[4721]: I0128 19:46:48.204210 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_5ff8d4f1d1a2b9f947c76cb0859a1bcccc65b2094fb24a26aaef048927k7bl2_ab608a64-70fd-498e-9aa6-d2dd87a017b9/pull/0.log" Jan 28 19:46:48 crc kubenswrapper[4721]: I0128 19:46:48.210525 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_5ff8d4f1d1a2b9f947c76cb0859a1bcccc65b2094fb24a26aaef048927k7bl2_ab608a64-70fd-498e-9aa6-d2dd87a017b9/pull/0.log" Jan 28 19:46:48 crc kubenswrapper[4721]: I0128 19:46:48.419113 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_5ff8d4f1d1a2b9f947c76cb0859a1bcccc65b2094fb24a26aaef048927k7bl2_ab608a64-70fd-498e-9aa6-d2dd87a017b9/util/0.log" Jan 28 19:46:48 crc kubenswrapper[4721]: I0128 19:46:48.445653 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_5ff8d4f1d1a2b9f947c76cb0859a1bcccc65b2094fb24a26aaef048927k7bl2_ab608a64-70fd-498e-9aa6-d2dd87a017b9/pull/0.log" Jan 28 19:46:48 crc kubenswrapper[4721]: I0128 19:46:48.457721 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_5ff8d4f1d1a2b9f947c76cb0859a1bcccc65b2094fb24a26aaef048927k7bl2_ab608a64-70fd-498e-9aa6-d2dd87a017b9/extract/0.log" Jan 28 19:46:48 crc kubenswrapper[4721]: I0128 19:46:48.726479 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-6bc7f4f4cf-pv6ph_99e08199-2cc8-4f41-8310-f63c0a021a98/manager/0.log" Jan 28 19:46:48 crc kubenswrapper[4721]: I0128 19:46:48.772364 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-f6487bd57-c9pmg_d258bf47-a441-49ad-a3ad-d5c04c615c9c/manager/0.log" Jan 28 19:46:49 crc kubenswrapper[4721]: I0128 19:46:49.012835 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-66dfbd6f5d-dbf9z_5f5dbe82-6a18-47da-98e6-00d10a32d1eb/manager/0.log" Jan 28 19:46:49 crc kubenswrapper[4721]: I0128 19:46:49.228901 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-6db5dbd896-7brt7_6e4d4bd0-d6ac-4268-bc08-86d74adfc33b/manager/0.log" Jan 28 19:46:49 crc kubenswrapper[4721]: I0128 19:46:49.300748 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-587c6bfdcf-r46mm_6ec8e4f3-a711-43af-81da-91be5695e927/manager/0.log" Jan 28 19:46:49 crc kubenswrapper[4721]: I0128 19:46:49.403664 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-5fb775575f-6m2fr_18c18118-f643-4590-9e07-87bffdb4195b/manager/0.log" Jan 28 19:46:49 crc kubenswrapper[4721]: I0128 19:46:49.624627 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-958664b5-wrzbl_7650ad3f-87f7-4c9a-b795-678ebc7edc7d/manager/0.log" Jan 28 19:46:49 crc kubenswrapper[4721]: I0128 19:46:49.803745 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-79955696d6-fd75h_66d34dd5-6c67-40ec-8fc8-16320a5aef1d/manager/0.log" Jan 28 19:46:49 crc kubenswrapper[4721]: I0128 19:46:49.993619 4721 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-6978b79747-vc75z_e8f6f9a2-7886-4896-baac-268e88869bb2/manager/0.log" Jan 28 19:46:50 crc kubenswrapper[4721]: I0128 19:46:50.111773 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-765668569f-mjxvn_835d5df3-4ea1-40ce-9bad-325396bfd41f/manager/0.log" Jan 28 19:46:50 crc kubenswrapper[4721]: I0128 19:46:50.266296 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-67bf948998-pt757_f901f512-8af4-4e6c-abc8-0fd7d0f26ef3/manager/0.log" Jan 28 19:46:50 crc kubenswrapper[4721]: I0128 19:46:50.472950 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-694c5bfc85-hv7r4_b102209d-5846-40f2-bb20-7022d18b9a28/manager/0.log" Jan 28 19:46:50 crc kubenswrapper[4721]: I0128 19:46:50.587987 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-ddcbfd695-ghpgf_8e4e395a-5b06-45ea-a2af-8a7a1180fc80/manager/0.log" Jan 28 19:46:50 crc kubenswrapper[4721]: I0128 19:46:50.704278 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-5c765b4558-r996h_073e6433-4ca4-499a-8c82-0fda8211ecd3/manager/0.log" Jan 28 19:46:50 crc kubenswrapper[4721]: I0128 19:46:50.815487 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-59c4b45c4dxmjl8_4bc4914a-125f-48f5-a7df-dbc170eaddd9/manager/0.log" Jan 28 19:46:51 crc kubenswrapper[4721]: I0128 19:46:51.084003 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-init-858cbdb9cd-v7bpd_d2642d34-9e91-460a-a889-42776f2201cc/operator/0.log" Jan 28 19:46:51 crc kubenswrapper[4721]: I0128 19:46:51.419370 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-ckq4p_7e87d639-6eae-44a0-9005-9e5fb2b60b0c/registry-server/0.log" Jan 28 19:46:51 crc kubenswrapper[4721]: I0128 19:46:51.618039 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-788c46999f-js7f2_2cea4626-d7bc-4166-9c63-8aa4e6358bd3/manager/0.log" Jan 28 19:46:52 crc kubenswrapper[4721]: I0128 19:46:52.456258 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-798d8549d8-ztjwv_23d3546b-cba0-4c15-a8b0-de9cced9fdf8/manager/0.log" Jan 28 19:46:52 crc kubenswrapper[4721]: I0128 19:46:52.537153 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-5b964cf4cd-gdb9m_9c28be52-26d0-4dd5-a3ca-ba3d9888dae8/manager/0.log" Jan 28 19:46:52 crc kubenswrapper[4721]: I0128 19:46:52.597432 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-vprhw_a39fc394-2b18-4c7c-a780-0147ddb3a77a/operator/0.log" Jan 28 19:46:52 crc kubenswrapper[4721]: I0128 19:46:52.818537 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-68fc8c869-9sqtl_021232bf-9e53-4907-80a0-702807db3f23/manager/0.log" Jan 28 19:46:52 crc kubenswrapper[4721]: I0128 19:46:52.840052 4721 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_test-operator-controller-manager-56f8bfcd9f-f56rw_066c13ce-1239-494e-bbc6-d175c62c501c/manager/0.log" Jan 28 19:46:53 crc kubenswrapper[4721]: I0128 19:46:53.168814 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-767b8bc766-tkgcv_b9bc0b6e-0f12-46b4-86c3-c9f56dcfa5d6/manager/0.log" Jan 28 19:46:53 crc kubenswrapper[4721]: I0128 19:46:53.204430 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-877d65859-2rn2n_83f4e7da-0144-44a8-886e-7f8c60f56014/manager/0.log" Jan 28 19:46:53 crc kubenswrapper[4721]: I0128 19:46:53.955438 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-bfrtv"] Jan 28 19:46:53 crc kubenswrapper[4721]: E0128 19:46:53.956020 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4b502569-ba43-46c0-95a5-aace66c7cdd0" containerName="container-00" Jan 28 19:46:53 crc kubenswrapper[4721]: I0128 19:46:53.956033 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="4b502569-ba43-46c0-95a5-aace66c7cdd0" containerName="container-00" Jan 28 19:46:53 crc kubenswrapper[4721]: E0128 19:46:53.956046 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6917331e-4d06-44fe-89be-58526a8f9b6d" containerName="collect-profiles" Jan 28 19:46:53 crc kubenswrapper[4721]: I0128 19:46:53.956052 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="6917331e-4d06-44fe-89be-58526a8f9b6d" containerName="collect-profiles" Jan 28 19:46:53 crc kubenswrapper[4721]: I0128 19:46:53.956277 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="6917331e-4d06-44fe-89be-58526a8f9b6d" containerName="collect-profiles" Jan 28 19:46:53 crc kubenswrapper[4721]: I0128 19:46:53.956300 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="4b502569-ba43-46c0-95a5-aace66c7cdd0" containerName="container-00" Jan 28 19:46:53 crc kubenswrapper[4721]: I0128 19:46:53.957973 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-bfrtv" Jan 28 19:46:53 crc kubenswrapper[4721]: I0128 19:46:53.983935 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-bfrtv"] Jan 28 19:46:53 crc kubenswrapper[4721]: I0128 19:46:53.992961 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dwzlp\" (UniqueName: \"kubernetes.io/projected/d0d9d69a-6803-4b0f-8cde-d3fd15cba92b-kube-api-access-dwzlp\") pod \"community-operators-bfrtv\" (UID: \"d0d9d69a-6803-4b0f-8cde-d3fd15cba92b\") " pod="openshift-marketplace/community-operators-bfrtv" Jan 28 19:46:53 crc kubenswrapper[4721]: I0128 19:46:53.993135 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d0d9d69a-6803-4b0f-8cde-d3fd15cba92b-catalog-content\") pod \"community-operators-bfrtv\" (UID: \"d0d9d69a-6803-4b0f-8cde-d3fd15cba92b\") " pod="openshift-marketplace/community-operators-bfrtv" Jan 28 19:46:53 crc kubenswrapper[4721]: I0128 19:46:53.993233 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d0d9d69a-6803-4b0f-8cde-d3fd15cba92b-utilities\") pod \"community-operators-bfrtv\" (UID: \"d0d9d69a-6803-4b0f-8cde-d3fd15cba92b\") " pod="openshift-marketplace/community-operators-bfrtv" Jan 28 19:46:54 crc kubenswrapper[4721]: I0128 19:46:54.095746 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dwzlp\" (UniqueName: \"kubernetes.io/projected/d0d9d69a-6803-4b0f-8cde-d3fd15cba92b-kube-api-access-dwzlp\") pod \"community-operators-bfrtv\" (UID: \"d0d9d69a-6803-4b0f-8cde-d3fd15cba92b\") " pod="openshift-marketplace/community-operators-bfrtv" Jan 28 19:46:54 crc kubenswrapper[4721]: I0128 19:46:54.096141 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d0d9d69a-6803-4b0f-8cde-d3fd15cba92b-catalog-content\") pod \"community-operators-bfrtv\" (UID: \"d0d9d69a-6803-4b0f-8cde-d3fd15cba92b\") " pod="openshift-marketplace/community-operators-bfrtv" Jan 28 19:46:54 crc kubenswrapper[4721]: I0128 19:46:54.096244 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d0d9d69a-6803-4b0f-8cde-d3fd15cba92b-utilities\") pod \"community-operators-bfrtv\" (UID: \"d0d9d69a-6803-4b0f-8cde-d3fd15cba92b\") " pod="openshift-marketplace/community-operators-bfrtv" Jan 28 19:46:54 crc kubenswrapper[4721]: I0128 19:46:54.096591 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d0d9d69a-6803-4b0f-8cde-d3fd15cba92b-catalog-content\") pod \"community-operators-bfrtv\" (UID: \"d0d9d69a-6803-4b0f-8cde-d3fd15cba92b\") " pod="openshift-marketplace/community-operators-bfrtv" Jan 28 19:46:54 crc kubenswrapper[4721]: I0128 19:46:54.096677 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d0d9d69a-6803-4b0f-8cde-d3fd15cba92b-utilities\") pod \"community-operators-bfrtv\" (UID: \"d0d9d69a-6803-4b0f-8cde-d3fd15cba92b\") " pod="openshift-marketplace/community-operators-bfrtv" Jan 28 19:46:54 crc kubenswrapper[4721]: I0128 19:46:54.120079 4721 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-dwzlp\" (UniqueName: \"kubernetes.io/projected/d0d9d69a-6803-4b0f-8cde-d3fd15cba92b-kube-api-access-dwzlp\") pod \"community-operators-bfrtv\" (UID: \"d0d9d69a-6803-4b0f-8cde-d3fd15cba92b\") " pod="openshift-marketplace/community-operators-bfrtv" Jan 28 19:46:54 crc kubenswrapper[4721]: I0128 19:46:54.282697 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-bfrtv" Jan 28 19:46:54 crc kubenswrapper[4721]: I0128 19:46:54.959839 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-bfrtv"] Jan 28 19:46:55 crc kubenswrapper[4721]: I0128 19:46:55.065849 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bfrtv" event={"ID":"d0d9d69a-6803-4b0f-8cde-d3fd15cba92b","Type":"ContainerStarted","Data":"9a06036dd974065fc9197814659704b725b95290733242e96da33829fc3eac07"} Jan 28 19:46:56 crc kubenswrapper[4721]: I0128 19:46:56.084622 4721 generic.go:334] "Generic (PLEG): container finished" podID="d0d9d69a-6803-4b0f-8cde-d3fd15cba92b" containerID="ae21d65f2a316d227d2015497c66ea9cfeb27271ff068babe64533b0f0288e72" exitCode=0 Jan 28 19:46:56 crc kubenswrapper[4721]: I0128 19:46:56.084980 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bfrtv" event={"ID":"d0d9d69a-6803-4b0f-8cde-d3fd15cba92b","Type":"ContainerDied","Data":"ae21d65f2a316d227d2015497c66ea9cfeb27271ff068babe64533b0f0288e72"} Jan 28 19:46:56 crc kubenswrapper[4721]: I0128 19:46:56.088591 4721 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 28 19:46:57 crc kubenswrapper[4721]: I0128 19:46:57.098715 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bfrtv" event={"ID":"d0d9d69a-6803-4b0f-8cde-d3fd15cba92b","Type":"ContainerStarted","Data":"56a29fb771027d6e5b45fed044324d817831c2505d82faeb0ba7aa5f65217ace"} Jan 28 19:46:59 crc kubenswrapper[4721]: I0128 19:46:59.119727 4721 generic.go:334] "Generic (PLEG): container finished" podID="d0d9d69a-6803-4b0f-8cde-d3fd15cba92b" containerID="56a29fb771027d6e5b45fed044324d817831c2505d82faeb0ba7aa5f65217ace" exitCode=0 Jan 28 19:46:59 crc kubenswrapper[4721]: I0128 19:46:59.119795 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bfrtv" event={"ID":"d0d9d69a-6803-4b0f-8cde-d3fd15cba92b","Type":"ContainerDied","Data":"56a29fb771027d6e5b45fed044324d817831c2505d82faeb0ba7aa5f65217ace"} Jan 28 19:47:00 crc kubenswrapper[4721]: I0128 19:47:00.137457 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bfrtv" event={"ID":"d0d9d69a-6803-4b0f-8cde-d3fd15cba92b","Type":"ContainerStarted","Data":"a7a47a73e034eea36b0be6663334b998002d9de063ed1464f617541584394e3d"} Jan 28 19:47:00 crc kubenswrapper[4721]: I0128 19:47:00.166098 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-bfrtv" podStartSLOduration=3.597070805 podStartE2EDuration="7.166065352s" podCreationTimestamp="2026-01-28 19:46:53 +0000 UTC" firstStartedPulling="2026-01-28 19:46:56.088220315 +0000 UTC m=+4381.813525875" lastFinishedPulling="2026-01-28 19:46:59.657214862 +0000 UTC m=+4385.382520422" observedRunningTime="2026-01-28 19:47:00.156615186 +0000 UTC m=+4385.881920756" watchObservedRunningTime="2026-01-28 
19:47:00.166065352 +0000 UTC m=+4385.891370912" Jan 28 19:47:02 crc kubenswrapper[4721]: I0128 19:47:02.529106 4721 scope.go:117] "RemoveContainer" containerID="e25f338d1e1f7e9538daec152b64af6fcba9ff0b91b9ae135d4931beae2a0f97" Jan 28 19:47:02 crc kubenswrapper[4721]: E0128 19:47:02.529918 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-76rx2_openshift-machine-config-operator(6e3427a4-9a03-4a08-bf7f-7a5e96290ad6)\"" pod="openshift-machine-config-operator/machine-config-daemon-76rx2" podUID="6e3427a4-9a03-4a08-bf7f-7a5e96290ad6" Jan 28 19:47:04 crc kubenswrapper[4721]: I0128 19:47:04.283273 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-bfrtv" Jan 28 19:47:04 crc kubenswrapper[4721]: I0128 19:47:04.283569 4721 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-bfrtv" Jan 28 19:47:05 crc kubenswrapper[4721]: I0128 19:47:05.143288 4721 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-bfrtv" Jan 28 19:47:05 crc kubenswrapper[4721]: I0128 19:47:05.240538 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-bfrtv" Jan 28 19:47:05 crc kubenswrapper[4721]: I0128 19:47:05.393895 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-bfrtv"] Jan 28 19:47:07 crc kubenswrapper[4721]: I0128 19:47:07.217266 4721 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-bfrtv" podUID="d0d9d69a-6803-4b0f-8cde-d3fd15cba92b" containerName="registry-server" containerID="cri-o://a7a47a73e034eea36b0be6663334b998002d9de063ed1464f617541584394e3d" gracePeriod=2 Jan 28 19:47:08 crc kubenswrapper[4721]: I0128 19:47:08.063396 4721 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-bfrtv" Jan 28 19:47:08 crc kubenswrapper[4721]: I0128 19:47:08.172471 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d0d9d69a-6803-4b0f-8cde-d3fd15cba92b-utilities\") pod \"d0d9d69a-6803-4b0f-8cde-d3fd15cba92b\" (UID: \"d0d9d69a-6803-4b0f-8cde-d3fd15cba92b\") " Jan 28 19:47:08 crc kubenswrapper[4721]: I0128 19:47:08.172637 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dwzlp\" (UniqueName: \"kubernetes.io/projected/d0d9d69a-6803-4b0f-8cde-d3fd15cba92b-kube-api-access-dwzlp\") pod \"d0d9d69a-6803-4b0f-8cde-d3fd15cba92b\" (UID: \"d0d9d69a-6803-4b0f-8cde-d3fd15cba92b\") " Jan 28 19:47:08 crc kubenswrapper[4721]: I0128 19:47:08.172699 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d0d9d69a-6803-4b0f-8cde-d3fd15cba92b-catalog-content\") pod \"d0d9d69a-6803-4b0f-8cde-d3fd15cba92b\" (UID: \"d0d9d69a-6803-4b0f-8cde-d3fd15cba92b\") " Jan 28 19:47:08 crc kubenswrapper[4721]: I0128 19:47:08.173768 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d0d9d69a-6803-4b0f-8cde-d3fd15cba92b-utilities" (OuterVolumeSpecName: "utilities") pod "d0d9d69a-6803-4b0f-8cde-d3fd15cba92b" (UID: "d0d9d69a-6803-4b0f-8cde-d3fd15cba92b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 19:47:08 crc kubenswrapper[4721]: I0128 19:47:08.179506 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d0d9d69a-6803-4b0f-8cde-d3fd15cba92b-kube-api-access-dwzlp" (OuterVolumeSpecName: "kube-api-access-dwzlp") pod "d0d9d69a-6803-4b0f-8cde-d3fd15cba92b" (UID: "d0d9d69a-6803-4b0f-8cde-d3fd15cba92b"). InnerVolumeSpecName "kube-api-access-dwzlp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 19:47:08 crc kubenswrapper[4721]: I0128 19:47:08.230322 4721 generic.go:334] "Generic (PLEG): container finished" podID="d0d9d69a-6803-4b0f-8cde-d3fd15cba92b" containerID="a7a47a73e034eea36b0be6663334b998002d9de063ed1464f617541584394e3d" exitCode=0 Jan 28 19:47:08 crc kubenswrapper[4721]: I0128 19:47:08.230374 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bfrtv" event={"ID":"d0d9d69a-6803-4b0f-8cde-d3fd15cba92b","Type":"ContainerDied","Data":"a7a47a73e034eea36b0be6663334b998002d9de063ed1464f617541584394e3d"} Jan 28 19:47:08 crc kubenswrapper[4721]: I0128 19:47:08.230401 4721 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-bfrtv" Jan 28 19:47:08 crc kubenswrapper[4721]: I0128 19:47:08.230425 4721 scope.go:117] "RemoveContainer" containerID="a7a47a73e034eea36b0be6663334b998002d9de063ed1464f617541584394e3d" Jan 28 19:47:08 crc kubenswrapper[4721]: I0128 19:47:08.230410 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bfrtv" event={"ID":"d0d9d69a-6803-4b0f-8cde-d3fd15cba92b","Type":"ContainerDied","Data":"9a06036dd974065fc9197814659704b725b95290733242e96da33829fc3eac07"} Jan 28 19:47:08 crc kubenswrapper[4721]: I0128 19:47:08.254621 4721 scope.go:117] "RemoveContainer" containerID="56a29fb771027d6e5b45fed044324d817831c2505d82faeb0ba7aa5f65217ace" Jan 28 19:47:08 crc kubenswrapper[4721]: I0128 19:47:08.276051 4721 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d0d9d69a-6803-4b0f-8cde-d3fd15cba92b-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 19:47:08 crc kubenswrapper[4721]: I0128 19:47:08.276101 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dwzlp\" (UniqueName: \"kubernetes.io/projected/d0d9d69a-6803-4b0f-8cde-d3fd15cba92b-kube-api-access-dwzlp\") on node \"crc\" DevicePath \"\"" Jan 28 19:47:08 crc kubenswrapper[4721]: I0128 19:47:08.291999 4721 scope.go:117] "RemoveContainer" containerID="ae21d65f2a316d227d2015497c66ea9cfeb27271ff068babe64533b0f0288e72" Jan 28 19:47:08 crc kubenswrapper[4721]: I0128 19:47:08.337708 4721 scope.go:117] "RemoveContainer" containerID="a7a47a73e034eea36b0be6663334b998002d9de063ed1464f617541584394e3d" Jan 28 19:47:08 crc kubenswrapper[4721]: E0128 19:47:08.338691 4721 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a7a47a73e034eea36b0be6663334b998002d9de063ed1464f617541584394e3d\": container with ID starting with a7a47a73e034eea36b0be6663334b998002d9de063ed1464f617541584394e3d not found: ID does not exist" containerID="a7a47a73e034eea36b0be6663334b998002d9de063ed1464f617541584394e3d" Jan 28 19:47:08 crc kubenswrapper[4721]: I0128 19:47:08.338727 4721 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a7a47a73e034eea36b0be6663334b998002d9de063ed1464f617541584394e3d"} err="failed to get container status \"a7a47a73e034eea36b0be6663334b998002d9de063ed1464f617541584394e3d\": rpc error: code = NotFound desc = could not find container \"a7a47a73e034eea36b0be6663334b998002d9de063ed1464f617541584394e3d\": container with ID starting with a7a47a73e034eea36b0be6663334b998002d9de063ed1464f617541584394e3d not found: ID does not exist" Jan 28 19:47:08 crc kubenswrapper[4721]: I0128 19:47:08.338752 4721 scope.go:117] "RemoveContainer" containerID="56a29fb771027d6e5b45fed044324d817831c2505d82faeb0ba7aa5f65217ace" Jan 28 19:47:08 crc kubenswrapper[4721]: E0128 19:47:08.339613 4721 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"56a29fb771027d6e5b45fed044324d817831c2505d82faeb0ba7aa5f65217ace\": container with ID starting with 56a29fb771027d6e5b45fed044324d817831c2505d82faeb0ba7aa5f65217ace not found: ID does not exist" containerID="56a29fb771027d6e5b45fed044324d817831c2505d82faeb0ba7aa5f65217ace" Jan 28 19:47:08 crc kubenswrapper[4721]: I0128 19:47:08.339675 4721 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"56a29fb771027d6e5b45fed044324d817831c2505d82faeb0ba7aa5f65217ace"} err="failed to get container status \"56a29fb771027d6e5b45fed044324d817831c2505d82faeb0ba7aa5f65217ace\": rpc error: code = NotFound desc = could not find container \"56a29fb771027d6e5b45fed044324d817831c2505d82faeb0ba7aa5f65217ace\": container with ID starting with 56a29fb771027d6e5b45fed044324d817831c2505d82faeb0ba7aa5f65217ace not found: ID does not exist" Jan 28 19:47:08 crc kubenswrapper[4721]: I0128 19:47:08.339715 4721 scope.go:117] "RemoveContainer" containerID="ae21d65f2a316d227d2015497c66ea9cfeb27271ff068babe64533b0f0288e72" Jan 28 19:47:08 crc kubenswrapper[4721]: E0128 19:47:08.340188 4721 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ae21d65f2a316d227d2015497c66ea9cfeb27271ff068babe64533b0f0288e72\": container with ID starting with ae21d65f2a316d227d2015497c66ea9cfeb27271ff068babe64533b0f0288e72 not found: ID does not exist" containerID="ae21d65f2a316d227d2015497c66ea9cfeb27271ff068babe64533b0f0288e72" Jan 28 19:47:08 crc kubenswrapper[4721]: I0128 19:47:08.340214 4721 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ae21d65f2a316d227d2015497c66ea9cfeb27271ff068babe64533b0f0288e72"} err="failed to get container status \"ae21d65f2a316d227d2015497c66ea9cfeb27271ff068babe64533b0f0288e72\": rpc error: code = NotFound desc = could not find container \"ae21d65f2a316d227d2015497c66ea9cfeb27271ff068babe64533b0f0288e72\": container with ID starting with ae21d65f2a316d227d2015497c66ea9cfeb27271ff068babe64533b0f0288e72 not found: ID does not exist" Jan 28 19:47:09 crc kubenswrapper[4721]: I0128 19:47:09.006773 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d0d9d69a-6803-4b0f-8cde-d3fd15cba92b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d0d9d69a-6803-4b0f-8cde-d3fd15cba92b" (UID: "d0d9d69a-6803-4b0f-8cde-d3fd15cba92b"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 19:47:09 crc kubenswrapper[4721]: I0128 19:47:09.100223 4721 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d0d9d69a-6803-4b0f-8cde-d3fd15cba92b-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 19:47:09 crc kubenswrapper[4721]: I0128 19:47:09.178654 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-bfrtv"] Jan 28 19:47:09 crc kubenswrapper[4721]: I0128 19:47:09.194257 4721 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-bfrtv"] Jan 28 19:47:09 crc kubenswrapper[4721]: I0128 19:47:09.542962 4721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d0d9d69a-6803-4b0f-8cde-d3fd15cba92b" path="/var/lib/kubelet/pods/d0d9d69a-6803-4b0f-8cde-d3fd15cba92b/volumes" Jan 28 19:47:12 crc kubenswrapper[4721]: I0128 19:47:12.542440 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-gc9m9"] Jan 28 19:47:12 crc kubenswrapper[4721]: E0128 19:47:12.544276 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d0d9d69a-6803-4b0f-8cde-d3fd15cba92b" containerName="extract-utilities" Jan 28 19:47:12 crc kubenswrapper[4721]: I0128 19:47:12.544307 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="d0d9d69a-6803-4b0f-8cde-d3fd15cba92b" containerName="extract-utilities" Jan 28 19:47:12 crc kubenswrapper[4721]: E0128 19:47:12.544337 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d0d9d69a-6803-4b0f-8cde-d3fd15cba92b" containerName="extract-content" Jan 28 19:47:12 crc kubenswrapper[4721]: I0128 19:47:12.544345 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="d0d9d69a-6803-4b0f-8cde-d3fd15cba92b" containerName="extract-content" Jan 28 19:47:12 crc kubenswrapper[4721]: E0128 19:47:12.544377 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d0d9d69a-6803-4b0f-8cde-d3fd15cba92b" containerName="registry-server" Jan 28 19:47:12 crc kubenswrapper[4721]: I0128 19:47:12.544391 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="d0d9d69a-6803-4b0f-8cde-d3fd15cba92b" containerName="registry-server" Jan 28 19:47:12 crc kubenswrapper[4721]: I0128 19:47:12.545584 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="d0d9d69a-6803-4b0f-8cde-d3fd15cba92b" containerName="registry-server" Jan 28 19:47:12 crc kubenswrapper[4721]: I0128 19:47:12.551918 4721 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-gc9m9"
Jan 28 19:47:12 crc kubenswrapper[4721]: I0128 19:47:12.587916 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-gc9m9"]
Jan 28 19:47:12 crc kubenswrapper[4721]: I0128 19:47:12.692551 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4dc0adbf-87fe-46ea-8d48-499b0a2e8533-catalog-content\") pod \"certified-operators-gc9m9\" (UID: \"4dc0adbf-87fe-46ea-8d48-499b0a2e8533\") " pod="openshift-marketplace/certified-operators-gc9m9"
Jan 28 19:47:12 crc kubenswrapper[4721]: I0128 19:47:12.692955 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4dc0adbf-87fe-46ea-8d48-499b0a2e8533-utilities\") pod \"certified-operators-gc9m9\" (UID: \"4dc0adbf-87fe-46ea-8d48-499b0a2e8533\") " pod="openshift-marketplace/certified-operators-gc9m9"
Jan 28 19:47:12 crc kubenswrapper[4721]: I0128 19:47:12.693032 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qswnl\" (UniqueName: \"kubernetes.io/projected/4dc0adbf-87fe-46ea-8d48-499b0a2e8533-kube-api-access-qswnl\") pod \"certified-operators-gc9m9\" (UID: \"4dc0adbf-87fe-46ea-8d48-499b0a2e8533\") " pod="openshift-marketplace/certified-operators-gc9m9"
Jan 28 19:47:12 crc kubenswrapper[4721]: I0128 19:47:12.794878 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4dc0adbf-87fe-46ea-8d48-499b0a2e8533-catalog-content\") pod \"certified-operators-gc9m9\" (UID: \"4dc0adbf-87fe-46ea-8d48-499b0a2e8533\") " pod="openshift-marketplace/certified-operators-gc9m9"
Jan 28 19:47:12 crc kubenswrapper[4721]: I0128 19:47:12.795018 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4dc0adbf-87fe-46ea-8d48-499b0a2e8533-utilities\") pod \"certified-operators-gc9m9\" (UID: \"4dc0adbf-87fe-46ea-8d48-499b0a2e8533\") " pod="openshift-marketplace/certified-operators-gc9m9"
Jan 28 19:47:12 crc kubenswrapper[4721]: I0128 19:47:12.795117 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qswnl\" (UniqueName: \"kubernetes.io/projected/4dc0adbf-87fe-46ea-8d48-499b0a2e8533-kube-api-access-qswnl\") pod \"certified-operators-gc9m9\" (UID: \"4dc0adbf-87fe-46ea-8d48-499b0a2e8533\") " pod="openshift-marketplace/certified-operators-gc9m9"
Jan 28 19:47:12 crc kubenswrapper[4721]: I0128 19:47:12.795503 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4dc0adbf-87fe-46ea-8d48-499b0a2e8533-catalog-content\") pod \"certified-operators-gc9m9\" (UID: \"4dc0adbf-87fe-46ea-8d48-499b0a2e8533\") " pod="openshift-marketplace/certified-operators-gc9m9"
Jan 28 19:47:12 crc kubenswrapper[4721]: I0128 19:47:12.795920 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4dc0adbf-87fe-46ea-8d48-499b0a2e8533-utilities\") pod \"certified-operators-gc9m9\" (UID: \"4dc0adbf-87fe-46ea-8d48-499b0a2e8533\") " pod="openshift-marketplace/certified-operators-gc9m9"
Jan 28 19:47:12 crc kubenswrapper[4721]: I0128 19:47:12.816907 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qswnl\" (UniqueName: \"kubernetes.io/projected/4dc0adbf-87fe-46ea-8d48-499b0a2e8533-kube-api-access-qswnl\") pod \"certified-operators-gc9m9\" (UID: \"4dc0adbf-87fe-46ea-8d48-499b0a2e8533\") " pod="openshift-marketplace/certified-operators-gc9m9"
Jan 28 19:47:12 crc kubenswrapper[4721]: I0128 19:47:12.893701 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-gc9m9"
Jan 28 19:47:13 crc kubenswrapper[4721]: I0128 19:47:13.454793 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-gc9m9"]
Jan 28 19:47:14 crc kubenswrapper[4721]: I0128 19:47:14.309645 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gc9m9" event={"ID":"4dc0adbf-87fe-46ea-8d48-499b0a2e8533","Type":"ContainerStarted","Data":"85c79bc32733bca140f5a7d41a3d9f1c5fabb6526e51c78edf128c0abefaa347"}
Jan 28 19:47:15 crc kubenswrapper[4721]: I0128 19:47:15.324833 4721 generic.go:334] "Generic (PLEG): container finished" podID="4dc0adbf-87fe-46ea-8d48-499b0a2e8533" containerID="3dcc5b68f54defae88411c3e102bc06469beba4120670ead207ef3d8198ba391" exitCode=0
Jan 28 19:47:15 crc kubenswrapper[4721]: I0128 19:47:15.325053 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gc9m9" event={"ID":"4dc0adbf-87fe-46ea-8d48-499b0a2e8533","Type":"ContainerDied","Data":"3dcc5b68f54defae88411c3e102bc06469beba4120670ead207ef3d8198ba391"}
Jan 28 19:47:16 crc kubenswrapper[4721]: I0128 19:47:16.337094 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gc9m9" event={"ID":"4dc0adbf-87fe-46ea-8d48-499b0a2e8533","Type":"ContainerStarted","Data":"fb8804246a70f20066a0c0a3bc5a310de516fa321bd674edfd17be7318b7648f"}
Jan 28 19:47:17 crc kubenswrapper[4721]: I0128 19:47:17.528857 4721 scope.go:117] "RemoveContainer" containerID="e25f338d1e1f7e9538daec152b64af6fcba9ff0b91b9ae135d4931beae2a0f97"
Jan 28 19:47:17 crc kubenswrapper[4721]: E0128 19:47:17.529401 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-76rx2_openshift-machine-config-operator(6e3427a4-9a03-4a08-bf7f-7a5e96290ad6)\"" pod="openshift-machine-config-operator/machine-config-daemon-76rx2" podUID="6e3427a4-9a03-4a08-bf7f-7a5e96290ad6"
Jan 28 19:47:18 crc kubenswrapper[4721]: I0128 19:47:18.359554 4721 generic.go:334] "Generic (PLEG): container finished" podID="4dc0adbf-87fe-46ea-8d48-499b0a2e8533" containerID="fb8804246a70f20066a0c0a3bc5a310de516fa321bd674edfd17be7318b7648f" exitCode=0
Jan 28 19:47:18 crc kubenswrapper[4721]: I0128 19:47:18.359610 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gc9m9" event={"ID":"4dc0adbf-87fe-46ea-8d48-499b0a2e8533","Type":"ContainerDied","Data":"fb8804246a70f20066a0c0a3bc5a310de516fa321bd674edfd17be7318b7648f"}
Jan 28 19:47:19 crc kubenswrapper[4721]: I0128 19:47:19.372878 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gc9m9" event={"ID":"4dc0adbf-87fe-46ea-8d48-499b0a2e8533","Type":"ContainerStarted","Data":"5c75278e465af038f0cb2dff8157cedb3d66470eff118680ffa6cd5ef4375e89"}
Jan 28 19:47:19 crc kubenswrapper[4721]: I0128 19:47:19.403208 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-gc9m9" podStartSLOduration=3.974364252 podStartE2EDuration="7.403180146s" podCreationTimestamp="2026-01-28 19:47:12 +0000 UTC" firstStartedPulling="2026-01-28 19:47:15.327365253 +0000 UTC m=+4401.052670833" lastFinishedPulling="2026-01-28 19:47:18.756181167 +0000 UTC m=+4404.481486727" observedRunningTime="2026-01-28 19:47:19.393206482 +0000 UTC m=+4405.118512052" watchObservedRunningTime="2026-01-28 19:47:19.403180146 +0000 UTC m=+4405.128485706"
Jan 28 19:47:22 crc kubenswrapper[4721]: I0128 19:47:22.895660 4721 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-gc9m9"
Jan 28 19:47:22 crc kubenswrapper[4721]: I0128 19:47:22.896010 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-gc9m9"
Jan 28 19:47:22 crc kubenswrapper[4721]: I0128 19:47:22.990195 4721 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-gc9m9"
Jan 28 19:47:23 crc kubenswrapper[4721]: I0128 19:47:23.008061 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-jrjx5_8355e616-674b-4bc2-a727-76609df63630/control-plane-machine-set-operator/0.log"
Jan 28 19:47:23 crc kubenswrapper[4721]: I0128 19:47:23.229413 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-g474w_49007c72-1df2-49db-9bbb-c90ee8207149/kube-rbac-proxy/0.log"
Jan 28 19:47:23 crc kubenswrapper[4721]: I0128 19:47:23.335589 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-g474w_49007c72-1df2-49db-9bbb-c90ee8207149/machine-api-operator/0.log"
Jan 28 19:47:23 crc kubenswrapper[4721]: I0128 19:47:23.925373 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-gc9m9"
Jan 28 19:47:24 crc kubenswrapper[4721]: I0128 19:47:24.233793 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-gc9m9"]
Jan 28 19:47:25 crc kubenswrapper[4721]: I0128 19:47:25.435151 4721 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-gc9m9" podUID="4dc0adbf-87fe-46ea-8d48-499b0a2e8533" containerName="registry-server" containerID="cri-o://5c75278e465af038f0cb2dff8157cedb3d66470eff118680ffa6cd5ef4375e89" gracePeriod=2
Jan 28 19:47:26 crc kubenswrapper[4721]: I0128 19:47:26.479517 4721 generic.go:334] "Generic (PLEG): container finished" podID="4dc0adbf-87fe-46ea-8d48-499b0a2e8533" containerID="5c75278e465af038f0cb2dff8157cedb3d66470eff118680ffa6cd5ef4375e89" exitCode=0
Jan 28 19:47:26 crc kubenswrapper[4721]: I0128 19:47:26.479902 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gc9m9" event={"ID":"4dc0adbf-87fe-46ea-8d48-499b0a2e8533","Type":"ContainerDied","Data":"5c75278e465af038f0cb2dff8157cedb3d66470eff118680ffa6cd5ef4375e89"}
Jan 28 19:47:26 crc kubenswrapper[4721]: I0128 19:47:26.962815 4721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-gc9m9"
Jan 28 19:47:27 crc kubenswrapper[4721]: I0128 19:47:27.071836 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4dc0adbf-87fe-46ea-8d48-499b0a2e8533-utilities\") pod \"4dc0adbf-87fe-46ea-8d48-499b0a2e8533\" (UID: \"4dc0adbf-87fe-46ea-8d48-499b0a2e8533\") "
Jan 28 19:47:27 crc kubenswrapper[4721]: I0128 19:47:27.071904 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qswnl\" (UniqueName: \"kubernetes.io/projected/4dc0adbf-87fe-46ea-8d48-499b0a2e8533-kube-api-access-qswnl\") pod \"4dc0adbf-87fe-46ea-8d48-499b0a2e8533\" (UID: \"4dc0adbf-87fe-46ea-8d48-499b0a2e8533\") "
Jan 28 19:47:27 crc kubenswrapper[4721]: I0128 19:47:27.072093 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4dc0adbf-87fe-46ea-8d48-499b0a2e8533-catalog-content\") pod \"4dc0adbf-87fe-46ea-8d48-499b0a2e8533\" (UID: \"4dc0adbf-87fe-46ea-8d48-499b0a2e8533\") "
Jan 28 19:47:27 crc kubenswrapper[4721]: I0128 19:47:27.073136 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4dc0adbf-87fe-46ea-8d48-499b0a2e8533-utilities" (OuterVolumeSpecName: "utilities") pod "4dc0adbf-87fe-46ea-8d48-499b0a2e8533" (UID: "4dc0adbf-87fe-46ea-8d48-499b0a2e8533"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 28 19:47:27 crc kubenswrapper[4721]: I0128 19:47:27.117250 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4dc0adbf-87fe-46ea-8d48-499b0a2e8533-kube-api-access-qswnl" (OuterVolumeSpecName: "kube-api-access-qswnl") pod "4dc0adbf-87fe-46ea-8d48-499b0a2e8533" (UID: "4dc0adbf-87fe-46ea-8d48-499b0a2e8533"). InnerVolumeSpecName "kube-api-access-qswnl". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 28 19:47:27 crc kubenswrapper[4721]: I0128 19:47:27.133518 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4dc0adbf-87fe-46ea-8d48-499b0a2e8533-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4dc0adbf-87fe-46ea-8d48-499b0a2e8533" (UID: "4dc0adbf-87fe-46ea-8d48-499b0a2e8533"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 28 19:47:27 crc kubenswrapper[4721]: I0128 19:47:27.175945 4721 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4dc0adbf-87fe-46ea-8d48-499b0a2e8533-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 28 19:47:27 crc kubenswrapper[4721]: I0128 19:47:27.176338 4721 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4dc0adbf-87fe-46ea-8d48-499b0a2e8533-utilities\") on node \"crc\" DevicePath \"\""
Jan 28 19:47:27 crc kubenswrapper[4721]: I0128 19:47:27.176441 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qswnl\" (UniqueName: \"kubernetes.io/projected/4dc0adbf-87fe-46ea-8d48-499b0a2e8533-kube-api-access-qswnl\") on node \"crc\" DevicePath \"\""
Jan 28 19:47:27 crc kubenswrapper[4721]: I0128 19:47:27.504441 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gc9m9" event={"ID":"4dc0adbf-87fe-46ea-8d48-499b0a2e8533","Type":"ContainerDied","Data":"85c79bc32733bca140f5a7d41a3d9f1c5fabb6526e51c78edf128c0abefaa347"}
Jan 28 19:47:27 crc kubenswrapper[4721]: I0128 19:47:27.505641 4721 scope.go:117] "RemoveContainer" containerID="5c75278e465af038f0cb2dff8157cedb3d66470eff118680ffa6cd5ef4375e89"
Jan 28 19:47:27 crc kubenswrapper[4721]: I0128 19:47:27.504719 4721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-gc9m9"
Jan 28 19:47:27 crc kubenswrapper[4721]: I0128 19:47:27.536558 4721 scope.go:117] "RemoveContainer" containerID="fb8804246a70f20066a0c0a3bc5a310de516fa321bd674edfd17be7318b7648f"
Jan 28 19:47:27 crc kubenswrapper[4721]: I0128 19:47:27.554402 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-gc9m9"]
Jan 28 19:47:27 crc kubenswrapper[4721]: I0128 19:47:27.562387 4721 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-gc9m9"]
Jan 28 19:47:27 crc kubenswrapper[4721]: I0128 19:47:27.573794 4721 scope.go:117] "RemoveContainer" containerID="3dcc5b68f54defae88411c3e102bc06469beba4120670ead207ef3d8198ba391"
Jan 28 19:47:29 crc kubenswrapper[4721]: I0128 19:47:29.543320 4721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4dc0adbf-87fe-46ea-8d48-499b0a2e8533" path="/var/lib/kubelet/pods/4dc0adbf-87fe-46ea-8d48-499b0a2e8533/volumes"
Jan 28 19:47:31 crc kubenswrapper[4721]: I0128 19:47:31.529624 4721 scope.go:117] "RemoveContainer" containerID="e25f338d1e1f7e9538daec152b64af6fcba9ff0b91b9ae135d4931beae2a0f97"
Jan 28 19:47:31 crc kubenswrapper[4721]: E0128 19:47:31.530407 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-76rx2_openshift-machine-config-operator(6e3427a4-9a03-4a08-bf7f-7a5e96290ad6)\"" pod="openshift-machine-config-operator/machine-config-daemon-76rx2" podUID="6e3427a4-9a03-4a08-bf7f-7a5e96290ad6"
Jan 28 19:47:43 crc kubenswrapper[4721]: I0128 19:47:43.329905 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-858654f9db-xxzt6_c68c41d8-39c1-417b-a4ba-dafeb3762c32/cert-manager-controller/0.log"
Jan 28 19:47:43 crc kubenswrapper[4721]: I0128 19:47:43.594973 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-cf98fcc89-f66kh_f637f152-a40b-45ff-989f-f82ad65b2066/cert-manager-cainjector/0.log"
Jan 28 19:47:43 crc kubenswrapper[4721]: I0128 19:47:43.728585 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-687f57d79b-l5dj9_12d309c4-9049-41c8-be1f-8f0e422ab186/cert-manager-webhook/0.log"
Jan 28 19:47:45 crc kubenswrapper[4721]: I0128 19:47:45.543508 4721 scope.go:117] "RemoveContainer" containerID="e25f338d1e1f7e9538daec152b64af6fcba9ff0b91b9ae135d4931beae2a0f97"
Jan 28 19:47:45 crc kubenswrapper[4721]: E0128 19:47:45.544258 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-76rx2_openshift-machine-config-operator(6e3427a4-9a03-4a08-bf7f-7a5e96290ad6)\"" pod="openshift-machine-config-operator/machine-config-daemon-76rx2" podUID="6e3427a4-9a03-4a08-bf7f-7a5e96290ad6"
Jan 28 19:47:59 crc kubenswrapper[4721]: I0128 19:47:59.529660 4721 scope.go:117] "RemoveContainer" containerID="e25f338d1e1f7e9538daec152b64af6fcba9ff0b91b9ae135d4931beae2a0f97"
Jan 28 19:47:59 crc kubenswrapper[4721]: E0128 19:47:59.530664 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-76rx2_openshift-machine-config-operator(6e3427a4-9a03-4a08-bf7f-7a5e96290ad6)\"" pod="openshift-machine-config-operator/machine-config-daemon-76rx2" podUID="6e3427a4-9a03-4a08-bf7f-7a5e96290ad6"
Jan 28 19:48:02 crc kubenswrapper[4721]: I0128 19:48:02.657877 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-7754f76f8b-qxhd9_c7b54106-b20d-4911-a9e2-90d5539bb4d7/nmstate-console-plugin/0.log"
Jan 28 19:48:02 crc kubenswrapper[4721]: I0128 19:48:02.815475 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-4wqcf_cf95e16e-0533-4d53-a185-3c62adb9e573/nmstate-handler/0.log"
Jan 28 19:48:02 crc kubenswrapper[4721]: I0128 19:48:02.860947 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-rwjnr_fda999b5-6a00-4137-817e-b7d5417a2d2e/kube-rbac-proxy/0.log"
Jan 28 19:48:02 crc kubenswrapper[4721]: I0128 19:48:02.912339 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-rwjnr_fda999b5-6a00-4137-817e-b7d5417a2d2e/nmstate-metrics/0.log"
Jan 28 19:48:03 crc kubenswrapper[4721]: I0128 19:48:03.069560 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-646758c888-26llr_2498df5a-d126-45bd-b53b-9beeedc256b7/nmstate-operator/0.log"
Jan 28 19:48:03 crc kubenswrapper[4721]: I0128 19:48:03.113699 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-8474b5b9d8-9rp4b_a8abeaa3-e685-4caa-b32c-cc0a40dfdb8b/nmstate-webhook/0.log"
Jan 28 19:48:13 crc kubenswrapper[4721]: I0128 19:48:13.528757 4721 scope.go:117] "RemoveContainer" containerID="e25f338d1e1f7e9538daec152b64af6fcba9ff0b91b9ae135d4931beae2a0f97"
Jan 28 19:48:13 crc kubenswrapper[4721]: E0128 19:48:13.529720 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-76rx2_openshift-machine-config-operator(6e3427a4-9a03-4a08-bf7f-7a5e96290ad6)\"" pod="openshift-machine-config-operator/machine-config-daemon-76rx2" podUID="6e3427a4-9a03-4a08-bf7f-7a5e96290ad6"
Jan 28 19:48:20 crc kubenswrapper[4721]: I0128 19:48:20.794829 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-5bfcb79b6d-cd47c_8d99024b-2cf7-4372-98d3-2c282e9d7530/kube-rbac-proxy/0.log"
Jan 28 19:48:20 crc kubenswrapper[4721]: I0128 19:48:20.865316 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-5bfcb79b6d-cd47c_8d99024b-2cf7-4372-98d3-2c282e9d7530/manager/0.log"
Jan 28 19:48:24 crc kubenswrapper[4721]: I0128 19:48:24.528904 4721 scope.go:117] "RemoveContainer" containerID="e25f338d1e1f7e9538daec152b64af6fcba9ff0b91b9ae135d4931beae2a0f97"
Jan 28 19:48:24 crc kubenswrapper[4721]: E0128 19:48:24.529909 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-76rx2_openshift-machine-config-operator(6e3427a4-9a03-4a08-bf7f-7a5e96290ad6)\"" pod="openshift-machine-config-operator/machine-config-daemon-76rx2" podUID="6e3427a4-9a03-4a08-bf7f-7a5e96290ad6"
Jan 28 19:48:36 crc kubenswrapper[4721]: I0128 19:48:36.974143 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-68bc856cb9-424xn_cd50289b-aa27-438d-89a2-405552dbadf7/prometheus-operator/0.log"
Jan 28 19:48:37 crc kubenswrapper[4721]: I0128 19:48:37.529560 4721 scope.go:117] "RemoveContainer" containerID="e25f338d1e1f7e9538daec152b64af6fcba9ff0b91b9ae135d4931beae2a0f97"
Jan 28 19:48:37 crc kubenswrapper[4721]: E0128 19:48:37.529958 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-76rx2_openshift-machine-config-operator(6e3427a4-9a03-4a08-bf7f-7a5e96290ad6)\"" pod="openshift-machine-config-operator/machine-config-daemon-76rx2" podUID="6e3427a4-9a03-4a08-bf7f-7a5e96290ad6"
Jan 28 19:48:37 crc kubenswrapper[4721]: I0128 19:48:37.687997 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-65d8d94d66-h2rfr_8b291a65-1dc7-4312-a429-60bb0a86800d/prometheus-operator-admission-webhook/0.log"
Jan 28 19:48:37 crc kubenswrapper[4721]: I0128 19:48:37.784280 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-65d8d94d66-sbgj9_e3cb407f-4a19-4f81-b388-4db383b55701/prometheus-operator-admission-webhook/0.log"
Jan 28 19:48:38 crc kubenswrapper[4721]: I0128 19:48:38.040405 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-5bf474d74f-fqs7q_ed6d2e9b-bba7-4ef6-9f36-f9b77dd19117/perses-operator/0.log"
Jan 28 19:48:38 crc kubenswrapper[4721]: I0128 19:48:38.047806 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-59bdc8b94-bdm2v_ab955356-2884-4e1b-9dfc-966a662c4095/operator/0.log"
Jan 28 19:48:49 crc kubenswrapper[4721]: I0128 19:48:49.533445 4721 scope.go:117] "RemoveContainer" containerID="e25f338d1e1f7e9538daec152b64af6fcba9ff0b91b9ae135d4931beae2a0f97"
Jan 28 19:48:49 crc kubenswrapper[4721]: E0128 19:48:49.534430 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-76rx2_openshift-machine-config-operator(6e3427a4-9a03-4a08-bf7f-7a5e96290ad6)\"" pod="openshift-machine-config-operator/machine-config-daemon-76rx2" podUID="6e3427a4-9a03-4a08-bf7f-7a5e96290ad6"
Jan 28 19:48:54 crc kubenswrapper[4721]: I0128 19:48:54.800728 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-7rcs7_c251c48b-fe6b-484b-9ff7-60faab8d13b5/controller/0.log"
Jan 28 19:48:54 crc kubenswrapper[4721]: I0128 19:48:54.858993 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-7rcs7_c251c48b-fe6b-484b-9ff7-60faab8d13b5/kube-rbac-proxy/0.log"
Jan 28 19:48:55 crc kubenswrapper[4721]: I0128 19:48:55.048280 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-94kms_ef7fdb4c-b45c-44c4-bde8-d1bd9ae3c6cb/cp-frr-files/0.log"
Jan 28 19:48:55 crc kubenswrapper[4721]: I0128 19:48:55.249846 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-94kms_ef7fdb4c-b45c-44c4-bde8-d1bd9ae3c6cb/cp-frr-files/0.log"
Jan 28 19:48:55 crc kubenswrapper[4721]: I0128 19:48:55.254756 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-94kms_ef7fdb4c-b45c-44c4-bde8-d1bd9ae3c6cb/cp-reloader/0.log"
Jan 28 19:48:55 crc kubenswrapper[4721]: I0128 19:48:55.327705 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-94kms_ef7fdb4c-b45c-44c4-bde8-d1bd9ae3c6cb/cp-metrics/0.log"
Jan 28 19:48:55 crc kubenswrapper[4721]: I0128 19:48:55.334888 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-94kms_ef7fdb4c-b45c-44c4-bde8-d1bd9ae3c6cb/cp-reloader/0.log"
Jan 28 19:48:55 crc kubenswrapper[4721]: I0128 19:48:55.587048 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-94kms_ef7fdb4c-b45c-44c4-bde8-d1bd9ae3c6cb/cp-frr-files/0.log"
Jan 28 19:48:55 crc kubenswrapper[4721]: I0128 19:48:55.587117 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-94kms_ef7fdb4c-b45c-44c4-bde8-d1bd9ae3c6cb/cp-reloader/0.log"
Jan 28 19:48:55 crc kubenswrapper[4721]: I0128 19:48:55.589547 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-94kms_ef7fdb4c-b45c-44c4-bde8-d1bd9ae3c6cb/cp-metrics/0.log"
Jan 28 19:48:55 crc kubenswrapper[4721]: I0128 19:48:55.626802 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-94kms_ef7fdb4c-b45c-44c4-bde8-d1bd9ae3c6cb/cp-metrics/0.log"
Jan 28 19:48:55 crc kubenswrapper[4721]: I0128 19:48:55.829875 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-94kms_ef7fdb4c-b45c-44c4-bde8-d1bd9ae3c6cb/cp-reloader/0.log"
Jan 28 19:48:55 crc kubenswrapper[4721]: I0128 19:48:55.864095 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-94kms_ef7fdb4c-b45c-44c4-bde8-d1bd9ae3c6cb/cp-frr-files/0.log"
Jan 28 19:48:55 crc kubenswrapper[4721]: I0128 19:48:55.899230 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-94kms_ef7fdb4c-b45c-44c4-bde8-d1bd9ae3c6cb/cp-metrics/0.log"
Jan 28 19:48:55 crc kubenswrapper[4721]: I0128 19:48:55.907303 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-94kms_ef7fdb4c-b45c-44c4-bde8-d1bd9ae3c6cb/controller/0.log"
Jan 28 19:48:56 crc kubenswrapper[4721]: I0128 19:48:56.176761 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-94kms_ef7fdb4c-b45c-44c4-bde8-d1bd9ae3c6cb/kube-rbac-proxy-frr/0.log"
Jan 28 19:48:56 crc kubenswrapper[4721]: I0128 19:48:56.186870 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-94kms_ef7fdb4c-b45c-44c4-bde8-d1bd9ae3c6cb/frr-metrics/0.log"
Jan 28 19:48:56 crc kubenswrapper[4721]: I0128 19:48:56.187618 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-94kms_ef7fdb4c-b45c-44c4-bde8-d1bd9ae3c6cb/kube-rbac-proxy/0.log"
Jan 28 19:48:56 crc kubenswrapper[4721]: I0128 19:48:56.557866 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-94kms_ef7fdb4c-b45c-44c4-bde8-d1bd9ae3c6cb/reloader/0.log"
Jan 28 19:48:56 crc kubenswrapper[4721]: I0128 19:48:56.605809 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7df86c4f6c-9xvzd_514e6881-7399-4848-bb65-7851e1e3b079/frr-k8s-webhook-server/0.log"
Jan 28 19:48:56 crc kubenswrapper[4721]: I0128 19:48:56.922079 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-79d44b6d7b-q852t_fbde7afa-5af9-462b-b402-352513fb9655/manager/0.log"
Jan 28 19:48:57 crc kubenswrapper[4721]: I0128 19:48:57.071435 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-7689b8f645-b5mcc_690709f2-5507-45e6-8897-380890c19e6f/webhook-server/0.log"
Jan 28 19:48:57 crc kubenswrapper[4721]: I0128 19:48:57.246754 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-k5dbx_4d13a423-7c09-4fae-b239-e376e8487d85/kube-rbac-proxy/0.log"
Jan 28 19:48:57 crc kubenswrapper[4721]: I0128 19:48:57.802454 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-94kms_ef7fdb4c-b45c-44c4-bde8-d1bd9ae3c6cb/frr/0.log"
Jan 28 19:48:57 crc kubenswrapper[4721]: I0128 19:48:57.828518 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-k5dbx_4d13a423-7c09-4fae-b239-e376e8487d85/speaker/0.log"
Jan 28 19:49:01 crc kubenswrapper[4721]: I0128 19:49:01.531316 4721 scope.go:117] "RemoveContainer" containerID="e25f338d1e1f7e9538daec152b64af6fcba9ff0b91b9ae135d4931beae2a0f97"
Jan 28 19:49:01 crc kubenswrapper[4721]: E0128 19:49:01.533315 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-76rx2_openshift-machine-config-operator(6e3427a4-9a03-4a08-bf7f-7a5e96290ad6)\"" pod="openshift-machine-config-operator/machine-config-daemon-76rx2" podUID="6e3427a4-9a03-4a08-bf7f-7a5e96290ad6"
Jan 28 19:49:12 crc kubenswrapper[4721]: I0128 19:49:12.528698 4721 scope.go:117] "RemoveContainer" containerID="e25f338d1e1f7e9538daec152b64af6fcba9ff0b91b9ae135d4931beae2a0f97"
Jan 28 19:49:12 crc kubenswrapper[4721]: E0128 19:49:12.529660 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-76rx2_openshift-machine-config-operator(6e3427a4-9a03-4a08-bf7f-7a5e96290ad6)\"" pod="openshift-machine-config-operator/machine-config-daemon-76rx2" podUID="6e3427a4-9a03-4a08-bf7f-7a5e96290ad6"
Jan 28 19:49:13 crc kubenswrapper[4721]: I0128 19:49:13.498191 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dczk9b6_2c37d643-cddf-40c7-ad82-e999634e0151/util/0.log"
Jan 28 19:49:13 crc kubenswrapper[4721]: I0128 19:49:13.856840 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dczk9b6_2c37d643-cddf-40c7-ad82-e999634e0151/pull/0.log"
Jan 28 19:49:13 crc kubenswrapper[4721]: I0128 19:49:13.894562 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dczk9b6_2c37d643-cddf-40c7-ad82-e999634e0151/util/0.log"
Jan 28 19:49:13 crc kubenswrapper[4721]: I0128 19:49:13.895041 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dczk9b6_2c37d643-cddf-40c7-ad82-e999634e0151/pull/0.log"
Jan 28 19:49:14 crc kubenswrapper[4721]: I0128 19:49:14.234517 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dczk9b6_2c37d643-cddf-40c7-ad82-e999634e0151/pull/0.log"
Jan 28 19:49:14 crc kubenswrapper[4721]: I0128 19:49:14.242032 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dczk9b6_2c37d643-cddf-40c7-ad82-e999634e0151/util/0.log"
Jan 28 19:49:14 crc kubenswrapper[4721]: I0128 19:49:14.340945 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dczk9b6_2c37d643-cddf-40c7-ad82-e999634e0151/extract/0.log"
Jan 28 19:49:14 crc kubenswrapper[4721]: I0128 19:49:14.528597 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_3e572a74f8b8ca2bcfe04329d4f26bd9689911be5d166a7403bd6ae773w2zgc_28e082c3-f662-4caa-be33-4bf2cc234ca7/util/0.log"
Jan 28 19:49:14 crc kubenswrapper[4721]: I0128 19:49:14.824875 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_3e572a74f8b8ca2bcfe04329d4f26bd9689911be5d166a7403bd6ae773w2zgc_28e082c3-f662-4caa-be33-4bf2cc234ca7/pull/0.log"
Jan 28 19:49:15 crc kubenswrapper[4721]: I0128 19:49:15.002766 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_3e572a74f8b8ca2bcfe04329d4f26bd9689911be5d166a7403bd6ae773w2zgc_28e082c3-f662-4caa-be33-4bf2cc234ca7/util/0.log"
Jan 28 19:49:15 crc kubenswrapper[4721]: I0128 19:49:15.045321 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_3e572a74f8b8ca2bcfe04329d4f26bd9689911be5d166a7403bd6ae773w2zgc_28e082c3-f662-4caa-be33-4bf2cc234ca7/pull/0.log"
Jan 28 19:49:15 crc kubenswrapper[4721]: I0128 19:49:15.228529 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_3e572a74f8b8ca2bcfe04329d4f26bd9689911be5d166a7403bd6ae773w2zgc_28e082c3-f662-4caa-be33-4bf2cc234ca7/util/0.log"
Jan 28 19:49:15 crc kubenswrapper[4721]: I0128 19:49:15.257470 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_3e572a74f8b8ca2bcfe04329d4f26bd9689911be5d166a7403bd6ae773w2zgc_28e082c3-f662-4caa-be33-4bf2cc234ca7/pull/0.log"
Jan 28 19:49:15 crc kubenswrapper[4721]: I0128 19:49:15.332550 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_3e572a74f8b8ca2bcfe04329d4f26bd9689911be5d166a7403bd6ae773w2zgc_28e082c3-f662-4caa-be33-4bf2cc234ca7/extract/0.log"
Jan 28 19:49:15 crc kubenswrapper[4721]: I0128 19:49:15.405901 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713gz9z8_f0d234c7-c326-453d-aef0-f50829390a73/util/0.log"
Jan 28 19:49:16 crc kubenswrapper[4721]: I0128 19:49:16.059770 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713gz9z8_f0d234c7-c326-453d-aef0-f50829390a73/pull/0.log"
Jan 28 19:49:16 crc kubenswrapper[4721]: I0128 19:49:16.169008 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713gz9z8_f0d234c7-c326-453d-aef0-f50829390a73/pull/0.log"
Jan 28 19:49:16 crc kubenswrapper[4721]: I0128 19:49:16.181858 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713gz9z8_f0d234c7-c326-453d-aef0-f50829390a73/util/0.log"
Jan 28 19:49:16 crc kubenswrapper[4721]: I0128 19:49:16.319315 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713gz9z8_f0d234c7-c326-453d-aef0-f50829390a73/util/0.log"
Jan 28 19:49:16 crc kubenswrapper[4721]: I0128 19:49:16.368087 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713gz9z8_f0d234c7-c326-453d-aef0-f50829390a73/pull/0.log"
Jan 28 19:49:16 crc kubenswrapper[4721]: I0128 19:49:16.451756 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713gz9z8_f0d234c7-c326-453d-aef0-f50829390a73/extract/0.log"
Jan 28 19:49:16 crc kubenswrapper[4721]: I0128 19:49:16.552987 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0852wht_e3e10f04-ed38-4461-a28c-b53f458cd84d/util/0.log"
Jan 28 19:49:16 crc kubenswrapper[4721]: I0128 19:49:16.812554 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0852wht_e3e10f04-ed38-4461-a28c-b53f458cd84d/util/0.log"
Jan 28 19:49:16 crc kubenswrapper[4721]: I0128 19:49:16.866099 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0852wht_e3e10f04-ed38-4461-a28c-b53f458cd84d/pull/0.log"
Jan 28 19:49:16 crc kubenswrapper[4721]: I0128 19:49:16.897315 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0852wht_e3e10f04-ed38-4461-a28c-b53f458cd84d/pull/0.log"
Jan 28 19:49:17 crc kubenswrapper[4721]: I0128 19:49:17.353488 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0852wht_e3e10f04-ed38-4461-a28c-b53f458cd84d/pull/0.log"
Jan 28 19:49:17 crc kubenswrapper[4721]: I0128 19:49:17.362577 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0852wht_e3e10f04-ed38-4461-a28c-b53f458cd84d/util/0.log"
Jan 28 19:49:17 crc kubenswrapper[4721]: I0128 19:49:17.397103 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0852wht_e3e10f04-ed38-4461-a28c-b53f458cd84d/extract/0.log"
Jan 28 19:49:17 crc kubenswrapper[4721]: I0128 19:49:17.574914 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-6zmqq_73a3f613-b50c-4873-b63e-78983b1c60af/extract-utilities/0.log"
Jan 28 19:49:17 crc kubenswrapper[4721]: I0128 19:49:17.839126 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-6zmqq_73a3f613-b50c-4873-b63e-78983b1c60af/extract-utilities/0.log"
Jan 28 19:49:17 crc kubenswrapper[4721]: I0128 19:49:17.839784 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-6zmqq_73a3f613-b50c-4873-b63e-78983b1c60af/extract-content/0.log"
Jan 28 19:49:17 crc kubenswrapper[4721]: I0128 19:49:17.846633 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-6zmqq_73a3f613-b50c-4873-b63e-78983b1c60af/extract-content/0.log"
Jan 28 19:49:18 crc kubenswrapper[4721]: I0128 19:49:18.063300 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-6zmqq_73a3f613-b50c-4873-b63e-78983b1c60af/extract-utilities/0.log"
Jan 28 19:49:18 crc kubenswrapper[4721]: I0128 19:49:18.261033 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-6zmqq_73a3f613-b50c-4873-b63e-78983b1c60af/extract-content/0.log"
Jan 28 19:49:18 crc kubenswrapper[4721]: I0128 19:49:18.598414 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-nm7c2_53ad6fb5-bf3c-4da3-af1c-72c1d1fa0bfe/extract-utilities/0.log"
Jan 28 19:49:18 crc kubenswrapper[4721]: I0128 19:49:18.669664 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-6zmqq_73a3f613-b50c-4873-b63e-78983b1c60af/registry-server/0.log"
Jan 28 19:49:19 crc kubenswrapper[4721]: I0128 19:49:19.348241 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-nm7c2_53ad6fb5-bf3c-4da3-af1c-72c1d1fa0bfe/extract-content/0.log"
Jan 28 19:49:19 crc kubenswrapper[4721]: I0128 19:49:19.376492 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-nm7c2_53ad6fb5-bf3c-4da3-af1c-72c1d1fa0bfe/extract-content/0.log"
Jan 28 19:49:19 crc kubenswrapper[4721]: I0128 19:49:19.397504 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-nm7c2_53ad6fb5-bf3c-4da3-af1c-72c1d1fa0bfe/extract-utilities/0.log"
Jan 28 19:49:19 crc kubenswrapper[4721]: I0128 19:49:19.629096 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-nm7c2_53ad6fb5-bf3c-4da3-af1c-72c1d1fa0bfe/extract-content/0.log"
Jan 28 19:49:19 crc kubenswrapper[4721]: I0128 19:49:19.760767 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-nm7c2_53ad6fb5-bf3c-4da3-af1c-72c1d1fa0bfe/extract-utilities/0.log"
Jan 28 19:49:19 crc kubenswrapper[4721]: I0128 19:49:19.831865 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-dk9tw_c24ece18-1c22-49c3-ae82-e63bdc44ab1f/marketplace-operator/0.log"
Jan 28 19:49:20 crc kubenswrapper[4721]: I0128 19:49:20.075392 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-7trfs_13c22ad4-c5a1-4e52-accb-81598f08a144/extract-utilities/0.log"
Jan 28 19:49:20 crc kubenswrapper[4721]: I0128 19:49:20.424605 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-nm7c2_53ad6fb5-bf3c-4da3-af1c-72c1d1fa0bfe/registry-server/0.log"
Jan 28 19:49:20 crc kubenswrapper[4721]: I0128 19:49:20.474444 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-7trfs_13c22ad4-c5a1-4e52-accb-81598f08a144/extract-content/0.log"
Jan 28 19:49:20 crc kubenswrapper[4721]: I0128 19:49:20.571693 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-7trfs_13c22ad4-c5a1-4e52-accb-81598f08a144/extract-utilities/0.log"
Jan 28 19:49:20 crc kubenswrapper[4721]: I0128 19:49:20.571794 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-7trfs_13c22ad4-c5a1-4e52-accb-81598f08a144/extract-content/0.log"
Jan 28 19:49:20 crc kubenswrapper[4721]: I0128 19:49:20.838850 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-7trfs_13c22ad4-c5a1-4e52-accb-81598f08a144/extract-utilities/0.log"
Jan 28 19:49:20 crc kubenswrapper[4721]: I0128 19:49:20.926783 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-7trfs_13c22ad4-c5a1-4e52-accb-81598f08a144/extract-content/0.log"
Jan 28 19:49:20 crc kubenswrapper[4721]: I0128 19:49:20.985301 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-mdtqb_025f6d5f-7086-4108-823a-10ef1b8b608d/extract-utilities/0.log"
Jan 28 19:49:21 crc kubenswrapper[4721]: I0128 19:49:21.075533 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-7trfs_13c22ad4-c5a1-4e52-accb-81598f08a144/registry-server/0.log"
Jan 28 19:49:21 crc kubenswrapper[4721]: I0128 19:49:21.367205 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-mdtqb_025f6d5f-7086-4108-823a-10ef1b8b608d/extract-content/0.log"
Jan 28 19:49:21 crc kubenswrapper[4721]: I0128 19:49:21.386422 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-mdtqb_025f6d5f-7086-4108-823a-10ef1b8b608d/extract-content/0.log"
Jan 28 19:49:21 crc kubenswrapper[4721]: I0128 19:49:21.448677 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-mdtqb_025f6d5f-7086-4108-823a-10ef1b8b608d/extract-utilities/0.log"
Jan 28 19:49:21 crc kubenswrapper[4721]: I0128 19:49:21.626821 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-mdtqb_025f6d5f-7086-4108-823a-10ef1b8b608d/extract-utilities/0.log"
Jan 28 19:49:21 crc kubenswrapper[4721]: I0128 19:49:21.682787 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-mdtqb_025f6d5f-7086-4108-823a-10ef1b8b608d/extract-content/0.log"
Jan 28 19:49:22 crc kubenswrapper[4721]: I0128 19:49:22.137519 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-mdtqb_025f6d5f-7086-4108-823a-10ef1b8b608d/registry-server/0.log"
Jan 28 19:49:26 crc kubenswrapper[4721]: I0128 19:49:26.529646 4721 scope.go:117] "RemoveContainer" containerID="e25f338d1e1f7e9538daec152b64af6fcba9ff0b91b9ae135d4931beae2a0f97"
Jan 28 19:49:26 crc kubenswrapper[4721]: E0128 19:49:26.530663 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-76rx2_openshift-machine-config-operator(6e3427a4-9a03-4a08-bf7f-7a5e96290ad6)\"" pod="openshift-machine-config-operator/machine-config-daemon-76rx2" podUID="6e3427a4-9a03-4a08-bf7f-7a5e96290ad6"
Jan 28 19:49:38 crc kubenswrapper[4721]: I0128 19:49:38.528995 4721 scope.go:117] "RemoveContainer" containerID="e25f338d1e1f7e9538daec152b64af6fcba9ff0b91b9ae135d4931beae2a0f97"
Jan 28 19:49:38 crc kubenswrapper[4721]: E0128 19:49:38.529862 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-76rx2_openshift-machine-config-operator(6e3427a4-9a03-4a08-bf7f-7a5e96290ad6)\"" pod="openshift-machine-config-operator/machine-config-daemon-76rx2" podUID="6e3427a4-9a03-4a08-bf7f-7a5e96290ad6"
Jan 28 19:49:41 crc kubenswrapper[4721]: I0128 19:49:41.273060 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-68bc856cb9-424xn_cd50289b-aa27-438d-89a2-405552dbadf7/prometheus-operator/0.log"
Jan 28 19:49:41 crc kubenswrapper[4721]: I0128 19:49:41.375737 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-65d8d94d66-sbgj9_e3cb407f-4a19-4f81-b388-4db383b55701/prometheus-operator-admission-webhook/0.log"
Jan 28 19:49:41 crc kubenswrapper[4721]: I0128 19:49:41.411872 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-65d8d94d66-h2rfr_8b291a65-1dc7-4312-a429-60bb0a86800d/prometheus-operator-admission-webhook/0.log"
Jan 28 19:49:41 crc kubenswrapper[4721]: I0128 19:49:41.912019 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-59bdc8b94-bdm2v_ab955356-2884-4e1b-9dfc-966a662c4095/operator/0.log"
Jan 28 19:49:41 crc kubenswrapper[4721]: I0128 19:49:41.927996 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-5bf474d74f-fqs7q_ed6d2e9b-bba7-4ef6-9f36-f9b77dd19117/perses-operator/0.log"
Jan 28 19:49:49 crc kubenswrapper[4721]: I0128 19:49:49.541768 4721 scope.go:117] "RemoveContainer" containerID="e25f338d1e1f7e9538daec152b64af6fcba9ff0b91b9ae135d4931beae2a0f97"
Jan 28 19:49:49 crc kubenswrapper[4721]: E0128 19:49:49.543622 4721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-76rx2_openshift-machine-config-operator(6e3427a4-9a03-4a08-bf7f-7a5e96290ad6)\"" pod="openshift-machine-config-operator/machine-config-daemon-76rx2" podUID="6e3427a4-9a03-4a08-bf7f-7a5e96290ad6"
Jan 28 19:50:01 crc kubenswrapper[4721]: I0128 19:50:01.487954 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-5bfcb79b6d-cd47c_8d99024b-2cf7-4372-98d3-2c282e9d7530/kube-rbac-proxy/0.log"
Jan 28 19:50:01 crc kubenswrapper[4721]: I0128 19:50:01.598066 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-5bfcb79b6d-cd47c_8d99024b-2cf7-4372-98d3-2c282e9d7530/manager/0.log"
Jan 28 19:50:02 crc kubenswrapper[4721]: I0128 19:50:02.528959 4721 scope.go:117] "RemoveContainer" containerID="e25f338d1e1f7e9538daec152b64af6fcba9ff0b91b9ae135d4931beae2a0f97"
Jan 28 19:50:03 crc kubenswrapper[4721]: I0128 19:50:03.150351 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-76rx2" event={"ID":"6e3427a4-9a03-4a08-bf7f-7a5e96290ad6","Type":"ContainerStarted","Data":"942c91fbad4cbfe0e882942a1ca00cb036817ed7f05b74fa5efb425dde9643f6"}
Jan 28 19:50:18 crc kubenswrapper[4721]: E0128 19:50:18.299002 4721 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.66:44000->38.102.83.66:37489: write tcp 38.102.83.66:44000->38.102.83.66:37489: write: broken pipe
Jan 28 19:50:22 crc kubenswrapper[4721]: I0128 19:50:22.779004 4721 scope.go:117] "RemoveContainer" containerID="99eccd6daae0c40ef7f6a930d6ca38b6eb0370ade1af6e062ecff979e4629691"
Jan 28 19:50:42 crc kubenswrapper[4721]: I0128 19:50:42.808344 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-j9hvd"]
Jan 28 19:50:42 crc kubenswrapper[4721]: E0128 19:50:42.809608 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4dc0adbf-87fe-46ea-8d48-499b0a2e8533" containerName="registry-server"
Jan 28 19:50:42 crc kubenswrapper[4721]: I0128 19:50:42.809625 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="4dc0adbf-87fe-46ea-8d48-499b0a2e8533" containerName="registry-server"
Jan 28 19:50:42 crc kubenswrapper[4721]: E0128 19:50:42.809645 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4dc0adbf-87fe-46ea-8d48-499b0a2e8533" containerName="extract-content"
Jan 28 19:50:42 crc kubenswrapper[4721]: I0128 19:50:42.809652 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="4dc0adbf-87fe-46ea-8d48-499b0a2e8533" containerName="extract-content"
Jan 28 19:50:42 crc kubenswrapper[4721]: E0128 19:50:42.809666 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4dc0adbf-87fe-46ea-8d48-499b0a2e8533" containerName="extract-utilities"
Jan 28 19:50:42 crc kubenswrapper[4721]: I0128 19:50:42.809674 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="4dc0adbf-87fe-46ea-8d48-499b0a2e8533" containerName="extract-utilities"
Jan 28 19:50:42 crc kubenswrapper[4721]: I0128 19:50:42.809967 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="4dc0adbf-87fe-46ea-8d48-499b0a2e8533" containerName="registry-server"
Jan 28 19:50:42 crc kubenswrapper[4721]: I0128 19:50:42.812500 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-j9hvd"
Jan 28 19:50:42 crc kubenswrapper[4721]: I0128 19:50:42.828890 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-j9hvd"]
Jan 28 19:50:42 crc kubenswrapper[4721]: I0128 19:50:42.907663 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4fd1b685-0e5d-41ec-a454-c0226fa7eb8a-catalog-content\") pod \"redhat-marketplace-j9hvd\" (UID: \"4fd1b685-0e5d-41ec-a454-c0226fa7eb8a\") " pod="openshift-marketplace/redhat-marketplace-j9hvd"
Jan 28 19:50:42 crc kubenswrapper[4721]: I0128 19:50:42.907736 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z7x6c\" (UniqueName: \"kubernetes.io/projected/4fd1b685-0e5d-41ec-a454-c0226fa7eb8a-kube-api-access-z7x6c\") pod \"redhat-marketplace-j9hvd\" (UID: \"4fd1b685-0e5d-41ec-a454-c0226fa7eb8a\") " pod="openshift-marketplace/redhat-marketplace-j9hvd"
Jan 28 19:50:42 crc kubenswrapper[4721]: I0128 19:50:42.907825 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4fd1b685-0e5d-41ec-a454-c0226fa7eb8a-utilities\") pod \"redhat-marketplace-j9hvd\" (UID: \"4fd1b685-0e5d-41ec-a454-c0226fa7eb8a\") " pod="openshift-marketplace/redhat-marketplace-j9hvd"
Jan 28 19:50:43 crc kubenswrapper[4721]: I0128 19:50:43.010192 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4fd1b685-0e5d-41ec-a454-c0226fa7eb8a-catalog-content\") pod \"redhat-marketplace-j9hvd\" (UID: \"4fd1b685-0e5d-41ec-a454-c0226fa7eb8a\") " pod="openshift-marketplace/redhat-marketplace-j9hvd"
Jan 28 19:50:43 crc kubenswrapper[4721]: I0128 19:50:43.010266 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z7x6c\" (UniqueName: \"kubernetes.io/projected/4fd1b685-0e5d-41ec-a454-c0226fa7eb8a-kube-api-access-z7x6c\") pod \"redhat-marketplace-j9hvd\" (UID: \"4fd1b685-0e5d-41ec-a454-c0226fa7eb8a\") " pod="openshift-marketplace/redhat-marketplace-j9hvd"
Jan 28 19:50:43 crc kubenswrapper[4721]: I0128 19:50:43.010298 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4fd1b685-0e5d-41ec-a454-c0226fa7eb8a-utilities\") pod \"redhat-marketplace-j9hvd\" (UID: \"4fd1b685-0e5d-41ec-a454-c0226fa7eb8a\") " pod="openshift-marketplace/redhat-marketplace-j9hvd"
Jan 28 19:50:43 crc kubenswrapper[4721]: I0128 19:50:43.011081 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4fd1b685-0e5d-41ec-a454-c0226fa7eb8a-utilities\") pod \"redhat-marketplace-j9hvd\" (UID: \"4fd1b685-0e5d-41ec-a454-c0226fa7eb8a\") " pod="openshift-marketplace/redhat-marketplace-j9hvd"
Jan 28 19:50:43 crc kubenswrapper[4721]: I0128 19:50:43.011127 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4fd1b685-0e5d-41ec-a454-c0226fa7eb8a-catalog-content\") pod \"redhat-marketplace-j9hvd\" (UID: \"4fd1b685-0e5d-41ec-a454-c0226fa7eb8a\") " pod="openshift-marketplace/redhat-marketplace-j9hvd"
Jan 28 19:50:43 crc kubenswrapper[4721]: I0128 19:50:43.030611 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z7x6c\" (UniqueName: \"kubernetes.io/projected/4fd1b685-0e5d-41ec-a454-c0226fa7eb8a-kube-api-access-z7x6c\") pod \"redhat-marketplace-j9hvd\" (UID: \"4fd1b685-0e5d-41ec-a454-c0226fa7eb8a\") " pod="openshift-marketplace/redhat-marketplace-j9hvd"
Jan 28 19:50:43 crc kubenswrapper[4721]: I0128 19:50:43.143399 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-j9hvd"
Jan 28 19:50:43 crc kubenswrapper[4721]: I0128 19:50:43.688900 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-j9hvd"]
Jan 28 19:50:44 crc kubenswrapper[4721]: I0128 19:50:44.613685 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-j9hvd" event={"ID":"4fd1b685-0e5d-41ec-a454-c0226fa7eb8a","Type":"ContainerStarted","Data":"b5fa8d29c290cead24994c21d31cffb9ccbf03f8a7e5095c96f55de5b35f9c76"}
Jan 28 19:50:44 crc kubenswrapper[4721]: I0128 19:50:44.614311 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-j9hvd" event={"ID":"4fd1b685-0e5d-41ec-a454-c0226fa7eb8a","Type":"ContainerStarted","Data":"ef480c5c373e618a8bd3d225f738ce2f5e53d2743f1ae8272964eb5fabaa6858"}
Jan 28 19:50:45 crc kubenswrapper[4721]: I0128 19:50:45.625683 4721 generic.go:334] "Generic (PLEG): container finished" podID="4fd1b685-0e5d-41ec-a454-c0226fa7eb8a" containerID="b5fa8d29c290cead24994c21d31cffb9ccbf03f8a7e5095c96f55de5b35f9c76" exitCode=0
Jan 28 19:50:45 crc kubenswrapper[4721]: I0128 19:50:45.625757 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-j9hvd" event={"ID":"4fd1b685-0e5d-41ec-a454-c0226fa7eb8a","Type":"ContainerDied","Data":"b5fa8d29c290cead24994c21d31cffb9ccbf03f8a7e5095c96f55de5b35f9c76"}
Jan 28 19:50:46 crc kubenswrapper[4721]: I0128 19:50:46.637094 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-j9hvd" event={"ID":"4fd1b685-0e5d-41ec-a454-c0226fa7eb8a","Type":"ContainerStarted","Data":"4c624e3c2dd13cc13b1c46cc4ec82153d9c6a28f0ffbfdef3fa376f7f9bab57b"}
Jan 28 19:50:47 crc kubenswrapper[4721]: I0128 19:50:47.648270 4721 generic.go:334] "Generic (PLEG): container finished" podID="4fd1b685-0e5d-41ec-a454-c0226fa7eb8a" containerID="4c624e3c2dd13cc13b1c46cc4ec82153d9c6a28f0ffbfdef3fa376f7f9bab57b" exitCode=0
Jan 28 19:50:47 crc kubenswrapper[4721]: I0128 19:50:47.648319 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-j9hvd" event={"ID":"4fd1b685-0e5d-41ec-a454-c0226fa7eb8a","Type":"ContainerDied","Data":"4c624e3c2dd13cc13b1c46cc4ec82153d9c6a28f0ffbfdef3fa376f7f9bab57b"}
Jan 28 19:50:48 crc kubenswrapper[4721]: I0128 19:50:48.660860 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-j9hvd" event={"ID":"4fd1b685-0e5d-41ec-a454-c0226fa7eb8a","Type":"ContainerStarted","Data":"6a027a28eed7de8f4781545d408e45d05f78c87a175fd87fa3c2233db9965208"}
Jan 28 19:50:48 crc kubenswrapper[4721]: I0128 19:50:48.683834 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-j9hvd" podStartSLOduration=4.290697015 podStartE2EDuration="6.683811491s" podCreationTimestamp="2026-01-28 19:50:42 +0000 UTC" firstStartedPulling="2026-01-28 19:50:45.628653858 +0000 UTC m=+4611.353959418" lastFinishedPulling="2026-01-28 19:50:48.021768334 +0000 UTC m=+4613.747073894" observedRunningTime="2026-01-28 19:50:48.681698295 +0000 UTC m=+4614.407003875" watchObservedRunningTime="2026-01-28 19:50:48.683811491 +0000 UTC m=+4614.409117071"
Jan 28 19:50:53 crc kubenswrapper[4721]: I0128 19:50:53.145281 4721 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-j9hvd"
Jan 28 19:50:53 crc kubenswrapper[4721]: I0128 19:50:53.146315 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-j9hvd"
Jan 28 19:50:53 crc kubenswrapper[4721]: I0128 19:50:53.198294 4721 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-j9hvd"
Jan 28 19:50:53 crc kubenswrapper[4721]: I0128 19:50:53.764979 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-j9hvd"
Jan 28 19:50:53 crc kubenswrapper[4721]: I0128 19:50:53.826595 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-j9hvd"]
Jan 28 19:50:55 crc kubenswrapper[4721]: I0128 19:50:55.733624 4721 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-j9hvd" podUID="4fd1b685-0e5d-41ec-a454-c0226fa7eb8a" containerName="registry-server" containerID="cri-o://6a027a28eed7de8f4781545d408e45d05f78c87a175fd87fa3c2233db9965208" gracePeriod=2
Jan 28 19:50:56 crc kubenswrapper[4721]: I0128 19:50:56.391613 4721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-j9hvd"
Jan 28 19:50:56 crc kubenswrapper[4721]: I0128 19:50:56.552227 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4fd1b685-0e5d-41ec-a454-c0226fa7eb8a-catalog-content\") pod \"4fd1b685-0e5d-41ec-a454-c0226fa7eb8a\" (UID: \"4fd1b685-0e5d-41ec-a454-c0226fa7eb8a\") "
Jan 28 19:50:56 crc kubenswrapper[4721]: I0128 19:50:56.552371 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4fd1b685-0e5d-41ec-a454-c0226fa7eb8a-utilities\") pod \"4fd1b685-0e5d-41ec-a454-c0226fa7eb8a\" (UID: \"4fd1b685-0e5d-41ec-a454-c0226fa7eb8a\") "
Jan 28 19:50:56 crc kubenswrapper[4721]: I0128 19:50:56.552530 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z7x6c\" (UniqueName: \"kubernetes.io/projected/4fd1b685-0e5d-41ec-a454-c0226fa7eb8a-kube-api-access-z7x6c\") pod \"4fd1b685-0e5d-41ec-a454-c0226fa7eb8a\" (UID: \"4fd1b685-0e5d-41ec-a454-c0226fa7eb8a\") "
Jan 28 19:50:56 crc kubenswrapper[4721]: I0128 19:50:56.554003 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4fd1b685-0e5d-41ec-a454-c0226fa7eb8a-utilities" (OuterVolumeSpecName: "utilities") pod "4fd1b685-0e5d-41ec-a454-c0226fa7eb8a" (UID: "4fd1b685-0e5d-41ec-a454-c0226fa7eb8a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 28 19:50:56 crc kubenswrapper[4721]: I0128 19:50:56.560325 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4fd1b685-0e5d-41ec-a454-c0226fa7eb8a-kube-api-access-z7x6c" (OuterVolumeSpecName: "kube-api-access-z7x6c") pod "4fd1b685-0e5d-41ec-a454-c0226fa7eb8a" (UID: "4fd1b685-0e5d-41ec-a454-c0226fa7eb8a"). InnerVolumeSpecName "kube-api-access-z7x6c". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 28 19:50:56 crc kubenswrapper[4721]: I0128 19:50:56.581666 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4fd1b685-0e5d-41ec-a454-c0226fa7eb8a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4fd1b685-0e5d-41ec-a454-c0226fa7eb8a" (UID: "4fd1b685-0e5d-41ec-a454-c0226fa7eb8a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 28 19:50:56 crc kubenswrapper[4721]: I0128 19:50:56.655586 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z7x6c\" (UniqueName: \"kubernetes.io/projected/4fd1b685-0e5d-41ec-a454-c0226fa7eb8a-kube-api-access-z7x6c\") on node \"crc\" DevicePath \"\""
Jan 28 19:50:56 crc kubenswrapper[4721]: I0128 19:50:56.655640 4721 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4fd1b685-0e5d-41ec-a454-c0226fa7eb8a-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 28 19:50:56 crc kubenswrapper[4721]: I0128 19:50:56.655654 4721 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4fd1b685-0e5d-41ec-a454-c0226fa7eb8a-utilities\") on node \"crc\" DevicePath \"\""
Jan 28 19:50:56 crc kubenswrapper[4721]: I0128 19:50:56.747318 4721 generic.go:334] "Generic (PLEG): container finished" podID="4fd1b685-0e5d-41ec-a454-c0226fa7eb8a" containerID="6a027a28eed7de8f4781545d408e45d05f78c87a175fd87fa3c2233db9965208" exitCode=0
Jan 28 19:50:56 crc kubenswrapper[4721]: I0128 19:50:56.747371 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-j9hvd" event={"ID":"4fd1b685-0e5d-41ec-a454-c0226fa7eb8a","Type":"ContainerDied","Data":"6a027a28eed7de8f4781545d408e45d05f78c87a175fd87fa3c2233db9965208"}
Jan 28 19:50:56 crc kubenswrapper[4721]: I0128 19:50:56.747403 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-j9hvd" event={"ID":"4fd1b685-0e5d-41ec-a454-c0226fa7eb8a","Type":"ContainerDied","Data":"ef480c5c373e618a8bd3d225f738ce2f5e53d2743f1ae8272964eb5fabaa6858"}
Jan 28 19:50:56 crc kubenswrapper[4721]: I0128 19:50:56.747422 4721 scope.go:117] "RemoveContainer" containerID="6a027a28eed7de8f4781545d408e45d05f78c87a175fd87fa3c2233db9965208"
Jan 28 19:50:56 crc kubenswrapper[4721]: I0128 19:50:56.747457 4721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-j9hvd"
Jan 28 19:50:56 crc kubenswrapper[4721]: I0128 19:50:56.775276 4721 scope.go:117] "RemoveContainer" containerID="4c624e3c2dd13cc13b1c46cc4ec82153d9c6a28f0ffbfdef3fa376f7f9bab57b"
Jan 28 19:50:56 crc kubenswrapper[4721]: I0128 19:50:56.811558 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-j9hvd"]
Jan 28 19:50:56 crc kubenswrapper[4721]: I0128 19:50:56.826957 4721 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-j9hvd"]
Jan 28 19:50:56 crc kubenswrapper[4721]: I0128 19:50:56.836226 4721 scope.go:117] "RemoveContainer" containerID="b5fa8d29c290cead24994c21d31cffb9ccbf03f8a7e5095c96f55de5b35f9c76"
Jan 28 19:50:56 crc kubenswrapper[4721]: I0128 19:50:56.869978 4721 scope.go:117] "RemoveContainer" containerID="6a027a28eed7de8f4781545d408e45d05f78c87a175fd87fa3c2233db9965208"
Jan 28 19:50:56 crc kubenswrapper[4721]: E0128 19:50:56.870908 4721 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6a027a28eed7de8f4781545d408e45d05f78c87a175fd87fa3c2233db9965208\": container with ID starting with 6a027a28eed7de8f4781545d408e45d05f78c87a175fd87fa3c2233db9965208 not found: ID does not exist" containerID="6a027a28eed7de8f4781545d408e45d05f78c87a175fd87fa3c2233db9965208"
Jan 28 19:50:56 crc kubenswrapper[4721]: I0128 19:50:56.870956 4721 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6a027a28eed7de8f4781545d408e45d05f78c87a175fd87fa3c2233db9965208"} err="failed to get container status \"6a027a28eed7de8f4781545d408e45d05f78c87a175fd87fa3c2233db9965208\": rpc error: code = NotFound desc = could not find container \"6a027a28eed7de8f4781545d408e45d05f78c87a175fd87fa3c2233db9965208\": container with ID starting with 6a027a28eed7de8f4781545d408e45d05f78c87a175fd87fa3c2233db9965208 not found: ID does not exist"
Jan 28 19:50:56 crc kubenswrapper[4721]: I0128 19:50:56.870997 4721 scope.go:117] "RemoveContainer" containerID="4c624e3c2dd13cc13b1c46cc4ec82153d9c6a28f0ffbfdef3fa376f7f9bab57b"
Jan 28 19:50:56 crc kubenswrapper[4721]: E0128 19:50:56.871439 4721 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4c624e3c2dd13cc13b1c46cc4ec82153d9c6a28f0ffbfdef3fa376f7f9bab57b\": container with ID starting with 4c624e3c2dd13cc13b1c46cc4ec82153d9c6a28f0ffbfdef3fa376f7f9bab57b not found: ID does not exist" containerID="4c624e3c2dd13cc13b1c46cc4ec82153d9c6a28f0ffbfdef3fa376f7f9bab57b"
Jan 28 19:50:56 crc kubenswrapper[4721]: I0128 19:50:56.871482 4721 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4c624e3c2dd13cc13b1c46cc4ec82153d9c6a28f0ffbfdef3fa376f7f9bab57b"} err="failed to get container status \"4c624e3c2dd13cc13b1c46cc4ec82153d9c6a28f0ffbfdef3fa376f7f9bab57b\": rpc error: code = NotFound desc = could not find container \"4c624e3c2dd13cc13b1c46cc4ec82153d9c6a28f0ffbfdef3fa376f7f9bab57b\": container with ID starting with 4c624e3c2dd13cc13b1c46cc4ec82153d9c6a28f0ffbfdef3fa376f7f9bab57b not found: ID does not exist"
Jan 28 19:50:56 crc kubenswrapper[4721]: I0128 19:50:56.871515 4721 scope.go:117] "RemoveContainer" containerID="b5fa8d29c290cead24994c21d31cffb9ccbf03f8a7e5095c96f55de5b35f9c76"
Jan 28 19:50:56 crc kubenswrapper[4721]: E0128 19:50:56.871742 4721 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b5fa8d29c290cead24994c21d31cffb9ccbf03f8a7e5095c96f55de5b35f9c76\": container with ID starting with b5fa8d29c290cead24994c21d31cffb9ccbf03f8a7e5095c96f55de5b35f9c76 not found: ID does not exist" containerID="b5fa8d29c290cead24994c21d31cffb9ccbf03f8a7e5095c96f55de5b35f9c76"
Jan 28 19:50:56 crc kubenswrapper[4721]: I0128 19:50:56.871769 4721 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b5fa8d29c290cead24994c21d31cffb9ccbf03f8a7e5095c96f55de5b35f9c76"} err="failed to get container status \"b5fa8d29c290cead24994c21d31cffb9ccbf03f8a7e5095c96f55de5b35f9c76\": rpc error: code = NotFound desc = could not find container \"b5fa8d29c290cead24994c21d31cffb9ccbf03f8a7e5095c96f55de5b35f9c76\": container with ID starting with b5fa8d29c290cead24994c21d31cffb9ccbf03f8a7e5095c96f55de5b35f9c76 not found: ID does not exist"
Jan 28 19:50:57 crc kubenswrapper[4721]: I0128 19:50:57.542213 4721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4fd1b685-0e5d-41ec-a454-c0226fa7eb8a" path="/var/lib/kubelet/pods/4fd1b685-0e5d-41ec-a454-c0226fa7eb8a/volumes"
Jan 28 19:51:23 crc kubenswrapper[4721]: I0128 19:51:23.447325 4721 scope.go:117] "RemoveContainer" containerID="460ab02faf9bbfba1bdcd77781963e9e460b3fd00435e7802459e469e5c85df2"
Jan 28 19:52:13 crc kubenswrapper[4721]: I0128 19:52:13.694918 4721 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-8vmgs"]
Jan 28 19:52:13 crc kubenswrapper[4721]: E0128 19:52:13.696151 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4fd1b685-0e5d-41ec-a454-c0226fa7eb8a" containerName="extract-utilities"
Jan 28 19:52:13 crc kubenswrapper[4721]: I0128 19:52:13.696192 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="4fd1b685-0e5d-41ec-a454-c0226fa7eb8a" containerName="extract-utilities"
Jan 28 19:52:13 crc kubenswrapper[4721]: E0128 19:52:13.696210 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4fd1b685-0e5d-41ec-a454-c0226fa7eb8a" containerName="registry-server"
Jan 28 19:52:13 crc kubenswrapper[4721]: I0128 19:52:13.696218 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="4fd1b685-0e5d-41ec-a454-c0226fa7eb8a" containerName="registry-server"
Jan 28 19:52:13 crc kubenswrapper[4721]: E0128 19:52:13.696277 4721 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4fd1b685-0e5d-41ec-a454-c0226fa7eb8a" containerName="extract-content"
Jan 28 19:52:13 crc kubenswrapper[4721]: I0128 19:52:13.696290 4721 state_mem.go:107] "Deleted CPUSet assignment" podUID="4fd1b685-0e5d-41ec-a454-c0226fa7eb8a" containerName="extract-content"
Jan 28 19:52:13 crc kubenswrapper[4721]: I0128 19:52:13.696548 4721 memory_manager.go:354] "RemoveStaleState removing state" podUID="4fd1b685-0e5d-41ec-a454-c0226fa7eb8a" containerName="registry-server"
Jan 28 19:52:13 crc kubenswrapper[4721]: I0128 19:52:13.698392 4721 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-marketplace/redhat-operators-8vmgs" Jan 28 19:52:13 crc kubenswrapper[4721]: I0128 19:52:13.715124 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-8vmgs"] Jan 28 19:52:13 crc kubenswrapper[4721]: I0128 19:52:13.853310 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/14c22c18-25c6-42ac-be0f-a5c68b3ba943-utilities\") pod \"redhat-operators-8vmgs\" (UID: \"14c22c18-25c6-42ac-be0f-a5c68b3ba943\") " pod="openshift-marketplace/redhat-operators-8vmgs" Jan 28 19:52:13 crc kubenswrapper[4721]: I0128 19:52:13.853761 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/14c22c18-25c6-42ac-be0f-a5c68b3ba943-catalog-content\") pod \"redhat-operators-8vmgs\" (UID: \"14c22c18-25c6-42ac-be0f-a5c68b3ba943\") " pod="openshift-marketplace/redhat-operators-8vmgs" Jan 28 19:52:13 crc kubenswrapper[4721]: I0128 19:52:13.854063 4721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-68qkg\" (UniqueName: \"kubernetes.io/projected/14c22c18-25c6-42ac-be0f-a5c68b3ba943-kube-api-access-68qkg\") pod \"redhat-operators-8vmgs\" (UID: \"14c22c18-25c6-42ac-be0f-a5c68b3ba943\") " pod="openshift-marketplace/redhat-operators-8vmgs" Jan 28 19:52:13 crc kubenswrapper[4721]: I0128 19:52:13.955749 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/14c22c18-25c6-42ac-be0f-a5c68b3ba943-catalog-content\") pod \"redhat-operators-8vmgs\" (UID: \"14c22c18-25c6-42ac-be0f-a5c68b3ba943\") " pod="openshift-marketplace/redhat-operators-8vmgs" Jan 28 19:52:13 crc kubenswrapper[4721]: I0128 19:52:13.955837 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-68qkg\" (UniqueName: \"kubernetes.io/projected/14c22c18-25c6-42ac-be0f-a5c68b3ba943-kube-api-access-68qkg\") pod \"redhat-operators-8vmgs\" (UID: \"14c22c18-25c6-42ac-be0f-a5c68b3ba943\") " pod="openshift-marketplace/redhat-operators-8vmgs" Jan 28 19:52:13 crc kubenswrapper[4721]: I0128 19:52:13.955926 4721 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/14c22c18-25c6-42ac-be0f-a5c68b3ba943-utilities\") pod \"redhat-operators-8vmgs\" (UID: \"14c22c18-25c6-42ac-be0f-a5c68b3ba943\") " pod="openshift-marketplace/redhat-operators-8vmgs" Jan 28 19:52:13 crc kubenswrapper[4721]: I0128 19:52:13.956268 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/14c22c18-25c6-42ac-be0f-a5c68b3ba943-catalog-content\") pod \"redhat-operators-8vmgs\" (UID: \"14c22c18-25c6-42ac-be0f-a5c68b3ba943\") " pod="openshift-marketplace/redhat-operators-8vmgs" Jan 28 19:52:13 crc kubenswrapper[4721]: I0128 19:52:13.956446 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/14c22c18-25c6-42ac-be0f-a5c68b3ba943-utilities\") pod \"redhat-operators-8vmgs\" (UID: \"14c22c18-25c6-42ac-be0f-a5c68b3ba943\") " pod="openshift-marketplace/redhat-operators-8vmgs" Jan 28 19:52:13 crc kubenswrapper[4721]: I0128 19:52:13.980416 4721 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-68qkg\" (UniqueName: \"kubernetes.io/projected/14c22c18-25c6-42ac-be0f-a5c68b3ba943-kube-api-access-68qkg\") pod \"redhat-operators-8vmgs\" (UID: \"14c22c18-25c6-42ac-be0f-a5c68b3ba943\") " pod="openshift-marketplace/redhat-operators-8vmgs" Jan 28 19:52:14 crc kubenswrapper[4721]: I0128 19:52:14.020586 4721 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-8vmgs" Jan 28 19:52:14 crc kubenswrapper[4721]: I0128 19:52:14.553426 4721 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-8vmgs"] Jan 28 19:52:15 crc kubenswrapper[4721]: I0128 19:52:15.583378 4721 generic.go:334] "Generic (PLEG): container finished" podID="cffa932a-996d-42ca-8f63-54e570ca5410" containerID="a60a204575db6c284186dcda04f19157b335532310754011a270c45a65ec1db8" exitCode=0 Jan 28 19:52:15 crc kubenswrapper[4721]: I0128 19:52:15.583464 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-v5bj4/must-gather-cb9xb" event={"ID":"cffa932a-996d-42ca-8f63-54e570ca5410","Type":"ContainerDied","Data":"a60a204575db6c284186dcda04f19157b335532310754011a270c45a65ec1db8"} Jan 28 19:52:15 crc kubenswrapper[4721]: I0128 19:52:15.584895 4721 scope.go:117] "RemoveContainer" containerID="a60a204575db6c284186dcda04f19157b335532310754011a270c45a65ec1db8" Jan 28 19:52:15 crc kubenswrapper[4721]: I0128 19:52:15.586230 4721 generic.go:334] "Generic (PLEG): container finished" podID="14c22c18-25c6-42ac-be0f-a5c68b3ba943" containerID="b983b627c88a979a922da50526b6ee97b44b153f54ba968ba51e2e36ffd76076" exitCode=0 Jan 28 19:52:15 crc kubenswrapper[4721]: I0128 19:52:15.586277 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8vmgs" event={"ID":"14c22c18-25c6-42ac-be0f-a5c68b3ba943","Type":"ContainerDied","Data":"b983b627c88a979a922da50526b6ee97b44b153f54ba968ba51e2e36ffd76076"} Jan 28 19:52:15 crc kubenswrapper[4721]: I0128 19:52:15.586306 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8vmgs" event={"ID":"14c22c18-25c6-42ac-be0f-a5c68b3ba943","Type":"ContainerStarted","Data":"27b6ed7d330426e1474b712dff48ed3ca240bdf98184df494b58799ce4f066f1"} Jan 28 19:52:15 crc kubenswrapper[4721]: I0128 19:52:15.587767 4721 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 28 19:52:16 crc kubenswrapper[4721]: I0128 19:52:16.386517 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-v5bj4_must-gather-cb9xb_cffa932a-996d-42ca-8f63-54e570ca5410/gather/0.log" Jan 28 19:52:16 crc kubenswrapper[4721]: I0128 19:52:16.598214 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8vmgs" event={"ID":"14c22c18-25c6-42ac-be0f-a5c68b3ba943","Type":"ContainerStarted","Data":"c9a2b7caa3544acaeacbaab2428668bfbd9b06ba09edba29aca62d8589106944"} Jan 28 19:52:22 crc kubenswrapper[4721]: I0128 19:52:22.656632 4721 generic.go:334] "Generic (PLEG): container finished" podID="14c22c18-25c6-42ac-be0f-a5c68b3ba943" containerID="c9a2b7caa3544acaeacbaab2428668bfbd9b06ba09edba29aca62d8589106944" exitCode=0 Jan 28 19:52:22 crc kubenswrapper[4721]: I0128 19:52:22.656873 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8vmgs" event={"ID":"14c22c18-25c6-42ac-be0f-a5c68b3ba943","Type":"ContainerDied","Data":"c9a2b7caa3544acaeacbaab2428668bfbd9b06ba09edba29aca62d8589106944"} Jan 
28 19:52:23 crc kubenswrapper[4721]: I0128 19:52:23.670132 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8vmgs" event={"ID":"14c22c18-25c6-42ac-be0f-a5c68b3ba943","Type":"ContainerStarted","Data":"0a96e4fd20210e5ad616f9667baf88de72fd8773740c611b86ca53b13b62cf32"} Jan 28 19:52:23 crc kubenswrapper[4721]: I0128 19:52:23.702511 4721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-8vmgs" podStartSLOduration=3.233911771 podStartE2EDuration="10.70248814s" podCreationTimestamp="2026-01-28 19:52:13 +0000 UTC" firstStartedPulling="2026-01-28 19:52:15.587526525 +0000 UTC m=+4701.312832085" lastFinishedPulling="2026-01-28 19:52:23.056102894 +0000 UTC m=+4708.781408454" observedRunningTime="2026-01-28 19:52:23.688106878 +0000 UTC m=+4709.413412448" watchObservedRunningTime="2026-01-28 19:52:23.70248814 +0000 UTC m=+4709.427793700" Jan 28 19:52:24 crc kubenswrapper[4721]: I0128 19:52:24.021837 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-8vmgs" Jan 28 19:52:24 crc kubenswrapper[4721]: I0128 19:52:24.021897 4721 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-8vmgs" Jan 28 19:52:25 crc kubenswrapper[4721]: I0128 19:52:25.070481 4721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-8vmgs" podUID="14c22c18-25c6-42ac-be0f-a5c68b3ba943" containerName="registry-server" probeResult="failure" output=< Jan 28 19:52:25 crc kubenswrapper[4721]: timeout: failed to connect service ":50051" within 1s Jan 28 19:52:25 crc kubenswrapper[4721]: > Jan 28 19:52:31 crc kubenswrapper[4721]: I0128 19:52:31.225346 4721 patch_prober.go:28] interesting pod/machine-config-daemon-76rx2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 19:52:31 crc kubenswrapper[4721]: I0128 19:52:31.225879 4721 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-76rx2" podUID="6e3427a4-9a03-4a08-bf7f-7a5e96290ad6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 19:52:32 crc kubenswrapper[4721]: I0128 19:52:32.434198 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-v5bj4/must-gather-cb9xb"] Jan 28 19:52:32 crc kubenswrapper[4721]: I0128 19:52:32.435668 4721 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-v5bj4/must-gather-cb9xb" podUID="cffa932a-996d-42ca-8f63-54e570ca5410" containerName="copy" containerID="cri-o://440d57d2d6c173e59ef541ac66425624886f55defa73229a19a6369c6f97650b" gracePeriod=2 Jan 28 19:52:32 crc kubenswrapper[4721]: I0128 19:52:32.449312 4721 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-v5bj4/must-gather-cb9xb"] Jan 28 19:52:32 crc kubenswrapper[4721]: I0128 19:52:32.783023 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-v5bj4_must-gather-cb9xb_cffa932a-996d-42ca-8f63-54e570ca5410/copy/0.log" Jan 28 19:52:32 crc kubenswrapper[4721]: I0128 19:52:32.783659 4721 generic.go:334] "Generic (PLEG): container finished" 
podID="cffa932a-996d-42ca-8f63-54e570ca5410" containerID="440d57d2d6c173e59ef541ac66425624886f55defa73229a19a6369c6f97650b" exitCode=143 Jan 28 19:52:33 crc kubenswrapper[4721]: I0128 19:52:33.131780 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-v5bj4_must-gather-cb9xb_cffa932a-996d-42ca-8f63-54e570ca5410/copy/0.log" Jan 28 19:52:33 crc kubenswrapper[4721]: I0128 19:52:33.132302 4721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-v5bj4/must-gather-cb9xb" Jan 28 19:52:33 crc kubenswrapper[4721]: I0128 19:52:33.248633 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/cffa932a-996d-42ca-8f63-54e570ca5410-must-gather-output\") pod \"cffa932a-996d-42ca-8f63-54e570ca5410\" (UID: \"cffa932a-996d-42ca-8f63-54e570ca5410\") " Jan 28 19:52:33 crc kubenswrapper[4721]: I0128 19:52:33.248819 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bl5m5\" (UniqueName: \"kubernetes.io/projected/cffa932a-996d-42ca-8f63-54e570ca5410-kube-api-access-bl5m5\") pod \"cffa932a-996d-42ca-8f63-54e570ca5410\" (UID: \"cffa932a-996d-42ca-8f63-54e570ca5410\") " Jan 28 19:52:33 crc kubenswrapper[4721]: I0128 19:52:33.258454 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cffa932a-996d-42ca-8f63-54e570ca5410-kube-api-access-bl5m5" (OuterVolumeSpecName: "kube-api-access-bl5m5") pod "cffa932a-996d-42ca-8f63-54e570ca5410" (UID: "cffa932a-996d-42ca-8f63-54e570ca5410"). InnerVolumeSpecName "kube-api-access-bl5m5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 19:52:33 crc kubenswrapper[4721]: I0128 19:52:33.353293 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bl5m5\" (UniqueName: \"kubernetes.io/projected/cffa932a-996d-42ca-8f63-54e570ca5410-kube-api-access-bl5m5\") on node \"crc\" DevicePath \"\"" Jan 28 19:52:33 crc kubenswrapper[4721]: I0128 19:52:33.447149 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cffa932a-996d-42ca-8f63-54e570ca5410-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "cffa932a-996d-42ca-8f63-54e570ca5410" (UID: "cffa932a-996d-42ca-8f63-54e570ca5410"). InnerVolumeSpecName "must-gather-output". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 19:52:33 crc kubenswrapper[4721]: I0128 19:52:33.460036 4721 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/cffa932a-996d-42ca-8f63-54e570ca5410-must-gather-output\") on node \"crc\" DevicePath \"\"" Jan 28 19:52:33 crc kubenswrapper[4721]: I0128 19:52:33.540834 4721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cffa932a-996d-42ca-8f63-54e570ca5410" path="/var/lib/kubelet/pods/cffa932a-996d-42ca-8f63-54e570ca5410/volumes" Jan 28 19:52:33 crc kubenswrapper[4721]: I0128 19:52:33.795391 4721 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-v5bj4_must-gather-cb9xb_cffa932a-996d-42ca-8f63-54e570ca5410/copy/0.log" Jan 28 19:52:33 crc kubenswrapper[4721]: I0128 19:52:33.797549 4721 scope.go:117] "RemoveContainer" containerID="440d57d2d6c173e59ef541ac66425624886f55defa73229a19a6369c6f97650b" Jan 28 19:52:33 crc kubenswrapper[4721]: I0128 19:52:33.797725 4721 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-v5bj4/must-gather-cb9xb" Jan 28 19:52:33 crc kubenswrapper[4721]: I0128 19:52:33.827872 4721 scope.go:117] "RemoveContainer" containerID="a60a204575db6c284186dcda04f19157b335532310754011a270c45a65ec1db8" Jan 28 19:52:35 crc kubenswrapper[4721]: I0128 19:52:35.102860 4721 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-8vmgs" podUID="14c22c18-25c6-42ac-be0f-a5c68b3ba943" containerName="registry-server" probeResult="failure" output=< Jan 28 19:52:35 crc kubenswrapper[4721]: timeout: failed to connect service ":50051" within 1s Jan 28 19:52:35 crc kubenswrapper[4721]: > Jan 28 19:52:44 crc kubenswrapper[4721]: I0128 19:52:44.076282 4721 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-8vmgs" Jan 28 19:52:44 crc kubenswrapper[4721]: I0128 19:52:44.132669 4721 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-8vmgs" Jan 28 19:52:44 crc kubenswrapper[4721]: I0128 19:52:44.895803 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-8vmgs"] Jan 28 19:52:45 crc kubenswrapper[4721]: I0128 19:52:45.930966 4721 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-8vmgs" podUID="14c22c18-25c6-42ac-be0f-a5c68b3ba943" containerName="registry-server" containerID="cri-o://0a96e4fd20210e5ad616f9667baf88de72fd8773740c611b86ca53b13b62cf32" gracePeriod=2 Jan 28 19:52:46 crc kubenswrapper[4721]: I0128 19:52:46.665801 4721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-8vmgs" Jan 28 19:52:46 crc kubenswrapper[4721]: I0128 19:52:46.791666 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/14c22c18-25c6-42ac-be0f-a5c68b3ba943-utilities\") pod \"14c22c18-25c6-42ac-be0f-a5c68b3ba943\" (UID: \"14c22c18-25c6-42ac-be0f-a5c68b3ba943\") " Jan 28 19:52:46 crc kubenswrapper[4721]: I0128 19:52:46.791779 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-68qkg\" (UniqueName: \"kubernetes.io/projected/14c22c18-25c6-42ac-be0f-a5c68b3ba943-kube-api-access-68qkg\") pod \"14c22c18-25c6-42ac-be0f-a5c68b3ba943\" (UID: \"14c22c18-25c6-42ac-be0f-a5c68b3ba943\") " Jan 28 19:52:46 crc kubenswrapper[4721]: I0128 19:52:46.791900 4721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/14c22c18-25c6-42ac-be0f-a5c68b3ba943-catalog-content\") pod \"14c22c18-25c6-42ac-be0f-a5c68b3ba943\" (UID: \"14c22c18-25c6-42ac-be0f-a5c68b3ba943\") " Jan 28 19:52:46 crc kubenswrapper[4721]: I0128 19:52:46.792882 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/14c22c18-25c6-42ac-be0f-a5c68b3ba943-utilities" (OuterVolumeSpecName: "utilities") pod "14c22c18-25c6-42ac-be0f-a5c68b3ba943" (UID: "14c22c18-25c6-42ac-be0f-a5c68b3ba943"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 19:52:46 crc kubenswrapper[4721]: I0128 19:52:46.815869 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/14c22c18-25c6-42ac-be0f-a5c68b3ba943-kube-api-access-68qkg" (OuterVolumeSpecName: "kube-api-access-68qkg") pod "14c22c18-25c6-42ac-be0f-a5c68b3ba943" (UID: "14c22c18-25c6-42ac-be0f-a5c68b3ba943"). InnerVolumeSpecName "kube-api-access-68qkg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 19:52:46 crc kubenswrapper[4721]: I0128 19:52:46.894806 4721 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/14c22c18-25c6-42ac-be0f-a5c68b3ba943-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 19:52:46 crc kubenswrapper[4721]: I0128 19:52:46.894843 4721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-68qkg\" (UniqueName: \"kubernetes.io/projected/14c22c18-25c6-42ac-be0f-a5c68b3ba943-kube-api-access-68qkg\") on node \"crc\" DevicePath \"\"" Jan 28 19:52:46 crc kubenswrapper[4721]: I0128 19:52:46.945219 4721 generic.go:334] "Generic (PLEG): container finished" podID="14c22c18-25c6-42ac-be0f-a5c68b3ba943" containerID="0a96e4fd20210e5ad616f9667baf88de72fd8773740c611b86ca53b13b62cf32" exitCode=0 Jan 28 19:52:46 crc kubenswrapper[4721]: I0128 19:52:46.945279 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8vmgs" event={"ID":"14c22c18-25c6-42ac-be0f-a5c68b3ba943","Type":"ContainerDied","Data":"0a96e4fd20210e5ad616f9667baf88de72fd8773740c611b86ca53b13b62cf32"} Jan 28 19:52:46 crc kubenswrapper[4721]: I0128 19:52:46.945304 4721 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-8vmgs" Jan 28 19:52:46 crc kubenswrapper[4721]: I0128 19:52:46.945318 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8vmgs" event={"ID":"14c22c18-25c6-42ac-be0f-a5c68b3ba943","Type":"ContainerDied","Data":"27b6ed7d330426e1474b712dff48ed3ca240bdf98184df494b58799ce4f066f1"} Jan 28 19:52:46 crc kubenswrapper[4721]: I0128 19:52:46.945348 4721 scope.go:117] "RemoveContainer" containerID="0a96e4fd20210e5ad616f9667baf88de72fd8773740c611b86ca53b13b62cf32" Jan 28 19:52:46 crc kubenswrapper[4721]: I0128 19:52:46.958033 4721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/14c22c18-25c6-42ac-be0f-a5c68b3ba943-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "14c22c18-25c6-42ac-be0f-a5c68b3ba943" (UID: "14c22c18-25c6-42ac-be0f-a5c68b3ba943"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 19:52:46 crc kubenswrapper[4721]: I0128 19:52:46.969332 4721 scope.go:117] "RemoveContainer" containerID="c9a2b7caa3544acaeacbaab2428668bfbd9b06ba09edba29aca62d8589106944" Jan 28 19:52:46 crc kubenswrapper[4721]: I0128 19:52:46.992662 4721 scope.go:117] "RemoveContainer" containerID="b983b627c88a979a922da50526b6ee97b44b153f54ba968ba51e2e36ffd76076" Jan 28 19:52:46 crc kubenswrapper[4721]: I0128 19:52:46.997010 4721 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/14c22c18-25c6-42ac-be0f-a5c68b3ba943-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 19:52:47 crc kubenswrapper[4721]: I0128 19:52:47.052336 4721 scope.go:117] "RemoveContainer" containerID="0a96e4fd20210e5ad616f9667baf88de72fd8773740c611b86ca53b13b62cf32" Jan 28 19:52:47 crc kubenswrapper[4721]: E0128 19:52:47.052920 4721 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0a96e4fd20210e5ad616f9667baf88de72fd8773740c611b86ca53b13b62cf32\": container with ID starting with 0a96e4fd20210e5ad616f9667baf88de72fd8773740c611b86ca53b13b62cf32 not found: ID does not exist" containerID="0a96e4fd20210e5ad616f9667baf88de72fd8773740c611b86ca53b13b62cf32" Jan 28 19:52:47 crc kubenswrapper[4721]: I0128 19:52:47.052975 4721 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0a96e4fd20210e5ad616f9667baf88de72fd8773740c611b86ca53b13b62cf32"} err="failed to get container status \"0a96e4fd20210e5ad616f9667baf88de72fd8773740c611b86ca53b13b62cf32\": rpc error: code = NotFound desc = could not find container \"0a96e4fd20210e5ad616f9667baf88de72fd8773740c611b86ca53b13b62cf32\": container with ID starting with 0a96e4fd20210e5ad616f9667baf88de72fd8773740c611b86ca53b13b62cf32 not found: ID does not exist" Jan 28 19:52:47 crc kubenswrapper[4721]: I0128 19:52:47.053017 4721 scope.go:117] "RemoveContainer" containerID="c9a2b7caa3544acaeacbaab2428668bfbd9b06ba09edba29aca62d8589106944" Jan 28 19:52:47 crc kubenswrapper[4721]: E0128 19:52:47.053632 4721 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c9a2b7caa3544acaeacbaab2428668bfbd9b06ba09edba29aca62d8589106944\": container with ID starting with c9a2b7caa3544acaeacbaab2428668bfbd9b06ba09edba29aca62d8589106944 not found: ID does not exist" containerID="c9a2b7caa3544acaeacbaab2428668bfbd9b06ba09edba29aca62d8589106944" Jan 28 19:52:47 crc kubenswrapper[4721]: I0128 19:52:47.053674 4721 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c9a2b7caa3544acaeacbaab2428668bfbd9b06ba09edba29aca62d8589106944"} err="failed to get container status \"c9a2b7caa3544acaeacbaab2428668bfbd9b06ba09edba29aca62d8589106944\": rpc error: code = NotFound desc = could not find container \"c9a2b7caa3544acaeacbaab2428668bfbd9b06ba09edba29aca62d8589106944\": container with ID starting with c9a2b7caa3544acaeacbaab2428668bfbd9b06ba09edba29aca62d8589106944 not found: ID does not exist" Jan 28 19:52:47 crc kubenswrapper[4721]: I0128 19:52:47.053704 4721 scope.go:117] "RemoveContainer" containerID="b983b627c88a979a922da50526b6ee97b44b153f54ba968ba51e2e36ffd76076" Jan 28 19:52:47 crc kubenswrapper[4721]: E0128 19:52:47.053973 4721 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"b983b627c88a979a922da50526b6ee97b44b153f54ba968ba51e2e36ffd76076\": container with ID starting with b983b627c88a979a922da50526b6ee97b44b153f54ba968ba51e2e36ffd76076 not found: ID does not exist" containerID="b983b627c88a979a922da50526b6ee97b44b153f54ba968ba51e2e36ffd76076" Jan 28 19:52:47 crc kubenswrapper[4721]: I0128 19:52:47.054001 4721 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b983b627c88a979a922da50526b6ee97b44b153f54ba968ba51e2e36ffd76076"} err="failed to get container status \"b983b627c88a979a922da50526b6ee97b44b153f54ba968ba51e2e36ffd76076\": rpc error: code = NotFound desc = could not find container \"b983b627c88a979a922da50526b6ee97b44b153f54ba968ba51e2e36ffd76076\": container with ID starting with b983b627c88a979a922da50526b6ee97b44b153f54ba968ba51e2e36ffd76076 not found: ID does not exist" Jan 28 19:52:47 crc kubenswrapper[4721]: I0128 19:52:47.301612 4721 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-8vmgs"] Jan 28 19:52:47 crc kubenswrapper[4721]: I0128 19:52:47.312814 4721 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-8vmgs"] Jan 28 19:52:47 crc kubenswrapper[4721]: I0128 19:52:47.543670 4721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="14c22c18-25c6-42ac-be0f-a5c68b3ba943" path="/var/lib/kubelet/pods/14c22c18-25c6-42ac-be0f-a5c68b3ba943/volumes" Jan 28 19:53:01 crc kubenswrapper[4721]: I0128 19:53:01.224955 4721 patch_prober.go:28] interesting pod/machine-config-daemon-76rx2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 19:53:01 crc kubenswrapper[4721]: I0128 19:53:01.225595 4721 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-76rx2" podUID="6e3427a4-9a03-4a08-bf7f-7a5e96290ad6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 19:53:31 crc kubenswrapper[4721]: I0128 19:53:31.225214 4721 patch_prober.go:28] interesting pod/machine-config-daemon-76rx2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 19:53:31 crc kubenswrapper[4721]: I0128 19:53:31.225767 4721 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-76rx2" podUID="6e3427a4-9a03-4a08-bf7f-7a5e96290ad6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 19:53:31 crc kubenswrapper[4721]: I0128 19:53:31.225820 4721 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-76rx2" Jan 28 19:53:31 crc kubenswrapper[4721]: I0128 19:53:31.226722 4721 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"942c91fbad4cbfe0e882942a1ca00cb036817ed7f05b74fa5efb425dde9643f6"} pod="openshift-machine-config-operator/machine-config-daemon-76rx2" containerMessage="Container machine-config-daemon 
failed liveness probe, will be restarted" Jan 28 19:53:31 crc kubenswrapper[4721]: I0128 19:53:31.226777 4721 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-76rx2" podUID="6e3427a4-9a03-4a08-bf7f-7a5e96290ad6" containerName="machine-config-daemon" containerID="cri-o://942c91fbad4cbfe0e882942a1ca00cb036817ed7f05b74fa5efb425dde9643f6" gracePeriod=600 Jan 28 19:53:31 crc kubenswrapper[4721]: I0128 19:53:31.431218 4721 generic.go:334] "Generic (PLEG): container finished" podID="6e3427a4-9a03-4a08-bf7f-7a5e96290ad6" containerID="942c91fbad4cbfe0e882942a1ca00cb036817ed7f05b74fa5efb425dde9643f6" exitCode=0 Jan 28 19:53:31 crc kubenswrapper[4721]: I0128 19:53:31.431314 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-76rx2" event={"ID":"6e3427a4-9a03-4a08-bf7f-7a5e96290ad6","Type":"ContainerDied","Data":"942c91fbad4cbfe0e882942a1ca00cb036817ed7f05b74fa5efb425dde9643f6"} Jan 28 19:53:31 crc kubenswrapper[4721]: I0128 19:53:31.431675 4721 scope.go:117] "RemoveContainer" containerID="e25f338d1e1f7e9538daec152b64af6fcba9ff0b91b9ae135d4931beae2a0f97" Jan 28 19:53:32 crc kubenswrapper[4721]: I0128 19:53:32.442601 4721 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-76rx2" event={"ID":"6e3427a4-9a03-4a08-bf7f-7a5e96290ad6","Type":"ContainerStarted","Data":"cc43b26c8d034b32dd1c6fc2fdc1e33127f0970d6339f78323ae7365ae06d251"} Jan 28 19:55:31 crc kubenswrapper[4721]: I0128 19:55:31.224791 4721 patch_prober.go:28] interesting pod/machine-config-daemon-76rx2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 19:55:31 crc kubenswrapper[4721]: I0128 19:55:31.225364 4721 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-76rx2" podUID="6e3427a4-9a03-4a08-bf7f-7a5e96290ad6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"